🦀 Functional Rust

467: Epoch-Based Garbage Collection

Difficulty: 3 · Level: Intermediate

Safe memory reclamation for lock-free data structures: no GC, no reference-counting overhead.

The Problem This Solves

Lock-free data structures avoid mutexes, but they create a hard memory problem: you can't free a node the moment you unlink it, because another thread might still hold a pointer to it. In a GC language this is handled automatically. In Rust, with raw `AtomicPtr`, you need to know when it's safe to deallocate. Reference counting (`Arc`) works but adds atomic increments/decrements to every read, exactly the overhead you avoided by going lock-free.

Epoch-based reclamation (EBR) is the industry solution: threads declare when they're inside a critical section by pinning the current epoch. A retired object is only freed once the global epoch has advanced past every thread that could have seen it. The read-side cost is a single atomic load, which is almost free.
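The hazard is easiest to see in a concrete structure. Below is a hypothetical minimal Treiber stack (the names `Node` and `Stack` are illustrative, not from any crate): `pop` unlinks a node but deliberately leaks it, because freeing it at that point could race with a concurrent `pop` that has already loaded the same pointer.

```rust
use std::ptr;
use std::sync::atomic::{AtomicPtr, Ordering};

// Hypothetical minimal Treiber stack (std only, illustrative names).
struct Node {
    value: i32,
    next: *mut Node,
}

struct Stack {
    head: AtomicPtr<Node>,
}

impl Stack {
    fn new() -> Self {
        Stack { head: AtomicPtr::new(ptr::null_mut()) }
    }

    fn push(&self, value: i32) {
        let node = Box::into_raw(Box::new(Node { value, next: ptr::null_mut() }));
        loop {
            let head = self.head.load(Ordering::Acquire);
            unsafe { (*node).next = head };
            if self.head.compare_exchange(head, node, Ordering::Release, Ordering::Acquire).is_ok() {
                return;
            }
        }
    }

    fn pop(&self) -> Option<i32> {
        loop {
            let head = self.head.load(Ordering::Acquire);
            if head.is_null() {
                return None;
            }
            // A concurrent pop may have loaded this same `head` and be about
            // to dereference it, so freeing it after the CAS would risk a
            // use-after-free.
            let next = unsafe { (*head).next };
            if self.head.compare_exchange(head, next, Ordering::Release, Ordering::Acquire).is_ok() {
                let value = unsafe { (*head).value };
                // Leak instead of `drop(Box::from_raw(head))`; EBR exists
                // precisely to defer this free until it is provably safe.
                return Some(value);
            }
        }
    }
}

fn main() {
    let s = Stack::new();
    s.push(1);
    s.push(2);
    println!("{:?} {:?} {:?}", s.pop(), s.pop(), s.pop()); // Some(2) Some(1) None
}
```

The leak is the whole point of the sketch: without a reclamation scheme, leaking is the only safe option for a lock-free pop.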

The Intuition

Imagine a library that recycles returned books. You can only recycle a book once every borrower from that checkout period has returned it. The library uses semester numbers (epochs): when a new semester starts and all old borrowers are gone, books from two semesters ago can be safely recycled. Threads "check in" to the current epoch when they start a read operation and "check out" when done. Retired objects wait until all threads have advanced past the epoch where the object was retired.
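In code, the semester rule reduces to a single comparison. A minimal sketch with an assumed helper name (`safe_to_free` is not a real API):

```rust
/// Illustrative helper (assumed name): an object retired in `retired_epoch`
/// can be recycled once the oldest epoch any thread is still pinned to is
/// strictly greater, i.e. every possible borrower from that "semester" has
/// checked out.
fn safe_to_free(retired_epoch: u64, min_pinned_epoch: u64) -> bool {
    min_pinned_epoch > retired_epoch
}

fn main() {
    // Retired in semester 3; the oldest active borrower checked in at semester 3.
    println!("{}", safe_to_free(3, 3)); // false: a semester-3 reader may still hold it
    // Every reader has advanced to semester 4 or later.
    println!("{}", safe_to_free(3, 4)); // true
}
```

Real implementations add slack (crossbeam waits roughly two epochs), but the invariant is this comparison.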

How It Works in Rust

1. Global epoch counter: a shared `AtomicUsize` advances periodically.
2. Thread-local state: each thread tracks its current pinned epoch.
3. Pin on enter:

   let guard = epoch::pin(); // from crossbeam-epoch
   // now safe to read lock-free structures

4. Defer deallocation:

   guard.defer_destroy(ptr); // runs when the epoch advances past the current one

5. Epoch advance: crossbeam-epoch advances the global epoch once all threads have moved on, then runs the deferred destructors.

The `crossbeam-epoch` crate implements this with carefully tuned memory orderings; the `crossbeam-deque` work-stealing queues use it internally.
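To make the guard-based API shape concrete without pulling in the crate, here is a toy sketch (emphatically not crossbeam's implementation): a global registry counts active guards, `defer` queues a closure, and queued closures run only once the last guard is dropped.

```rust
use std::sync::{Mutex, OnceLock};

// Toy registry (illustrative): how many guards are pinned, plus the
// deferred destructors waiting for a safe moment to run.
struct Registry {
    active: usize,
    deferred: Vec<Box<dyn FnOnce() + Send>>,
}

fn registry() -> &'static Mutex<Registry> {
    static R: OnceLock<Mutex<Registry>> = OnceLock::new();
    R.get_or_init(|| Mutex::new(Registry { active: 0, deferred: Vec::new() }))
}

struct Guard;

/// Enter a critical section (mimics the shape of `epoch::pin()`).
fn pin() -> Guard {
    registry().lock().unwrap().active += 1;
    Guard
}

impl Guard {
    /// Queue a destructor (mimics the shape of `defer_destroy`).
    fn defer(&self, f: impl FnOnce() + Send + 'static) {
        registry().lock().unwrap().deferred.push(Box::new(f));
    }
}

impl Drop for Guard {
    fn drop(&mut self) {
        let ready: Vec<Box<dyn FnOnce() + Send>> = {
            let mut r = registry().lock().unwrap();
            r.active -= 1;
            // Only when no guard remains is it safe to run the queue.
            if r.active == 0 { r.deferred.drain(..).collect() } else { Vec::new() }
        };
        for f in ready {
            f(); // run outside the lock
        }
    }
}

fn main() {
    let g = pin();
    g.defer(|| println!("  deferred: node freed"));
    println!("guard active, free deferred");
    drop(g); // last guard gone, the deferred destructor runs now
}
```

The real crate replaces the single counter with per-thread epoch records and lock-free lists, so pinning stays a single atomic load, but the lifecycle (pin, defer, run-when-safe) is the same.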

What This Unlocks

Key Differences

| Concept | OCaml | Rust |
| --- | --- | --- |
| Memory reclamation | Garbage collector | Epoch-based (`crossbeam-epoch`) |
| Read-side overhead | GC write barriers | Single atomic load (pin) |
| Reclaim timing | GC decides | When all threads are past the epoch |
| API | Automatic | Explicit `guard.defer_destroy()` |
// 467. Epoch-based garbage collection concept
use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::collections::VecDeque;
use std::thread;
use std::time::Duration;

struct EpochMgr {
    epoch: AtomicU64,
    retired: Mutex<VecDeque<(u64, String)>>,
    pinned: Mutex<Vec<u64>>,
}

impl EpochMgr {
    fn new() -> Self {
        EpochMgr {
            epoch: AtomicU64::new(0),
            retired: Mutex::new(VecDeque::new()),
            pinned: Mutex::new(Vec::new()),
        }
    }

    /// Enter a critical section: record the epoch this thread observed.
    fn pin(&self) -> u64 {
        let e = self.epoch.load(Ordering::Acquire);
        self.pinned.lock().unwrap().push(e);
        e
    }

    /// Leave the critical section entered at epoch `e`.
    fn unpin(&self, e: u64) {
        let mut p = self.pinned.lock().unwrap();
        if let Some(i) = p.iter().position(|&x| x == e) {
            p.remove(i);
        }
    }

    /// Mark an object as garbage, stamped with the current epoch.
    fn retire(&self, desc: &str) {
        let e = self.epoch.load(Ordering::Relaxed);
        self.retired.lock().unwrap().push_back((e, desc.to_string()));
    }

    /// Advance the epoch, then free everything retired strictly before the
    /// oldest epoch any thread is still pinned to.
    fn collect(&self) {
        let new_e = self.epoch.fetch_add(1, Ordering::AcqRel) + 1;
        let min_active = self.pinned.lock().unwrap().iter().cloned().min().unwrap_or(new_e);
        let mut r = self.retired.lock().unwrap();
        let mut n = 0;
        // `*e < min_active` is the safety condition: a reader pinned at the
        // same epoch the object was retired in may still hold a pointer.
        while r.front().map(|(e, _)| *e < min_active).unwrap_or(false) {
            let (_, d) = r.pop_front().unwrap();
            println!("  freed: {}", d);
            n += 1;
        }
        println!("epoch -> {}; freed {}; deferred {}", new_e, n, r.len());
    }
}

fn main() {
    let m = Arc::new(EpochMgr::new());
    let rs: Vec<_> = (0..3)
        .map(|id| {
            let m = Arc::clone(&m);
            thread::spawn(move || {
                let e = m.pin();
                println!("reader {} pinned epoch {}", id, e);
                thread::sleep(Duration::from_millis(10 * (id as u64 + 1)));
                m.unpin(e);
                println!("reader {} unpinned", id);
            })
        })
        .collect();
    m.retire("old-A"); m.retire("old-B");
    thread::sleep(Duration::from_millis(5));
    println!("first collect:"); m.collect();
    for r in rs { r.join().unwrap(); }
    println!("second collect:"); m.collect();
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test] fn test_epoch_advances() { let m=EpochMgr::new(); m.collect(); assert_eq!(m.epoch.load(Ordering::SeqCst),1); }
    #[test] fn test_retire()        { let m=EpochMgr::new(); m.retire("x"); assert_eq!(m.retired.lock().unwrap().len(),1); }
}
(* 467. Epoch GC concept - OCaml *)
(* OCaml has a GC; this demonstrates the concept of deferred reclamation *)

type 'a retired = { value: 'a; epoch: int }

let global_epoch = ref 0
let retired_list : string retired list ref = ref []
let mutex = Mutex.create ()

let retire v =
  Mutex.lock mutex;
  let e = !global_epoch in
  retired_list := { value=v; epoch=e } :: !retired_list;
  Mutex.unlock mutex

let collect min_safe_epoch =
  Mutex.lock mutex;
  let (safe, keep) = List.partition (fun r -> r.epoch < min_safe_epoch) !retired_list in
  retired_list := keep;
  Mutex.unlock mutex;
  Printf.printf "reclaiming %d objects\n" (List.length safe)

let advance () =
  Mutex.lock mutex; incr global_epoch; Mutex.unlock mutex

let () =
  retire "node-A"; retire "node-B"; retire "node-C";
  Printf.printf "retired 3, epoch=%d\n" !global_epoch;
  advance ();
  Printf.printf "advanced to epoch=%d\n" !global_epoch;
  collect 1;  (* safe epoch: reclaim anything from epoch < 1 *)
  Printf.printf "remaining: %d\n" (List.length !retired_list)