🦀 Functional Rust

444: Arc<RwLock<T>> — Multiple Readers, One Writer

Difficulty: 3 · Level: Intermediate

Allow many threads to read shared data simultaneously, while guaranteeing exclusive access for writes — better throughput than `Mutex` for read-heavy workloads.

The Problem This Solves

`Mutex` is correct but conservative: one thread at a time, period. If your data is read 100 times for every write — a configuration map, a DNS cache, a lookup table — `Mutex` serialises all those reads unnecessarily. Four threads trying to read the same effectively immutable snapshot of data block each other for no reason.

`RwLock` captures the natural distinction: reading doesn't mutate, so reads don't need to exclude each other. Any number of threads can hold a `RwLockReadGuard` simultaneously, and they all see a consistent snapshot. Only writes require exclusivity — `write()` blocks until all current readers release their guards, and readers arriving after a waiting writer block until the writer releases.

The failure mode of getting this wrong in other languages is subtle. Mutating a Java `HashMap` while another thread iterates it can throw `ConcurrentModificationException` — at runtime, and only when the race happens to be detected; unsynchronised access can also silently corrupt the map. In Python, the GIL happens to protect pure-CPython code, but that protection disappears with C extensions or alternative runtimes. Rust enforces the discipline in the type system: the data lives inside the `RwLock<T>`, so the only way to touch it is through a guard, and the guard types mirror `&T`/`&mut T` borrowing. One caveat does survive to runtime: calling `write()` on a thread that still holds a `ReadGuard` may deadlock — the borrow checker does not catch lock-ordering bugs, so keep guard lifetimes short.

The Intuition

Think of `RwLock` as a library reference book: many people can read it at once, but if someone wants to write in the margins, everyone else has to step away from it first. `Mutex` is the same book with a stricter rule: only one person may look at it at a time, even to read. In Java you'd reach for `ReentrantReadWriteLock`; in Go, `sync.RWMutex`. The Rust version wraps the data directly — the same "data inside the lock" guarantee as `Mutex` — and hands out either a `RwLockReadGuard` (shared, like `&T`) or a `RwLockWriteGuard` (exclusive, like `&mut T`). The guard types make the access pattern visible in the code.

How It Works in Rust

```rust
use std::sync::{Arc, RwLock};
use std::collections::HashMap;
use std::thread;

fn main() {
    let cfg: Arc<RwLock<HashMap<&str, &str>>> =
        Arc::new(RwLock::new(HashMap::from([("host", "localhost")])));

    // Spawn 4 readers — all run concurrently, no blocking between them
    let readers: Vec<_> = (0..4).map(|id| {
        let c = Arc::clone(&cfg);
        thread::spawn(move || {
            let guard = c.read().unwrap(); // shared — many readers OK at once
            let _ = guard.get("host");
            println!("reader {} done", id);
            // guard drops here — read lock released
        })
    }).collect();

    // Writer runs concurrently — write() blocks until current readers release
    let writer = {
        let c = Arc::clone(&cfg);
        thread::spawn(move || {
            let mut guard = c.write().unwrap(); // exclusive — waits for readers
            guard.insert("host", "example.com");
            // guard drops — write lock released, pending readers unblock
        })
    };

    for r in readers { r.join().unwrap(); }
    writer.join().unwrap();
}
```
Prefer `RwLock` only when reads genuinely dominate. On Linux (pthreads), a `RwLock` operation typically costs slightly more than the equivalent `Mutex` operation, so the win only materialises when concurrent reads happen often enough to offset that per-operation overhead.

What This Unlocks

Read-mostly shared state that scales: a configuration map, a DNS cache, or a lookup table can be read from many threads at once instead of serialising every access behind a `Mutex`.

Key Differences

| Concept | OCaml | Rust |
| --- | --- | --- |
| Multiple readers | blocked by any lock | simultaneous — all hold `RwLockReadGuard` |
| Exclusive write | same as `Mutex` | `write()` waits for all readers to release |
| Guard types | one `Mutex` guard | `RwLockReadGuard` / `RwLockWriteGuard` |
| Writer starvation | possible | possible on some platforms — OS-dependent |
| When to use | N/A | reads >> writes; otherwise prefer `Mutex` |
```rust
// 444. Arc<RwLock<T>> read-write sharing
use std::sync::{Arc, RwLock};
use std::collections::HashMap;
use std::thread;
use std::time::Duration;

fn main() {
    let cfg: Arc<RwLock<HashMap<&str, &str>>> = Arc::new(RwLock::new({
        let mut m = HashMap::new();
        m.insert("host", "localhost");
        m.insert("port", "8080");
        m
    }));

    // Many readers simultaneously
    let readers: Vec<_> = (0..4).map(|id| {
        let c = Arc::clone(&cfg);
        thread::spawn(move || {
            for _ in 0..3 {
                let g = c.read().unwrap(); // shared read — no blocking between readers
                let _ = g.get("host");
                drop(g); // release before sleeping so the writer isn't held up
                thread::sleep(Duration::from_millis(5));
            }
            println!("Reader {} done", id);
        })
    }).collect();

    // One writer
    let writer = {
        let c = Arc::clone(&cfg);
        thread::spawn(move || {
            thread::sleep(Duration::from_millis(10));
            c.write().unwrap().insert("host", "example.com"); // exclusive
            println!("Writer updated");
        })
    };

    for r in readers { r.join().unwrap(); }
    writer.join().unwrap();
    println!("host = {}", cfg.read().unwrap().get("host").unwrap());
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_concurrent_reads() {
        let d = Arc::new(RwLock::new(vec![1, 2, 3]));
        let hs: Vec<_> = (0..4).map(|_| {
            let d = Arc::clone(&d);
            thread::spawn(move || d.read().unwrap().iter().sum::<i32>())
        }).collect();
        for h in hs { assert_eq!(h.join().unwrap(), 6); }
    }

    #[test]
    fn test_write() {
        let d = RwLock::new(0u32);
        *d.write().unwrap() = 42;
        assert_eq!(*d.read().unwrap(), 42);
    }
}
```
```ocaml
(* 444. RwLock pattern – OCaml *)
(* OCaml's stdlib has no RwLock; simulate with a Mutex *)
let config = ref [("host", "localhost"); ("port", "8080")]
let mutex = Mutex.create ()

let read_config k =
  Mutex.lock mutex;
  let v = List.assoc_opt k !config in
  Mutex.unlock mutex; v

let write_config k v =
  Mutex.lock mutex;
  config := (k, v) :: List.filter (fun (a, _) -> a <> k) !config;
  Mutex.unlock mutex

let () =
  let readers = List.init 3 (fun _ ->
    Thread.create (fun () ->
      for _ = 1 to 2 do
        let h = Option.value ~default:"?" (read_config "host") in
        Printf.printf "host=%s\n%!" h
      done) ()
  ) in
  let writer = Thread.create (fun () ->
    Thread.delay 0.01; write_config "host" "example.com"
  ) () in
  List.iter Thread.join readers; Thread.join writer;
  Printf.printf "final: %s\n" (Option.value ~default:"?" (read_config "host"))
```