🦀 Functional Rust
🎬 Fearless Concurrency — Threads, Arc<Mutex<T>>, channels: safe parallelism enforced by the compiler.
📝 Text version (for readers / accessibility)

• std::thread::spawn creates OS threads — closures must be Send + 'static

• Arc<Mutex<T>> provides shared mutable state across threads safely

• Channels (mpsc) enable message passing — multiple producers, single consumer

• Send and Sync marker traits enforce thread safety at compile time

• Data races are impossible — the type system prevents them before your code runs
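As a minimal illustration of the mpsc bullet above, here is a sketch (the `collect_ids` helper name is mine, not from the text) in which several producer threads send over clones of a single channel's sender:

```rust
use std::sync::mpsc;
use std::thread;

// Spawn `n` producer threads that each send their id over one channel.
// `collect_ids` is an illustrative name, not part of the std API.
pub fn collect_ids(n: i32) -> Vec<i32> {
    let (tx, rx) = mpsc::channel();
    for id in 0..n {
        let tx = tx.clone(); // each producer gets its own Sender
        thread::spawn(move || {
            tx.send(id).unwrap(); // `id` moves through the channel
        });
    }
    drop(tx); // drop the original Sender so `rx.iter()` can terminate
    let mut ids: Vec<i32> = rx.iter().collect();
    ids.sort(); // arrival order is nondeterministic
    ids
}

fn main() {
    println!("{:?}", collect_ids(3));
}
```

Dropping the original `tx` matters: the receiver's iterator only ends once every `Sender` clone has been dropped.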

342: Arc<Mutex<T>> Pattern

Difficulty: 3 | Level: Advanced

Thread-safe shared mutable state: `Arc` gives shared ownership across threads, `Mutex` ensures only one thread modifies at a time.

The Problem This Solves

Sometimes you genuinely need multiple threads to share and modify the same piece of data — a counter, a cache, a connection pool. Channels work when you can model the problem as message passing, but not everything fits that mold. A shared hit counter updated by dozens of concurrent request handlers needs shared mutable state. Rust's ownership model normally prevents this entirely: one owner, or many readers, but never multiple mutable references. `Arc<Mutex<T>>` is the escape hatch — you explicitly opt into shared mutability with guaranteed safety. The compiler forces you to lock before accessing the data, making data races impossible: if you forget to lock, your code won't compile. The pattern appears everywhere in concurrent Rust: web servers maintaining connection state, databases with connection pools, caches shared across request handlers.

The Intuition

Two separate problems, two separate types:

`Arc<T>` — shared ownership. Multiple threads need to hold the same data. `Rc<T>` does this for single-threaded code, but isn't safe across threads. `Arc` (Atomically Reference Counted) uses atomic operations for the reference count, making it thread-safe. When the last `Arc` is dropped, the allocation is freed.

`Mutex<T>` — exclusive access. Multiple threads need to modify the same data, but not simultaneously. `Mutex` is a lock: only one thread can hold it at a time. In Rust, the data lives inside the mutex — you can't access it without locking. This is different from many other languages, where the mutex and the data are separate.
# Python: lock and data are separate — easy to access data without locking
self.lock = threading.Lock()
self.data = []
# Oops: self.data.append(x) — forgot to lock, race condition

# Rust: data is inside the mutex — can't access without locking
let shared = Arc::new(Mutex::new(Vec::new()));
shared.lock().unwrap().push(x);  // lock() required, no way around it
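To make the `Arc` half concrete, here is a minimal sketch (the `counts` helper is an illustrative name of my own) showing that `Arc::clone` only bumps an atomic reference count — no data is copied:

```rust
use std::sync::Arc;

// Report the strong count before, during, and after a clone.
// `counts` is an illustrative helper, not part of the pattern itself.
pub fn counts() -> (usize, usize, usize) {
    let a = Arc::new(String::from("shared"));
    let before = Arc::strong_count(&a); // 1: only `a` owns the allocation
    let b = Arc::clone(&a);             // cheap: increments the atomic count
    let during = Arc::strong_count(&a); // 2: `a` and `b`
    drop(b);                            // decrement; allocation still alive
    let after = Arc::strong_count(&a);  // back to 1
    (before, during, after)
}

fn main() {
    assert_eq!(counts(), (1, 2, 1));
}
```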

How It Works in Rust

use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    let counter = Arc::new(Mutex::new(0i32));

    let handles: Vec<_> = (0..10).map(|_| {
        let counter = Arc::clone(&counter);  // clone the Arc (cheap: just ref count++)
        thread::spawn(move || {
            let mut val = counter.lock().unwrap();  // lock — blocks if another thread holds it
            *val += 1;
            // lock released here when `val` (MutexGuard) is dropped
        })
    }).collect();

    for h in handles { h.join().unwrap(); }

    println!("Final: {}", *counter.lock().unwrap());  // prints 10
}
`counter.lock().unwrap()` returns a `MutexGuard<T>` — a smart pointer that dereferences to `T` and releases the lock when dropped (RAII). The `unwrap()` handles the poisoned-mutex case: if a thread panicked while holding the lock, future `lock()` calls return `Err`. Avoid holding the guard across an `.await` in async code — that blocks the entire executor thread. In async contexts, use `tokio::sync::Mutex` instead of `std::sync::Mutex`.
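The poisoning behavior can be observed directly. This sketch (the `recover_after_panic` helper is my own naming) panics while holding the lock, then recovers the data with `PoisonError::into_inner`:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Panic while holding the lock, then recover the poisoned mutex.
pub fn recover_after_panic() -> i32 {
    let data = Arc::new(Mutex::new(0));
    let d = Arc::clone(&data);
    let _ = thread::spawn(move || {
        let mut guard = d.lock().unwrap();
        *guard += 1;        // this write survives the panic
        panic!("boom");     // unwinding drops the guard and poisons the mutex
    })
    .join();                // Err(_) because the thread panicked; ignored here

    // lock() now returns Err(PoisonError); into_inner() hands back the guard.
    let guard = data.lock().unwrap_or_else(|poisoned| poisoned.into_inner());
    *guard
}

fn main() {
    assert_eq!(recover_after_panic(), 1);
}
```

Poisoning is a signal, not a hard failure: the data may be in a half-updated state, and `into_inner` lets you decide whether it is still usable.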


Key Differences

| Concept | OCaml | Rust |
|---|---|---|
| Shared ownership | `ref` / `Hashtbl` / manual | `Arc<T>` — atomic ref count |
| Mutual exclusion | `Mutex.create ()` (separate from data) | `Mutex<T>` (data inside the lock) |
| Lock acquisition | `Mutex.lock m` | `arc.lock().unwrap()` → `MutexGuard<T>` |
| Auto-unlock | explicit `Mutex.unlock m` | automatic on `MutexGuard` drop (RAII) |
| Shared mutable state | `ref` with manual synchronization | `Arc<Mutex<T>>` — compiler-enforced |
//! # Arc<Mutex<T>> Pattern
//! Thread-safe shared mutable state: Arc gives shared ownership, Mutex ensures exclusive access.

use std::sync::{Arc, Mutex};
use std::thread;

pub fn shared_counter(num_threads: usize) -> i32 {
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..num_threads).map(|_| {
        let c = Arc::clone(&counter);
        thread::spawn(move || { *c.lock().unwrap() += 1; })
    }).collect();
    for h in handles { h.join().unwrap(); }
    *counter.lock().unwrap()
}

pub struct ThreadSafeCache<T> { data: Arc<Mutex<Vec<T>>> }

impl<T: Clone> ThreadSafeCache<T> {
    pub fn new() -> Self { Self { data: Arc::new(Mutex::new(Vec::new())) } }
    pub fn push(&self, item: T) { self.data.lock().unwrap().push(item); }
    pub fn get_all(&self) -> Vec<T> { self.data.lock().unwrap().clone() }
    pub fn len(&self) -> usize { self.data.lock().unwrap().len() }
}

impl<T: Clone> Default for ThreadSafeCache<T> {
    fn default() -> Self { Self::new() }
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test] fn counter_works() { assert_eq!(shared_counter(10), 10); }
    #[test] fn cache_thread_safe() {
        let cache = Arc::new(ThreadSafeCache::<i32>::new());
        let handles: Vec<_> = (0..5).map(|i| {
            let c = Arc::clone(&cache);
            thread::spawn(move || c.push(i))
        }).collect();
        for h in handles { h.join().unwrap(); }
        assert_eq!(cache.len(), 5);
    }
}