🦀 Functional Rust
🎬 Rust Ownership in 30 seconds
Visual walkthrough of ownership, moves, and automatic memory management.
📝 Text version (for readers / accessibility)

• Each value in Rust has exactly one owner — when the owner goes out of scope, the value is dropped

• Assignment moves ownership by default; the original binding becomes invalid

• Borrowing (&T / &mut T) lets you reference data without taking ownership

• The compiler enforces: many shared references OR one mutable reference, never both

• No garbage collector needed — memory is freed deterministically at scope exit
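
The bullet points above can be condensed into a few lines of code. This is a minimal sketch; the function and variable names are illustrative only.

```rust
// Ownership moves on assignment; borrows grant access without owning.
fn ownership_demo() -> (usize, String) {
    let s = String::from("hello");
    let t = s; // move: ownership transfers to `t`; `s` is now invalid
    // println!("{}", s); // ERROR: borrow of moved value `s`

    let r = &t; // shared borrow: read access without taking ownership
    let len = r.len();

    let mut u = String::from("hi");
    {
        let m = &mut u; // exactly one mutable reference at a time
        m.push('!');
    } // mutable borrow ends here; `u` is usable again
    (len, u)
} // `t` is dropped here deterministically (no GC); `u` moves out to the caller

fn main() {
    let (len, u) = ownership_demo();
    println!("len = {}, u = {}", len, u);
}
```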

109: Arc&lt;T&gt; — Thread-Safe Shared Ownership

Difficulty: 2 · Level: Intermediate

`Arc<T>` shares immutable data across threads safely — atomic reference counting ensures the value lives until every thread is done with it.

The Problem This Solves

Multithreaded programming in C is a minefield. You spawn a thread, pass it a pointer, and the parent thread might free the data before the child thread finishes reading it. The result is a use-after-free: unpredictable crashes, corrupted data, security vulnerabilities. Adding a mutex helps with mutation, but the lifetime problem — ensuring the data outlives all the threads using it — is separate and still manual.

In Java or Go, the GC handles the lifetime: data lives as long as any thread holds a reference. But you pay GC overhead for all your objects, and data races on mutation are still possible unless you remember to reach for Java's `synchronized` or Go's `sync.Mutex`.

Rust's `Arc<T>` (Atomically Reference Counted) gives you exactly what you need: the data lives until the last thread drops its `Arc`. The reference counting is atomic, hence thread-safe. And because `Arc<T>` gives you shared immutable access (`&T` semantics), there are no data races on reads. For mutation, combine it with `Mutex<T>` — the compiler requires it.
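
To see how Rust closes the C lifetime hole at compile time, here is a small sketch (the function name is illustrative): `thread::spawn` demands a `'static` closure, so a plain borrow of local data is rejected, while an `Arc` clone satisfies it.

```rust
use std::sync::Arc;
use std::thread;

// The C-style bug (passing a borrowed pointer to a thread that may
// outlive it) does not compile in Rust:
//
//     let data = vec![1, 2, 3];
//     thread::spawn(|| data.len()); // ERROR: closure may outlive `data`
//
// An Arc clone carries shared ownership into the thread instead.
fn spawned_len(data: Vec<i32>) -> usize {
    let shared = Arc::new(data);
    let for_thread = Arc::clone(&shared);
    let handle = thread::spawn(move || for_thread.len());
    let len = handle.join().unwrap();
    assert_eq!(len, shared.len()); // main's Arc is still valid here
    len
}

fn main() {
    println!("len = {}", spawned_len(vec![1, 2, 3]));
}
```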

The Intuition

`Arc<T>` is `Rc<T>` with atomic (thread-safe) reference counting — clone it to share ownership across threads, and the data is freed only after every thread drops its clone.

How It Works in Rust

use std::sync::Arc;
use std::thread;

fn demo_read_sharing() {
    let data = Arc::new(vec![1, 2, 3, 4, 5]);

    let mut handles = vec![];
    for i in 0..3 {
        let data_clone = Arc::clone(&data); // clone the Arc, not the data
        let handle = thread::spawn(move || {
            // Each thread gets its own Arc pointing to the same Vec
            println!("Thread {}: sum = {}", i, data_clone.iter().sum::<i32>());
        });
        handles.push(handle);
    }

    for h in handles { h.join().unwrap(); }
    println!("Main still has data: {:?}", data); // still valid
}

// For mutation across threads: Arc<Mutex<T>>
use std::sync::Mutex;

fn demo_shared_mutation() {
    let counter = Arc::new(Mutex::new(0));

    let mut handles = vec![];
    for _ in 0..10 {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            let mut n = counter.lock().unwrap(); // lock before writing
            *n += 1;
            // lock released when `n` goes out of scope
        }));
    }

    for h in handles { h.join().unwrap(); }
    println!("Final count: {}", *counter.lock().unwrap()); // 10
}

// The compiler prevents sending non-thread-safe types across threads
// Rc<T> is NOT Send — can't move it to another thread
// Arc<T> IS Send — designed for cross-thread sharing

What This Unlocks

Key Differences

| Concept | OCaml | Rust |
|---|---|---|
| Thread-safe shared data | GC manages lifetime automatically | `Arc<T>` — opt-in atomic reference counting |
| Shared mutation across threads | `Mutex.t` wrapping | `Arc<Mutex<T>>` — type encodes the pattern |
| Preventing data races | Programmer's responsibility | Compiler refuses to send non-`Send` types across threads |
| Difference from single-thread sharing | N/A (GC is always safe) | `Rc` → single-threaded; `Arc` → multi-threaded |
| Performance | GC overhead on all values | Atomic ops only on `Arc` clones/drops; reads are free |
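
The performance row can be observed directly: `Arc::strong_count` exposes the atomic counter, while reads go through a plain pointer dereference. A small sketch (the function name is illustrative):

```rust
use std::sync::Arc;

// Returns (count after new, after clone, after drop, sum of elements).
fn refcount_demo() -> (usize, usize, usize, i32) {
    let a = Arc::new(vec![1, 2, 3]);
    let c1 = Arc::strong_count(&a); // one owner

    let b = Arc::clone(&a); // atomic increment; the Vec is NOT copied
    let c2 = Arc::strong_count(&a);

    drop(b); // atomic decrement; the Vec is freed only when the count hits 0
    let c3 = Arc::strong_count(&a);

    // Reading through the Arc is a plain dereference: no atomic operations.
    let sum: i32 = a.iter().sum();
    (c1, c2, c3, sum)
}

fn main() {
    println!("{:?}", refcount_demo());
}
```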
// Example 109: Arc<T> for Thread-Safe Sharing
//
// Arc = Atomic Reference Counting. Like Rc but thread-safe.
// Use Arc when sharing immutable data across threads.

use std::sync::Arc;
use std::thread;

// Approach 1: Shared data across threads
fn approach1() {
    let data: Arc<Vec<i32>> = Arc::new((1..=100).collect());
    
    let data1 = Arc::clone(&data);
    let handle1 = thread::spawn(move || {
        data1[..50].iter().sum::<i32>()
    });
    
    let data2 = Arc::clone(&data);
    let handle2 = thread::spawn(move || {
        data2[50..].iter().sum::<i32>()
    });
    
    let sum1 = handle1.join().unwrap();
    let sum2 = handle2.join().unwrap();
    let total = sum1 + sum2;
    assert_eq!(total, 5050);
    println!("Total: {}", total);
}

// Approach 2: Map-reduce with threads
fn parallel_map_reduce<T, R, F, G>(data: Vec<T>, mapper: F, reducer: G, init: R) -> R
where
    T: Send + 'static,
    R: Send + 'static + Copy,
    F: Fn(T) -> R + Send + Sync + 'static,
    G: Fn(R, R) -> R,
{
    let mapper = Arc::new(mapper);
    let handles: Vec<_> = data.into_iter().map(|item| {
        let mapper = Arc::clone(&mapper);
        thread::spawn(move || mapper(item))
    }).collect();
    
    let mut result = init;
    for h in handles {
        result = reducer(result, h.join().unwrap());
    }
    result
}

fn approach2() {
    let data = vec![1, 2, 3, 4, 5];
    let result = parallel_map_reduce(data, |x| x * x, |a, b| a + b, 0);
    assert_eq!(result, 55);
    println!("Sum of squares: {}", result);
}

// Approach 3: Shared config across worker threads
fn approach3() {
    let texts = Arc::new(vec![
        "hello world".to_string(),
        "foo bar baz".to_string(),
        "one".to_string(),
    ]);
    
    let mut handles = vec![];
    for i in 0..texts.len() {
        let texts = Arc::clone(&texts);
        handles.push(thread::spawn(move || {
            texts[i].split_whitespace().count()
        }));
    }
    
    let total: usize = handles.into_iter()
        .map(|h| h.join().unwrap())
        .sum();
    assert_eq!(total, 6);
    println!("Total words: {}", total);
}

fn main() {
    println!("=== Approach 1: Parallel Sum ===");
    approach1();
    println!("\n=== Approach 2: Map-Reduce ===");
    approach2();
    println!("\n=== Approach 3: Shared Config ===");
    approach3();
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_arc_across_threads() {
        let data = Arc::new(42);
        let data2 = Arc::clone(&data);
        let handle = thread::spawn(move || *data2);
        assert_eq!(handle.join().unwrap(), 42);
    }

    #[test]
    fn test_arc_strong_count() {
        let a = Arc::new("hello");
        let b = Arc::clone(&a);
        assert_eq!(Arc::strong_count(&a), 2);
        drop(b);
        assert_eq!(Arc::strong_count(&a), 1);
    }

    #[test]
    fn test_parallel_map_reduce() {
        let result = parallel_map_reduce(vec![1, 2, 3], |x| x * 2, |a, b| a + b, 0);
        assert_eq!(result, 12);
    }

    #[test]
    fn test_shared_vec_threads() {
        let v = Arc::new(vec![10, 20, 30]);
        let handles: Vec<_> = (0..3).map(|i| {
            let v = Arc::clone(&v);
            thread::spawn(move || v[i])
        }).collect();
        let sum: i32 = handles.into_iter().map(|h| h.join().unwrap()).sum();
        assert_eq!(sum, 60);
    }
}
(* Example 109: Arc<T> — Thread-Safe Sharing *)

(* OCaml's GC keeps shared data alive for as long as any thread
   (or Domain, in OCaml 5) references it, so lifetime management
   just works. The approaches below run sequentially for portability. *)

(* Approach 1: Shared immutable data across "workers" *)
let process_chunk data start len =
  let sum = ref 0 in
  for i = start to start + len - 1 do
    sum := !sum + data.(i)
  done;
  !sum

let approach1 () =
  let data = Array.init 100 (fun i -> i + 1) in
  let sum1 = process_chunk data 0 50 in
  let sum2 = process_chunk data 50 50 in
  let total = sum1 + sum2 in
  assert (total = 5050);
  Printf.printf "Total: %d\n" total

(* Approach 2: Map-reduce pattern *)
let map_reduce mapper reducer init data =
  let mapped = List.map mapper data in
  List.fold_left reducer init mapped

let approach2 () =
  let data = [1; 2; 3; 4; 5] in
  let result = map_reduce (fun x -> x * x) ( + ) 0 data in
  assert (result = 55);
  Printf.printf "Sum of squares: %d\n" result

(* Approach 3: Parallel word count simulation *)
let count_words text =
  let words = String.split_on_char ' ' text in
  List.length (List.filter (fun w -> String.length w > 0) words)

let approach3 () =
  let texts = ["hello world"; "foo bar baz"; "one"] in
  let counts = List.map count_words texts in
  let total = List.fold_left ( + ) 0 counts in
  assert (total = 6);
  Printf.printf "Total words: %d\n" total

let () =
  approach1 ();
  approach2 ();
  approach3 ();
  Printf.printf "✓ All tests passed\n"

📊 Detailed Comparison

Comparison: Arc Thread-Safe Sharing

Sharing Data Across Workers

OCaml:

let data = Array.init 100 (fun i -> i + 1) in
(* Just use data in any domain — GC handles sharing *)
let sum = process_chunk data 0 50

Rust:

let data = Arc::new((1..=100).collect::<Vec<i32>>());
let data_clone = Arc::clone(&data);
let handle = thread::spawn(move || {
    data_clone[..50].iter().sum::<i32>()
});

Why Not Rc?

Rust — `Rc` is not `Send`:

let data = Rc::new(42);
// thread::spawn(move || *data); // ERROR: Rc is not Send
let data = Arc::new(42);         // Use Arc instead
thread::spawn(move || *data);    // OK

Shared Function Across Threads

OCaml:

let mapper x = x * x in
List.map mapper data  (* closures shared freely *)

Rust:

let mapper = Arc::new(|x: i32| x * x);
let m = Arc::clone(&mapper);
thread::spawn(move || m(42));