🦀 Functional Rust
🎬 Fearless Concurrency: Threads, Arc<Mutex<T>>, channels — safe parallelism enforced by the compiler.
📝 Text version (for readers / accessibility)

• std::thread::spawn creates OS threads — closures must be Send + 'static

• Arc<Mutex<T>> provides shared mutable state across threads safely

• Channels (mpsc) enable message passing — multiple producers, single consumer

• Send and Sync marker traits enforce thread safety at compile time

• Data races are impossible — the type system prevents them before your code runs
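To recap the message-passing point above, here is a minimal sketch using `std::sync::mpsc` from the standard library (the specific values sent are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // mpsc = multiple producers, single consumer.
    let (tx, rx) = mpsc::channel();

    // Each producer thread gets its own clone of the sender.
    for id in 0..3 {
        let tx = tx.clone();
        thread::spawn(move || {
            // Sending transfers ownership of the value to the receiver.
            tx.send(id).unwrap();
        });
    }
    // Drop the original sender so the channel closes once producers finish.
    drop(tx);

    // The single consumer drains messages until every sender is gone.
    let mut received: Vec<i32> = rx.iter().collect();
    received.sort();
    assert_eq!(received, vec![0, 1, 2]);
    println!("received: {received:?}");
}
```

Note that `rx.iter()` blocks until all senders are dropped, which is why dropping the original `tx` matters: without it, the loop would wait forever.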

344: Structured Concurrency

Difficulty: 4 | Level: Expert

All spawned tasks are guaranteed to complete before the scope exits — no task leaks, no dangling work.

The Problem This Solves

Traditional thread spawning with `thread::spawn` produces a "detached" thread — it runs independently, with no guarantee about when it finishes. If the spawning code exits or panics before joining the thread, you have a race condition. Forgetting to `join()` a thread means it may still be running after the function that created it returns. With many spawned tasks, tracking all the handles becomes error-prone.

Structured concurrency is a different model: spawned tasks are scoped to a lexical block. When the block exits, all tasks in that scope are guaranteed to have completed. This makes concurrent code as easy to reason about as sequential code — you know exactly what's running at any point, and you can borrow from the enclosing scope safely (the borrow checker verifies this).

`thread::scope` has been Rust's built-in structured concurrency primitive since Rust 1.63. In async Rust, `tokio::task::JoinSet` and the `FuturesUnordered` stream provide similar guarantees.
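A minimal sketch of the unstructured model described above — `thread::spawn` hands back a `JoinHandle` that nothing forces the caller to join:

```rust
use std::thread;

fn main() {
    // `spawn` returns a JoinHandle, but if we simply drop it the thread
    // keeps running detached, with no guarantee about when it finishes.
    let handle = thread::spawn(|| {
        // The closure must be 'static: it cannot borrow local data,
        // because the thread may outlive the frame that spawned it.
        let owned = String::from("detached work");
        owned.len()
    });

    // Forgetting this line would leave the thread dangling past this point.
    let len = handle.join().unwrap();
    println!("worker returned {len}");
}
```

The `'static` bound is the compiler's way of acknowledging that the thread's lifetime is unbounded — exactly the constraint `thread::scope` lifts.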

The Intuition

Like Python's Trio nursery:
async with trio.open_nursery() as nursery:
 nursery.start_soon(task_a)
 nursery.start_soon(task_b)
# Both tasks are done here — guaranteed
Or Java's structured concurrency (JEP 428):
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
 Future<A> fa = scope.fork(taskA);
 Future<B> fb = scope.fork(taskB);
 scope.join();
}
`thread::scope` in Rust gives the same guarantee, enforced at compile time through lifetimes.

How It Works in Rust

let results: Mutex<Vec<String>> = Mutex::new(Vec::new());

thread::scope(|s| {
 // Spawn threads that borrow from the outer scope
 s.spawn(|| {
     results.lock().unwrap().push("task-A".to_string());
 });
 s.spawn(|| {
     results.lock().unwrap().push("task-B".to_string());
 });
 // All threads are joined HERE — the scope blocks until all finish
});

// Safe to access results — all threads are definitely done
let mut r = results.lock().unwrap();
r.sort();
The key insight: threads spawned inside `thread::scope` can borrow from the enclosing stack frame because the scope guarantees they finish before the enclosing frame exits. The borrow checker verifies this — something impossible with `thread::spawn`, which requires `'static` bounds.

Nested scopes work too: the outer scope waits for inner scopes, creating a task hierarchy with clear parent-child relationships.
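To make the contrast concrete, a short sketch of the same shared counter both ways: with `thread::spawn` the `'static` bound forces `Arc` ownership, while `thread::scope` lets the closure borrow a local directly:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // With plain spawn, the closure must be 'static, so shared state
    // has to be owned via Arc — a direct borrow would not compile.
    let shared = Arc::new(Mutex::new(0));
    let handle = {
        let shared = Arc::clone(&shared);
        thread::spawn(move || *shared.lock().unwrap() += 1)
    };
    handle.join().unwrap();

    // With thread::scope, the closure may borrow `local` directly:
    // the scope guarantees the thread ends before `local` is dropped.
    let local = Mutex::new(0);
    thread::scope(|s| {
        s.spawn(|| *local.lock().unwrap() += 1);
    });

    assert_eq!(*shared.lock().unwrap(), 1);
    assert_eq!(*local.lock().unwrap(), 1);
    println!("both counters incremented");
}
```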

What This Unlocks

Key Differences

| Concept | OCaml | Rust |
|---|---|---|
| Structured task scope | `Lwt.join` / manual tracking | `thread::scope` (1.63+) / `tokio::task::JoinSet` |
| Lifetime guarantee | Convention only | Enforced by borrow checker |
| Borrow from parent | Must clone or `Arc`-wrap | Direct borrow — scope guarantees lifetime |
| Nursery equivalent | N/A in stdlib | `thread::scope` for threads, `JoinSet` for tasks |

use std::thread;
use std::sync::Mutex;

fn main() {
    let results: Mutex<Vec<String>> = Mutex::new(Vec::new());

    // thread::scope: all threads MUST finish before scope ends
    // The compiler enforces this — no task leaks!
    thread::scope(|s| {
        s.spawn(|| {
            // Can borrow from outer scope — lifetime guaranteed
            results.lock().unwrap().push("task-A".to_string());
        });

        s.spawn(|| {
            results.lock().unwrap().push("task-B".to_string());
        });

        s.spawn(|| {
            results.lock().unwrap().push("task-C".to_string());
        });
        // All threads joined here automatically
    });

    let mut r = results.lock().unwrap();
    r.sort();
    println!("All tasks completed: {r:?}");

    // Nested scopes: structured hierarchy
    let outer_data = vec![1i32, 2, 3, 4, 5];
    let partial_sums: Mutex<Vec<i32>> = Mutex::new(Vec::new());

    thread::scope(|outer| {
        for chunk in outer_data.chunks(2) {
            let partial_sums = &partial_sums;
            outer.spawn(move || {
                // Inner scope: further parallelism within each chunk task
                let sum: i32 = thread::scope(|inner| {
                    let halves: Vec<_> = chunk.iter()
                        .map(|&x| inner.spawn(move || x * 2))
                        .collect();
                    halves.into_iter().map(|h| h.join().unwrap()).sum()
                });
                partial_sums.lock().unwrap().push(sum);
            });
        }
    });

    let mut sums = partial_sums.lock().unwrap().clone();
    sums.sort();
    println!("Partial sums: {sums:?}");
}

#[cfg(test)]
mod tests {
    use super::*;
    #[test]
    fn scope_all_tasks_run() {
        let counter = Mutex::new(0);
        thread::scope(|s| {
            for _ in 0..5 {
                s.spawn(|| { *counter.lock().unwrap() += 1; });
            }
        });
        assert_eq!(*counter.lock().unwrap(), 5);
    }
    #[test]
    fn scope_can_borrow_outer() {
        let data = vec![1, 2, 3];
        let sum = Mutex::new(0);
        thread::scope(|s| {
            s.spawn(|| {
                *sum.lock().unwrap() = data.iter().sum::<i32>();
            });
        });
        assert_eq!(*sum.lock().unwrap(), 6);
    }
}
(* OCaml: structured concurrency via nested thread management *)

let run_scoped tasks =
  let handles = List.map (fun f -> Thread.create f ()) tasks in
  List.iter Thread.join handles

let with_resources setup teardown f =
  let r = setup () in
  (try f r with e -> teardown r; raise e);
  teardown r

let () =
  run_scoped [
    (fun () ->
      Thread.delay 0.01;
      Printf.printf "Task A done\n");
    (fun () ->
      Thread.delay 0.005;
      Printf.printf "Task B done\n");
    (fun () ->
      Printf.printf "Task C done\n");
  ];
  Printf.printf "All tasks in scope completed\n"