🦀 Functional Rust
🎬 Fearless Concurrency. Threads, `Arc<Mutex<T>>`, and channels: safe parallelism enforced by the compiler.
๐Ÿ“ Text version (for readers / accessibility)

• `std::thread::spawn` creates OS threads; closures must be `Send + 'static`

• `Arc<Mutex<T>>` provides shared mutable state across threads safely

• Channels (`mpsc`) enable message passing: multiple producers, single consumer

• `Send` and `Sync` marker traits enforce thread safety at compile time

• Data races are impossible: the type system prevents them before your code runs

923: Thread Pool

Difficulty: 4 | Level: Expert

Spawn N threads once, reuse them for many tasks: eliminate thread creation overhead and bound total resource usage.

The Problem This Solves

Spawning an OS thread for every task sounds reasonable until you're handling 10,000 requests per second. Thread creation costs 10–100 µs and ~1–8 MB of stack, so spawning a new thread per task becomes the bottleneck: you spend more time creating and destroying threads than doing actual work. Thread pools fix this by amortizing thread creation: spawn N threads at startup, keep them alive forever, and send work to them via a channel. A thread finishes one task and immediately starts the next. No creation overhead, predictable memory usage, controllable parallelism.

The pool size is usually tied to CPU count (`num_cpus::get()` or `std::thread::available_parallelism()`). More threads than cores doesn't help for CPU-bound work: you'd just be context switching. For I/O-bound work, you can go higher.

The Intuition

Thread pools exist in every language's standard library or ecosystem because the pattern is so fundamental. Writing one by hand in Rust is instructive because it shows exactly how thread pools work: a channel is the job queue, and each worker thread is a loop pulling from that queue. The "pool" is just N threads all reading from the same channel.
Main thread:    [job1] [job2] [job3] [job4] [job5] [job6]
                ↓ channel ↓
Worker 1:     job1 ──── job4 ──────────
Worker 2:     job2 ─── job5 ──────────  (run in parallel)
Worker 3:     job3 ──────── job6 ──────

How It Works in Rust

use std::sync::{mpsc, Arc, Mutex};
use std::thread;

type Job = Box<dyn FnOnce() + Send + 'static>;

struct ThreadPool {
    workers: Vec<thread::JoinHandle<()>>,
    // Option so Drop can take the sender and drop it before joining.
    sender: Option<mpsc::Sender<Job>>,
}

impl ThreadPool {
    fn new(size: usize) -> Self {
        let (sender, receiver) = mpsc::channel::<Job>();
        // Wrap the receiver in Arc<Mutex<..>> so all workers can share it
        let receiver = Arc::new(Mutex::new(receiver));

        let workers = (0..size)
            .map(|id| {
                let rx = Arc::clone(&receiver);
                thread::spawn(move || loop {
                    // Lock to receive a job, then release the lock before running it
                    let job = rx.lock().unwrap().recv();
                    match job {
                        Ok(job) => {
                            println!("worker {id} running job");
                            job();
                        }
                        Err(_) => break, // sender dropped: shut down
                    }
                })
            })
            .collect();

        Self { workers, sender: Some(sender) }
    }

    fn execute(&self, job: impl FnOnce() + Send + 'static) {
        self.sender
            .as_ref()
            .expect("pool is shutting down")
            .send(Box::new(job))
            .unwrap();
    }
}

impl Drop for ThreadPool {
    fn drop(&mut self) {
        // Drop the sender first: workers receive Err and break out of their loops
        drop(self.sender.take());
        // Then wait for each worker to finish its current job
        for handle in self.workers.drain(..) {
            handle.join().unwrap();
        }
    }
}
The `Arc<Mutex<Receiver>>` is the key insight: multiple threads share one receiver, but the Mutex ensures only one thread calls `recv()` at a time. Whichever thread gets the job runs it; the others wait for the next one. Releasing the lock before running the job is important: in `let job = rx.lock().unwrap().recv();` the `MutexGuard` is a temporary that is dropped at the end of the statement, so the lock is free while `job()` runs. If you held the lock during execution, only one worker could run at a time, defeating the purpose. (This is why the worker uses `loop` with an explicit `let` rather than `while let Ok(job) = rx.lock().unwrap().recv()`, which would keep the guard alive for the whole loop body.)

What This Unlocks

Key Differences

| Concept | OCaml | Rust |
|---|---|---|
| Thread pool | `Domain_pool` (OCaml 5) or Domainslib | manual or `rayon::ThreadPool` |
| Shared work queue | `Mutex` + `Queue.t` | `Arc<Mutex<Receiver<Job>>>` |
| Job type | `unit -> unit` function | `Box<dyn FnOnce() + Send + 'static>` |
| Graceful shutdown | manual signal | drop `Sender` → workers exit loop |
| Parallel iterators | Parmap | `rayon::par_iter()` (uses thread pool) |

Versions

| Directory | Description |
|---|---|
| `std/` | Standard library version using `std::sync`, `std::thread` |
| `tokio/` | Tokio async runtime version using `tokio::sync`, `tokio::spawn` |

Running

# Standard library version
cd std && cargo test

# Tokio version
cd tokio && cargo test

📊 Detailed Comparison

923-thread-pool: Language Comparison

std vs tokio

| Aspect | std version | tokio version |
|---|---|---|
| Runtime | OS threads via `std::thread` | Async tasks on tokio runtime |
| Synchronization | `std::sync::Mutex`, `Condvar` | `tokio::sync::Mutex`, channels |
| Channels | `std::sync::mpsc` (unbounded) | `tokio::sync::mpsc` (bounded, async) |
| Blocking | Thread blocks on lock/recv | Task yields, runtime switches tasks |
| Overhead | One OS thread per task | Many tasks per thread (M:N) |
| Best for | CPU-bound, simple concurrency | I/O-bound, high-concurrency servers |