🦀 Functional Rust
🎬 Closures in Rust: Fn/FnMut/FnOnce, capturing the environment, move closures, higher-order functions.
📝 Text version (for readers / accessibility)

• Closures capture variables from their environment by shared reference, mutable reference, or by value (`move`)

• Three traits: Fn (shared borrow), FnMut (mutable borrow), FnOnce (takes ownership)

• Higher-order functions like `.map()`, `.filter()`, and `.fold()` accept closures as arguments

• `move` closures take ownership of captured variables, which is essential for threading

• Closures enable functional patterns: partial application, composition, and strategy
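The rules above can be sketched in a small standalone program. The helper names (`call_fn`, `call_fn_mut`, `call_fn_once`) are illustrative, not from any library:

```rust
// Each helper constrains its argument to one of the three closure traits.
fn call_fn(f: impl Fn() -> i32) -> i32 { f() }
fn call_fn_mut(mut f: impl FnMut()) { f(); f(); }
fn call_fn_once(f: impl FnOnce() -> String) -> String { f() }

fn main() {
    let x = 10;
    // Fn: captures `x` by shared reference, callable any number of times
    assert_eq!(call_fn(|| x + 1), 11);

    let mut count = 0;
    // FnMut: captures `count` by mutable reference; called twice above
    call_fn_mut(|| count += 1);
    assert_eq!(count, 2);

    let s = String::from("owned");
    // FnOnce: the `move` closure consumes `s`, so it can run only once
    assert_eq!(call_fn_once(move || s), "owned");

    // Higher-order functions: .map/.filter/.fold all take closures
    let total: i32 = (1..=5)
        .map(|n| n * n)
        .filter(|n| n % 2 == 1)
        .fold(0, |acc, n| acc + n);
    assert_eq!(total, 35); // 1 + 9 + 25
    println!("total: {total}");
}
```

Every `Fn` is also `FnMut`, and every `FnMut` is also `FnOnce`, so the most permissive bound you can accept is usually the right one.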

515: Lazy Evaluation with OnceLock

Difficulty: 3 · Level: Intermediate

Initialize a value exactly once, on first access, and cache it forever.

The Problem This Solves

Some values are expensive to compute but only sometimes needed, and when they are needed, they are needed many times. Computing them eagerly wastes resources if they're never used; recomputing them on every access wastes time. The solution is lazy initialization: compute once on first access, cache the result, and return the cached value on all subsequent accesses.

In Rust, global statics can't be initialized with arbitrary runtime expressions. `OnceLock<T>` solves this: declare a `static` of type `OnceLock<T>` and call `.get_or_init(|| ...)` to compute and store the value the first time it's accessed. Every subsequent call returns the cached value instantly. No `unsafe`, no external crates, and thread-safe by design.

The same pattern works for struct fields. A `LazyConfig` that parses its raw string only when `.items()` is first called, not in the constructor, uses `OnceLock` fields. The parse cost is paid once, on demand, and the result is cached for the struct's lifetime.

The Intuition

Think of `OnceLock<T>` as a box with a one-way lock. It starts empty. The first caller of `get_or_init` runs the initializer, puts the result in the box, and locks it. Everyone after that just reads from the box; the lock prevents any second initialization. The box can never go back to empty.

How It Works in Rust

1. Global static: `static VALUE: OnceLock<T> = OnceLock::new();` declares an uninitialized global.
2. `get_or_init`: `.get_or_init(|| expensive_computation())` initializes on the first call and returns a `&T`; subsequent calls return the same `&T` instantly.
3. Struct fields: embed `OnceLock<T>` in a struct and initialize it in a method that takes `&self`; the field becomes a lazily computed, cached property.
4. Thread safety: `OnceLock<T>` is `Sync`; multiple threads can race on `get_or_init`, but only one runs the initializer; the others wait and then read the result.
5. `Arc<T>` compatible: wrap a struct containing `OnceLock` in `Arc` for multi-threaded sharing; initialization is safe across threads.
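Point 4 can be demonstrated directly. This sketch (the names `INIT_RUNS`, `SHARED`, and `shared_value` are illustrative) has eight threads race on a global `OnceLock` while an atomic counter records how many times the initializer actually ran:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::OnceLock;
use std::thread;

// Counts how many times the initializer closure executes
static INIT_RUNS: AtomicUsize = AtomicUsize::new(0);
static SHARED: OnceLock<u64> = OnceLock::new();

fn shared_value() -> u64 {
    *SHARED.get_or_init(|| {
        INIT_RUNS.fetch_add(1, Ordering::SeqCst);
        (1..=1000u64).sum::<u64>() // stand-in for expensive work
    })
}

fn main() {
    // Eight threads all hit get_or_init at roughly the same time
    let handles: Vec<_> = (0..8).map(|_| thread::spawn(shared_value)).collect();
    let results: Vec<u64> = handles.into_iter().map(|h| h.join().unwrap()).collect();

    // Every thread sees the same cached value...
    assert!(results.iter().all(|&r| r == 500_500));
    // ...and the initializer ran exactly once despite the race
    assert_eq!(INIT_RUNS.load(Ordering::SeqCst), 1);
    println!("initializer runs: {}", INIT_RUNS.load(Ordering::SeqCst));
}
```

Losing threads block inside `get_or_init` until the winner finishes, then return the winner's result; none of them re-run the closure.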

Key Differences

| Concept | OCaml | Rust |
|---|---|---|
| Lazy evaluation | Built-in `lazy` keyword and `Lazy.t` in the standard library; GC-managed | `OnceLock<T>` in std; `once_cell::Lazy<T>` crate for more ergonomics |
| Global lazy values | `let x = lazy (...)` | `static X: OnceLock<T> = OnceLock::new()` + `get_or_init` |
| Thread safety | GC handles memory; `Mutex` for concurrency | `OnceLock` is `Sync`; initializes exactly once across threads |
| Lazy struct fields | Mutable record with an option field, managed manually | `OnceLock<T>` field, computed in `&self` methods |
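Besides the `once_cell` crate mentioned in the table, recent toolchains (Rust 1.80+) ship `std::sync::LazyLock`, which bundles the initializer into the declaration so call sites don't repeat the closure. A minimal sketch, assuming a 1.80+ toolchain; `SQUARES` is an illustrative name:

```rust
use std::sync::LazyLock;

// The initializer lives in the declaration; the first deref runs it,
// every later deref reuses the cached Vec.
static SQUARES: LazyLock<Vec<u64>> =
    LazyLock::new(|| (1..=10u64).map(|n| n * n).collect());

fn main() {
    assert_eq!(SQUARES.len(), 10);
    assert_eq!(SQUARES[9], 100);
    println!("last square: {}", SQUARES[9]);
}
```

Prefer `OnceLock` when different call sites may supply different initializers, and `LazyLock` when there is exactly one natural way to build the value.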
//! # 515. Lazy Evaluation with OnceLock
//! Deferred computation using std::sync::OnceLock.

use std::sync::OnceLock;

/// Global lazy value, initialized once on first access
static EXPENSIVE_VALUE: OnceLock<i64> = OnceLock::new();

fn get_expensive_value() -> i64 {
    *EXPENSIVE_VALUE.get_or_init(|| {
        println!("[computing expensive global value...]");
        (1..=1_000_000i64).sum()
    })
}

/// Lazy struct: computes fields only when accessed
struct LazyConfig {
    raw: String,
    // OnceLock lets us cache parsed values without RefCell
    parsed_items: OnceLock<Vec<String>>,
    item_count: OnceLock<usize>,
}

impl LazyConfig {
    fn new(raw: &str) -> Self {
        LazyConfig {
            raw: raw.to_string(),
            parsed_items: OnceLock::new(),
            item_count: OnceLock::new(),
        }
    }

    fn items(&self) -> &[String] {
        self.parsed_items.get_or_init(|| {
            println!("[parsing config...]");
            self.raw.split(',')
                .map(|s| s.trim().to_string())
                .collect()
        })
    }

    fn count(&self) -> usize {
        *self.item_count.get_or_init(|| {
            println!("[counting items...]");
            self.items().len()
        })
    }
}

/// Per-instance OnceLock for lazy computation on structs
struct ExpensiveComputation {
    data: Vec<i32>,
    result: OnceLock<i32>,
}

impl ExpensiveComputation {
    fn new(data: Vec<i32>) -> Self {
        ExpensiveComputation { data, result: OnceLock::new() }
    }

    fn compute(&self) -> i32 {
        *self.result.get_or_init(|| {
            println!("[running expensive computation on {} items]", self.data.len());
            self.data.iter().map(|&x| x * x).sum()
        })
    }
}

fn main() {
    // Global lazy value
    println!("Before first access:");
    let v1 = get_expensive_value();
    let v2 = get_expensive_value(); // instant: already computed
    println!("Value (twice): {} == {}", v1, v2);

    // Lazy struct fields
    println!("\nLazy config:");
    let config = LazyConfig::new("alpha, beta, gamma, delta");
    println!("Accessing count (triggers parse)...");
    println!("Count: {}", config.count());
    println!("Count again (cached): {}", config.count());
    println!("Items: {:?}", config.items()); // already parsed

    // Per-instance lazy computation
    println!("\nPer-instance lazy:");
    let comp = ExpensiveComputation::new(vec![1, 2, 3, 4, 5]);
    println!("First call:");
    println!("Result: {}", comp.compute()); // 1+4+9+16+25=55
    println!("Second call (cached):");
    println!("Result: {}", comp.compute());

    // Thread-safe: OnceLock works across threads
    let comp = std::sync::Arc::new(ExpensiveComputation::new(vec![10, 20, 30]));
    let comp2 = comp.clone();
    let h = std::thread::spawn(move || comp2.compute());
    let r1 = comp.compute();
    let r2 = h.join().unwrap();
    println!("\nThread-safe: {} == {}", r1, r2);
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_global_once_lock() {
        let v1 = get_expensive_value();
        let v2 = get_expensive_value();
        assert_eq!(v1, v2);
        assert_eq!(v1, 500_000_500_000);
    }

    #[test]
    fn test_lazy_config() {
        let c = LazyConfig::new("a, b, c");
        assert_eq!(c.count(), 3);
        assert_eq!(c.items(), &["a", "b", "c"]);
    }

    #[test]
    fn test_lazy_computation() {
        let c = ExpensiveComputation::new(vec![3, 4]);
        assert_eq!(c.compute(), 25); // 9 + 16
        assert_eq!(c.compute(), 25); // cached
    }
}
(* Lazy evaluation in OCaml using the built-in lazy keyword *)

(* lazy: defer computation *)
let expensive_value = lazy (
  Printf.printf "[computing expensive value...]\n";
  let sum = ref 0 in
  for i = 1 to 1000000 do sum := !sum + i done;
  !sum
)

(* Memoized fibonacci: the cache ensures each value is computed at most once *)
let make_lazy_fib () =
  let cache = Hashtbl.create 100 in
  let rec fib n =
    match Hashtbl.find_opt cache n with
    | Some v -> v
    | None ->
      let v = if n <= 1 then n else fib (n-1) + fib (n-2) in
      Hashtbl.add cache n v; v
  in
  fib

let () =
  Printf.printf "Before forcing lazy value\n";
  Printf.printf "Value: %d\n" (Lazy.force expensive_value);
  Printf.printf "Value again: %d\n" (Lazy.force expensive_value);  (* cached *)

  let fib = make_lazy_fib () in
  Printf.printf "fib(30) = %d\n" (fib 30);
  Printf.printf "fib(30) = %d (cached)\n" (fib 30)