🦀 Functional Rust

989: One-Time Initialization

Difficulty: Beginner
Category: Async / Concurrency FP Patterns
Concept: Lazy initialization that runs exactly once, even under concurrent access
Key Insight: `OnceLock<T>` is Rust's thread-safe counterpart to OCaml's `Lazy.t`; `get_or_init` is atomic — concurrent callers block until one winner initializes, then all read the same value

Versions

| Directory | Description |
|---|---|
| `std/` | Standard library version using `std::sync`, `std::thread` |
| `tokio/` | Tokio async runtime version using `tokio::sync`, `tokio::spawn` |

Running

# Standard library version
cd std && cargo test

# Tokio version
cd tokio && cargo test
// 989: One-Time Initialization
// Rust: OnceLock<T> — set once, read many times (thread-safe)

use std::sync::{Arc, Mutex, OnceLock};
use std::thread;

// --- Approach 1: OnceLock<T> for global one-time init ---
static CONFIG: OnceLock<String> = OnceLock::new();

fn get_config() -> &'static String {
    CONFIG.get_or_init(|| {
        // Only runs once, even with concurrent calls
        "production-config-v42".to_string()
    })
}

// --- Approach 2: OnceLock with expensive computation ---
static PRIMES: OnceLock<Vec<u32>> = OnceLock::new();

fn sieve(limit: usize) -> Vec<u32> {
    let mut is_prime = vec![true; limit + 1];
    is_prime[0] = false;
    if limit > 0 { is_prime[1] = false; }
    for i in 2..=limit {
        if is_prime[i] {
            let mut j = i * i;
            while j <= limit {
                is_prime[j] = false;
                j += i;
            }
        }
    }
    (2..=limit as u32).filter(|&n| is_prime[n as usize]).collect()
}

fn get_primes() -> &'static [u32] {
    PRIMES.get_or_init(|| sieve(100))
}

// --- Approach 3: Instance-level OnceLock (not just global) ---
struct LazyConfig {
    inner: OnceLock<String>,
    prefix: String,
}

impl LazyConfig {
    fn new(prefix: &str) -> Self {
        LazyConfig {
            inner: OnceLock::new(),
            prefix: prefix.to_string(),
        }
    }

    fn get(&self) -> &str {
        self.inner.get_or_init(|| {
            format!("{}-initialized", self.prefix)
        })
    }
}

// --- Approach 4: Thread-safe once across multiple threads ---
fn concurrent_once_init() -> usize {
    static INIT_COUNT: OnceLock<usize> = OnceLock::new();
    let call_count = Arc::new(Mutex::new(0usize));

    let handles: Vec<_> = (0..10).map(|_| {
        let count = Arc::clone(&call_count);
        thread::spawn(move || {
            INIT_COUNT.get_or_init(|| {
                *count.lock().unwrap() += 1;
                42
            });
        })
    }).collect();

    for h in handles { h.join().unwrap(); }
    *call_count.lock().unwrap() // should be 1 — init ran only once
}


#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn test_get_config_same_value() {
        let c1 = get_config();
        let c2 = get_config();
        assert_eq!(c1, c2);
        assert!(std::ptr::eq(c1 as *const _, c2 as *const _)); // same allocation
    }

    #[test]
    fn test_primes_correctness() {
        let primes = get_primes();
        assert_eq!(&primes[..5], &[2, 3, 5, 7, 11]);
        assert!(!primes.contains(&4));
        assert!(!primes.contains(&100));
    }

    #[test]
    fn test_lazy_config_cached() {
        let lc = LazyConfig::new("test");
        let v1 = lc.get();
        let v2 = lc.get();
        assert_eq!(v1, "test-initialized");
        assert_eq!(v1, v2);
    }

    #[test]
    fn test_concurrent_once_init_runs_exactly_once() {
        // OnceLock guarantees init closure runs at most once
        // even with 10 concurrent threads
        let count = concurrent_once_init();
        assert_eq!(count, 1, "init should run exactly once");
    }

    #[test]
    fn test_oncelock_get_before_init() {
        let lock: OnceLock<i32> = OnceLock::new();
        assert!(lock.get().is_none());
        lock.get_or_init(|| 42);
        assert_eq!(lock.get(), Some(&42));
    }
}
(* 989: One-Time Initialization *)
(* OCaml: Lazy.t — computed at most once, memoized *)

(* --- Approach 1: Lazy.t for deferred initialization --- *)

let expensive_config = lazy (
  (* Simulated expensive computation *)
  Printf.printf "(computing config...)\n";
  { contents = 42 }
)

let get_config () = Lazy.force expensive_config

let () =
  (* First access: computes *)
  let c1 = get_config () in
  (* Second access: returns cached value *)
  let c2 = get_config () in
  assert (c1 == c2);  (* physical equality: same object *)
  assert (!c1 = 42);
  Printf.printf "Approach 1 (Lazy.t): %d\n" !c1

(* --- Approach 2: Thread-safe once-init with Mutex + option --- *)

type 'a once = {
  mutable value: 'a option;
  m: Mutex.t;
}

let make_once () = { value = None; m = Mutex.create () }

let once_get once f =
  Mutex.lock once.m;
  (* Hold the lock while initializing; Fun.protect releases it even if f raises,
     so a failed initializer cannot leave the mutex locked forever. *)
  Fun.protect ~finally:(fun () -> Mutex.unlock once.m) (fun () ->
    match once.value with
    | Some v -> v
    | None ->
      let v = f () in
      once.value <- Some v;
      v)

let db_connection = make_once ()

let get_db () = once_get db_connection (fun () ->
  Printf.printf "(opening DB connection...)\n";
  "conn://localhost:5432"
)

let () =
  let c1 = get_db () in
  let c2 = get_db () in
  assert (c1 = c2);
  assert (c1 = "conn://localhost:5432");
  Printf.printf "Approach 2 (thread-safe once): %s\n" c1

(* --- Approach 3: Lazy initialization in module --- *)

let _initialized = lazy (
  (* Module-level initialization โ€” runs once *)
  Printf.printf "(module init)\n";
  true
)

let ensure_init () = Lazy.force _initialized

let () =
  let r1 = ensure_init () in
  let r2 = ensure_init () in
  assert (r1 = r2);
  Printf.printf "Approach 3 (module lazy): %b\n" r1

let () = Printf.printf "✓ All tests passed\n"

📊 Detailed Comparison

One-Time Initialization — Comparison

Core Insight

`Lazy.t` and `OnceLock` both implement a deferred singleton: compute a value at most once, then cache it forever. The difference is thread safety — OCaml's `Lazy.t` performs no synchronization when forced, while Rust's `OnceLock` uses atomics to coordinate concurrent initializers: losers block until the winner finishes, then everyone reads the same cached value.

OCaml Approach

  • `lazy expr` wraps an expression; `Lazy.force` evaluates it (once)
  • Result is cached — subsequent `Lazy.force` calls return immediately
  • Not safe under concurrent forcing: even in OCaml 5, forcing the same lazy from multiple domains at once can raise `Lazy.Undefined` — guard it with a mutex (as in Approach 2)
  • Typical use: module-level initialization, expensive config loading
  • `Lazy.is_val` checks whether the value is already evaluated, without forcing it

Rust Approach

  • `OnceLock<T>` lives in `std::sync` since Rust 1.70
  • `get_or_init(f)` runs `f` at most once; every call returns `&T` to the cached value
  • `get()` returns `Option<&T>` — `None` if not yet initialized
  • Works for `static` globals and instance fields alike
  • `set(v)` for explicit single-write (returns `Err(v)` if already set)
  • `LazyLock<T>` (Rust 1.80+) bundles the initializer into the declaration, `lazy_static!`-style

Comparison Table

| Concept | OCaml | Rust |
|---|---|---|
| Declare lazy | `let x = lazy (expr)` | `static X: OnceLock<T> = OnceLock::new()` |
| Force / initialize | `Lazy.force x` | `X.get_or_init(\|\| expr)` |
| Check if ready | `Lazy.is_val x` | `X.get().is_some()` |
| Thread-safe | No — guard with a mutex | Yes — std guarantees |
| Instance level | `lazy (...)` stored in a record field | `OnceLock<T>` field |
| Type annotation | `'a Lazy.t` | `OnceLock<T>` |
| Return type | `'a` (the value) | `&'static T` (reference) |

std vs tokio

| Aspect | std version | tokio version |
|---|---|---|
| Runtime | OS threads via `std::thread` | Async tasks on the tokio runtime |
| Synchronization | `std::sync::Mutex`, `Condvar` | `tokio::sync::Mutex`, channels |
| Channels | `std::sync::mpsc` (unbounded) | `tokio::sync::mpsc` (bounded, async) |
| Blocking | Thread blocks on lock/recv | Task yields; runtime switches tasks |
| Overhead | One OS thread per task | Many tasks per thread (M:N) |
| Best for | CPU-bound, simple concurrency | I/O-bound, high-concurrency servers |