732-benchmarking-harness — Benchmarking Harness
Tutorial
The Problem
Micro-benchmarking is surprisingly hard to do correctly. The compiler may eliminate "dead" computations, the CPU may boost frequency during warmup, and a single outlier can skew the mean. Production benchmark frameworks like Criterion address these problems with warmup phases, statistical analysis, and outlier rejection. This example builds a Criterion-inspired harness using only std, demonstrating the core techniques that make benchmarks trustworthy.
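To see the failure mode concretely, here is a minimal, illustrative sketch (not part of the harness below) contrasting an unguarded measurement with one protected by std::hint::black_box:

use std::hint::black_box;
use std::time::Instant;

fn main() {
    let n = 1_000_000u64;

    // Unguarded: the sum is discarded, so an optimized build may delete
    // the loop entirely and report a near-zero elapsed time.
    let t0 = Instant::now();
    let _ = (0..n).sum::<u64>();
    println!("unguarded: {:?}", t0.elapsed());

    // Guarded: black_box hides the input from constant folding and marks
    // the output as observed, so the computation must actually happen
    // (though LLVM may still replace the loop with a closed form).
    let t0 = Instant::now();
    let sum = (0..black_box(n)).sum::<u64>();
    black_box(sum);
    println!("guarded:   {:?}", t0.elapsed());
}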
🎯 Learning Outcomes
- Use std::hint::black_box to prevent dead-code elimination of benchmark subjects
- Collect Duration samples and compute mean, min, max, and standard deviation
- Build a BenchResult struct for comparison across runs

Code Example
//! 732: Benchmarking Harness — Criterion-style, std-only.
//!
//! A small micro-benchmark runner with a warmup phase, per-iteration
//! [`Duration`] samples, and basic statistics (mean, min, max, stddev).
//! [`std::hint::black_box`] is used to defeat dead-code elimination so the
//! optimizer cannot delete work whose result is discarded.
use std::hint::black_box;
use std::time::{Duration, Instant};
/// Errors that can occur when running a benchmark.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum BenchError {
/// `iters` was zero — statistics would be undefined.
ZeroIterations,
}
impl std::fmt::Display for BenchError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
BenchError::ZeroIterations => write!(f, "iters must be greater than zero"),
}
}
}
impl std::error::Error for BenchError {}
/// Summary statistics from a benchmark run.
#[derive(Debug, Clone)]
pub struct BenchResult {
/// Human-readable identifier for the benchmarked workload.
pub label: String,
/// Arithmetic mean of the per-iteration wall-clock durations.
pub mean: Duration,
/// Fastest observed iteration.
pub min: Duration,
/// Slowest observed iteration.
pub max: Duration,
/// Population standard deviation of the samples, in nanoseconds.
pub stddev_ns: f64,
/// Number of measured iterations (excluding warmup).
pub iters: u64,
}
impl BenchResult {
/// Print a single aligned row summarising the result.
pub fn print(&self) {
println!(
"{:40} mean={:>10.2?} min={:>10.2?} max={:>10.2?} σ={:.0}ns (n={})",
self.label, self.mean, self.min, self.max, self.stddev_ns, self.iters,
);
}
}
/// Run `f` `warmup` times (untimed) and then `iters` times (timed), returning
/// aggregate statistics.
///
/// Each result is fed through [`black_box`] so the optimizer cannot elide the
/// work. Returns [`BenchError::ZeroIterations`] if `iters == 0`.
pub fn bench<F, R, L>(
label: L,
warmup: u64,
iters: u64,
mut f: F,
) -> Result<BenchResult, BenchError>
where
F: FnMut() -> R,
L: Into<String>,
{
if iters == 0 {
return Err(BenchError::ZeroIterations);
}
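    // Untimed warmup iterations let caches, branch predictors, and CPU
    // frequency scaling settle before measurement begins.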
for _ in 0..warmup {
black_box(f());
}
let mut samples: Vec<Duration> = Vec::with_capacity(iters as usize);
for _ in 0..iters {
let t0 = Instant::now();
let result = f();
let elapsed = t0.elapsed();
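        // Consume the result via black_box after stopping the timer: the
        // optimizer must assume the value is used, but the black_box call
        // itself stays out of the measured interval.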
black_box(result);
samples.push(elapsed);
}
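    // Duration::as_nanos yields u128, so the running total cannot overflow
    // even for very long runs.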
let total_ns: u128 = samples.iter().map(Duration::as_nanos).sum();
let mean_ns = total_ns / u128::from(iters);
let mean = Duration::from_nanos(mean_ns as u64);
let min = *samples.iter().min().expect("iters > 0 checked above");
let max = *samples.iter().max().expect("iters > 0 checked above");
let mean_f = mean_ns as f64;
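    // Population variance: mean squared deviation from the (truncated
    // integer) mean, dividing by n rather than n - 1.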
let variance_ns = samples
.iter()
.map(|d| {
let diff = d.as_nanos() as f64 - mean_f;
diff * diff
})
.sum::<f64>()
/ iters as f64;
Ok(BenchResult {
label: label.into(),
mean,
min,
max,
stddev_ns: variance_ns.sqrt(),
iters,
})
}
/// Naive O(n) sum of `0..n`.
pub fn sum_naive(n: u64) -> u64 {
(0..n).sum()
}
/// Closed-form O(1) sum of `0..n` using Gauss's formula.
pub fn sum_formula(n: u64) -> u64 {
if n == 0 {
0
} else {
n * (n - 1) / 2
}
}
/// Build a [`String`] of length `n` by repeatedly pushing `'x'`.
pub fn string_push(n: usize) -> String {
let mut s = String::with_capacity(n);
for _ in 0..n {
s.push('x');
}
s
}
/// Collect `0..n` into a [`Vec`].
pub fn vec_collect(n: usize) -> Vec<u64> {
(0..n as u64).collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn bench_runs_warmup_and_iters() {
let mut call_count = 0u64;
let result = bench("test", 5, 10, || {
call_count += 1;
call_count
})
.expect("iters > 0");
assert_eq!(call_count, 15);
assert_eq!(result.iters, 10);
assert_eq!(result.label, "test");
}
#[test]
fn bench_rejects_zero_iters() {
let err = bench("noop", 0, 0, || 1u64).unwrap_err();
assert_eq!(err, BenchError::ZeroIterations);
}
#[test]
fn bench_min_le_mean_le_max() {
let r = bench("sum_naive_100", 2, 20, || sum_naive(black_box(100))).unwrap();
assert!(r.min <= r.mean);
assert!(r.mean <= r.max);
assert!(r.stddev_ns >= 0.0);
r.print();
}
#[test]
fn sum_naive_correct() {
assert_eq!(sum_naive(5), 10);
assert_eq!(sum_naive(0), 0);
}
#[test]
fn sum_formula_matches_naive() {
for n in 0..=20u64 {
assert_eq!(sum_naive(n), sum_formula(n), "n={n}");
}
}
#[test]
fn string_push_produces_expected_length() {
let s = string_push(64);
assert_eq!(s.len(), 64);
assert!(s.chars().all(|c| c == 'x'));
}
#[test]
fn vec_collect_produces_range() {
let v = vec_collect(8);
assert_eq!(v, vec![0, 1, 2, 3, 4, 5, 6, 7]);
}
#[test]
fn constant_workload_has_finite_stats() {
let r = bench("constant", 1, 100, || black_box(42u64)).unwrap();
assert!(r.stddev_ns.is_finite());
assert!(r.max >= r.min);
}
#[test]
fn bench_error_displays() {
let msg = format!("{}", BenchError::ZeroIterations);
assert!(msg.contains("iters"));
}
}
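Nothing above actually invokes the harness, so a small driver is needed to tie it together. A hypothetical main (labels and iteration counts here are illustrative choices, not part of the example):

fn main() -> Result<(), BenchError> {
    // O(n) loop vs. O(1) closed form: the formula should win decisively.
    bench("sum_naive(10_000)", 100, 1_000, || sum_naive(black_box(10_000)))?.print();
    bench("sum_formula(10_000)", 100, 1_000, || sum_formula(black_box(10_000)))?.print();

    // Allocating workloads typically show a higher stddev than pure arithmetic.
    bench("string_push(1024)", 10, 200, || string_push(black_box(1024)))?.print();
    bench("vec_collect(1024)", 10, 200, || vec_collect(black_box(1024)))?.print();
    Ok(())
}

Note that the inputs are wrapped in black_box as well, so the compiler cannot constant-fold a call like sum_formula(10_000) down to its answer at compile time.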
Key Differences
- Rust uses std::hint::black_box to defeat the optimizer; OCaml's Sys.opaque_identity serves the same purpose in core_bench.
- Rust's production benchmarking crates are criterion (statistical analysis, HTML reports) and divan; OCaml has core_bench and bechamel.

OCaml Approach
OCaml's standard library has no built-in benchmarking framework; the benchmark opam package and Jane Street's core_bench library fill this role. core_bench uses a similar warmup-plus-sample approach with GC-pause awareness: it triggers a minor collection before each measurement to reduce noise from accumulated garbage. OCaml's Sys.time and Unix.gettimeofday are the timing primitives; Mtime_clock provides monotonic wall-clock time similar to Rust's Instant.
Full Source
//! 732: Benchmarking Harness — Criterion-style, std-only.
//!
//! A small micro-benchmark runner with a warmup phase, per-iteration
//! [`Duration`] samples, and basic statistics (mean, min, max, stddev).
//! [`std::hint::black_box`] is used to defeat dead-code elimination so the
//! optimizer cannot delete work whose result is discarded.
use std::hint::black_box;
use std::time::{Duration, Instant};
/// Errors that can occur when running a benchmark.
#[derive(Debug, Clone, PartialEq, Eq)]
pub enum BenchError {
/// `iters` was zero — statistics would be undefined.
ZeroIterations,
}
impl std::fmt::Display for BenchError {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
match self {
BenchError::ZeroIterations => write!(f, "iters must be greater than zero"),
}
}
}
impl std::error::Error for BenchError {}
/// Summary statistics from a benchmark run.
#[derive(Debug, Clone)]
pub struct BenchResult {
/// Human-readable identifier for the benchmarked workload.
pub label: String,
/// Arithmetic mean of the per-iteration wall-clock durations.
pub mean: Duration,
/// Fastest observed iteration.
pub min: Duration,
/// Slowest observed iteration.
pub max: Duration,
/// Population standard deviation of the samples, in nanoseconds.
pub stddev_ns: f64,
/// Number of measured iterations (excluding warmup).
pub iters: u64,
}
impl BenchResult {
/// Print a single aligned row summarising the result.
pub fn print(&self) {
println!(
"{:40} mean={:>10.2?} min={:>10.2?} max={:>10.2?} σ={:.0}ns (n={})",
self.label, self.mean, self.min, self.max, self.stddev_ns, self.iters,
);
}
}
/// Run `f` `warmup` times (untimed) and then `iters` times (timed), returning
/// aggregate statistics.
///
/// Each result is fed through [`black_box`] so the optimizer cannot elide the
/// work. Returns [`BenchError::ZeroIterations`] if `iters == 0`.
pub fn bench<F, R, L>(
label: L,
warmup: u64,
iters: u64,
mut f: F,
) -> Result<BenchResult, BenchError>
where
F: FnMut() -> R,
L: Into<String>,
{
if iters == 0 {
return Err(BenchError::ZeroIterations);
}
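    // Untimed warmup iterations let caches, branch predictors, and CPU
    // frequency scaling settle before measurement begins.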
for _ in 0..warmup {
black_box(f());
}
let mut samples: Vec<Duration> = Vec::with_capacity(iters as usize);
for _ in 0..iters {
let t0 = Instant::now();
let result = f();
let elapsed = t0.elapsed();
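        // Consume the result via black_box after stopping the timer: the
        // optimizer must assume the value is used, but the black_box call
        // itself stays out of the measured interval.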
black_box(result);
samples.push(elapsed);
}
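    // Duration::as_nanos yields u128, so the running total cannot overflow
    // even for very long runs.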
let total_ns: u128 = samples.iter().map(Duration::as_nanos).sum();
let mean_ns = total_ns / u128::from(iters);
let mean = Duration::from_nanos(mean_ns as u64);
let min = *samples.iter().min().expect("iters > 0 checked above");
let max = *samples.iter().max().expect("iters > 0 checked above");
let mean_f = mean_ns as f64;
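    // Population variance: mean squared deviation from the (truncated
    // integer) mean, dividing by n rather than n - 1.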
let variance_ns = samples
.iter()
.map(|d| {
let diff = d.as_nanos() as f64 - mean_f;
diff * diff
})
.sum::<f64>()
/ iters as f64;
Ok(BenchResult {
label: label.into(),
mean,
min,
max,
stddev_ns: variance_ns.sqrt(),
iters,
})
}
/// Naive O(n) sum of `0..n`.
pub fn sum_naive(n: u64) -> u64 {
(0..n).sum()
}
/// Closed-form O(1) sum of `0..n` using Gauss's formula.
pub fn sum_formula(n: u64) -> u64 {
if n == 0 {
0
} else {
n * (n - 1) / 2
}
}
/// Build a [`String`] of length `n` by repeatedly pushing `'x'`.
pub fn string_push(n: usize) -> String {
let mut s = String::with_capacity(n);
for _ in 0..n {
s.push('x');
}
s
}
/// Collect `0..n` into a [`Vec`].
pub fn vec_collect(n: usize) -> Vec<u64> {
(0..n as u64).collect()
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn bench_runs_warmup_and_iters() {
let mut call_count = 0u64;
let result = bench("test", 5, 10, || {
call_count += 1;
call_count
})
.expect("iters > 0");
assert_eq!(call_count, 15);
assert_eq!(result.iters, 10);
assert_eq!(result.label, "test");
}
#[test]
fn bench_rejects_zero_iters() {
let err = bench("noop", 0, 0, || 1u64).unwrap_err();
assert_eq!(err, BenchError::ZeroIterations);
}
#[test]
fn bench_min_le_mean_le_max() {
let r = bench("sum_naive_100", 2, 20, || sum_naive(black_box(100))).unwrap();
assert!(r.min <= r.mean);
assert!(r.mean <= r.max);
assert!(r.stddev_ns >= 0.0);
r.print();
}
#[test]
fn sum_naive_correct() {
assert_eq!(sum_naive(5), 10);
assert_eq!(sum_naive(0), 0);
}
#[test]
fn sum_formula_matches_naive() {
for n in 0..=20u64 {
assert_eq!(sum_naive(n), sum_formula(n), "n={n}");
}
}
#[test]
fn string_push_produces_expected_length() {
let s = string_push(64);
assert_eq!(s.len(), 64);
assert!(s.chars().all(|c| c == 'x'));
}
#[test]
fn vec_collect_produces_range() {
let v = vec_collect(8);
assert_eq!(v, vec![0, 1, 2, 3, 4, 5, 6, 7]);
}
#[test]
fn constant_workload_has_finite_stats() {
let r = bench("constant", 1, 100, || black_box(42u64)).unwrap();
assert!(r.stddev_ns.is_finite());
assert!(r.max >= r.min);
}
#[test]
fn bench_error_displays() {
let msg = format!("{}", BenchError::ZeroIterations);
assert!(msg.contains("iters"));
}
}
Exercises
1. Add a p99 latency field to BenchResult by sorting the samples and indexing at 0.99 * iters.
2. Write a compare function that takes two BenchResult values and prints the speedup ratio and whether the difference is statistically significant (|Δmean| > 2σ); one possible shape is sketched below.
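For the second exercise, here is one possible shape, taking |Δmean| > 2σ against the larger of the two standard deviations as the significance test (a rough heuristic, not a proper t-test):

/// Hypothetical comparison helper: prints the speedup of `b` relative to
/// `a` and whether the gap clears the 2-sigma noise threshold.
pub fn compare(a: &BenchResult, b: &BenchResult) {
    let a_ns = a.mean.as_nanos() as f64;
    let b_ns = b.mean.as_nanos() as f64;
    let ratio = a_ns / b_ns;
    let delta = (a_ns - b_ns).abs();
    let significant = delta > 2.0 * a.stddev_ns.max(b.stddev_ns);
    println!(
        "{} -> {}: {:.2}x {}",
        a.label,
        b.label,
        ratio,
        if significant { "(significant)" } else { "(within noise)" },
    );
}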