MIRR Compiler Benchmarks

Criterion benchmarks for measuring parser and full-pipeline performance.

Running

cargo bench

HTML reports are written to target/criterion/; open target/criterion/report/index.html for the summary.

Tiers

Tier     Signals   Guards   Reflexes   Purpose
Small    2         1        1          Baseline latency
Medium   8         4        4          Typical usage
Large    32        16       16         Stress test
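Each tier is a synthetic fixture scaled along those three axes. As an illustration only (the real fixture generator and the actual MIRR declaration syntax are not shown in this README; the forms below are invented), tier fixtures might be generated like this:

```rust
// Hypothetical fixture generator: emits a MIRR-like source with the
// requested number of signal, guard, and reflex declarations.
// The declaration syntax here is invented for illustration.
fn make_fixture(signals: usize, guards: usize, reflexes: usize) -> String {
    let mut src = String::new();
    for i in 0..signals {
        src.push_str(&format!("signal s{i};\n"));
    }
    for i in 0..guards {
        src.push_str(&format!("guard g{i} when s0 {{}}\n"));
    }
    for i in 0..reflexes {
        src.push_str(&format!("reflex r{i} on g0 {{}}\n"));
    }
    src
}

fn main() {
    // (signals, guards, reflexes) per tier, matching the table above.
    for (name, s, g, r) in [("small", 2, 1, 1), ("medium", 8, 4, 4), ("large", 32, 16, 16)] {
        let src = make_fixture(s, g, r);
        println!("{name}: {} declarations, {} bytes", s + g + r, src.len());
    }
}
```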

Benchmark groups

  • parse/small, parse/medium, parse/large — parse_mirr() only
  • pipeline/small, pipeline/medium, pipeline/large — full run_pipeline() (parse + pattern expand + validate + typecheck + simplify + width + temporal)
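The pipeline benchmarks exercise every stage in order. A minimal sketch of that stage chaining, with stand-in types and stage names (the real MIRR signatures and error types will differ):

```rust
// Stand-in AST type; the real compiler's intermediate representation differs.
#[derive(Debug)]
struct Ast(String);

// Hypothetical no-op stages mirroring the order listed above.
fn parse(src: &str) -> Result<Ast, String> { Ok(Ast(src.to_string())) }
fn expand_patterns(a: Ast) -> Result<Ast, String> { Ok(a) }
fn validate(a: Ast) -> Result<Ast, String> { Ok(a) }
fn typecheck(a: Ast) -> Result<Ast, String> { Ok(a) }
fn simplify(a: Ast) -> Result<Ast, String> { Ok(a) }
fn check_widths(a: Ast) -> Result<Ast, String> { Ok(a) }
fn check_temporal(a: Ast) -> Result<Ast, String> { Ok(a) }

// Chains the stages so any stage's error short-circuits the rest.
fn run_pipeline(src: &str) -> Result<Ast, String> {
    parse(src)
        .and_then(expand_patterns)
        .and_then(validate)
        .and_then(typecheck)
        .and_then(simplify)
        .and_then(check_widths)
        .and_then(check_temporal)
}

fn main() {
    assert!(run_pipeline("signal s0;").is_ok());
}
```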

Comparing changes

# Save a baseline
cargo bench -- --save-baseline before

# Make changes, then compare
cargo bench -- --baseline before

Results in target/criterion/ include a statistical comparison against the saved baseline, with confidence intervals.