Use cargo bench to run your benchmark tests, which live in benches/*.rs files within your project's benches directory. Unlike cargo test, cargo bench compiles with the optimized bench profile by default, so you do not need to pass --release yourself. You can annotate functions with #[bench] (the legacy built-in harness, which requires nightly Rust and the unstable test crate) or use the criterion crate for more accurate, modern benchmarking on stable Rust.
For a quick start with the built-in test crate, create a benches/basic.rs file. Note that this approach requires a nightly toolchain, is considered legacy, and often produces noisy results because the optimizer can remove the benchmarked code entirely unless you shield it with test::black_box.
// benches/basic.rs
#![feature(test)] // the built-in bench harness is nightly-only

extern crate test;

use test::{black_box, Bencher};

#[bench]
fn bench_vector_push(b: &mut Bencher) {
    b.iter(|| {
        // Allocate inside the closure so every iteration measures the same work,
        // and use black_box so the optimizer cannot delete the pushes.
        let mut v = Vec::new();
        v.push(black_box(1));
        v.push(black_box(2));
        v.push(black_box(3));
        v
    });
}
Run the benchmarks on a nightly toolchain:
cargo +nightly bench
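Because #[bench] is nightly-only, a zero-dependency alternative on stable Rust is to time a loop by hand with std::time::Instant and std::hint::black_box. This is a rough sketch only (the iteration count and measured function are illustrative); it has none of the warm-up or statistical analysis a real harness provides:

```rust
// A minimal hand-rolled timing loop for stable Rust. Illustrative only:
// no warm-up, no outlier filtering, no statistical analysis.
use std::hint::black_box;
use std::time::Instant;

fn add_numbers(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    const ITERS: u32 = 10_000;
    let start = Instant::now();
    for _ in 0..ITERS {
        // black_box keeps the optimizer from removing the call entirely.
        black_box(add_numbers(black_box(1000)));
    }
    let elapsed = start.elapsed();
    println!("{} iters in {:?} ({:?}/iter)", ITERS, elapsed, elapsed / ITERS);
}
```

Compile with cargo run --release (or rustc -O) when timing this way, since a plain cargo run builds without optimizations.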
For production-grade results, the community standard is the criterion crate. Add criterion = "0.5" to your [dev-dependencies] in Cargo.toml and add a [[bench]] section with harness = false so Cargo hands control to Criterion instead of the built-in test harness.
# Cargo.toml
[dev-dependencies]
criterion = "0.5"
[[bench]]
name = "my_bench"
harness = false
Then, create benches/my_bench.rs using Criterion's macro:
// benches/my_bench.rs
use criterion::{black_box, criterion_group, criterion_main, Criterion};

fn add_numbers(n: u64) -> u64 {
    (0..n).sum()
}

fn criterion_benchmark(c: &mut Criterion) {
    c.bench_function("add_1000_numbers", |b| {
        b.iter(|| add_numbers(black_box(1000)))
    });
}

criterion_group!(benches, criterion_benchmark);
criterion_main!(benches);
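Criterion can also sweep a function across several input sizes in one run. As a sketch (the sizes chosen here are arbitrary), a benchmark group with bench_with_input tags each measurement with a BenchmarkId, so the report can plot performance as the input grows:

```rust
// benches/my_bench.rs, extended: benchmark the same function at several sizes.
use criterion::{black_box, criterion_group, criterion_main, BenchmarkId, Criterion};

fn add_numbers(n: u64) -> u64 {
    (0..n).sum()
}

fn bench_sizes(c: &mut Criterion) {
    // A group collects related measurements under one name in the report.
    let mut group = c.benchmark_group("add_numbers");
    for size in [100u64, 1_000, 10_000] {
        group.bench_with_input(BenchmarkId::from_parameter(size), &size, |b, &n| {
            b.iter(|| add_numbers(black_box(n)))
        });
    }
    group.finish();
}

criterion_group!(benches, bench_sizes);
criterion_main!(benches);
```

Each size appears in the output as add_numbers/100, add_numbers/1000, and so on, which makes it easy to spot non-linear scaling.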
Run the benchmarks with the same command:
cargo bench
This will generate detailed HTML reports under target/criterion/ containing statistical analysis, change detection against previous runs, and plots: a summary lives at target/criterion/report/index.html, with a per-benchmark report for each benchmark ID (e.g., add_1000_numbers). Note that cargo bench already builds with the optimized bench profile, so passing --release is unnecessary; debug builds, by contrast, keep debug assertions and skip optimizations, which drastically skews timing results. If you need to benchmark a specific function, append a filter to the command, e.g., cargo bench --bench my_bench add_1000_numbers; the filter matches benchmark names by substring.