Practical microbenchmarks for Python—tight iteration cycles, precise hot-path timing, and storage that makes comparisons and regression gates effortless.
Run benchmarks with one command:
`pybench run examples/ [-k keyword] [-P key=value ...]`

- Simple API: use the `@bench(...)` decorator, or suites with `Bench` plus `BenchContext.start()`/`end()` to isolate the hot path.
- Auto-discovery: `pybench run <dir>` expands to `**/*bench.py`.
- Powerful parameterization: generate Cartesian products with `params={...}`, or define per-case `args`/`kwargs`.
- On-the-fly overrides: `-P key=value` adjusts `n`, `repeat`, `warmup`, `group`, or custom params without editing code.
- Solid timing model: monotonic clock, warmup, GC control, and context fast paths.
- Smart calibration: per-variant iteration tuning to hit a target time budget.
- Rich reports: aligned tables with percentiles, iter/s, min…max, baseline markers, and speedups vs. the baseline.
- HTML charts: export benchmarks as self-contained Chart.js dashboards with `--export chart`.
- History tooling: runs auto-save to `.pybenchx/`; list, inspect stats, clean, or compare with `--vs {name,last}`.
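The timing model above (warmup runs, repeated samples on a monotonic high-resolution clock, GC paused during measurement) can be sketched in plain Python. This is a conceptual illustration only, not pybench's actual implementation; the function name `time_variant` and its defaults are hypothetical:

```python
import gc
import time

def time_variant(fn, n=1000, repeat=10, warmup=2):
    """Conceptual sketch of a warmup + repeat timing loop."""
    for _ in range(warmup):               # discard cold-start effects
        fn()
    samples = []
    gc_was_enabled = gc.isenabled()
    gc.disable()                          # keep collection pauses out of samples
    try:
        for _ in range(repeat):
            t0 = time.perf_counter_ns()   # monotonic, high-resolution clock
            for _ in range(n):
                fn()
            samples.append((time.perf_counter_ns() - t0) / n)
    finally:
        if gc_was_enabled:
            gc.enable()
    return min(samples)                   # best-of-repeat ns per iteration

best_ns = time_variant(lambda: ",".join(["a", "b", "c"]))
```

Reporting the minimum of repeated samples is a common microbenchmarking choice because it is the sample least contaminated by scheduler and GC noise.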
- pip: `pip install pybenchx`
- uv: `uv pip install pybenchx`
See `examples/strings_bench.py` for both styles:

```python
from pybench import bench, Bench, BenchContext

@bench(name="join", n=1000, repeat=10)
def join(sep: str = ","):
    sep.join(str(i) for i in range(100))

suite = Bench("strings")

@suite.bench(name="join-baseline", baseline=True)
def join_baseline(b: BenchContext):
    s = ",".join(str(i) for i in range(50))
    b.start(); _ = ",".join([s] * 5); b.end()
```

- Run all examples: `pybench run examples/`
- Filter variants: `pybench run examples/ -k join`
- Override params at runtime: `pybench run examples/ -P repeat=5 -P n=10000`
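Both `-P key=value` overrides and `params={...}` grids boil down to expanding key/value sets into concrete benchmark cases. A rough sketch of that expansion, assuming nothing about pybench's internals (the helper names `parse_overrides` and `expand_params` are illustrative, not part of the API):

```python
import itertools

def parse_overrides(pairs):
    """Turn CLI-style 'key=value' strings into a dict (values kept as strings)."""
    out = {}
    for pair in pairs:
        key, _, value = pair.partition("=")
        out[key] = value
    return out

def expand_params(params):
    """Expand {'sep': [',', '-'], 'size': [10, 100]} into every combination."""
    keys = list(params)
    return [dict(zip(keys, combo)) for combo in itertools.product(*params.values())]

cases = expand_params({"sep": [",", "-"], "size": [10, 100]})   # 4 cases
overrides = parse_overrides(["repeat=5", "n=10000"])
```

This is why a two-key grid with two values each produces four variants in the report, and why a single `-P` flag can retune every variant at once.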
- CLI reference — discovery rules, profiles, overrides, exports, comparisons.
- Examples & cookbook — real-world patterns, CI recipes, history workflows.
- Behavior & internals — timing model, calibration, accuracy notes.
- API reference — decorators, suites, storage helpers, reporters.