Mojo module
benchmark
Implements the benchmark module for runtime benchmarking.
You can import these APIs from the `benchmark` package. For example:
```mojo
import benchmark
from time import sleep
```
You can pass any `fn` as a parameter into `benchmark.run[...]()`, and it will return a `Report` where you can get the mean, duration, max, and more:
```mojo
fn sleeper():
    sleep(.01)

var report = benchmark.run[sleeper]()
print(report.mean())
```

```
0.012256487394957985
```
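The `Report` exposes the other values mentioned above as well. A minimal sketch, assuming `duration()`, `min()`, and `max()` follow the same calling convention as `mean()` (only `mean()` is shown on this page):

```mojo
# A sketch of other Report accessors; these names follow the
# "mean, duration, max, and more" description above and are assumptions.
print(report.duration())  # total time spent benchmarking
print(report.min())       # mean of the fastest batch
print(report.max())       # mean of the slowest batch
```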
You can print a full report:
```mojo
report.print()
```
```
---------------------
Benchmark Report (s)
---------------------
Mean: 0.012265747899159664
Total: 1.459624
Iters: 119
Warmup Mean: 0.01251
Warmup Total: 0.025020000000000001
Warmup Iters: 2
Fastest Mean: 0.0121578
Slowest Mean: 0.012321428571428572
```
Or print all the batch runs:
```mojo
report.print_full()
```
```
---------------------
Benchmark Report (s)
---------------------
Mean: 0.012368649122807017
Total: 1.410026
Iters: 114
Warmup Mean: 0.0116705
Warmup Total: 0.023341000000000001
Warmup Iters: 2
Fastest Mean: 0.012295586956521738
Slowest Mean: 0.012508099999999999

Batch: 1
Iterations: 20
Mean: 0.012508099999999999
Duration: 0.250162

Batch: 2
Iterations: 46
Mean: 0.012295586956521738
Duration: 0.56559700000000002

Batch: 3
Iterations: 48
Mean: 0.012380562499999999
Duration: 0.59426699999999999
```
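The per-batch numbers that `print_full()` prints can also be inspected programmatically. A minimal sketch, assuming the report keeps its batches in a `runs` field of `Batch` values (the `Batch` struct is listed under Structs below; the field name is an assumption):

```mojo
# Sketch: iterate the batches behind print_full(). The `runs` field name
# is an assumption; each Batch is assumed to expose a mean() like Report.
for i in range(len(report.runs)):
    print("Batch", i + 1, "mean:", report.runs[i].mean())
```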
If you want to use a different time unit, you can import `Unit` and pass it in as an argument:
```mojo
from benchmark import Unit
report.print(Unit.ms)
```
```
---------------------
Benchmark Report (ms)
---------------------
Mean: 0.012312411764705882
Total: 1.465177
Iters: 119
Warmup Mean: 0.012505499999999999
Warmup Total: 0.025010999999999999
Warmup Iters: 2
Fastest Mean: 0.012015649999999999
Slowest Mean: 0.012421204081632654
```
The units are just aliases for `StringLiteral`, so you can, for example:
```mojo
print(report.mean("ms"))
```

```
12.199145299145298
```
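Since `Unit.ms` is just an alias for the string `"ms"`, the two spellings are interchangeable:

```mojo
# Unit.ms and the plain string "ms" name the same unit.
print(report.mean(Unit.ms))
print(report.mean("ms"))
```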
`benchmark.run` takes four arguments that change its behavior. To set warmup iterations to 5:
```mojo
r = benchmark.run[sleeper](5)
```

```
0.012004808080808081
```
To set 1 warmup iteration, 2 max iterations, a min total time of 3 seconds, and a max total time of 4 seconds:
```mojo
r = benchmark.run[sleeper](1, 2, 3, 4)
```
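For readability, the same call can be sketched with keyword arguments. The parameter names below (`num_warmup`, `max_iters`, `min_runtime_secs`, `max_runtime_secs`) are assumptions inferred from the positional description above, not confirmed by this page:

```mojo
# Sketch only: parameter names are assumed from the positional description.
r = benchmark.run[sleeper](
    num_warmup=1, max_iters=2, min_runtime_secs=3, max_runtime_secs=4
)
```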
Note that the min total time takes precedence over max iterations.
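A quick way to see this precedence, as a minimal sketch (assuming `duration()` defaults to seconds, matching the report output above):

```mojo
# With max iterations 2 but a 3-second minimum, batching continues until
# the minimum time is met, so the total should be roughly 3 s or more.
print(r.duration())
```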
Structs

- `Batch`: A batch of benchmarks. The `benchmark.run()` function works out how many iterations to run in each batch based on how long the previous iterations took.
- `Report`: Contains the average execution time, iterations, and the min and max of each batch.
- `Unit`: Time unit used by the benchmark report.
Functions

- `run`: Benchmarks the function passed in as a parameter.