The Benchmark::Timer class allows you to time portions of code conveniently, as well as to benchmark code by timing repeated trials. It is ideal when you need more precise information about the running time of portions of your code than the Benchmark module will give you, but do not want to go all out and profile your code.
The methodology is simple: create a Benchmark::Timer object and wrap the portions of code you want to benchmark with 'start()' and 'stop()' method calls. You can supply a tag to those methods if you plan to time multiple portions of code. If you provide error and confidence values, you can also use 'need_more_samples()' to determine, statistically, whether you need to collect more data.
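A minimal sketch of that flow in Perl is shown below. The tag name 'work', the trial counts, and the placeholder routine do_work() are illustrative only; the confidence/error constructor keys follow the module's documented synopsis.

```perl
use strict;
use warnings;
use Benchmark::Timer;

sub do_work {                      # hypothetical placeholder workload
    my $sum = 0;
    $sum += $_ for 1 .. 10_000;
    return $sum;
}

# Fixed number of trials: wrap the code of interest in start()/stop()
# calls, tagging it so several sections could be timed independently.
my $t = Benchmark::Timer->new;
for (1 .. 100) {
    $t->start('work');
    do_work();
    $t->stop('work');
}

# Statistical stopping rule: keep sampling until the timing for 'work'
# is within 2% error at 95% confidence.
my $ct = Benchmark::Timer->new(confidence => 95, error => 2);
while ($ct->need_more_samples('work')) {
    $ct->start('work');
    do_work();
    $ct->stop('work');
}
print $ct->report;
```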
After you have run your code, you can obtain information about the running time by calling the 'results()' method, or get a descriptive benchmark report by calling 'report()'. If you run your code over multiple trials, the average time is reported. This is wonderful for benchmarking time-critical portions of code in a rigorous way. You can also optionally choose to skip any number of initial trials to cut down on initial case irregularities.
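Continuing the sketch above, the reporting step might look like the following. The skip count, tag, and do_work() routine are again illustrative, and the treatment of results() as returning tag/mean-time pairs in list context is an assumption rather than a guarantee.

```perl
use strict;
use warnings;
use Benchmark::Timer;

sub do_work { my $s = 0; $s += $_ for 1 .. 10_000; return $s }

# skip => 2 discards the first two trials for each tag, so start-up
# irregularities do not distort the reported average.
my $t = Benchmark::Timer->new(skip => 2);
for (1 .. 50) {
    $t->start('work');
    do_work();
    $t->stop('work');
}

print $t->report;            # per-tag summary of the averaged trials

# results() is assumed here to return tag => mean-time pairs in list context.
my %mean = $t->results;
printf "mean time for 'work': %.6f s\n", $mean{'work'};
```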
Package Version | Update ID | Released | Package Hub Version | Platforms | Subpackages
---|---|---|---|---|---
0.7112-bp156.3.1 | GA Release | 2023-07-22 | 15 SP6 | |
0.7112-bp155.2.8 | GA Release | 2023-05-17 | 15 SP5 | |
0.7112-bp154.1.13 | GA Release | 2022-05-09 | 15 SP4 | |
0.7107-bp153.1.14 | GA Release | 2021-03-06 | 15 SP3 | |
0.7107-bp152.3.13 | GA Release | 2020-04-16 | 15 SP2 | |
0.7107-bp151.3.1 | GA Release | 2019-07-17 | 15 SP1 | |
0.7107-bp151.2.13 | GA Release | 2019-05-18 | 15 SP1 | |
0.7107-bp150.2.4 | GA Release | 2018-07-30 | 15 | |