
Is it possible to benchmark programs in Rust? If yes, how? For example, how would I get the execution time of a program in seconds?


9 Answers


For measuring time without adding third-party dependencies, you can use std::time::Instant:

fn main() {
    use std::time::Instant;
    let now = Instant::now();

    // Code block to measure.
    {
        my_function_to_measure();
    }

    let elapsed = now.elapsed();
    println!("Elapsed: {:.2?}", elapsed);
}
  • Is it as precise as time crate's precise_time_ns?
    – kolen
    Commented Apr 9, 2019 at 0:09
  • Note that you can also simply output a Duration via its Debug impl. Example: println!("{:.2?}", elapsed).
    Commented Aug 3, 2019 at 18:53
  • Why is my_function_to_measure(); in its own block (enclosed in { })? Is it necessary?
    – pt1
    Commented Oct 29, 2021 at 8:12
  • @pt1 It's not; it was added for clarity, to show that anything enclosed in that block is separate from the time measurement (updated with a comment).
    – ideasman42
    Commented Jan 4, 2022 at 2:12

It might be worth noting two years later (to help any future Rust programmers who stumble on this page) that there are now tools to benchmark Rust code as a part of one's test suite.

Using the #[bench] attribute, one can use the standard Rust tooling to benchmark methods in their code:

#![feature(test)]
extern crate test;

use test::Bencher;

#[bench]
fn bench_xor_1000_ints(b: &mut Bencher) {
    b.iter(|| {
        // Use `test::black_box` to prevent the compiler from optimizing
        // away unused values.
        test::black_box((0u32..1000).fold(0, |old, new| old ^ new));
    });
}

For the command cargo bench this outputs something like:

running 1 test
test bench_xor_1000_ints ... bench:       375 ns/iter (+/- 148)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured

  • This, apparently, is going away. I'm still searching for what I should use.
    – Cogman
    Commented Apr 25, 2016 at 23:41
  • Why do you think it is going away, exactly?
    Commented Jun 26, 2017 at 17:11
  • The link to the book should now be to what's called the "Nightly book": doc.rust-lang.org/nightly/unstable-book/library-features/…
    Commented Jun 26, 2017 at 17:12
  • Note the current version of Bencher is marked unstable and not usable from stable Rust. It is currently only available in the nightly version.
    – MaxB
    Commented May 16, 2018 at 16:02
  • Just updating, after A LONG time: #[bench] is still considered unstable and there is speculation about it being deprecated. As of right now, I would suggest using criterion. If you want to know more about #[bench], I would suggest starting down the rabbit hole at what I believe to be the first thread tracking this.
    – TDiblik
    Commented Jan 29, 2022 at 18:35

There are several ways to benchmark your Rust program. For most real benchmarks, you should use a proper benchmarking framework as they help with a couple of things that are easy to screw up (including statistical analysis). Please also read the "Why writing benchmarks is hard" section at the very bottom!


Quick and easy: Instant and Duration from the standard library

To quickly check how long a piece of code runs, you can use the types in std::time. The module is fairly minimal, but it is fine for simple time measurements. You should use Instant instead of SystemTime as the former is a monotonically increasing clock and the latter is not. Example (Playground):

use std::time::Instant;

let before = Instant::now();
workload();
println!("Elapsed time: {:.2?}", before.elapsed());

The underlying platform-specific implementations of std's Instant are specified in the documentation. In short: currently (and probably forever) you can assume that it uses the best precision the platform can provide (or something very close to it). From my measurements and experience, this is typically around 20 ns.

If std::time does not offer enough features for your case, you could take a look at chrono. However, for measuring durations, it's unlikely you need that external crate.


Using a benchmarking framework

Using frameworks is often a good idea, because they try to prevent you from making common mistakes.

Rust's built-in benchmarking framework (nightly only)

Rust has a convenient built-in benchmarking feature, which is unfortunately still unstable as of 2019-07. You have to add the #[bench] attribute to your function and make it accept one &mut test::Bencher argument:

#![feature(test)]

extern crate test;
use test::Bencher;

#[bench]
fn bench_workload(b: &mut Bencher) {
    b.iter(|| workload());
}

Executing cargo bench will print:

running 1 test
test bench_workload ... bench:      78,534 ns/iter (+/- 3,606)

test result: ok. 0 passed; 0 failed; 0 ignored; 1 measured; 0 filtered out

Criterion

The crate criterion is a framework that runs on stable, but it is a bit more complicated than the built-in solution. It does more sophisticated statistical analysis, offers a richer API, produces more information and can even automatically generate plots.

See the "Quickstart" section for more information on how to use Criterion.


Why writing benchmarks is hard

There are many pitfalls when writing benchmarks. A single mistake can make your benchmark results meaningless. Here is a list of important but commonly forgotten points:

  • Compile with optimizations: rustc -O (or -C opt-level=3) or cargo build --release. When you are executing your benchmarks with cargo bench, Cargo will automatically enable optimizations. This step is important, as there are often large performance differences between optimized and unoptimized Rust code.

  • Repeat the workload: only running your workload once is almost always useless. There are many things that can influence your timing: overall system load, the operating system doing stuff, CPU throttling, file system caches, and so on. So repeat your workload as often as possible. For example, Criterion runs every benchmark for at least 5 seconds (even if the workload only takes a few nanoseconds). All measured times can then be analyzed, with mean and standard deviation being the standard tools.

  • Make sure your benchmark isn't completely removed: benchmarks are very artificial by nature. Usually, the result of your workload is not inspected as you only want to measure the duration. However, this means that a good optimizer could remove your whole benchmark because it does not have side-effects (well, apart from the passage of time). So to trick the optimizer, you have to somehow use your result value so that your workload cannot be removed. An easy way is to print the result. A better solution is something like black_box. This function basically hides a value from LLVM in that LLVM cannot know what will happen with the value. Nothing happens, but LLVM doesn't know. That is the point.

    Good benchmarking frameworks use a black box in several situations. For example, the closure given to the iter method (for both the built-in and the Criterion Bencher) can return a value. That value is automatically passed into a black_box.

  • Beware of constant values: similarly to the point above, if you specify constant values in a benchmark, the optimizer might generate code specifically for that value. In extreme cases, your whole workload could be constant-folded into a single constant, meaning that your benchmark is useless. Pass all constant values through black_box to avoid LLVM optimizing too aggressively.

  • Beware of measurement overhead: measuring a duration takes time itself. That is usually only tens of nanoseconds, but can influence your measured times. So for all workloads that are faster than a few tens of nanoseconds, you should not measure each execution time individually. You could execute your workload 100 times and measure how long all 100 executions took. Dividing that by 100 gives you the average single time. The benchmarking frameworks mentioned above also use this trick. Criterion also has a few methods for measuring very short workloads that have side effects (like mutating something).

  • Many other things: sadly, I cannot list all difficulties here. If you want to write serious benchmarks, please read more online resources.
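The black_box and measurement-overhead points above can be sketched with a small stable-Rust harness (std::hint::black_box has been stable since Rust 1.66); the XOR workload and the iteration count of 100 are made up for illustration:

```rust
use std::hint::black_box;
use std::time::Instant;

// Made-up workload: XOR-fold a range of integers.
fn workload(n: u64) -> u64 {
    (0..n).fold(0, |acc, i| acc ^ i)
}

fn main() {
    const ITERS: u32 = 100;

    let start = Instant::now();
    for _ in 0..ITERS {
        // black_box on the input stops the optimizer from constant-folding
        // the workload; black_box on the result stops it from removing the
        // call entirely because the value is unused.
        black_box(workload(black_box(1000)));
    }
    // Measure once around all iterations, then divide, to amortize the
    // measurement overhead over many runs.
    let avg = start.elapsed() / ITERS;
    println!("Average over {ITERS} runs: {avg:.2?}");
}
```

This is only a rough sketch; a real benchmarking framework additionally does warm-up runs and statistical analysis.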

  • The documentation for Instant says that times aren't guaranteed to be steady, which means it's not reliable as a timer to see how long something took. Some interface to something like Linux's clock_gettime with CLOCK_MONOTONIC_RAW would be better.
    – nmichaels
    Commented Sep 14, 2019 at 21:07
  • Idiomatic way of performance evaluation? mentions some other pitfalls: failure to do warm-up runs, including soft page-fault overhead on the first pass through an array, and the delay in the CPU jumping up to max frequency, kind of the opposite problem from throttling after running at max turbo for some time.
    Commented Jun 8, 2020 at 10:12
  • before.elapsed() returns what? Seconds, milliseconds, nanoseconds? Otherwise a great answer!
    – BitTickler
    Commented Jun 2, 2023 at 22:54
  • @BitTickler The documentation of Instant is linked right above the code snippet you refer to. There you can see that elapsed returns Duration, a custom type, and not just a float or something like that. The code as posted (using {:?} for formatting) also outputs said duration in a convenient unit, depending on how large the duration is.
    Commented Jun 4, 2023 at 8:28
  • @nmichaels If you look at the underlying system calls: clock_gettime is what is used on a Unix OS. The call to clock_gettime itself (independent of Rust) takes variable time to execute (e.g. whether the fetched instructions were cached by the chip). I'm sure other elements can also impact small scales (e.g. heat), or special cases (e.g. full system suspension). Besides that, Instant is a cross-platform API; its guarantees will not exceed the weakest of any platform [though target-specific compilation may help].
    – Ethan S-L
    Commented Jun 6 at 15:36

If you simply want to time a piece of code, you can use the time crate. time has meanwhile been deprecated, though; a follow-up crate is chrono.

Add time = "*" to your Cargo.toml.

Add

extern crate time;
use time::PreciseTime;

before your main function and

let start = PreciseTime::now();
// whatever you want to do
let end = PreciseTime::now();
println!("{} seconds for whatever you did.", start.to(end));

Complete example

Cargo.toml

[package]
name = "hello_world" # the name of the package
version = "0.0.1"    # the current version, obeying semver
authors = [ "[email protected]" ]
[[bin]]
name = "rust"
path = "rust.rs"
[dependencies]
rand = "*" # Or a specific version
time = "*"

rust.rs

extern crate rand;
extern crate time;

use rand::Rng;
use time::PreciseTime;

fn main() {
    // Create a vector of 10000000 random integers
    //let mut array: [i32; 10000000] = [0; 10000000];
    let n = 10000000;
    let mut array = Vec::new();

    // Fill the array
    let mut rng = rand::thread_rng();
    for _ in 0..n {
        //array[i] = rng.gen::<i32>();
        array.push(rng.gen::<i32>());
    }

    // Sort
    let start = PreciseTime::now();
    array.sort();
    let end = PreciseTime::now();

    println!("{} seconds for sorting {} integers.", start.to(end), n);
}
  • The time crate is now apparently deprecated.
    – nbro
    Commented Nov 5, 2018 at 13:52
  • The chrono crate is probably what you are looking for, following the deprecation of the time crate. (Do not confuse the time crate with the std::time module from the standard library.)
    – ElazarR
    Commented Jan 3, 2019 at 17:20
  • This is a bit confusing: the first paragraph mentions that chrono should be used, but it looks like the code below still uses time?
    – bluenote10
    Commented Nov 9, 2019 at 10:53

This answer is outdated! The time crate does not offer any advantages over std::time with regard to benchmarking. Please see the answers below for up-to-date information.


You might try timing individual components within the program using the time crate.

  • I particularly like precise_time_ns and precise_time_s.
    – Eric Holk
    Commented Nov 20, 2012 at 22:28
  • @nbro Not for long! A full rewrite is in progress.
    – jhpratt
    Commented Nov 10, 2019 at 9:04
  • The time crate has now been released in version 0.2.0 and is not deprecated anymore. However, AFAIK it has no benefit over std::time when it comes to measuring for benchmarking purposes. Therefore I would prefer to keep the "This answer is outdated" warning at the top. There are many better answers below.
    Commented Dec 22, 2019 at 9:06
  • @LukasKalbertodt I am the author of the newly released time crate. There's zero reason to use it for benchmarking, especially given the built-in (albeit nightly) APIs.
    – jhpratt
    Commented Dec 25, 2019 at 23:17

A quick way to find out the execution time of a program, regardless of implementation language, is to run time prog on the command line. For example:

~$ time sleep 4

real    0m4.002s
user    0m0.000s
sys     0m0.000s

The most interesting measurement is usually user, which measures the actual amount of work done by the program, regardless of what's going on in the system (sleep is a pretty boring program to benchmark). real measures the actual time that elapsed, and sys measures the amount of work done by the OS on behalf of the program.

  • This assumes you're only timing the entire execution; sometimes you want to time specific steps too, especially if the program is interactive.
    – ideasman42
    Commented Feb 17, 2019 at 6:15

Currently, there is no interface to any of the following Linux functions:

  • clock_gettime(CLOCK_PROCESS_CPUTIME_ID, &ts)
  • getrusage
  • times (manpage: man 2 times)

The available ways to measure the CPU time and hotspots of a Rust program on Linux are:

  • /usr/bin/time program
  • perf stat program
  • perf record --freq 100000 program; perf report
  • valgrind --tool=callgrind program; kcachegrind callgrind.out.*

The output of perf report and valgrind depends on the availability of debugging information in the program; without it, they may not produce useful results.
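Note that Cargo strips debug info from release builds by default. One way to keep symbols for perf and callgrind (a common approach, assuming a Cargo project, though not the only one) is to enable debug info for the release profile in Cargo.toml:

```toml
# Keep optimizations, but also emit debug info so profilers can resolve symbols
[profile.release]
debug = true
```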


I created a small crate for this (measure_time), which logs or prints the time until end of scope.

#[macro_use]
extern crate measure_time;
fn main() {
    print_time!("measure function");
    do_stuff();
}

Another solution for measuring execution time is to create a custom type, for example a struct, and implement the Drop trait for it.

For example:

struct Elapsed(&'static str, std::time::SystemTime);

impl Drop for Elapsed {
    fn drop(&mut self) {
        println!(
            "operation {} finished for {} ms",
            self.0,
            self.1.elapsed().unwrap_or_default().as_millis()
        );
    }
}

impl Elapsed {
    pub fn start(op: &'static str) -> Elapsed {
        let now = std::time::SystemTime::now();

        Elapsed(op, now)
    }
}

And using it in some function:

fn some_heavy_work() {
    let _exec_time = Elapsed::start("some_heavy_work_fn");

    // Here's some code.
}

When the function ends, the drop method for _exec_time will be called and the message will be printed.
