I'm wondering what the resolution limits of time are. I have a program that always reports exactly 20 ms, so I assume that is the smallest interval it can measure, but I'd like to find documentation confirming this.
3 Answers
The shortest time interval it can measure is 1 jiffy, which is the inverse of the frequency specified in the build options for the kernel (CONFIG_HZ).
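If you want to check the tick rate on a given machine, a couple of commands can help. Note that getconf CLK_TCK reports the userspace USER_HZ value (traditionally 100), which is not necessarily the same as the kernel's internal CONFIG_HZ:

```shell
# USER_HZ: clock ticks per second as exposed to userspace (times(2), /proc).
# Traditionally 100; this is what "clock ticks" means in the man pages.
getconf CLK_TCK

# The kernel's internal tick rate, if the build config happens to be
# installed under /boot (it is not on every distribution):
grep 'CONFIG_HZ=' "/boot/config-$(uname -r)" 2>/dev/null || true
```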
time(1) leads to times(2), whose DESCRIPTION section says "All times reported are in clock ticks." From there the trail leads to pages that talk about HZ, but that config setting is made obsolete by CONFIG_NO_HZ. So... not sure where to go from there. Commented Apr 15, 2010 at 0:43
I agree with Ignacio's response however I believe it misses a critical point. Although a jiffy is theoretically the smallest unit it can measure, sometimes very short durations are inaccurate because the underlying hardware does not measure changes in time that quickly. In my experience, anything less than one millisecond cannot accurately be compared to something else (although that figure could be as high as 5 or 10 milliseconds). If you are trying to benchmark a specific operation or program, consider having it run many hundreds or thousands of times then dividing that total time by the number of iterations to find a more accurate value.
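The averaging approach above can be sketched as a shell loop. This is only an illustration: it relies on GNU date's nanosecond %N format, and /bin/true stands in for whatever program you are actually benchmarking:

```shell
#!/bin/sh
# Time many iterations and divide, rather than trusting one short run.
# /bin/true is a placeholder for the real command under test.
n=1000
start=$(date +%s%N)            # nanoseconds since the epoch (GNU date)
i=0
while [ "$i" -lt "$n" ]; do
    /bin/true
    i=$((i + 1))
done
end=$(date +%s%N)
echo "average: $(( (end - start) / n )) ns per iteration"
```

Fork/exec overhead from the loop itself is included in the total, so this is best for comparing two commands against each other rather than measuring absolute cost.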
Actually, though the jiffy is a software construct, the underlying hardware does much better: grep resolution /proc/timer_list. Commented Apr 15, 2010 at 1:50
Try this:
gcc -o timetest -x c - <<< "int main() {}"; time ./timetest
On my (old and slow) system, subsequent runs of:
time ./timetest
finish in as little as:
real 0m0.005s
user 0m0.004s
sys 0m0.000s
Note: This is as reported by Bash's builtin time. Using /usr/bin/time only reports to hundredths of a second and says "0.00". The results from the zsh builtin are similar. The ksh builtin shows the lowest time (0.000 or 0.001 real). Commented Apr 15, 2010 at 4:29