53

How do I get a microseconds timestamp in C?

I'm trying to do:

struct timeval tv;
gettimeofday(&tv,NULL);
return tv.tv_usec;

But this returns a nonsense value: if I get two timestamps, the second one can be smaller or bigger than the first (the second one should always be bigger). Would it be possible to convert the magic integer returned by gettimeofday to a normal number which can actually be worked with?

7
  • 11
    tv_usec is not "the current time in microseconds" but "current time in microseconds modulo 10^6".
    – ruslik
    Commented Apr 29, 2011 at 14:06
  • 1
    @ruslik How do I convert it to a "normal" number?
    – Kristina
    Commented Apr 29, 2011 at 14:08
  • 1
@Nick Brooks: well.. "normal" numbers have a bad habit of having a range, so there will always be a value for which there is no bigger value. I think you should review your algorithm. Try (sec2 - sec1)*1000000 + (usec2 - usec1). Or, better, to avoid possible overflow for the seconds, if you know that the period is smaller than a second: when the second value is smaller, just add 1000000 to it.
    – ruslik
    Commented Apr 29, 2011 at 14:11
  • 1
    I just need a microsecond timestamp to calculate scrolling inertia. I need something like time() but for microseconds.
    – Kristina
    Commented Apr 29, 2011 at 14:13
  • 1
    related - stackoverflow.com/questions/361363/… Commented Apr 29, 2011 at 14:35

9 Answers

59

You need to add in the seconds, too:

unsigned long time_in_micros = 1000000 * tv.tv_sec + tv.tv_usec;

Note that this will only last for about 2^32 / 10^6 ≈ 4295 seconds, or roughly 71 minutes, though (on a typical 32-bit system).

3
  • 17
    Use a 64 bit integer to store microseconds, surely
    – Havoc P
    Commented Apr 29, 2011 at 16:47
  • 3
    PROBLEM WITH SOLUTION: Because you don't start with tv_sec at zero, time_in_micros could roll-over at ANY time! There's no getting around needing 64 bits of time, whether as a single uint64_t or as two 32-bit values. At the very least, when you start the timing, save the tv_sec value from the start time and subtract it from all future timing values.
    – Tom West
    Commented Jul 19, 2012 at 20:32
  • 1
Letting the value roll over on a 32-bit system is not that bad if you know what you are doing. You can still get sane values from the calculation current_microseconds - start_time if the interval never gets larger than 71 minutes in your implementation.
    – Zouppen
    Commented Mar 12, 2013 at 9:33
25

You have two choices for getting a microsecond timestamp. The first (and best) choice, is to use the timeval type directly:

struct timeval GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv;
}

The second, and for me less desirable, choice is to build a uint64_t out of a timeval:

uint64_t GetTimeStamp() {
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv.tv_sec*(uint64_t)1000000+tv.tv_usec;
}
3
  • 2
    why (uint64_t)1000000 instead of 1000000ull? Commented Jul 28, 2014 at 16:56
  • 7
    @Mooing Duck, to me (a less experienced programmer than you), (uint64_t)1000000 actually is more readable than 1000000ull. Now that I've seen them together, the latter makes just as much sense, but I would have recognized the first one even as a much younger coder, whereas the latter one would have required I look it up to verify what it means. Commented Aug 30, 2015 at 2:49
  • 5
    ull does not specify the exact same number of bits on all systems. uintXX_t is good safety that you'll get exactly what you want.
    – kylefinn
    Commented Jun 19, 2017 at 22:26
15

Get a timestamp in C in microseconds?

Here is a generic answer pertaining to the title of this question:

How to get a simple timestamp in C

  1. in milliseconds (ms) with function millis(),
  2. microseconds (us) with micros(), and
  3. nanoseconds (ns) with nanos()

Quick summary: if you're in a hurry and using a Linux or POSIX system, jump straight down to the section titled "millis(), micros(), and nanos()", below, and just use those functions. If you're using C11 on a system that is not Linux or POSIX, you'll need to replace clock_gettime() in those functions with timespec_get().

2 main timestamp functions in C:

  1. C11: timespec_get() is part of the C11 or later standard, but doesn't allow choosing the type of clock to use. It also works in C++17. See documentation for std::timespec_get() here. However, for C++11 and later, I prefer to use a different approach where I can specify the resolution and type of the clock instead, as I demonstrate in my answer here: Getting an accurate execution time in C++ (micro seconds).

    The C11 timespec_get() solution is a bit more limited than the C++ solution in that you cannot specify the clock resolution nor the monotonicity (a "monotonic" clock is defined as a clock that only counts forwards and can never go or jump backwards--ex: for time corrections). When measuring time differences, monotonic clocks are desired to ensure you never count a clock correction jump as part of your "measured" time.

    The resolution of the timestamp values returned by timespec_get(), therefore, since we can't specify the clock to use, may be dependent on your hardware architecture, operating system, and compiler. An approximation of the resolution of this function can be obtained by rapidly taking 1000 or so measurements in quick-succession, then finding the smallest difference between any two subsequent measurements. Your clock's actual resolution is guaranteed to be equal to or smaller than that smallest difference.

    I demonstrate this in the get_estimated_resolution() function of my timinglib.c timing library intended for Linux.

  2. Linux and POSIX: Even better than timespec_get() in C is the Linux and POSIX clock_gettime() function, which also works fine in C++ on Linux or POSIX systems. clock_gettime() does allow you to choose the desired clock. You can read the specified clock resolution with clock_getres(), although that doesn't give you your hardware's true clock resolution either. Rather, it gives you the units of the tv_nsec member of the struct timespec. Use my get_estimated_resolution() function described just above and in my timinglib.c/.h files to obtain an estimate of the resolution.

So, if you are using C on a Linux or POSIX system, I highly recommend you use clock_gettime() over timespec_get().

C11's timespec_get() (ok) and Linux/POSIX's clock_gettime() (better):

Here is how to use both functions:

  1. C11's timespec_get()
    1. https://en.cppreference.com/w/c/chrono/timespec_get
    2. Works in C, but doesn't allow you to choose the clock to use.
    3. Full example, with error checking:
      #include <stdint.h> // `UINT64_MAX`
      #include <stdio.h>  // `printf()`
      #include <time.h>   // `timespec_get()`
      
      /// Convert seconds to nanoseconds
      #define SEC_TO_NS(sec) ((sec)*1000000000)
      
      uint64_t nanoseconds;
      struct timespec ts;
      int return_code = timespec_get(&ts, TIME_UTC);
      if (return_code == 0)
      {
          printf("Failed to obtain timestamp.\n");
          nanoseconds = UINT64_MAX; // use this to indicate error
      }
      else
      {
          // `ts` now contains your timestamp in seconds and nanoseconds! To 
          // convert the whole struct to nanoseconds, do this:
          nanoseconds = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
      }
      
  2. Linux/POSIX's clock_gettime() -- USE THIS ONE WHENEVER POSSIBLE!
    1. https://man7.org/linux/man-pages/man3/clock_gettime.3.html (best reference for this function) and:
    2. https://linux.die.net/man/3/clock_gettime
    3. Works in C on Linux or POSIX systems, and allows you to choose the clock to use!
      1. I choose the CLOCK_MONOTONIC_RAW clock, which is best for obtaining timestamps used to time things on your system.
      2. See definitions for all of the clock types here, too, such as CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, etc: https://man7.org/linux/man-pages/man3/clock_gettime.3.html
      3. Another popular clock to use is CLOCK_REALTIME. Do NOT be confused, however! "Realtime" does NOT mean that it is a good clock to use for "realtime" operating systems, or precise timing. Rather, it means it is a clock which will be adjusted to the "real time", or actual "world time", periodically, if the clock drifts. Again, do NOT use this clock for precise timing usages, as it can be adjusted forwards or backwards at any time by the system, outside of your control.
    4. Full example, with error checking:
      // This line **must** come **before** including <time.h> in order to
      // bring in the POSIX functions such as `clock_gettime()` from `<time.h>`!
      #define _POSIX_C_SOURCE 199309L
      
      #include <errno.h>  // `errno`
      #include <stdint.h> // `UINT64_MAX`
      #include <stdio.h>  // `printf()`
      #include <string.h> // `strerror(errno)`
      #include <time.h>   // `clock_gettime()` and `timespec_get()`
      
      /// Convert seconds to nanoseconds
      #define SEC_TO_NS(sec) ((sec)*1000000000)
      
      uint64_t nanoseconds;
      struct timespec ts;
      int return_code = clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
      if (return_code == -1)
      {
          printf("Failed to obtain timestamp. errno = %i: %s\n", errno, 
              strerror(errno));
          nanoseconds = UINT64_MAX; // use this to indicate error
      }
      else
      {
          // `ts` now contains your timestamp in seconds and nanoseconds! To 
          // convert the whole struct to nanoseconds, do this:
          nanoseconds = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
      }
      

millis(), micros(), and nanos():

Anyway, here are my millis(), micros(), and nanos() functions I use in C for simple timestamps and code speed profiling.

I am using the Linux/POSIX clock_gettime() function below. If you are using C11 or later on a system which does not have clock_gettime() available, simply replace all usages of clock_gettime(CLOCK_MONOTONIC_RAW, &ts) below with timespec_get(&ts, TIME_UTC) instead.

Get the latest version of my code here from my eRCaGuy_hello_world repo here:

  1. timinglib.h
  2. timinglib.c
// This line **must** come **before** including <time.h> in order to
// bring in the POSIX functions such as `clock_gettime()` from `<time.h>`!
#define _POSIX_C_SOURCE 199309L
        
#include <time.h>

/// Convert seconds to milliseconds
#define SEC_TO_MS(sec) ((sec)*1000)
/// Convert seconds to microseconds
#define SEC_TO_US(sec) ((sec)*1000000)
/// Convert seconds to nanoseconds
#define SEC_TO_NS(sec) ((sec)*1000000000)

/// Convert nanoseconds to seconds
#define NS_TO_SEC(ns)   ((ns)/1000000000)
/// Convert nanoseconds to milliseconds
#define NS_TO_MS(ns)    ((ns)/1000000)
/// Convert nanoseconds to microseconds
#define NS_TO_US(ns)    ((ns)/1000)

/// Get a time stamp in milliseconds.
uint64_t millis()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    uint64_t ms = SEC_TO_MS((uint64_t)ts.tv_sec) + NS_TO_MS((uint64_t)ts.tv_nsec);
    return ms;
}

/// Get a time stamp in microseconds.
uint64_t micros()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    uint64_t us = SEC_TO_US((uint64_t)ts.tv_sec) + NS_TO_US((uint64_t)ts.tv_nsec);
    return us;
}

/// Get a time stamp in nanoseconds.
uint64_t nanos()
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC_RAW, &ts);
    uint64_t ns = SEC_TO_NS((uint64_t)ts.tv_sec) + (uint64_t)ts.tv_nsec;
    return ns;
}

// NB: for all 3 timestamp functions above: gcc defines the type of the internal
// `tv_sec` seconds value inside the `struct timespec`, which is used
// internally in these functions, as a signed `long int`. For architectures
// where `long int` is 64 bits, that means it will have undefined
// (signed) overflow in 2^64 sec = 5.8455 x 10^11 years. For architectures
// where this type is 32 bits, it will occur in 2^32 sec = 136 years. If the
// implementation-defined epoch for the timespec is 1970, then your program
// could have undefined behavior signed time rollover in as little as
// 136 years - (year 2021 - year 1970) = 136 - 51 = 85 years. If the epoch
// was 1900 then it could be as short as 136 - (2021 - 1900) = 136 - 121 =
// 15 years. Hopefully your program won't need to run that long. :). To see,
// by inspection, what your system's epoch is, simply print out a timestamp and
// calculate how far back a timestamp of 0 would have occurred. Ex: convert
// the timestamp to years and subtract that number of years from the present
// year.

Timestamp Resolution:

On my x86-64 Linux Ubuntu 18.04 system with the gcc compiler, clock_getres() returns a resolution of 1 ns.

For both clock_gettime() and timespec_get(), I have also done empirical testing where I take 1000 timestamps rapidly, as fast as possible (see the get_estimated_resolution() function of my timinglib.c timing library), and look to see what the minimum gap is between timestamp samples. This reveals a range of ~14~26 ns on my system when using timespec_get(&ts, TIME_UTC) and clock_gettime(CLOCK_MONOTONIC, &ts), and ~75~130 ns for clock_gettime(CLOCK_MONOTONIC_RAW, &ts). This can be considered the rough "practical resolution" of these functions. See that test code in timinglib_get_resolution.c, and see the definition for my get_estimated_resolution() and get_specified_resolution() functions (which are used by that test code) in timinglib.c.

These results are hardware-specific, and your results on your hardware may vary.

References:

  1. The cppreference.com documentation sources I link to above.
  2. This answer here by @Ciro Santilli新疆棉花
  3. [my answer] my answer about usleep() and nanosleep() - it reminded me I needed to do #define _POSIX_C_SOURCE 199309L in order to bring in the clock_gettime() POSIX function from <time.h>!
  4. https://linux.die.net/man/3/clock_gettime
  5. https://man7.org/linux/man-pages/man3/clock_gettime.3.html
    1. Mentions the requirement for:

    _POSIX_C_SOURCE >= 199309L

    1. See definitions for all of the clock types here, too, such as CLOCK_REALTIME, CLOCK_MONOTONIC, CLOCK_MONOTONIC_RAW, etc.

See also:

  1. My shorter and less-thorough answer here, which applies only to ANSI/ISO C11 or later: How to measure time in milliseconds using ANSI C?
  2. My 3 sets of timestamp functions (cross-linked to each other):
    1. For C timestamps, see my answer here: Get a timestamp in C in microseconds?
    2. For C++ high-resolution timestamps, see my answer here: Here is how to get simple C-like millisecond, microsecond, and nanosecond timestamps in C++
    3. For Python high-resolution timestamps, see my answer here: How can I get millisecond and microsecond-resolution timestamps in Python?
  3. https://en.cppreference.com/w/c/chrono/clock
    1. POSIX clock_gettime(): https://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
  4. clock_gettime() on Linux: https://linux.die.net/man/3/clock_gettime
    1. Note: for C11 and later, you can use timespec_get(), as I have done above, instead of POSIX clock_gettime(). https://en.cppreference.com/w/c/chrono/clock says:

      use timespec_get in C11

    2. But, using clock_gettime() instead allows you to choose a desired clock ID for the type of clock you want! See also here: ***** https://people.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html

Todo:

  1. ✓ DONE AS OF 3 Apr. 2022: Since timespec_getres() isn't supported until C23, update my examples to include one which uses the POSIX clock_gettime() and clock_getres() functions on Linux. I'd like to know precisely how good the clock resolution is that I can expect on a given system. Is it ms-resolution, us-resolution, ns-resolution, something else? For reference, see:
    1. https://linux.die.net/man/3/clock_gettime
    2. https://people.cs.rutgers.edu/~pxk/416/notes/c-tutorials/gettime.html
    3. https://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_getres.html
    4. Answer: clock_getres() returns 1 ns, but the actual resolution is about 14~27 ns, according to my get_estimated_resolution() function here: https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/c/timinglib.c. See the results here:
      1. https://github.com/ElectricRCAircraftGuy/eRCaGuy_hello_world/blob/master/c/timinglib_get_resolution.c#L46-L77
      2. Activate the Linux SCHED_RR soft real-time round-robin scheduler for the best and most-consistent timing possible. See my answer here regarding clock_nanosleep(): How to configure the Linux SCHED_RR soft real-time round-robin scheduler so that clock_nanosleep() can have improved resolution of ~4 us down from ~ 55 us.
8

struct timeval contains two components: seconds and microseconds. A timestamp with microsecond precision is represented as seconds since the epoch stored in the tv_sec field and the fractional microseconds in tv_usec. Thus you cannot just ignore tv_sec and expect sensible results.

If you use Linux or *BSD, you can use timersub() to subtract two struct timeval values, which might be what you want.

0
7

timespec_get from C11

Returns with precision of up to nanoseconds, rounded to the resolution of the implementation.

#include <time.h>
struct timespec ts;
timespec_get(&ts, TIME_UTC);
struct timespec {
    time_t   tv_sec;        /* seconds */
    long     tv_nsec;       /* nanoseconds */
};

See more details in my other answer here: How to measure time in milliseconds using ANSI C?

1
  • 1
    Thanks. I expanded this into 3 separate functions: millis(), micros(), and nanos(), in my answer here, for timestamps in milliseconds, microseconds, and nanoseconds, respectively. Commented May 28, 2021 at 2:14
2

But this returns some nonsense value that if I get two timestamps, the second one can be smaller or bigger than the first (second one should always be bigger).

What makes you think that? The value is probably OK. It’s the same situation as with seconds and minutes – when you measure time in minutes and seconds, the number of seconds rolls over to zero when it gets to sixty.

To convert the returned value into a “linear” number you could multiply the number of seconds and add the microseconds. But if I count correctly, one year is about 1e6*60*60*24*360 μsec and that means you’ll need more than 32 bits to store the result:

$ perl -E '$_=1e6*60*60*24*360; say int log($_)/log(2)'
44

That’s probably one of the reasons to split the original returned value into two pieces.

0

use an unsigned long long (i.e. a 64-bit unsigned integer) to represent the system time:

#include <sys/time.h>

typedef unsigned long long u64;

u64 u64useconds;
struct timeval tv;

gettimeofday(&tv,NULL);
u64useconds = ((u64)tv.tv_sec * 1000000) + tv.tv_usec; // cast first to avoid 32-bit overflow
0

Better late than never! This little programme can be used as the quickest way to get time stamp in microseconds and calculate the time of a process in microseconds:

#include <sys/time.h>
#include <stdio.h>
#include <time.h>

struct timeval GetTimeStamp() 
{
    struct timeval tv;
    gettimeofday(&tv,NULL);
    return tv;
}

int main()
{
    struct timeval tv = GetTimeStamp(); // Get the start time
    long long start_micros = 1000000LL * tv.tv_sec + tv.tv_usec; // Start time in microseconds

    getchar(); // Replace this line with the process that you need to time

    tv = GetTimeStamp(); // Get the end time once, so seconds and microseconds match
    long long end_micros = 1000000LL * tv.tv_sec + tv.tv_usec;
    printf("Elapsed time: %lld microseconds\n", end_micros - start_micros);
}

You can replace getchar() with a function/process. Finally, instead of printing the difference you can store it in a signed long. The programme works fine in Windows 10.

-1

First, note the range of the microseconds field: 000000 to 999999 (1,000,000 microseconds equals 1 second). tv.tv_usec holds a value from 0 to 999999 with no leading zeros, so if you print it naively next to the seconds you may display 2.1 seconds when you actually mean 2.000001 seconds. It is better to zero-pad the microseconds field when printing:

printf("%ld.%06ld\n", (long)tv.tv_sec, (long)tv.tv_usec); // %06ld zero-pads to 6 digits
