
I am trying to get a UTC POSIX timestamp with better resolution than seconds – milliseconds would be acceptable, but microseconds would be better. I need it as a synchronization reference for unrelated external counter/timer hardware, which counts nanoseconds from 0 at power-up.

Which means I want to capture, at some "absolute" time, a pair (my_absolute_utc_timestamp, some_counter_ns) so that I can interpret subsequent counter/timer values.

And I need at least millisecond precision. I'd like it to be an int value, so I have no problems with floating-point precision loss.
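To make the synchronization idea concrete, here is a minimal sketch of how such a pair would be used; the pair values and the counter_to_utc_us helper are hypothetical illustrations, not part of any API:

```python
# Hypothetical synchronization pair, captured at the same instant
# (both values are plain ints, so the arithmetic below is exact):
utc_us = 1_714_475_400_123_456   # absolute UTC Unix time in microseconds (made-up value)
counter_ns_at_sync = 5_000_000   # external HW counter at that instant, ns since power-up

def counter_to_utc_us(counter_ns: int) -> int:
    """Map a later HW counter reading back to absolute UTC microseconds."""
    return utc_us + (counter_ns - counter_ns_at_sync) // 1_000

# A counter reading taken 2 ms after the sync point:
print(counter_to_utc_us(counter_ns_at_sync + 2_000_000))  # → 1714475400125456
```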

What have I tried:

  1. time.time_ns()

    • I thought this was it, but it's local time.
  2. time.mktime(time.gmtime())

    • for some strange reason, this is one hour more than UTC time:
      >>> time.mktime(time.gmtime()) - datetime.datetime.utcnow().timestamp()
      3599.555135011673
      
    • and it has only seconds precision
  3. I ended up with int(datetime.datetime.utcnow().timestamp() * 1000000) as "utc_microseconds", which works, but:

    • there may be problems with floating-point precision.
    • it seems too complicated and I just don't like it.

Question:

Is there a better way to get a microsecond- or millisecond-resolution UTC POSIX timestamp in Python? Using the Python standard library is preferred.

I'm using Python 3.10.

  • The time.time_ns() method returns the number of seconds since the epoch which is a UTC value.
    – OldBoy
    Commented Apr 30 at 11:30
  • @OldBoy it's nanoseconds actually for time_ns(), and seconds for time(), with the decimals representing microseconds or whatever the system/OS allows Commented Apr 30 at 12:07
  • @FObersteiner both time and time_ns return LOCAL time; POSIX/Unix time has nothing to do with timezone. And you can check it: `time.time() - datetime.datetime.utcnow().timestamp() => 7199.999995470047` (i.e. a 2-hour difference, which is correct for my timezone)
    – Jan Spurny
    Commented Apr 30 at 12:08
  • @OldBoy - epoch has nothing to do with UTC
    – Jan Spurny
    Commented Apr 30 at 12:09
  • 4
    @JanSpurny This is incorrect. Please refer to the docs, "time in seconds since the epoch as a floating point number", with epoch being "the point where the time starts, the return value of time.gmtime(0). It is January 1, 1970, 00:00:00 (UTC) on all platforms". UTC, not local time. Commented Apr 30 at 12:09

1 Answer


time.time_ns() is the appropriate function for this.

It "returns time as an integer number of nanoseconds since the epoch." This is a "UTC timestamp", as the "epoch is the point where the time starts, the return value of time.gmtime(0). It is January 1, 1970, 00:00:00 (UTC) on all platforms."

Note that the source of the time information may offer less resolution than the function itself can express. Check time.get_clock_info('time') for details on the underlying clock; for example, on my macOS machine the clock is limited to microsecond resolution.
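Since the result is an integer, millisecond or microsecond timestamps fall out directly with floor division, with no float involved at any point (a minimal sketch):

```python
import time

ns = time.time_ns()    # int nanoseconds since the Unix epoch (an epoch defined in UTC)
us = ns // 1_000       # integer microseconds -- floor division, no floats involved
ms = ns // 1_000_000   # integer milliseconds

print(us, ms)
print(time.get_clock_info("time"))  # resolution of the underlying OS clock
```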

  • given that we're dealing with C types here from time.time_ns, i.e. limited to 64 bits for a signed integer, we're relatively limited in the range of datetimes we can represent with nanosecond Unix time. I wonder whether, within that range, the precision of a 64-bit float representing seconds (with the fraction in the decimals) actually becomes relevant? Commented Apr 30 at 16:53
  • ok, so it turns out that in practice it does make a difference. For instance, the maximum (nanosecond Unix time in this case) a 64-bit signed int can represent is 9_223_372_036_854_775_807. That would be 9_223_372_036_854_775 microseconds, or 9223372036.854775 seconds as a float. That number already cannot be represented correctly; the precision is down to 5 fractional digits, and you end up with 9223372036.854776 or 9223372036.854774, depending on the code used to generate the float. Commented Apr 30 at 18:18
  • 2
    @FObersteiner if you keep it as an integer, you should be able to get up to 2**63-1 ns. That puts you in 2262 which should be good enough for most purposes. Commented Apr 30 at 23:24
  • @MarkRansom ok, that comment I made was a bit theoretical. The question I had was under which circumstances it is actually beneficial to use time_ns(), given the limited range (which is still fine for most use cases). It turns out to be ok to use just time() if you can live with 10 µs resolution (it's only that "bad" at the edges of the range, btw). So if you just need "better-than-seconds" resolution, time() is fine. If you want "better-than-10µs" (again, this is range-specific), time_ns() is your friend. Commented May 1 at 9:44
  • @FObersteiner there is a large benefit in using integer time (at any resolution) – there is no loss of precision when adding, subtracting or multiplying; floats cannot offer this guarantee. Also, integer timestamps are easier for some embedded devices, which may not have hardware floating-point instructions.
    – Jan Spurny
    Commented May 2 at 7:29
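The precision effect discussed in the comments above is easy to reproduce (a sketch; the exact error depends on how the float is produced):

```python
max_ns = 2**63 - 1                  # largest ns timestamp a signed 64-bit int can hold (year 2262)
as_float_s = max_ns / 1e9           # the same instant as float seconds
round_trip = int(as_float_s * 1e9)  # back to nanoseconds via float64

print(max_ns - round_trip)          # nonzero: float64 has only ~15-16 significant digits
```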
