
I'm seeking clarification regarding the usage of timestamp without time zone in a database and its corresponding Java type, LocalDateTime. The current setup I'm working on involves servers and (PostgreSQL) databases running exclusively in UTC.

I've encountered numerous recommendations suggesting the use of timestamp with time zone in the database paired with OffsetDateTime in Java. However, I fail to see the necessity for this change in our scenario, given the following considerations:

  • Both the servers and the database operate in UTC
  • When user-agents submit date/time data, it is either already converted to UTC or accompanied by the offset from UTC
  • If submitted with an offset, the data is converted to UTC on the server side before storage, and the offset information is then discarded (see the sketch after this list)
  • All data is returned to user-agents in UTC, leaving them responsible for converting it to their respective time zones
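
To make the third bullet concrete, here is a minimal sketch of that server-side normalization, assuming java.time and a timestamp without time zone column; the class and method names are invented for illustration:

    import java.time.LocalDateTime;
    import java.time.OffsetDateTime;
    import java.time.ZoneOffset;

    // Hypothetical boundary helper: normalize an incoming client value to UTC
    // before it is bound to a timestamp without time zone column as LocalDateTime.
    public final class IncomingTimestamps {

        // The client sent a value with an explicit offset, e.g. 2024-02-08T17:30+01:00.
        static LocalDateTime toUtcForStorage(OffsetDateTime fromClient) {
            // Shift the instant to offset +00:00, then drop the now-redundant offset.
            return fromClient.withOffsetSameInstant(ZoneOffset.UTC).toLocalDateTime();
        }
    }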

While I grasp the utility of OffsetDateTime, in our case I don't see any practical use for it, or any need to store time zone information other than UTC in the database.

Could someone provide insight into whether this approach is sound, and if not, why timestamp with time zone and OffsetDateTime might be preferable despite our system's UTC-centric nature?

  • I think you need to clarify something. Are you questioning whether you need to use an offset, whether you should use a timezone, or both? If both the app and DB are using UTC, the offset would be 0, i.e., no offset.
    – JimmyJames
    Commented Feb 8 at 22:06
  • 1
    Just want to make sure you are clear that UTC is a time zone. There's a difference between storing a date(time) in UTC and storing a date(time) with no timezone. The latter is unusual but sometimes warranted.
    – JimmyJames
    Commented Feb 8 at 22:51
  • 1
    While UTC-only is a commendable approach, this requires that you have a clear system boundary where you convert between localized and UTC timestamps. Storing an explicit timezone offset with your timestamps is really helpful to avoid accidental misinterpretation, and is typically worth the wasted storage. One failure mode I suffered recently was that our databases and servers assume UTC, but that tests failed on developer machines because some timestamps were interpreted in the local timezone.
    – amon
    Commented Feb 8 at 23:29
  • 1
    That’s why you have tests. Bugs in code under development are no reason to make everything more complicated. How can you “accidentally misinterpret” “all times are in UTC”?
    – gnasher729
    Commented Feb 9 at 0:17
  • 1
    There is - obviously - nothing inherently wrong in designing a system where all stored time stamps follow the convention of using the same time zone, like UTC. Why would you store an additional date-time-offset or time zone information whene there is no code in your system which uses it?
    – Doc Brown
    Commented Feb 9 at 6:32

1 Answer


Storing a date in UTC, POSIX time, or any other universal time scale means that everyone in the world can easily agree that two events happened at the same time, or that one happened before the other, and how much before.

If you want to convert a universal time to the time that your wristwatch shows, you convert it to local time.
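
In java.time terms, that conversion looks roughly like this (the stored value and the zone are only examples):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class WristwatchTime {
        public static void main(String[] args) {
            // What the database holds: a point on the universal timeline (UTC).
            Instant stored = Instant.parse("2024-02-08T22:06:00Z");

            // What the user's wristwatch shows: the same instant in their zone.
            ZonedDateTime local = stored.atZone(ZoneId.of("Europe/Berlin"));
            System.out.println(local); // 2024-02-08T23:06+01:00[Europe/Berlin]
        }
    }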

If it is of real importance which time of the local day something happened, then you can also store the time zone. That is rare, but it is needed to find out which day someone was born (convert UTC to the local time of the birthplace), whether you paid your taxes this month, and so on. If you store the time zone, don't change the timestamp itself: leave it in UTC.
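
One way to model that in Java, assuming the zone is kept in a separate column next to the untouched UTC timestamp (the values here are invented):

    import java.time.Instant;
    import java.time.LocalDate;
    import java.time.ZoneId;

    public class BirthDay {
        public static void main(String[] args) {
            // Stored as two columns: the instant in UTC, plus the relevant zone.
            Instant birthInstant = Instant.parse("2000-01-01T03:30:00Z");
            ZoneId birthplaceZone = ZoneId.of("America/Los_Angeles");

            // The UTC value is never rewritten; the zone is applied only on read
            // to answer "which local day was that?".
            LocalDate localBirthDay = birthInstant.atZone(birthplaceZone).toLocalDate();
            System.out.println(localBirthDay); // 1999-12-31
        }
    }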

You will need functions to store UTC with an optional time zone, read UTC with an optional time zone, and convert between local time and universal time, or between some time zone and universal time.

You will need date/time arithmetic based on local time. Take a UTC timestamp just before the end of February 28th. Adding a month or adding 28 days may give the same result or very different ones, depending on the local time zone and leap years. If you're lucky, your standard library does that for you.
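
A small sketch of that effect; the dates are chosen to fall in a leap year and the zones are arbitrary:

    import java.time.ZoneId;
    import java.time.ZoneOffset;
    import java.time.ZonedDateTime;

    public class EndOfFebruary {
        public static void main(String[] args) {
            // Just before the end of February 28th, 2024 (a leap year), in UTC.
            ZonedDateTime utc = ZonedDateTime.of(2024, 2, 28, 23, 30, 0, 0, ZoneOffset.UTC);

            // The same instant is already February 29th in Tokyo.
            ZonedDateTime tokyo = utc.withZoneSameInstant(ZoneId.of("Asia/Tokyo"));

            System.out.println(utc.plusMonths(1).toLocalDate());   // 2024-03-28
            System.out.println(utc.plusDays(28).toLocalDate());    // 2024-03-27
            System.out.println(tokyo.plusMonths(1).toLocalDate()); // 2024-03-29
            System.out.println(tokyo.plusDays(28).toLocalDate());  // 2024-03-28
        }
    }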

  • "universal time scale means that everyone in the world can easily agree that two events happened at the same time, or that one happened before the other, and how much before." - as tempting as it is to believe, this is false. For such a guarantee, they not only need to be using the same scale, but also the same clock, which is very unlikely to be the normal case even for any two computers in the same room, let alone machinery distributed around the world. Timestamps alone, uncorroborated, should never be used to determine that one thing happened before another.
    – Steve
    Commented Feb 10 at 19:37