ID | info
---------
1  | xxx
2  | xxx
4  | xxx

I have a setup similar to this in PostgreSQL. (I'm using a Mac running Mojave, if that makes a difference.)

I have a few more columns, but the issue is the ID column. It was set up to be sequential, and I insert into the DB from Express.

The problem is that the ID is incremented on every insert, even failing ones. info is defined UNIQUE, so trying to insert an existing info returns an error, but the ID is still incremented.

For example, if I insert 2 rows, I have id 1 & id 2. Then another insert fails with a unique violation. The next successful insert gets id = 4, skipping id = 3.

Can I set up Postgres so it doesn't increment on failed inserts?

  • You might provide your Postgres version and the actual INSERT statement you use. And what's the load on your database? Concurrent write access or not? Commented Dec 6, 2018 at 0:40
  • Sequences (not only in PostgreSQL) are designed to allow multiple users to insert unique values at the same time (e.g. user #1 starts a transaction, inserts data and then does something else, then user #2 starts a transaction, inserts data and commits, and finally user #1 commits). There would be little use in developing something gap-less, as a simple DELETE would break it anyway.
    – FXD
    Commented Dec 6, 2018 at 0:41

1 Answer


serial columns, or IDENTITY in Postgres 10 or later, draw numbers from a SEQUENCE and gaps are to be expected. Their job is to make concurrent write access possible with unique numbers - not necessarily gap-less numbers.
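For reference, the kind of table assumed throughout might look like this; the table and column names are placeholders matching the question, and the question may well use serial instead of IDENTITY:

CREATE TABLE tbl (
  id   bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY  -- draws numbers from an implicit SEQUENCE
, info text UNIQUE NOT NULL
);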

If you don't actually have concurrent write access, there are simple ways to achieve (mostly) gap-less numbers. Like:

INSERT INTO tbl (info) 
SELECT 'xxx'
WHERE NOT EXISTS (SELECT FROM tbl WHERE info = 'xxx');

That doesn't burn a serial ID from the SEQUENCE because the duplicate insert is skipped. (The INSERT might still fail for any other reason and burn a serial number.) You could reset the SEQUENCE in such a case:
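A minimal sketch, assuming the default sequence name tbl_id_seq and at least one row in the table (pg_get_serial_sequence('tbl', 'id') returns the actual sequence name if in doubt):

SELECT setval('tbl_id_seq', max(id))  -- move the sequence back to the current maximum id
FROM   tbl;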

While inserting multiple rows in a single statement, you also have to rule out duplicates within the inserted set. Example code:
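A sketch, assuming the incoming rows are supplied as a VALUES list; DISTINCT removes duplicates within the set, and NOT EXISTS skips rows already in the table:

INSERT INTO tbl (info)
SELECT DISTINCT v.info
FROM  (VALUES ('xxx'), ('yyy'), ('xxx')) v(info)   -- hypothetical input set with an internal duplicate
WHERE  NOT EXISTS (SELECT FROM tbl WHERE info = v.info);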

But if you do have concurrent writes, none of the above works reliably, on principle. You'd better learn to accept gaps in the IDs. You can always run a query with row_number() OVER (ORDER BY id) to generate gap-less numbers after the fact, as sketched below. However, the numbers are still arbitrary to a degree: smaller numbers were not necessarily committed earlier. There are exceptions under concurrent write load.
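A sketch of generating gap-less numbers at query time, using the same assumed table:

SELECT row_number() OVER (ORDER BY id) AS rn, id, info  -- rn is gap-less, id is not
FROM   tbl;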

Or consider a UUID instead (data type uuid) and avoid the inherent problem of duplicates with random values in a huge key space. Not at all serial, though:
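A sketch of the UUID variant; gen_random_uuid() is built in since Postgres 13 (older versions can get it from the pgcrypto extension), and the table name is again a placeholder:

CREATE TABLE tbl_uuid (
  id   uuid DEFAULT gen_random_uuid() PRIMARY KEY
, info text UNIQUE NOT NULL
);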

  • You're right, at that. Gaps don't matter in my case (or most cases, for that matter, I suppose). Thank you for the explanation!
    – thatdevop
    Commented Dec 6, 2018 at 17:20
  • I'm having the same problem, but in my case the amount of burned IDs is something to worry about in the long run. Reading your (many) SO answers about this specific topic got me to understand it better, but unfortunately none of them seems to be applicable in my case (other than switching to uuid, which I would like to avoid if possible). What do you think of creating triggers to SELECT setval('seq', MAX(id)), or having it as a CTE in the query, for "resetting" the sequence? Commented Sep 13, 2023 at 14:01
  • @JesusJimenez: Use a bigserial or bigint IDENTITY then. Are you aware of the gigantic number space of a bigint column? See: stackoverflow.com/a/33838373/939860. Resetting the sequence in a trigger or otherwise is much more convoluted and expensive, and bound to fail under concurrent write load. Commented Sep 13, 2023 at 14:27
  • Thank you very much for your response, I wasn't aware of how long it could take to run out of IDs. Although the gaps feel like a bad implementation to me, "expensive" is another thing to avoid. So as long as I don't look at the gaps, I think there's no problem then. Commented Sep 13, 2023 at 15:04
