22

I'm aware that faster disks than the ones I'm using would help, but that will take longer to put in place, so I'm looking for emergency measures to decrease disk IO in the meantime. atop is reporting DSK usage in the red almost constantly. This is for PostgreSQL 8.3.

My shared_buffers setting is at 24MB even though the server has 16GB of RAM, which is nowhere near fully utilized. My first thought was to give the database as much RAM as it can take, but I'm not sure how to do that (this is a dedicated database server).

Any solution that does not require a restart is preferable, but I'll take what I can get at this point.

Thanks!

2
  • This question should be asked at Serverfault. Commented Jun 28, 2011 at 8:56
  • You can try to increase shared_buffers in the postgresql.conf config file. This change requires a restart. Also, you may need to increase the value of /proc/sys/kernel/shmmax before that.
    – Khaled
    Commented Jun 28, 2011 at 10:20

5 Answers

18

The 24MB shared_buffers setting is the conservative default; I'd say it needs to be quite a lot higher on a dedicated database server with 16GB of RAM available. But yes, you'll have to restart the server to resize it. http://wiki.postgresql.org/wiki/Performance_Optimization is a good place to start for performance configuration guidelines. Setting shared_buffers to 4GB or 6GB would seem more reasonable.
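
A minimal sketch of the change (the value is an assumption for a dedicated 16GB box, not a measured recommendation):

    # postgresql.conf -- raise from the 24MB default; requires a restart
    shared_buffers = 4GB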

Note that on Linux you need to adjust the kernel.shmmax sysctl setting (in /etc/sysctl.conf, or just by writing to /proc/sys/kernel/shmmax) before PostgreSQL can allocate a block of shared memory that large. If you don't, startup will fail with an error stating how much shared memory was requested; set kernel.shmmax higher than that.
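
For example, to permit a roughly 6GB segment (the byte count is illustrative; size it above whatever the startup error reports):

    # Raise the shared memory ceiling immediately (value in bytes; 6GB here):
    sysctl -w kernel.shmmax=6442450944
    # Persist it across reboots:
    echo "kernel.shmmax = 6442450944" >> /etc/sysctl.conf
    # kernel.shmall (measured in pages) may need raising as well.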

Since you have lots of memory, you might also consider setting the default work_mem higher, which will make things like sorts and hashes (GROUP BY/ORDER BY/DISTINCT, etc.) tend to work in memory rather than spilling to temp files. You don't need to restart the server for this: just update the config file, reload the service, and new sessions will get the new setting. The default work memory for a session is 1MB; you can estimate the maximum that may be in use at a single time as work_mem * max_connections and judge what impact that will have.
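
For instance, with a hypothetical work_mem = 16MB and max_connections = 100, that simple worst case is about 1.6GB (and, as the comment below points out, complex queries can use several multiples of work_mem at once). A sketch of applying the change without a restart (the data directory path is an assumption):

    # After editing work_mem in postgresql.conf, reload -- no restart needed;
    # new sessions pick up the setting:
    pg_ctl reload -D /var/lib/pgsql/data
    # Or from an existing superuser session:
    #   SELECT pg_reload_conf();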

You should also increase effective_cache_size to tell the planner that the kernel's filesystem layer is likely to be caching a lot of pages in memory outside of PostgreSQL's shared buffers.
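
A sketch, assuming that on a dedicated box most of the RAM not claimed by shared_buffers ends up as OS page cache (the figure is illustrative):

    # postgresql.conf -- planner hint only; allocates no memory and
    # takes effect on reload:
    effective_cache_size = 12GB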

And so on; hope this gets you off to a good start.

2
  • Good post; only your memory usage estimate is a bit dangerous. work_mem is a maximum per sort/hash operation, so complex queries can have multiple sort/hash operations and can thus use much more than one work_mem.
    – Eelke
    Commented Jun 29, 2011 at 5:38
  • 1
    Thanks, it helped a lot! Another significant change was checkpoint_segments and checkpoint_completion_target, which had a major impact on my disk usage and overall performance. Crisis averted. (wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server)
    – Harel
    Commented Jun 29, 2011 at 8:52
3

Remount the disks with noatime, so that reads stop generating access-time metadata writes.
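
A sketch, assuming the data directory lives on its own filesystem (device and mount point are placeholders):

    # Stop the kernel from writing an access-time update for every read:
    mount -o remount,noatime /var/lib/pgsql
    # To make it permanent, add noatime to the options in /etc/fstab, e.g.:
    #   /dev/sdb1  /var/lib/pgsql  ext3  defaults,noatime  0  2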

3

Aside from the suggestions given here, you might also want to look into your autovacuum settings. By default it triggers after around 50 row updates (plus a fraction of the table size), and if your database is doing a lot of updates/inserts this can trigger an unnecessary number of vacuum runs, which will generate a lot of IO.
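
A sketch of loosening those thresholds (the values are illustrative assumptions, not recommendations; keep vacuum running, or table bloat will eventually make the IO problem worse):

    # postgresql.conf -- hypothetical, less aggressive autovacuum for 8.3
    autovacuum_vacuum_threshold = 500      # default 50 dead rows
    autovacuum_vacuum_scale_factor = 0.4   # default 0.2 (20% of the table)
    autovacuum_vacuum_cost_delay = 50ms    # default 20ms; higher = gentler IO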

2

On a system that's very close to maximum I/O throughput during normal operation, you might want to increase checkpoint_completion_target to reduce the I/O load from checkpoints. The disadvantage is that prolonging checkpoints affects recovery time, because more WAL segments will need to be kept around for possible use in recovery.
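
For example (illustrative values; both settings take effect on a reload, no restart needed):

    # postgresql.conf -- spread checkpoint writes out over time
    checkpoint_completion_target = 0.9   # default 0.5
    checkpoint_segments = 16             # default 3; checkpoints happen less often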

See more here.

0

If PostgreSQL's disk IO is very high, you should check the statements that are running, especially statements doing a "sort on disk", and set proper indexes.
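
A sketch (table, column, and index names are made up); in 8.3, EXPLAIN ANALYZE reports the sort method, so look for "Sort Method: external merge Disk: ..." in the output:

    -- Check whether a suspect query's sort spills to disk:
    EXPLAIN ANALYZE SELECT * FROM orders ORDER BY created_at;
    -- An index on the sort key can let the planner skip the sort entirely:
    CREATE INDEX orders_created_at_idx ON orders (created_at);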

Just google for "Postgresql Performance Tuning"; you'll find enough hints on where to start.
