The Postgres resource consumption docs state, under shared_buffers:

…it is unlikely that an allocation of more than 40% of RAM to shared_buffers will work better than a smaller amount.
Why is this? I'm always told that the more RAM a server has, the better Postgres will perform. Isn't shared_buffers the quintessential memory setting for Postgres? If I'm only allocating 3GB of my server's 12GB (the recommended 25% starting point) to shared_buffers, where can I expect Postgres to take advantage of at least 6GB more?
effective_cache_size, coupled with shared_buffers, could more appropriately be considered the quintessential memory settings. Keeping shared_buffers a bit lower (e.g., 25%) is useful because Postgres also relies on the operating system's page cache, which accounts for much of the other "6GB" of RAM in the OP.
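As a minimal sketch, assuming the 12GB server from the question (the 3GB value is the illustrative 25% starting point, not a tuned recommendation):

```sql
-- Illustrative: allocate ~25% of a 12GB server to shared_buffers.
-- Note: shared_buffers only takes effect after a server restart.
ALTER SYSTEM SET shared_buffers = '3GB';

-- After restarting Postgres, confirm the running value:
SHOW shared_buffers;
```

The remaining RAM isn't wasted: the kernel's page cache uses it to hold recently read table and index files, which is exactly the behavior the 25% guideline is counting on.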
According to the official Postgres "tuning" page, setting effective_cache_size to half of total memory is considered a conservative setting. However, this isn't a memory allocation; it's a hint that tells the query planner how much memory (shared_buffers plus the OS page cache) is likely available for caching data, which influences decisions such as whether an index scan is worthwhile.
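Continuing the same hypothetical 12GB server, one might set it like this (the 8GB figure is an assumption, roughly shared_buffers plus the expected OS cache, not a prescription):

```sql
-- Illustrative: tell the planner how much memory is likely available for
-- caching (shared_buffers + OS page cache). No memory is reserved by this.
ALTER SYSTEM SET effective_cache_size = '8GB';

-- Unlike shared_buffers, this takes effect on a configuration reload:
SELECT pg_reload_conf();
SHOW effective_cache_size;
```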
Also note that slightly understating resources to Postgres can be helpful, as it leaves some breathing room for future growth. Imagine your Postgres server were tuned to take 100% advantage of the machine's physical resources, and then you reached the server's limit: there would be little you could do at that point to stave off disaster (swapping, extreme performance degradation, etc.), so leaving a bit of wiggle room can come in handy when you need a week's time to upgrade your server.