Question :
I’m running a database with a few tables that have large row counts (millions of rows) but are fairly small in total data size. All the tables are append-only, so no deletes or updates ever happen; around 250 rows are added per minute. 99% of the time queries take less than 5 ms. However, occasionally queries get terminated by Postgres for taking more than 200 ms (my configured statement timeout).
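(That limit is the standard statement_timeout setting; it can be inspected, and temporarily overridden for testing, roughly like this:)

    -- Show the timeout in effect for the current session
    SHOW statement_timeout;

    -- Override it for this session only, e.g. while reproducing the problem
    SET statement_timeout = '200ms';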
I’m using default settings apart from autovacuum_analyze_scale_factor = 0.05, autovacuum_vacuum_scale_factor = 0.1, and autovacuum_naptime = 30.
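For reference, the values actually in effect can be confirmed from any session (on RDS they are managed through the DB parameter group rather than postgresql.conf); a minimal check looks something like:

    -- Effective autovacuum-related settings on this instance
    SELECT name, setting, unit
    FROM pg_settings
    WHERE name LIKE 'autovacuum%';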
I’m running PostgreSQL 9.6.3 on AWS RDS on a t2.medium instance with Multi-AZ enabled, connecting to it from AWS Lambda (Node.js 6.10) within the same private VPC.
CPU stays under 7% at all times, disk read latency is less than 2 ms, and the number of concurrent connections rarely goes above 1. Average queries per second is quite consistent at around 20.
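A check along these lines against pg_stat_activity should show whether any other sessions are in flight when one of the slow queries appears (the wait_event columns exist from 9.6 on); a rough sketch:

    -- Non-idle sessions ordered by how long their current query has been running
    SELECT pid, state, wait_event_type, wait_event,
           now() - query_start AS runtime,
           left(query, 80) AS query
    FROM pg_stat_activity
    WHERE state <> 'idle'
    ORDER BY runtime DESC;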
We’ve also tried setting synchronous_commit to off, but the same timeouts continued to happen.
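(We toggled it roughly like this; the session-level form is shown, and the instance-wide equivalent would live in the RDS parameter group:)

    -- Disable synchronous commit for the current session only
    SET synchronous_commit = off;
    SHOW synchronous_commit;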
Could this be an issue with the autovacuum process slowing down queries even though I have no updates or deletes?
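A query along these lines against pg_stat_user_tables should show whether recent autovacuum or autoanalyze runs line up with the timeouts:

    -- Last autovacuum / autoanalyze times and dead-tuple counts per table
    SELECT relname,
           last_autovacuum, last_autoanalyze,
           n_live_tup, n_dead_tup
    FROM pg_stat_user_tables
    ORDER BY last_autovacuum DESC NULLS LAST;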
Update: Having removed all writes to the database, we still see the same issue. So with no inserts, no updates, and no deletes, just ~30 SELECTs per second still causes timeouts.
Answer :
Unfortunately I don’t know much about Postgres on AWS (my company is just starting to move that way), but when a vacuum runs here it does create disk I/O that can slow down a query. The slowdowns may be due more to resource contention than to Postgres-level locks.
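One way to test that theory would be to check whether a vacuum is actually running at the moment a query times out. A rough sketch against the 9.6 catalogs (I haven’t verified this on RDS specifically):

    -- Autovacuum workers show up in pg_stat_activity with a recognisable query text
    SELECT pid, query_start, query
    FROM pg_stat_activity
    WHERE query LIKE 'autovacuum:%';

    -- 9.6 also adds pg_stat_progress_vacuum for per-phase progress
    SELECT pid, relid::regclass AS relation, phase,
           heap_blks_scanned, heap_blks_total
    FROM pg_stat_progress_vacuum;

If vacuum activity lines up with the slow queries, that would support the I/O-contention idea; if nothing is running at those moments, the cause is likely elsewhere.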