How do I maintain high INSERT-performance on PostgreSQL

Question:

I’m working on a project that parses data from measurement files into a Postgres 9.3.5 database.

At the core is a table (partitioned by month) which contains a row for each measurement point:

CREATE TABLE "tblReadings2013-10-01"
(
-- Inherited from table "tblReadings_master":  "sessionID" integer NOT NULL,
-- Inherited from table "tblReadings_master":  "fieldSerialID" integer NOT NULL,
-- Inherited from table "tblReadings_master":  "timeStamp" timestamp without time zone NOT NULL,
-- Inherited from table "tblReadings_master":  value double precision NOT NULL,
  CONSTRAINT "tblReadings2013-10-01_readingPK" PRIMARY KEY ("sessionID", "fieldSerialID", "timeStamp"),
  CONSTRAINT "tblReadings2013-10-01_fieldSerialFK" FOREIGN KEY ("fieldSerialID")
      REFERENCES "tblFields" ("fieldSerial") MATCH SIMPLE
      ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT "tblReadings2013-10-01_sessionFK" FOREIGN KEY ("sessionID")
  REFERENCES "tblSessions" ("sessionID") MATCH SIMPLE
  ON UPDATE CASCADE ON DELETE RESTRICT,
  CONSTRAINT "tblReadings2013-10-01_timeStamp_check" CHECK ("timeStamp" >= '2013-10-01 00:00:00'::timestamp without time zone AND "timeStamp" < '2013-11-01 00:00:00'::timestamp without time zone)
)
INHERITS ("tblReadings_master");

We are in the process of populating the table with data that has already been collected. Each file represents a transaction of around 48,000 points, and there are several thousand files. They are imported using an INSERT INTO "tblReadings_master" VALUES (?,?,?,?); statement.

Initially the files import at a rate of 1000+ inserts/sec, but after a while (an inconsistent amount of time, though never longer than 30 minutes or so) this rate plummets to 10-40 inserts/sec and the Postgres process maxes out a CPU core. The only way to recover the original rate is to perform a full vacuum and analyze. This is ultimately going to store around 1,000,000,000 rows per monthly table, so the vacuum takes some time.

EDIT: Here is an example where the import ran for some time on smaller files and then failed after the larger files started. The larger files look more erratic, but I think that is because the transaction is only committed at the end of each file, around 40 seconds.
[Image: CPU and insert trace of the problem]

There will be a web front end selecting some items, but no updates or deletes, and this behaviour is seen with no other active connections.

My questions are:

  1. How can we tell what is causing the slowdown and maxing out the CPU (this is on Windows)?
  2. What can we do to maintain the original performance?

Answer:

There are a few things that might be causing this, but I can’t be sure any of them is the real problem. The troubleshooting all involves turning on extra logging in the database, then seeing whether the slow periods line up with messages there. Make sure you put a timestamp in the log_line_prefix setting so the logs are useful to look at. See my tuning intro to get started: https://wiki.postgresql.org/wiki/Tuning_Your_PostgreSQL_Server
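For example, the timestamp can be added to the log prefix in postgresql.conf; the exact format string below is only a suggestion, any prefix containing %t or %m will do:

# postgresql.conf -- illustrative logging setup
logging_collector = on          # write server messages to log files
log_line_prefix = '%t [%p]: '   # %t = timestamp, %p = backend process ID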

Postgres does all of its writes to the operating system cache, then later they head to disk. You can watch that by turning on log_checkpoints and reading the messages. When things slow down, it may simply be that all the caches are now full, and all writes are stuck waiting for the slowest part of I/O. You might improve this by changing the Postgres checkpoint settings.
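A sketch of the checkpoint settings I mean, using the 9.3-era names (checkpoint_segments rather than the later max_wal_size); the values are placeholders to tune for your hardware, not recommendations:

# postgresql.conf -- checkpoint visibility and spreading (values are guesses)
log_checkpoints = on                  # log each checkpoint and how much it wrote
checkpoint_segments = 64              # allow more WAL between checkpoints than the small default
checkpoint_timeout = 15min            # force a checkpoint at least this often
checkpoint_completion_target = 0.9    # spread checkpoint writes across most of the interval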

There’s an internal issue people sometimes hit where heavy inserts get stuck waiting for a lock inside the database. Turn on log_lock_waits to see if you’re hitting that one.
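log_lock_waits reports any session that has been waiting on a lock for longer than deadlock_timeout (1 second by default). You can also check for ungranted locks directly from a second session while the import is slow; a minimal query:

-- postgresql.conf: log_lock_waits = on
-- Ad-hoc check from another session: which backends are waiting on ungranted locks?
SELECT a.pid, a.query, l.locktype, l.mode
FROM   pg_stat_activity a
JOIN   pg_locks l ON l.pid = a.pid
WHERE  NOT l.granted;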

Sometimes the rate you can do burst inserts at is higher than the rate you can sustain once the system autovacuum process kicks in. Turn on autovacuum logging to see whether the slow periods coincide with its runs.
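The parameter for that is log_autovacuum_min_duration; setting it to 0 logs every autovacuum and autoanalyze run so you can line them up with the slow periods:

# postgresql.conf -- 0 = log every autovacuum/autoanalyze run, regardless of how long it took
log_autovacuum_min_duration = 0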

We know that putting a large amount of memory into the database’s private shared_buffers cache doesn’t work as well on Windows as it does on other operating systems, and there isn’t as much visibility into what goes wrong when it happens either. I would not try to host something doing 1000+ inserts per second on a Windows PostgreSQL install; it’s just not a good platform for really heavy writes yet.
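If you do stay on Windows, the usual starting point is a fairly modest shared_buffers, leaning on the OS file cache instead; treat the number below as an assumption to benchmark rather than a recommendation for your hardware:

# postgresql.conf -- Windows: keep the private buffer cache modest, rely on the OS cache
shared_buffers = 512MB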

I’m no Postgres expert, so this might be wrong! Your primary key has three columns, with "sessionID" as the first field. Does each file contain a decent spread of timestamps? You might consider making "timeStamp" the first column of the primary key, or using a surrogate key, as the current key is fairly wide.
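A sketch of the two options on one monthly child table; whether leading with "timeStamp" actually helps depends on your insert and query patterns, so test before committing to it:

-- Option A: lead the key with "timeStamp" so each file's rows land in a narrow index range
ALTER TABLE "tblReadings2013-10-01"
  DROP CONSTRAINT "tblReadings2013-10-01_readingPK";
ALTER TABLE "tblReadings2013-10-01"
  ADD CONSTRAINT "tblReadings2013-10-01_readingPK"
  PRIMARY KEY ("timeStamp", "sessionID", "fieldSerialID");

-- Option B: a narrow surrogate key instead (drop the existing PK first and keep a
-- unique index on the natural columns if you still need that guarantee)
-- ALTER TABLE "tblReadings2013-10-01" ADD COLUMN "readingID" bigserial PRIMARY KEY;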

From your script I don’t think you are clustering the table. This is different from SQL Server: in Postgres you have to impose the physical ordering of the table explicitly with the CLUSTER command. This link talks about it:

https://stackoverflow.com/questions/4796548/about-clustered-index-in-postgres
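For reference, CLUSTER in Postgres is a one-off physical reorder of the table along an index; it is not maintained as new rows arrive, e.g.:

-- One-time physical reorder of a child table along its primary key index;
-- this locks the table exclusively while it runs, so schedule it for a quiet window.
CLUSTER "tblReadings2013-10-01" USING "tblReadings2013-10-01_readingPK";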
