Index size 20 times bigger than the table itself, and very slow queries

Question :

The table I want to query has only ~3000 rows with 3 columns:


yet it takes ~30 sec to query them all!

Then I saw that the table size is 1331MB but the indexes size is 25GB!

The table has only one index and that is the pkey.

VACUUM FULL is run every 24 hours, and the table should be pretty static anyway.

My question is: why are the indexes so big (I guess that’s why the queries are so slow), and what can I do to fix that?

Hope you have some ideas, because I have none and I haven’t found anything on Google.

EDIT: Postgres Version is 8.1.18

Answer :


You’re on PostgreSQL 8.4 or older, where VACUUM FULL tends to bloat indexes. See this wiki page for details.

Don’t run VACUUM FULL as a periodic maintenance task. It’s unnecessary and inefficient. This remains true on current versions; it’s just not as bad on 9.0 and above. If you feel the need to run VACUUM FULL regularly, you probably don’t have autovacuum turned up far enough and are having table-bloat issues. In fact, unless you’ve changed the FILLFACTOR on the table from its default of 100, a VACUUM FULL is quite counter-productive: it’ll compact away all the free space in the table, so subsequent UPDATEs will have to extend the table.
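As a sketch of the alternative, on 8.4 and later you can tune autovacuum and the fill factor per table; the table name and values here are purely illustrative:

```sql
-- Illustrative only: make autovacuum process this table sooner
-- (per-table storage parameters require 8.4+; on 8.1 autovacuum
-- tuning went through the pg_autovacuum catalog instead):
ALTER TABLE mytable SET (autovacuum_vacuum_scale_factor = 0.05);

-- Leave 10% free space in each page so UPDATEs can often reuse
-- space in place, reducing bloat between vacuums:
ALTER TABLE mytable SET (fillfactor = 90);
```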

Table extensions are currently one of the poorer performing operations in PostgreSQL, as they’re controlled by a single global lock. So if you have tables that fluctuate in size, you really want to avoid constantly compacting and truncating them only to extend them again.

On some unusual workloads it can be worth running a periodic CLUSTER, which orders the table based on an index and effectively REINDEXes it. If you do many UPDATEs on the table, you should set a lower FILLFACTOR for efficiency.
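A minimal sketch, with the table and index names made up for illustration:

```sql
-- 8.1-era syntax: rewrite the table in index order,
-- rebuilding all of its indexes as a side effect:
CLUSTER mytable_pkey ON mytable;
-- On 8.4 and later the equivalent spelling is:
--   CLUSTER mytable USING mytable_pkey;

-- Refresh planner statistics after the rewrite:
ANALYZE mytable;
```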

If this table is being emptied and re-populated regularly, you should generally use TRUNCATE followed by COPY to fill it back up. If it’s big, drop the indexes before the COPY and re-create them afterwards; this produces indexes that are more compact and faster, and speeds up the data load.
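The reload pattern looks roughly like this — table, column, and file names are hypothetical:

```sql
BEGIN;
TRUNCATE mytable;
-- Drop the primary key so the bulk load doesn't have to
-- maintain the index row by row:
ALTER TABLE mytable DROP CONSTRAINT mytable_pkey;
-- Bulk-load the data (server-side path shown; use \copy in psql
-- for a client-side file):
COPY mytable FROM '/path/to/data.csv' WITH CSV;
-- Rebuild the index from scratch; a freshly built index is compact:
ALTER TABLE mytable ADD PRIMARY KEY (id);
COMMIT;
```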

For one-off mitigation, CLUSTER the table or REINDEX it.
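For example (names illustrative; the size functions exist since 8.1):

```sql
-- Compare the heap size with everything else (indexes plus TOAST):
SELECT pg_size_pretty(pg_relation_size('mytable'))       AS table_size,
       pg_size_pretty(pg_total_relation_size('mytable')
                      - pg_relation_size('mytable'))     AS index_and_toast_size;

-- Rebuild every index on the table in one go:
REINDEX TABLE mytable;
```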


After edit added version: Holy bleepazoids, batman. 8.1.18? Forget what I said about autovacuum; autovacuum in 8.1 was way too ineffective. Upgrade to a sane version ASAP. You’re not even on the current point release of 8.1, 8.1.23, from December 2010. 8.1.18 was released in September 2009! You need to begin your upgrade planning … well, about two years ago, preferably. Read the release notes for every .0 version between 8.1 and the current release, focusing on the upgrade notes and compatibility notes. Then plan and execute your upgrade. If you don’t feel up to managing that on your own, there are people who’ll help you with it (I work for one of them), but honestly, the release notes and docs are quite sufficient for most people to do an upgrade themselves without undue pain.

Moving from 8.1 to 8.3 or newer will be your biggest pain point, as PostgreSQL 8.3 dropped a whole bunch of implicit casts that lots of potentially buggy SQL relied on. You’ll need to test your application carefully on the newer version. Other changes to be aware of are:

  • The removal of implicit FROM and in later versions removal of the backwards compatibility parameter for it;
  • UTF-8 validation improvements in newer versions that can cause older dumps to fail to load until the data is corrected;
  • The change to standard_conforming_strings by default;
  • The change of bytea_output to hex.
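To illustrate the strings change from that list — the one that most often bites old application code — here is how the same literal behaves under each setting:

```sql
-- Before standard_conforming_strings defaulted to on, backslashes in
-- ordinary string literals were escape characters:
SET standard_conforming_strings = off;
SELECT 'a\nb';   -- \n is interpreted as a newline

-- With the newer default, backslashes are literal characters:
SET standard_conforming_strings = on;
SELECT 'a\nb';   -- four characters: a, \, n, b
-- Use the E'' form, e.g. E'a\nb', to keep the old escape behavior.
```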

The same thing happened to me once on 9.1.

I had a script that would process a large amount of data and then insert it into the DB. The issue showed up during the initial data import.

I didn’t find the cause of this, but a workaround was to drop the index and recreate it.
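Something along these lines, with the index and column names made up:

```sql
-- Recreate the bloated index from scratch:
DROP INDEX my_bloated_idx;
CREATE INDEX my_bloated_idx ON mytable (mycolumn);

-- Equivalent in one step: REINDEX INDEX my_bloated_idx;
-- On 8.2+, CREATE INDEX CONCURRENTLY can rebuild it without
-- blocking writes to the table (at the cost of taking longer).
```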

You could also pg_dump and then pg_restore the whole DB, which will also recreate the indexes.
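Roughly like this — database names are illustrative, and pg_restore rebuilds every index from scratch after loading the data:

```shell
# Dump in custom format, then restore into a fresh database:
pg_dump -Fc -f mydb.dump mydb
createdb mydb_restored
pg_restore -d mydb_restored mydb.dump
```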
