I am using the Heroku command as described here to detect bloat in tables. One of our tables reported a bloat size of around 7GB, but after running vacuum on it and then re-running the same bloat command, it reports close to 21GB of “waste”. I have no idea how this can happen. I understand the various conditions under which a table can bloat, even this dramatically, but not after a vacuum; I have never seen such a case.
Any ideas why this might happen? The table in question does have a few TOAST tuples; perhaps that might be it. I also can’t tell whether that query from Heroku accounts for TOAST waste, but my understanding is that vacuum would take care of those as well. In local tests, the “waste” column from that bloat query stays fairly stable for small tables and decreases for larger ones, but never increases, which makes this rather strange.
All these queries that estimate bloat are not precise; they rely on heuristics.

If you want to find out the truth, install the pgstattuple extension and run the pgstattuple function on the table in question. That will tell you how much space is occupied by user data, dead tuples and empty space, respectively.
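For reference, this is roughly what that looks like (the table name `mytable` is a placeholder for your own table; creating the extension requires appropriate privileges):

```sql
-- Install the extension once per database.
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Get exact space statistics; note that this scans the entire table,
-- so it can take a while on large tables.
SELECT * FROM pgstattuple('mytable');

-- Among the columns returned:
--   tuple_len      - bytes occupied by live (user) data
--   dead_tuple_len - bytes occupied by dead tuples
--   free_space     - empty space that future INSERTs/UPDATEs can reuse
```

On very large tables, `pgstattuple_approx()` can give a faster, approximate answer by skipping pages that the visibility map reports as all-visible.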