Question :
Hi all, I’ve got a problem with my PostgreSQL database query and I’m wondering if anyone can help. In some scenarios my query seems to ignore the index that I created for joining the two tables data and data_area. When this happens it falls back to a sequential scan, which results in a much slower query.
Sequential Scan (~5 minutes)
Unique (cost=15368261.82..15369053.96 rows=200 width=1942) (actual time=301266.832..301346.936 rows=153812 loops=1)
CTE data
-> Bitmap Heap Scan on data (cost=6086.77..610089.54 rows=321976 width=297) (actual time=26.286..197.625 rows=335130 loops=1)
Recheck Cond: (datasetid = 1)
Filter: ((readingdatetime >= '1920-01-01 00:00:00'::timestamp without time zone) AND (readingdatetime <= '2013-03-11 00:00:00'::timestamp without time zone) AND (depth >= 0::double precision) AND (depth <= 99999::double precision))
-> Bitmap Index Scan on data_datasetid_index (cost=0.00..6006.27 rows=324789 width=0) (actual time=25.462..25.462 rows=335130 loops=1)
Index Cond: (datasetid = 1)
-> Sort (cost=15368261.82..15368657.89 rows=158427 width=1942) (actual time=301266.829..301287.110 rows=155194 loops=1)
Sort Key: data.id
Sort Method: quicksort Memory: 81999kB
-> Hash Left Join (cost=15174943.29..15354578.91 rows=158427 width=1942) (actual time=300068.588..301052.832 rows=155194 loops=1)
Hash Cond: (data_area.area_id = area.id)
-> Hash Join (cost=15174792.93..15351854.12 rows=158427 width=684) (actual time=300066.288..300971.644 rows=155194 loops=1)
Hash Cond: (data.id = data_area.data_id)
-> CTE Scan on data (cost=0.00..6439.52 rows=321976 width=676) (actual time=26.290..313.842 rows=335130 loops=1)
-> Hash (cost=14857017.62..14857017.62 rows=25422025 width=8) (actual time=300028.260..300028.260 rows=26709939 loops=1)
Buckets: 4194304 Batches: 1 Memory Usage: 1043357kB
-> Seq Scan on data_area (cost=0.00..14857017.62 rows=25422025 width=8) (actual time=182921.056..291687.996 rows=26709939 loops=1)
Filter: (area_id = ANY ('{28,29,30,31,32,33,25,26,27,18,19,20,21,12,13,14,15,16,17,34,35,1,2,3,4,5,6,22,23,24,7,8,9,10,11}'::integer[]))
-> Hash (cost=108.49..108.49 rows=3349 width=1258) (actual time=2.256..2.256 rows=3349 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 584kB
-> Seq Scan on area (cost=0.00..108.49 rows=3349 width=1258) (actual time=0.007..0.666 rows=3349 loops=1)
Total runtime: 301493.379 ms
Index Scan (~3 seconds) (on explain.depesz.com)
Unique (cost=17352256.47..17353067.50 rows=200 width=1942) (actual time=3603.303..3681.619 rows=153812 loops=1)
CTE data
-> Bitmap Heap Scan on data (cost=6284.60..619979.56 rows=332340 width=297) (actual time=26.201..262.314 rows=335130 loops=1)
Recheck Cond: (datasetid = 1)
Filter: ((readingdatetime >= '1920-01-01 00:00:00'::timestamp without time zone) AND (readingdatetime <= '2013-03-11 00:00:00'::timestamp without time zone) AND (depth >= 0::double precision) AND (depth <= 99999::double precision))
-> Bitmap Index Scan on data_datasetid_index (cost=0.00..6201.51 rows=335354 width=0) (actual time=25.381..25.381 rows=335130 loops=1)
Index Cond: (datasetid = 1)
-> Sort (cost=17352256.47..17352661.98 rows=162206 width=1942) (actual time=3603.302..3623.113 rows=155194 loops=1)
Sort Key: data.id
Sort Method: quicksort Memory: 81999kB
-> Hash Left Join (cost=1296.08..17338219.59 rows=162206 width=1942) (actual time=29.980..3375.921 rows=155194 loops=1)
Hash Cond: (data_area.area_id = area.id)
-> Nested Loop (cost=0.00..17334287.66 rows=162206 width=684) (actual time=26.903..3268.674 rows=155194 loops=1)
-> CTE Scan on data (cost=0.00..6646.80 rows=332340 width=676) (actual time=26.205..421.858 rows=335130 loops=1)
-> Index Scan using data_area_pkey on data_area (cost=0.00..52.13 rows=1 width=8) (actual time=0.006..0.008 rows=0 loops=335130)
Index Cond: (data_id = data.id)
Filter: (area_id = ANY ('{28,29,30,31,32,33,25,26,27,18,19,20,21,12,13,14,15,16,17,34,35,1,2,3,4,5,6,22,23,24,7,8,9,10,11}'::integer[]))
-> Hash (cost=1254.22..1254.22 rows=3349 width=1258) (actual time=3.057..3.057 rows=3349 loops=1)
Buckets: 1024 Batches: 1 Memory Usage: 584kB
-> Index Scan using area_primary_key on area (cost=0.00..1254.22 rows=3349 width=1258) (actual time=0.012..1.429 rows=3349 loops=1)
Total runtime: 3706.630 ms
Table Structure
This is the table structure for the data_area table. I can provide the other tables if need be.
CREATE TABLE data_area
(
data_id integer NOT NULL,
area_id integer NOT NULL,
  CONSTRAINT data_area_pkey PRIMARY KEY (data_id, area_id),
CONSTRAINT data_area_area_id_fk FOREIGN KEY (area_id)
REFERENCES area (id) MATCH SIMPLE
ON UPDATE NO ACTION ON DELETE NO ACTION,
CONSTRAINT data_area_data_id_fk FOREIGN KEY (data_id)
REFERENCES data (id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE CASCADE
);
QUERY
WITH data AS (
SELECT *
FROM data
WHERE
datasetid IN (1)
AND (readingdatetime BETWEEN '1920-01-01' AND '2013-03-11')
AND depth BETWEEN 0 AND 99999
)
SELECT *
FROM (
SELECT DISTINCT ON (data.id) data.id, *
FROM
data,
data_area
LEFT JOIN area ON area_id = area.id
WHERE
data_id = data.id
AND area_id IN (28,29,30,31,32,33,25,26,27,18,19,20,21,12,13,14,15,16,17,34,35,1,2,3,4,5,6,22,23,24,7,8,9,10,11)
) as s;
The query returns 153812 rows. I ran set enable_seqscan = false; to disable sequential scans and get the index-scan plan above.
I’ve tried running ANALYZE on the database and increasing the statistics gathered on the columns used in the query, but nothing seems to help.
Could anyone shed any light on this or suggest anything else I should try?
Answer :
Notice this line:
-> Index Scan using data_area_pkey on data_area (cost=0.00..52.13 rows=1 width=8)
(actual time=0.006..0.008 rows=0 loops=335130)
If you compute the total cost, taking loops into account, it is 52.13 * 335130 = 17470326.9. That is larger than the 14857017.62 of the seq scan alternative, which is why the planner does not use the index.
So the optimizer is overestimating the cost of the index scan. I’d guess that your data is sorted on the index (either due to a clustered index or to how it was loaded) and/or you have plenty of cache memory and/or a nice fast disk. Hence there is little random I/O going on.
You should also check the correlation statistic in pg_stats, which the optimizer uses to assess clustering when computing the index cost, and finally try changing random_page_cost and cpu_index_tuple_cost to match your system.
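For instance (a sketch; the parameter values below are illustrative starting points for experimentation, not recommendations):

-- How closely the physical row order of data_area follows each column;
-- a correlation near 1 or -1 means an index scan causes little random I/O.
SELECT tablename, attname, correlation
FROM pg_stats
WHERE tablename = 'data_area';

-- Experiment at session level before touching postgresql.conf.
-- The defaults are 4.0 and 0.005 respectively.
SET random_page_cost = 1.5;
SET cpu_index_tuple_cost = 0.001;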
Your CTE actually does nothing other than ‘outsource’ a few WHERE conditions, most of which look equivalent to WHERE TRUE. Since CTEs usually sit behind an optimization fence (meaning the CTE is planned on its own), they can help a lot with certain queries. In this case, however, I would expect the exact opposite effect.
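As an aside: if you happen to be on PostgreSQL 12 or later (an assumption, since we don’t know your version yet), the fence can be lifted explicitly with NOT MATERIALIZED, which lets the planner inline the CTE and push the outer conditions into it:

WITH data AS NOT MATERIALIZED (  -- PostgreSQL 12+ syntax
    SELECT *
    FROM data
    WHERE datasetid IN (1)
      AND readingdatetime BETWEEN '1920-01-01' AND '2013-03-11'
      AND depth BETWEEN 0 AND 99999
)
SELECT DISTINCT ON (data.id) data.id, *
FROM data, data_area
LEFT JOIN area ON area_id = area.id
WHERE data_id = data.id
  AND area_id IN (28,29,30,31,32,33,25,26,27,18,19,20,21,12,13,14,15,
                  16,17,34,35,1,2,3,4,5,6,22,23,24,7,8,9,10,11);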
What I would try is to rewrite the query to be as simple as possible:
SELECT d.id, *
FROM
data d
JOIN data_area da ON da.data_id = d.id
LEFT JOIN area a ON da.area_id = a.id
WHERE
d.datasetid IN (1)
AND da.area_id IN (28,29,30,31,32,33,25,26,27,18,19,20,21,12,13,14,15,16,17,34,35,1,2,3,4,5,6,22,23,24,7,8,9,10,11)
AND (readingdatetime BETWEEN '1920-01-01' AND '2013-03-11') -- this and the next condition don't do anything, I think
AND depth BETWEEN 0 AND 99999
;
and then check whether the index is used or not. It is still very possible that you don’t need all the output columns (at least the two columns of the junction table are superfluous).
Please report back and tell us which PostgreSQL version you use.
For followers: I had a similar problem that looked like
select * from table where bigint_column between x and y and mod(bigint_column, 10000) = z
The problem was that bigint_column between x and y had an index, but the query matched essentially all the rows in the table, so the planner skipped that index (it would have had to scan the whole table anyway) and used a sequential scan instead. The fix for me was to create an expression index on the mod(...) side of the condition, so the planner could use that.
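A minimal sketch of that fix, with hypothetical table and column names, and illustrative values in place of x, y, and z (note the double parentheses, which CREATE INDEX requires around an expression):

-- Expression index on the mod(...) term.
CREATE INDEX readings_mod_idx ON readings ((mod(bigint_column, 10000)));

-- The selective mod(...) condition can now be answered from the index,
-- even though the BETWEEN range matches nearly every row:
SELECT *
FROM readings
WHERE bigint_column BETWEEN 0 AND 9999999999
  AND mod(bigint_column, 10000) = 42;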