Why is this IN-clause with subquery materialization slow?

Question :

Can someone help me explain why the performance of the two queries is so vastly different? (Setup code is at the end; the DB Fiddle is here: https://www.db-fiddle.com/f/eEuPWqR6gZcjbeSeWk4tu2/0)


select id from texts WHERE id not in (select doc_id from details);
select id from texts WHERE not exists (select 1 from details where details.doc_id = texts.id);

When I run the query select id from texts WHERE id not in (select doc_id from details);, it seems to run “forever”. The query plan looks like this:

                                     QUERY PLAN                                     
 Gather  (cost=1000.00..3703524012.67 rows=400000 width=8)
   Workers Planned: 2
   ->  Parallel Seq Scan on texts  (cost=0.00..3703483012.67 rows=166667 width=8)
         Filter: (NOT (SubPlan 1))
         SubPlan 1
           ->  Materialize  (cost=0.00..20220.86 rows=799991 width=8)
                 ->  Seq Scan on details  (cost=0.00..13095.91 rows=799991 width=8)
   Functions: 8
   Options: Inlining true, Optimization true, Expressions true, Deforming true
(10 rows)

Time: 1.319 ms

The costs already hint at a much longer execution time, but I do not understand why they are so much bigger. Why does the Parallel Seq Scan on texts take so long? What is Postgres doing here?

With fewer rows in the tables, I get the following query plan:

explain (analyse) select id from texts WHERE id not in (select doc_id from details);                        
                                                      QUERY PLAN                                                       
 Seq Scan on texts  (cost=11466.00..23744.18 rows=338247 width=8) (actual time=174.488..325.775 rows=10 loops=1)
   Filter: (NOT (hashed SubPlan 1))
   Rows Removed by Filter: 599990
   SubPlan 1
     ->  Seq Scan on details  (cost=0.00..9937.20 rows=611520 width=8) (actual time=0.014..56.549 rows=599990 loops=1)
 Planning Time: 0.079 ms
 Execution Time: 326.372 ms
(7 rows)

Why does it degrade so much with more rows? If I understand the output correctly, the materialization itself is not the problem. Also, how does the second query avoid this problem?

Schema/Data generation:

create table texts (
  id bigint primary key,
  url text);
create table details (
  id bigserial primary key,
  doc_id bigint,
  content text);
insert into details (doc_id, content) select generate_series(1,800000), 'foobar';
insert into texts (id, url) select generate_series(1,800000), 'something';

-- Delete some values
delete from details where doc_id IN ( 307531, 630732, 86402, 584950, 835230, 334934, 673047, 772541, 239455, 763671);

Answer :

What you got is essentially a nested loop, where for each row of “texts” it scans over the list of doc_id values from “details”.

The “Materialize” means that it makes a first pass over “details”, extracts just the doc_id, and puts that in private temp storage for reuse on the inner side of the nested loop. This has several advantages: it doesn’t need to reparse doc_id out of the whole row on each pass, it doesn’t have to lock each page as it iterates because the data is now in private storage, and the total size it iterates over is smaller, so it is more cacheable (and if not cached, still less IO). But none of that fundamentally changes the fact that it is still a nested loop operation, and it still sucks.
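You can sanity-check the plan's huge total against this nested-loop picture with some back-of-the-envelope arithmetic (a rough sketch, ignoring startup cost and the fact that rescans of a Materialize node are cheaper than the first pass):

```sql
-- Each of the ~166,667 estimated outer rows per worker rescans the
-- materialized list of doc_ids. Dividing the total cost by the outer
-- row estimate gives the approximate cost charged per rescan:
select 3703483012.67 / 166667;   -- roughly 22,000 cost units per outer row
select 166667 * 22221;           -- roughly 3.7 billion, matching the plan
```

So the cost explodes quadratically: doubling both table sizes roughly quadruples the work.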

When you have fewer qualifying rows in “details”, it thinks it can build a hash table of those doc_id and keep it all in memory, so it instead uses the “hashed SubPlan”. If you increase work_mem, you can increase the number of rows it needs to expect before it gives up on that approach and switches to the much worse one that at least won’t crash your machine.
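To see this for yourself, you can raise work_mem for the session before planning the query (a sketch; '64MB' is an illustrative value, not a tuned recommendation, and the amount actually needed depends on the number and width of the doc_id values):

```sql
-- Give the planner enough memory to keep the hash table of doc_ids.
SET work_mem = '64MB';
EXPLAIN select id from texts WHERE id not in (select doc_id from details);
-- Look for "Filter: (NOT (hashed SubPlan 1))" instead of Materialize.
RESET work_mem;
```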

It is possible to spill hash tables to disk efficiently. That is done for hash joins, but not done for hashed subplans. Why? Well, PostgreSQL was written by a finite number of people who each have a finite amount of time. Not everything which is not impossible has actually been programmed.

Also how does the second query avoid this problem?

Is the ‘second query’ the one with fewer rows (explained above), or the one with NOT EXISTS? Rather than speculate, just use EXPLAIN and see.
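If it is the NOT EXISTS variant, the check is a one-liner; on data like the setup above, PostgreSQL typically plans it as a hash anti join, which, unlike a hashed SubPlan, can spill to disk and so doesn't fall back to a nested loop when the hash table outgrows work_mem:

```sql
explain (analyse)
select id from texts
WHERE not exists (select 1 from details where details.doc_id = texts.id);
-- Expect a "Hash Anti Join" node rather than a SubPlan filter.
```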
