Improving performance of nasty nested-view joins

Question:

I have a moderate-sized database spread out over a few tables, the rough architecture is:

  • Input Data (Data ID, Session ID and a few fields of statistical importance)
  • Input File (Data ID and a blob)
  • Stage 1 Output File (Data ID and a blob)
  • Stage 2 Output File (Data ID and a blob)
  • Category 1 results (Data ID and some booleans)
  • Category 2 results (Data ID and some integers)
  • Category 3 results (Data ID and some integers)

Each table has ~200,000 rows.

I also have a view that basically glues all of these together so that I can SELECT a bunch of IDs (usually selecting them based on the Session ID) and view all the relevant data on one page.

The view works, and the query plan’s index utilization seems sane, but the results are less-than-fast:

> EXPLAIN ANALYZE SELECT * FROM overlay WHERE test_session=1040;

                 QUERY PLAN
------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Merge Right Join  (cost=7.19..74179.49 rows=10 width=305) (actual time=10680.129..10680.494 rows=4 loops=1)
   Merge Cond: (p.data_id = d.id)
   ->  Merge Join  (cost=7.19..75077.04 rows=183718 width=234) (actual time=0.192..10434.995 rows=173986 loops=1)
         Merge Cond: (p.data_id = input_file.data_id)
         ->  Merge Join  (cost=7.19..69917.74 rows=183718 width=222) (actual time=0.173..9255.653 rows=173986 loops=1)
               Merge Cond: (p.data_id = stage1_output_file.data_id)
               ->  Merge Join  (cost=5.50..62948.54 rows=183718 width=186) (actual time=0.153..8081.949 rows=173986 loops=1)
                     Merge Cond: (p.data_id = stage2_output_file.data_id)
                     ->  Merge Join  (cost=3.90..55217.36 rows=183723 width=150) (actual time=0.132..6918.814 rows=173986 loops=1)
                           Merge Cond: (p.data_id = stage3_output_file.data_id)
                           ->  Nested Loop  (cost=2.72..47004.01 rows=183723 width=114) (actual time=0.111..5753.105 rows=173986 loops=1)
                                 Join Filter: (p.impression = istr.id)
                                 ->  Merge Join  (cost=1.68..30467.90 rows=183723 width=102) (actual time=0.070..2675.733 rows=173986 loops=1)
                                       Merge Cond: (p.data_id = s.data_id)
                                       ->  Merge Join  (cost=1.68..19031.56 rows=183723 width=58) (actual time=0.049..1501.546 rows=173986 loops=1)
                                             Merge Cond: (p.data_id = t.data_id)
                                             ->  Index Scan using Category1_Results_pkey on Category1_Results p  (cost=0.00..7652.17 rows=183723 width=18) (actual time=0.025..315.531 rows=173986 loops=1)
                                             ->  Index Scan using Category3_Results_pkey on Category3_Results t  (cost=0.00..8624.43 rows=183787 width=40) (actual time=0.016..321.460 rows=173986 loops=1)
                                       ->  Index Scan using Category2_Results_pkey on Category2_Results s  (cost=0.00..8681.47 rows=183787 width=44) (actual time=0.014..320.794 rows=173986 loops=1)
                                 ->  Materialize  (cost=1.04..1.08 rows=4 width=20) (actual time=0.001..0.007 rows=4 loops=173986)
                                       ->  Seq Scan on Category1_impression_str istr  (cost=0.00..1.04 rows=4 width=20) (actual time=0.005..0.012 rows=4 loops=1)
                           ->  Index Scan using Stage3_Output_file_pkey on Stage3_Output_file stage3  (cost=0.00..8178.35 rows=183871 width=36) (actual time=0.015..317.698 rows=173986 loops=1)
                     ->  Index Scan using analysis_file_pkey on analysis_file Stage2_Output  (cost=0.00..8168.99 rows=183718 width=36) (actual time=0.014..317.776 rows=173986 loops=1)
               ->  Index Scan using Stage1_output_file_pkey on Stage1_output_file stg1  (cost=0.00..8199.07 rows=183856 width=36) (actual time=0.014..321.648 rows=173986 loops=1)
         ->  Index Scan using input_file_pkey on input_file input  (cost=0.00..8618.05 rows=183788 width=36) (actual time=0.014..328.968 rows=173986 loops=1)
   ->  Materialize  (cost=0.00..39.59 rows=10 width=75) (actual time=0.046..0.150 rows=4 loops=1)
         ->  Nested Loop Left Join  (cost=0.00..39.49 rows=10 width=75) (actual time=0.039..0.128 rows=4 loops=1)
               Join Filter: (t.id = d.input_quality)
               ->  Index Scan using input_data_exists_index on input_data d  (cost=0.00..28.59 rows=10 width=45) (actual time=0.013..0.025 rows=4 loops=1)
                     Index Cond: (test_session = 1040)
               ->  Seq Scan on quality_codes t  (cost=0.00..1.04 rows=4 width=38) (actual time=0.002..0.009 rows=4 loops=4)
 Total runtime: 10680.902 ms

Two views underlie this. The first is our “full_results” view, defined as:

 SELECT p.data_id, p.x2, istr.str AS impression, input.h, p.x3, p.x3, p.x4, s.x5,
        s.x6, s.x7, s.x8, s.x9, s.x10, s.x11, s.x12, s.x13, s.x14, t.x15,
        t.x16, t.x17, t.x18, t.x19, t.x20, t.x21, t.x22, t.x23,
        input.data AS input, stage1_output_file.data AS stage1, 
        stage2_output_file.data AS stage2, stage3_output_file.data AS stage3
FROM category1_results p, category1_impression_str istr, input_file input,
     stage1_output_file, stage2_output_file, stage3_output_file, 
     category2_results s, category3_results t
 WHERE p.impression = istr.id AND p.data_id = input.data_id AND p.data_id = stage1_output_file.data_id
       AND p.data_id = stage2_output_file.data_id AND p.data_id = stage3_output_file.data_id AND p.data_id = s.data_id AND p.data_id = t.data_id;                                  

and the second is the overlay view, from which the query plan above was generated:

 SELECT d.data_id, d.test_session, d.a, d.b, t.c, d.d, d.e, d.f, r.*
 FROM input_data d LEFT JOIN quality_codes t ON t.id = d.input_quality
      LEFT JOIN full_results r ON r.data_id = d.data_id  
 WHERE NOT d.deleted;

We seem to be JOINing our whole data set most of the way down the chain, and I’m pretty convinced that’s our performance problem. Does anyone have a suggestion on ways to optimize this pig?

Answer:

I am speculating here, but my guess is that LEFT JOINing to the view forces the planner to compute the view’s result as a whole before joining it to the rest of the query.
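Before restructuring anything, there is one knob worth ruling out (an assumption on my part, since I don’t know your settings): once both views are inlined, this query touches around ten relations, which is above PostgreSQL’s default from_collapse_limit / join_collapse_limit of 8, so the planner stops searching for better join orders across the view boundary. Raising the limits for the session is a cheap experiment:

```sql
-- Defaults for both limits are 8; with ~10 relations involved the
-- planner no longer reorders joins freely. Session-local experiment:
SET from_collapse_limit = 16;
SET join_collapse_limit = 16;
EXPLAIN ANALYZE SELECT * FROM overlay WHERE test_session = 1040;
```

If the plan changes for the better, you can set these per session (or per role) instead of rewriting the views.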

Try inlining the query from the view and make it a JOIN instead of a LEFT JOIN, just to see if the planner finds a faster way now:

SELECT d.data_id, d.test_session, d.a, d.b, q.c, d.d, d.e, d.f
     , p.data_id AS p_data_id, p.x2, c.str AS impression, i.h
     , p.x3, p.x3, p.x4
     , s.x5, s.x6, s.x7, s.x8, s.x9, s.x10, s.x11, s.x12, s.x13, s.x14
     , t.x15, t.x16, t.x17, t.x18, t.x19, t.x20, t.x21, t.x22, t.x23
     , i.data AS input
     , s1.data AS stage1, s2.data AS stage2, s3.data AS stage3
FROM   input_data d
JOIN   category1_results        p ON p.data_id = d.data_id
JOIN   input_file               i USING (data_id)
JOIN   stage1_output_file      s1 USING (data_id)
JOIN   stage2_output_file      s2 USING (data_id)
JOIN   stage3_output_file      s3 USING (data_id)
JOIN   category2_results        s USING (data_id)
JOIN   category3_results        t USING (data_id)
JOIN   category1_impression_str c ON p.impression = c.id
LEFT   JOIN quality_codes       q ON q.id = d.input_quality
WHERE  NOT d.deleted;

I cleaned up your syntax to make it more manageable. I also added an alias for the second data_id column, and renamed the quality_codes alias from t to q since it clashed with category3_results t; without those fixes the query cannot execute.

If that leads to a considerably faster execution time, you can add back the rows lost to the INNER JOINs like this:

SELECT DISTINCT ON (1,2,3,4,5,6,7,8) *
FROM  (
    <<query>>

    UNION ALL
    SELECT d.data_id, d.test_session, d.a, d.b, q.c, d.d, d.e, d.f
          ,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL
          ,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL
          ,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL
    FROM   input_data d
    LEFT   JOIN quality_codes q ON q.id = d.input_quality
    WHERE  NOT d.deleted
    ) x
ORDER  BY 1,2,3,4,5,6,7,8, 9 NULLS LAST; -- p.data_id is otherwise not null
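An alternative sketch (untested, against the same assumed table and column names): keep every input_data row without the UNION by making all the joins LEFT JOINs directly in the inlined query:

```sql
SELECT d.data_id, d.test_session, d.a, d.b, q.c, d.d, d.e, d.f
     , p.data_id AS p_data_id, p.x2, c.str AS impression, i.h
     , p.x3, p.x3, p.x4
     , s.x5, s.x6, s.x7, s.x8, s.x9, s.x10, s.x11, s.x12, s.x13, s.x14
     , t.x15, t.x16, t.x17, t.x18, t.x19, t.x20, t.x21, t.x22, t.x23
     , i.data AS input
     , s1.data AS stage1, s2.data AS stage2, s3.data AS stage3
FROM   input_data d
LEFT   JOIN quality_codes            q ON q.id = d.input_quality
LEFT   JOIN category1_results        p ON p.data_id = d.data_id
LEFT   JOIN category1_impression_str c ON c.id = p.impression
LEFT   JOIN input_file               i ON i.data_id = d.data_id
LEFT   JOIN stage1_output_file      s1 ON s1.data_id = d.data_id
LEFT   JOIN stage2_output_file      s2 ON s2.data_id = d.data_id
LEFT   JOIN stage3_output_file      s3 ON s3.data_id = d.data_id
LEFT   JOIN category2_results        s ON s.data_id = d.data_id
LEFT   JOIN category3_results        t ON t.data_id = d.data_id
WHERE  NOT d.deleted;
```

Since every result table keys on data_id, this still returns at most one row per input_data row. Whether the planner handles this chain of outer joins better than the nested view is something only EXPLAIN ANALYZE on your data can tell.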

Having stared at this for a couple of days, I’m pretty sure one possible solution is to denormalize the tables and stick Session IDs on all of them. That should let the query planner reduce the JOINs to a much smaller subset of rows early on.

The big disadvantage here is the denormalization itself. Probably not a deal-breaker, but something I’d avoid if possible…
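As a sketch of that denormalization (column and table names assumed from the question; verify against your schema), copying test_session down into one of the result tables and indexing it would look like:

```sql
-- Copy the session id into a result table, then index it so the
-- planner can narrow each join input to one session up front.
ALTER TABLE category1_results ADD COLUMN test_session integer;

UPDATE category1_results p
SET    test_session = d.test_session
FROM   input_data d
WHERE  d.data_id = p.data_id;

CREATE INDEX category1_results_session_idx
    ON category1_results (test_session);
```

The same steps would be repeated for the other result and file tables, plus a trigger or application-side discipline to keep the copied column in sync with input_data.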
