Question :
As a simplified example, suppose I have a table like this:
seq | value
----+------
102 | 11954
211 | 43292
278 | 19222
499 | 3843
The table may contain hundreds of millions of records, and I need to frequently do queries like this:
SELECT sum(value) WHERE seq > $a and seq < $b
Even if seq is indexed, a typical database implementation will loop through each row to compute the sum, in the best case O(n), where n is the size of the range.
Is there any database that can do this efficiently, as in O(log(n)) per query?
I have come across a data structure called a Segment Tree as described here. It is also sometimes referred to as a range tree or interval tree, although these names often describe slightly different variations of the data structure.
However, I have not come across any database that implements such a data structure. Implementing it from scratch is easy for an in-memory structure, but becomes tricky if it has to be persisted or is too big to fit into memory. If there is an efficient pattern for implementing this on top of an existing database, that could also help.
Side note: This is not an append-only table, so a solution such as keeping a cumulative sum will not work in this case.
Answer :
Using SQL Server ColumnStore indexes
Well, okay, just one — a clustered columnstore index.
If you want to read about the hardware I did this on, head over here. Full disclosure, I wrote that blog post on the website of the company I work for.
On to the test!
Here’s some generic code to build a pretty big table. Same warning as Evan: this can take a while to build and index.
USE tempdb
CREATE TABLE t1 (Id INT NOT NULL, Amount INT NOT NULL)
;WITH T (N)
AS ( SELECT X.N
FROM (
VALUES (NULL), (NULL), (NULL),
(NULL), (NULL), (NULL),
(NULL), (NULL), (NULL),
(NULL) ) AS X (N)
), NUMS (N) AS (
SELECT TOP ( 710000000 )
ROW_NUMBER() OVER ( ORDER BY ( SELECT NULL )) AS N
FROM T AS T1, T AS T2, T AS T3,
T AS T4, T AS T5, T AS T6,
T AS T7, T AS T8, T AS T9,
T AS T10 )
INSERT dbo.t1 WITH ( TABLOCK ) ( Id, Amount )
SELECT NUMS.N % 999 AS Id, NUMS.N % 9999 AS Amount
FROM NUMS;
--(705032704 row(s) affected) --Aw, close enough
Well, Evan wins for simplicity, but I’ve talked about that before.
Here’s the index definition. La and dee and dah.
CREATE CLUSTERED COLUMNSTORE INDEX CX_WOAHMAMA ON dbo.t1
Looking at a count, every Id has a pretty even distribution:
SELECT t.Id, COUNT(*) AS [Records]
FROM dbo.t1 AS t
GROUP BY t.Id
ORDER BY t.Id
Results:
Id Records
0 5005005
1 5005006
2 5005006
3 5005006
4 5005006
5 5005006
…
994 5005005
995 5005005
996 5005005
997 5005005
998 5005005
With every Id having ~5,005,005 rows, we can look at a pretty small range of IDs to get you a 10 million row sum.
SELECT COUNT(*) AS [Records], SUM(CONVERT(BIGINT, t.Amount)) AS [Total]
FROM dbo.t1 AS t
WHERE t.Id > 0
AND t.Id < 3;
Result:
Records Total
10010012 50015062308
Query profile:
Table 't1'. Scan count 6, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 2560758, lob physical reads 0, lob read-ahead reads 0.
Table 't1'. Segment reads 4773, segment skipped 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 564 ms, elapsed time = 106 ms.
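If you want to reproduce those numbers, the profile output above is just the standard session statistics; a minimal sketch, assuming you turn them on in the same session before running the query:
SET STATISTICS IO ON;   -- per-table logical reads and columnstore segment reads
SET STATISTICS TIME ON; -- CPU time and elapsed time per statement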
For fun, a larger aggregation:
SELECT COUNT(*) AS [Records], SUM(CONVERT(BIGINT, t.Amount)) AS [Total]
FROM dbo.t1 AS t
WHERE t.Id > 0
AND t.Id < 101;
Results:
Records Total
500500505 2501989114575
Query profile:
Table 't1'. Scan count 6, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 2560758, lob physical reads 0, lob read-ahead reads 0.
Table 't1'. Segment reads 4773, segment skipped 0.
Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
SQL Server Execution Times:
CPU time = 1859 ms, elapsed time = 321 ms.
Hope this helps!
PostgreSQL with a BRIN index
Even if seq is indexed, a typical database implementation will loop through each row to compute the sum, in the best case O(n), where n is the size of the range.
That’s not true. At least, no decent database will do that. PostgreSQL supports creating BRIN indexes on these kinds of tables. BRIN indexes are super-small and can fit in RAM even on tables this big. Hundreds of millions of rows ain’t nothing.
Here are 300 million rows, defined just like you ordered them. Warning: it may take a long time to create (Time: 336057.807 ms for the table + 95121.809 ms for the index).
CREATE TABLE foo
AS
SELECT seq::int, trunc(random()*100000)::int AS v
FROM generate_series(1,3e8) AS gs(seq);
CREATE INDEX ON foo USING BRIN (seq);
ANALYZE foo;
And, … now…
EXPLAIN ANALYZE SELECT sum(v) FROM foo WHERE seq BETWEEN 424242 AND 6313376;
QUERY PLAN
-------------------------------------------------------------------------------------------------------------------------------------------
Aggregate (cost=1486163.53..1486163.54 rows=1 width=4) (actual time=1493.888..1493.888 rows=1 loops=1)
-> Bitmap Heap Scan on foo (cost=58718.12..1471876.19 rows=5714938 width=4) (actual time=12.565..1035.153 rows=5889135 loops=1)
Recheck Cond: ((seq >= 424242) AND (seq <= 6313376))
Rows Removed by Index Recheck: 41105
Heap Blocks: lossy=26240
-> Bitmap Index Scan on foo_seq_idx (cost=0.00..57289.38 rows=5714938 width=0) (actual time=10.378..10.378 rows=262400 loops=1)
Index Cond: ((seq >= 424242) AND (seq <= 6313376))
Planning time: 0.125 ms
Execution time: 1493.948 ms
(9 rows)
About 1.5 seconds to aggregate/sum 5,889,135 rows in the given range.
Despite the table being 10 GB, the BRIN index is 304 kB.
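If you want to check those sizes on your own copy, the usual size functions will do it (foo_seq_idx is the default index name, as seen in the plan above):
SELECT pg_size_pretty(pg_relation_size('foo'))         AS table_size,
       pg_size_pretty(pg_relation_size('foo_seq_idx')) AS brin_index_size;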
Even faster
If this is still not fast enough, you can cache the aggregates by 100k rows.
CREATE MATERIALIZED VIEW cache_foo
AS
SELECT seq/1e5::int AS grp, sum(v)
FROM foo GROUP BY seq/1e5::int
ORDER BY 1;
Now you only ever need the BRIN index to aggregate the at most 2(1e5-1) rows at the edges of the range, plus the cached sums for the fully covered groups, rather than 300 million or whatever. A sketch of such a combined query is below.
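This is a rough sketch, not a drop-in solution: the bounds are hard-coded to the same 424242 and 6313376 used above, and the cached total keeps its default column name "sum" from the view definition. The fully covered 100k groups come from cache_foo; the BRIN index handles the two partial groups at the ends of the range.
WITH bounds AS (
    SELECT 424242 AS a, 6313376 AS b
)
SELECT (SELECT coalesce(sum(c."sum"), 0)
          FROM cache_foo AS c, bounds AS bo
         WHERE c.grp > bo.a / 1e5::int          -- groups strictly after the one containing a
           AND c.grp < bo.b / 1e5::int)         -- and strictly before the one containing b
     + (SELECT coalesce(sum(f.v), 0)
          FROM foo AS f, bounds AS bo
         WHERE f.seq BETWEEN bo.a AND bo.b                              -- BRIN-friendly range predicate
           AND f.seq / 1e5::int IN (bo.a / 1e5::int, bo.b / 1e5::int))  -- edge groups only
       AS total;
If both bounds land in the same 100k group, the first subquery contributes nothing and the second one sums the whole range, so nothing is double-counted.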
Hardware
Lenovo X230, i5-3230M, 16 GB RAM, 1 TB Samsung 840 SSD.