Monday, June 14, 2021

TRYING OUT POSTGRES BLOOM INDEXES

First – if the word “Bloom” doesn’t yet resonate with you in a Postgres context, I’d recommend reading my previous introductory post on the topic here. This one is a follow-up which aims to set up a simple demo use case exemplifying the “fields of application” for Bloom, and to gauge its pros, cons and performance in comparison to B-tree.

So to get started, we first need some kind of “close to real life” test case to help us visualize and remember the problem better. As explained in the previous post, for Bloom to make sense we need a query/table with many columns which are queried in random combinations… but with only integer/text data types and equality operators supported, the options are a bit limited. After scrolling around a bit I couldn’t spot any suitable test sets (for example from here), so I stared at the ceiling for a while and pictured a simple table from the online car dealership domain, where Bloom would be a good fit and which is also easily digestible for our brains.

The online car dealership

The schema (a single table) is described below, with the number of distinct values per column given in the comments. Choosing a domain where values are not very distinct makes for a good example, as B-tree indexes are not too effective there and we should benefit from Bloom. Also note that the table is well normalized – we’re only dealing with numeric type codes rather than natural values, so we can generate test data easily with random brand_codes from 0..199 etc. As for the test data size, I decided to generate 5 million rows (cars listed for sale – this could even be a realistic number for a bigger country), giving us a 365 MB table which fits nicely into shared buffers, removing the disk access factor from the tests.

create table vehicle (
  id bigserial primary key,
  brand_code int not null,       -- n_distinct 200, imagine “brand_code”=0 => “Audi”
  model_code int not null,       -- n_distinct 25, “model_code”=0 => “Q7”
  color_code int not null,       -- n_distinct 25, “color_code”=0 => “silver” etc...
  year int not null,             -- n_distinct 26
  body_type_code int not null,   -- n_distinct 10
  doors int not null,            -- n_distinct 5
  seats int not null,            -- n_distinct 9
  gearbox_code int not null,     -- n_distinct 3
  fuel_type_code int not null,   -- n_distinct 10
  aircon_type_code int not null  -- n_distinct 4
);

insert into vehicle (brand_code, model_code, color_code, year, body_type_code, doors, seats, gearbox_code, fuel_type_code, aircon_type_code)
select
  random()*200,        -- brand
  random()*25,         -- model
  random()*25,         -- color
  random()*26 + 1990,  -- year
  random()*10,         -- body_type
  random()*4 + 1,      -- doors
  random()*8 + 1,      -- seats
  random()*3,          -- gearbox
  random()*10,         -- fuel
  random()*4           -- aircon
from
  generate_series(1, 5*1e6);  -- 5 million rows

analyze vehicle;
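After the ANALYZE, one can optionally sanity-check that the generated distributions match the intended distinct counts – a quick look into the pg_stats view does the trick (a minimal sketch, assuming the table lives in the public schema):

select attname, n_distinct
from pg_stats
where schemaname = 'public' and tablename = 'vehicle'
order by attname;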

Building the indexes

Again – the idea for our “application” is that users issue queries using equality filters on many columns in random combinations, meaning all the columns need to be indexed for fast response times. So we index all columns separately with B-tree indexes, and then create a single Bloom index covering all of them.
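A small prerequisite in case you’re following along: Bloom indexes come from a contrib module, so the extension needs to be created first (assuming the contrib packages are installed on your system):

CREATE EXTENSION IF NOT EXISTS bloom;  -- ships with the standard contrib modules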

CREATE INDEX ON vehicle(brand_code, model_code);
CREATE INDEX ON vehicle(color_code);
CREATE INDEX ON vehicle(year);
CREATE INDEX ON vehicle(body_type_code);
CREATE INDEX ON vehicle(doors);
CREATE INDEX ON vehicle(seats);
CREATE INDEX ON vehicle(gearbox_code);
CREATE INDEX ON vehicle(fuel_type_code);
CREATE INDEX ON vehicle(aircon_type_code);

CREATE INDEX bloom_80_bits ON vehicle USING bloom (brand_code, model_code, color_code, year, body_type_code, doors, seats, gearbox_code, fuel_type_code, aircon_type_code);

And the resulting index sizes: each of the B-tree indexes weighs in at 107 MB, totalling 963 MB, while the single Bloom index takes just 77 MB. Pretty impressive already – remember, they should provide basically comparable outcomes for our queries!
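The sizes can be verified with psql’s \di+ meta-command (used further below), or with a plain catalog query along these lines (one possible formulation, not the only one):

select indexrelid::regclass as index_name,
       pg_size_pretty(pg_relation_size(indexrelid)) as size
from pg_index
where indrelid = 'vehicle'::regclass
order by pg_relation_size(indexrelid) desc;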

Running a test query

Now to the interesting part – will queries profit from the Bloom index? Note that here I’m imposing some behaviour on our “users” for my contrived use case: I won’t use all 10 columns, leaving brand/model out of the query, as together they’re too selective and would probably give the “lossy” Bloom index no chance. This could be interpreted as our “users” being pragmatic – they know which features they want from their car and are set on utility first, not a flashy brand name 🙂 Note that the selected code values are chosen randomly, but as we have a random distribution with a limited number of distinct values, playing with different numbers didn’t change the outcome.

select * from vehicle
where color_code = 12
  and year > 2010
  and body_type_code = 5
  and doors = 4
  and gearbox_code = 2
  and fuel_type_code = 5
  and aircon_type_code = 2;

* Run 1.

Bitmap Heap Scan on vehicle  (cost=22376.30..28976.90 rows=8 width=48) (actual time=130.326..132.075 rows=13 loops=1)
  Recheck Cond: ((color_code = 12) AND (body_type_code = 5) AND (fuel_type_code = 5))
  Filter: ((year > 2010) AND (doors = 4) AND (gearbox_code = 2) AND (aircon_type_code = 2))
  Rows Removed by Filter: 2113
  Heap Blocks: exact=2072
  ->  BitmapAnd  (cost=22376.30..22376.30 rows=1971 width=0) (actual time=129.832..129.832 rows=0 loops=1)
        ->  Bitmap Index Scan on vehicle_color_code_idx  (cost=0.00..3487.43 rows=188667 width=0) (actual time=50.499..50.499 rows=199791 loops=1)
              Index Cond: (color_code = 12)
        ->  Bitmap Index Scan on vehicle_body_type_code_idx  (cost=0.00..9244.93 rows=500333 width=0) (actual time=33.992..33.992 rows=499668 loops=1)
              Index Cond: (body_type_code = 5)
        ->  Bitmap Index Scan on vehicle_fuel_type_code_idx  (cost=0.00..9643.43 rows=522000 width=0) (actual time=33.622..33.622 rows=500561 loops=1)
              Index Cond: (fuel_type_code = 5)
Planning time: 0.642 ms
Execution time: 132.150 ms

To my surprise, the B-tree indexes were still used, with a 130 ms average runtime over 5 runs. Hmm. We can see though that Postgres is smart enough to pick out the most selective indexes and do a bitmap merge (the BitmapAnd node) over only three of them. Ok, it seems that to test out Bloom we still need to disable/delete the normal indexes.
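By the way, a handy way to do that without losing the B-tree indexes for good is to drop them inside a transaction and roll it back when done – DDL is transactional in Postgres. A sketch (the index names below are the auto-generated defaults, three of which are visible in the plan above; verify yours with \d vehicle first):

begin;
drop index vehicle_color_code_idx, vehicle_year_idx, vehicle_body_type_code_idx,
  vehicle_doors_idx, vehicle_seats_idx, vehicle_gearbox_code_idx,
  vehicle_fuel_type_code_idx, vehicle_aircon_type_code_idx;
explain analyze
select * from vehicle
where color_code = 12 and year > 2010 and body_type_code = 5
  and doors = 4 and gearbox_code = 2 and fuel_type_code = 5 and aircon_type_code = 2;
rollback;  -- the B-tree indexes are back, untouched

Keep in mind that DROP INDEX takes an exclusive lock on the table, so this is strictly a test-system trick.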

* Run 2.

Bitmap Heap Scan on vehicle  (cost=139220.00..139377.59 rows=8 width=48) (actual time=67.600..67.673 rows=13 loops=1)
  Recheck Cond: ((color_code = 12) AND (body_type_code = 5) AND (doors = 4) AND (gearbox_code = 2) AND (fuel_type_code = 5) AND (aircon_type_code = 2))
  Rows Removed by Index Recheck: 37
  Filter: (year > 2010)
  Rows Removed by Filter: 28
  Heap Blocks: exact=78
  ->  Bitmap Index Scan on bloom_80_bits  (cost=0.00..139220.00 rows=40 width=0) (actual time=67.576..67.576 rows=78 loops=1)
        Index Cond: ((color_code = 12) AND (body_type_code = 5) AND (doors = 4) AND (gearbox_code = 2) AND (fuel_type_code = 5) AND (aircon_type_code = 2))
Planning time: 0.270 ms
Execution time: 67.728 ms

Hurray, finally some progress after disabling the indexes! It seems that Bloom actually is the better choice for our use case – it is 2x faster and ~10x more space efficient! The only sad thing is that Postgres cannot yet estimate that correctly, so one needs to point it out oneself. But why is that? Because of the cost estimation mechanics of the Postgres planner. We can see that the total cost (penalty points calculated for every operation) is much bigger for the Bloom case (139K vs 29K)… albeit Bloom is faster in real time. The main cost comes from the Bitmap Index Scan on the Bloom index, and from the fact that currently the whole index needs to be scanned and all 5 million signatures compared! One could of course fine-tune parameters like cpu_tuple_cost etc. so that the planner would give Bloom the advantage, but that’s another topic and could heavily affect other well-working queries. For comparison, by the way, a full table scan for our query (set enable_bitmapscan to off) takes about 500 ms.
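For reference, the full scan timing was obtained along these lines – the setting is session-local and easy to reset, and since Bloom indexes can only be used via bitmap scans, disabling those leaves the planner with a sequential scan:

set enable_bitmapscan to off;
explain analyze
select * from vehicle
where color_code = 12 and year > 2010 and body_type_code = 5
  and doors = 4 and gearbox_code = 2 and fuel_type_code = 5 and aircon_type_code = 2;
reset enable_bitmapscan;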

Tuning Bloom signature length

So after this first “feedback” from Bloom, I decided to see what the practical outcome of changing the signature length is. Trying out a relatively small value of 32 bits and a somewhat bigger 128 bits yielded index sizes of 48 MB and 106 MB respectively (still 5 million rows). Note that the index size doesn’t change linearly with the signature size, as the physical location info (6 bytes) already takes a considerable toll for every Bloom index entry.
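The two extra indexes from the listing below were declared roughly like this – the same column list as before, only with an explicit length parameter in bits (80 is the default, which our earlier bloom_80_bits index was created with implicitly):

CREATE INDEX bloom_32_bits ON vehicle USING bloom (brand_code, model_code, color_code, year, body_type_code, doors, seats, gearbox_code, fuel_type_code, aircon_type_code) WITH (length=32);

CREATE INDEX bloom_128_bits ON vehicle USING bloom (brand_code, model_code, color_code, year, body_type_code, doors, seats, gearbox_code, fuel_type_code, aircon_type_code) WITH (length=128);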

krl@postgres=# \di+ bloom_*
                            List of relations
 Schema │      Name      │ Type  │ Owner │  Table  │  Size  │ Description
────────┼────────────────┼───────┼───────┼─────────┼────────┼─────────────
 public │ bloom_128_bits │ index │ krl   │ vehicle │ 106 MB │
 public │ bloom_32_bits  │ index │ krl   │ vehicle │ 48 MB  │
 public │ bloom_80_bits  │ index │ krl   │ vehicle │ 77 MB  │
(3 rows)

After executing the same EXPLAIN ANALYZE as above (output not included here for brevity), Postgres funnily enough decided to use the less precise (32-bit) index, although a better (more precise) one was available. To test my theory that the more precise index would improve things, I dropped the 32-bit one – and indeed, runtime decreased 2x, from 137 ms to 65 ms! Fewer heap pages to visit and fewer tuples to recheck. Again, Postgres could be a bit cleverer here; it probably just uses the smallest available index covering the queried columns, as it would with B-tree.
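For the record, that step was simply:

DROP INDEX bloom_32_bits;  -- forces the planner onto the more precise 80/128-bit indexes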

Summary

Our example use case was not too exhaustive, but hopefully close enough to real life to give you an idea of when to consider Bloom, what to take into account, and what the pros and cons are. A list of things to remember:

  • Perfect for “checkbox” types of applications – lots of columns with little varying data (and lots of data rows). One could draw parallels to Oracle bitmap index use cases.
  • The index is lossy, meaning it may give false positives – instead of real column values, the index stores signatures, linked to the table via CTID-s as usual.
  • Be ready for some trial and error with Bloom – finding the optimal set of columns to include and the right signature length. But when applied appropriately, one can easily gain a 10x benefit (aggregating space and speed).
  • Currently only equality operators and the int4 and text (a.k.a. varchar, as they’re the same thing) data types are supported.
  • For 5 million rows with 10 columns, the default signature length of 80 bits was sufficient (given the limited number of distinct values) and performed better than the individual B-tree indexes, while at the same time being a lot smaller!
  • There’s probably some room for improvement in planner cost estimates for Bloom, so currently there’s no point in having both B-tree and Bloom indexes on the same columns, as B-tree is preferred.
  • What about unique values? ERROR:  access method “bloom” does not support unique indexes…
  • What about NULL-s? NULL-s are not indexed, so queries like “WHERE col IS NULL” can’t benefit from Bloom indexes.
  • Bloom is WAL-logged and crash-safe (unlike Hash indexes before Postgres 10), so you can rely on it.
  • Changing individual per-column bit lengths is probably not worth the effort – it only makes sense for special cases where one of the columns has a lot of distinct values and the others very few. Then, in theory, increasing the bits for the high-cardinality column should leave fewer rows to be thrown away during rechecking (see the sketch right after this list).
  • When specifying the index signature bit length (… WITH (length=X)), the bits are actually rounded up to full words (16 bits). Thus “length=33” ends up as “length=48” bits.
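To illustrate that point about per-column bits: the parameters are named col1 … colN, positionally matching the index’s column list, and default to 2 bits each. A hypothetical variant of our index giving the relatively high-cardinality brand_code column more weight could look like this (bloom_weighted is just an illustrative name):

CREATE INDEX bloom_weighted ON vehicle USING bloom (brand_code, model_code, color_code, year, body_type_code, doors, seats, gearbox_code, fuel_type_code, aircon_type_code) WITH (length=80, col1=4);  -- 4 bits for brand_code (200 distinct values), default 2 bits for the rest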
