ClickHouse gets lazier and faster: Introducing lazy materialization

(clickhouse.com)

Comments

tmoertel 22 April 2025
This optimization should provide dramatic speed-ups when taking random samples from massive data sets, especially when the wanted columns can contain large values. That's because the basic SQL recipe relies on a LIMIT clause to determine which rows are in the sample (see query below), and this new optimization promises to defer reading the big columns until the LIMIT clause has filtered the data set down to a tiny number of lucky rows.

    SELECT *
    FROM Population
    WHERE weight > 0
    ORDER BY -LN(1.0 - RANDOM()) / weight
    LIMIT 100  -- Sample size.
Can anyone from ClickHouse verify that the lazy-materialization optimization speeds up queries like this one? (I want to make sure the randomization in the ORDER BY clause doesn't prevent the optimization.)
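(For anyone who wants to test it, here's the same recipe in ClickHouse's dialect, a sketch assuming the same hypothetical Population table; randCanonical() returns a Float64 in [0, 1), and log() is the natural logarithm:)

    SELECT *
    FROM Population
    WHERE weight > 0
    ORDER BY -log(1.0 - randCanonical()) / weight  -- exponential jitter, rate = weight
    LIMIT 100  -- Sample size.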
jurgenkesker 22 April 2025
I really like ClickHouse. Discovered it recently, and man, it's such a breath of fresh air compared to the suboptimal solutions I'd been using for analytics. It's so fast, and the CLI is also a joy to work with.
simonw 22 April 2025
Unrelated to the new materialization option, this caught my eye:

"this query sorts all 150 million values in the helpful_votes column (which isn’t part of the table’s sort key) and returns the top 3, in just 70 milliseconds cold (with the OS filesystem cache cleared beforehand) and a processing throughput of 2.15 billion rows/s"

I clearly need to update my mental model of what might be a slow query against modern hardware and software. Looks like it's so fast because, in a columnar database, it only has to load that one 150-million-value column. I guess sorting 150 million integers in 70ms shouldn't be surprising.

(Also "Peak memory usage: 3.59 MiB" for that? Nice.)

This is a really great article - very clearly explained, good diagrams, I learned a bunch from it.

kwillets 23 April 2025
Late Materialization, 19 years later.

https://dspace.mit.edu/bitstream/handle/1721.1/34929/MIT-CSA...

mmsimanga 22 April 2025
IMHO, if ClickHouse had a native Windows release that didn't need WSL or a Linux virtual machine, it would be more popular than DuckDB. I remember MySQL being way more popular than PostgreSQL for years, one of the reasons being that MySQL had a Windows installer.
Onavo 22 April 2025
Reminder: ClickHouse can optionally be embedded, so you don't need to reach for Duck just because of the hype (it's been buggy as hell every time I've tried it).

https://clickhouse.com/blog/chdb-embedded-clickhouse-rocket-...

justmarc 22 April 2025
Clickhouse is a masterpiece of modern engineering with absolute attention to performance.
skeptrune 23 April 2025
>Despite the airport drama, I’m still set on that beach holiday, and that means loading my eReader with only the best.

What a nice touch. The technical information and diagrams in this were top notch, but the fact that there was also some narrative threaded in really put it over the top for me.

xiasongh 23 April 2025
Has anyone compared ClickHouse and StarRocks[0]? Join performance seemed a lot better on StarRocks a few months ago, but I'm not sure if that still holds true.

[0] https://www.starrocks.io/

hexo 23 April 2025
What's up with these unscrollable websites? I don't get it. I scroll down a bit and it jumps back up, making it impossible to use.
vjerancrnjak 22 April 2025
It's quite amazing how a DB like this shows that all of those row-based DBs are doing something wrong: they can't even approach these speeds with B-tree index structures. I know they care about transactions more than ClickHouse does, but it's just amazing to see how fast modern machines are, billions of rows per second.

I'm pretty sure they didn't even bother to compress the dataset properly; with some tweaking, it could probably have been much smaller than 30 GB. The speed shows that reading the data is slower than decompressing it.
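For illustration, per-column codec tuning in ClickHouse looks something like this (a sketch with a made-up schema, not the article's actual table):

    CREATE TABLE reviews_tuned
    (
        review_date   Date   CODEC(Delta, ZSTD(3)),  -- delta-encode near-sorted dates
        helpful_votes UInt32 CODEC(T64, ZSTD(3)),    -- bit-transpose integers before compressing
        review_body   String CODEC(ZSTD(3))          -- heavier general-purpose compression than the LZ4 default
    )
    ENGINE = MergeTree
    ORDER BY review_date;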

Reminds me of that Cloudflare article where they had a similar idea about encryption being free (slower to read the data than to decrypt it) and found a bug that, when fixed, made this behavior materialize.

The compute engine (chdb) is a wonder to use.

simianwords 22 April 2025
Maybe I'm too inexperienced in this field, but reading about the mechanism, this seems like an obvious optimisation. Is it not?

But credit where it is due: obviously ClickHouse is an industry leader.

ohnoesjmr 22 April 2025
Wonder how well this propagates down to subqueries/CTEs.
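For example, does the big column still get read lazily when the LIMIT sits inside a CTE? Something like this sketch (hypothetical names):

    WITH top_reviews AS
    (
        SELECT review_body, helpful_votes  -- review_body is the "big" column
        FROM amazon.amazon_reviews
        ORDER BY helpful_votes DESC
        LIMIT 3
    )
    SELECT * FROM top_reviews;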
apwell23 23 April 2025
Is Apache Druid still a player in this space? I never seem to hear about it anymore. Why would someone choose it over ClickHouse?
higeorge13 23 April 2025
That’s an awesome change. Will it also work for LIMIT ... OFFSET queries?
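That is, the typical pagination shape (a sketch with assumed names):

    SELECT *
    FROM reviews
    ORDER BY review_date DESC
    LIMIT 20 OFFSET 100;  -- rows 101 through 120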
meta_ai_x 22 April 2025
Can we take the "packing your luggage" analogy further, only pack the things we actually use on the trip, and apply that to ClickHouse?
jangliss 23 April 2025
Thought this was Clickhole.com and was waiting for the payoff to the joke
dangoodmanUT 22 April 2025
God, ClickHouse is such great software. If only it were as ergonomic as DuckDB, and management weren't doing some questionable things (deleting references to competitors in GH issues, weird legal letters, etc.).

The CH contributors are really stellar, from multiple companies (Altinity, Tinybird, Cloudflare, ClickHouse)

tnolet 23 April 2025
We adopted ClickHouse ~4 years ago. We COULD have stayed on just Postgres. With a lot of bells, whistles, aggregation, denormalisation, aggressive retention limits, job queues, etc., we could have gotten acceptable response times for our interactive dashboard.

But we chose ClickHouse and now we just pump in data with little to no optimization.