Update 'How fast are really the analytical DBMS?'
parent 8ef9c5a3ad
commit 92b720ed90
@@ -55,3 +55,5 @@ And finally clickhouse
Additionally, ClickHouse used roughly 3.5 GB of memory to execute the query, while DuckDB ran with a 4 GB RAM limit. The limit was evidently respected, since the container used for these tests would otherwise have crashed.
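As a minimal sketch of how that kind of cap can be set (assuming DuckDB's Python API; the post doesn't show its actual benchmark setup):

```python
import duckdb

# Sketch: cap DuckDB's memory at 4 GB, matching the container budget
# used in the benchmark. The connection setup here is an assumption.
con = duckdb.connect()
con.sql("SET memory_limit = '4GB'")

# Verify the cap took effect.
print(con.sql("SELECT current_setting('memory_limit')").fetchone())
```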
ClickHouse was 88 times faster than PostgreSQL.
This two-orders-of-magnitude speedup is an obvious enabler: it means someone can attach gigabyte-sized datasets to dashboards without pre-aggregating the data. Over my career I've had to build hundreds of ETL processes for the sole purpose of feeding some dashboard that would otherwise have taken hours to plot its results. You can still do that with cloud-hosted distributed databases like Snowflake or Redshift, but at a significant cost.
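To illustrate the workflow this enables, here is a sketch that aggregates a raw Parquet file directly with DuckDB; the file name and columns are made up for the example:

```python
import duckdb

# Hypothetical example: 'trips.parquet' stands in for a gigabyte-sized raw
# dataset sitting behind a dashboard. The aggregation runs directly on the
# file, so no pre-aggregation ETL job is needed.
rel = duckdb.sql("""
    SELECT payment_type,
           COUNT(*)          AS trips,
           AVG(total_amount) AS avg_amount
    FROM read_parquet('trips.parquet')
    GROUP BY payment_type
    ORDER BY trips DESC
""")
print(rel.fetchall())
```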