diff --git a/How-fast-are-really-the-analytical-DBMS%3F.md b/How-fast-are-really-the-analytical-DBMS%3F.md
index cbe7f50..acd9c61 100644
--- a/How-fast-are-really-the-analytical-DBMS%3F.md
+++ b/How-fast-are-really-the-analytical-DBMS%3F.md
@@ -54,4 +54,6 @@ And finally clickhouse
 
 Additionally, clickhouse used rougly 3.5 GB of memory to execute the query, while duckdb ran with a limitation of 4GB RAM memory. The limitation was followed, since the container used for these tests would have crashed if it weren't.
 
-Clickhouse was 88 faster than PostgreSQL.
\ No newline at end of file
+Clickhouse was 88 times faster than PostgreSQL.
+
+This two-order-of-magnitude speedup is an obvious enabler: it means you can attach gigabyte-sized datasets to dashboards without pre-aggregating the data. I've had to build hundreds of ETL processes over my career for the sole purpose of feeding a dashboard that would otherwise have taken hours to render. You can still do that with cloud-hosted distributed databases like Snowflake or Redshift, but at a significant cost.
\ No newline at end of file
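Note on the 4GB DuckDB cap referenced in the hunk above: DuckDB exposes this as the `memory_limit` setting. Below is a minimal sketch of what such a capped, no-ETL dashboard query might look like; the `events.parquet` file and `ts` column are hypothetical stand-ins, not taken from the benchmark.

```python
import duckdb

# In-memory DuckDB session, capped at 4 GB of working memory --
# the same limit the benchmark in the diff describes.
con = duckdb.connect()
con.execute("SET memory_limit = '4GB'")

# Aggregate a large Parquet file directly, with no pre-aggregation
# ETL step in between. 'events.parquet' and the 'ts' column are
# hypothetical placeholders for a gigabyte-sized dataset.
rows = con.execute("""
    SELECT date_trunc('day', ts) AS day, count(*) AS events
    FROM 'events.parquet'
    GROUP BY 1
    ORDER BY 1
""").fetchall()
print(rows)
```

With the cap in place, DuckDB spills to disk or fails the query rather than exhausting the container's RAM, which is consistent with the observation above that the limit was respected.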