The following notes are compiled and summarized from this blog post:
https://medium.com/@leventov/comparison-of-the-open-source-olap-systems-for-big-data-clickhouse-druid-and-pinot-8e042a5ed1c7
Similarities Between ClickHouse, Druid, and Pinot:
- Coupled architecture: query processing and data storage live on the same nodes
- Run queries fast:
a. Their own format for storing data with indexes, tightly integrated with their query processing engines
b. Data distributed relatively statically between the nodes
- No point updates and deletes:
a. Allows more efficient columnar compression and more aggressive indexes
b. ClickHouse does, however, support updates and deletes
- Big-data-style ingestion: both realtime data from Kafka and batch data
- Proven at large scale: tens of thousands of CPU cores / thousands of machines
- Immature
Differences Between ClickHouse and Druid/Pinot:
- Data Management
- Druid/Pinot:
a. Segments (covering specific time ranges) are persisted in deep storage (e.g. HDFS)
b. A dedicated master node is responsible for:
- assigning segments to nodes
- moving segments between nodes
c. Metadata is persisted in ZooKeeper and MySQL (Druid), or via the Helix framework on top of ZooKeeper (Pinot)
d. Data tiering: cold data is moved to servers with relatively large disks but less memory and CPU, which can significantly reduce the cost of running a large Druid cluster.
- ClickHouse:
a. No segments and no deep storage; nodes are responsible for:
- query processing
- persistence/durability of the data
b. No central authority or metadata server; all nodes are equal.
c. Partitioned tables include "weights" that control how newly written data is distributed; the weights are set and adjusted manually.
- Comparison: when data grows large, ClickHouse needs the table to be partitioned, and the query amplification factor becomes as large as the partitioning factor, making ClickHouse hit more nodes per query than Druid/Pinot. Users may therefore need to split the deployment into multiple "subclusters" to avoid this. ClickHouse does not rebalance data automatically; operators have to manually change the "node weights" of a partitioned table (a sketch of weight-based distribution follows below).
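A minimal Python sketch of the idea behind weight-based distribution of newly written data across the nodes of a partitioned table. The node names and weights are hypothetical, and this is an illustration of the concept, not ClickHouse's actual routing code:

```python
import random

# Hypothetical shards of a partitioned table with manually assigned weights;
# a newly added, empty node gets a higher weight so that new data gravitates
# toward it until the cluster evens out.
shard_weights = {
    "node-1": 1,  # old node, already holds a lot of data
    "node-2": 1,  # old node
    "node-3": 3,  # new node, should receive more of the new writes
}

def pick_shard_for_batch(weights):
    """Pick the shard that receives the next batch of rows,
    proportionally to its weight."""
    shards = list(weights)
    return random.choices(shards, weights=[weights[s] for s in shards], k=1)[0]

# Simulate routing 10,000 insert batches.
counts = {s: 0 for s in shard_weights}
for _ in range(10_000):
    counts[pick_shard_for_batch(shard_weights)] += 1

print(counts)  # node-3 receives roughly 3x the batches of node-1/node-2
```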
- Data Replication
- Druid/Pinot: the unit of replication is a single segment.
a. Segments are replicated in the deep storage layer
b. Segments are loaded on two different query processing nodes
c. The master server is responsible for data recovery
- ClickHouse: the unit of replication is a table partition on a server (see the sketch after this list).
a. Replication is "static and specific": servers know that they are replicas of each other.
b. ZooKeeper is used for replication management; it is not needed for a single-node ClickHouse deployment.
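To make the difference in replication units concrete, here is a tiny illustrative sketch (all node and segment names are made up): in Druid/Pinot each segment can be replicated onto any pair of nodes, while in ClickHouse whole table partitions are mirrored between fixed replica servers.

```python
# Druid/Pinot style: the unit of replication is a single segment, and the
# master can place each copy on any pair of query processing nodes.
segments = ["seg-2018-01-01", "seg-2018-01-02", "seg-2018-01-03"]
nodes = ["hist-1", "hist-2", "hist-3"]

segment_replicas = {
    seg: [nodes[i % len(nodes)], nodes[(i + 1) % len(nodes)]]
    for i, seg in enumerate(segments)
}

# ClickHouse style: replication is "static and specific" -- a whole table
# partition on one server is mirrored on a fixed replica server.
replica_pairs = {
    "ch-1": "ch-2",  # ch-1 and ch-2 hold identical copies of their partitions
    "ch-3": "ch-4",
}

print(segment_replicas)
print(replica_pairs)
```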
- Data Ingestion
- Druid/Pinot: batch data is ingested via Hadoop (or Spark); realtime data is handled by dedicated realtime nodes
- ClickHouse: accepts batches of rows directly and merges smaller row sets into larger ones in the background
- Comparison: ingestion in Druid/Pinot is heavyweight; ClickHouse is simpler, but rows have to be batched in front of ClickHouse itself (see the batching sketch below)
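A minimal sketch of the "batch in front of ClickHouse" pattern, assuming a hypothetical insert_into_clickhouse function and arbitrary size/time thresholds:

```python
import time

BATCH_SIZE = 10_000       # flush when this many rows are buffered (arbitrary)
FLUSH_INTERVAL_S = 1.0    # ...or when this much time has passed (arbitrary)

def insert_into_clickhouse(rows):
    # Placeholder: in a real setup this would be one INSERT of the whole
    # batch over the ClickHouse HTTP or native interface.
    print(f"inserted batch of {len(rows)} rows")

class Batcher:
    """Accumulate incoming rows and hand them to ClickHouse in large batches."""

    def __init__(self):
        self.buffer = []
        self.last_flush = time.monotonic()

    def add(self, row):
        self.buffer.append(row)
        if len(self.buffer) >= BATCH_SIZE or \
           time.monotonic() - self.last_flush >= FLUSH_INTERVAL_S:
            self.flush()

    def flush(self):
        if self.buffer:
            insert_into_clickhouse(self.buffer)
            self.buffer = []
        self.last_flush = time.monotonic()

batcher = Batcher()
for i in range(25_000):    # e.g. rows consumed from Kafka
    batcher.add({"event_id": i})
batcher.flush()            # flush the tail on shutdown
```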
- Query Execution
- Druid/Pinot: broker nodes determine which query processing nodes each subquery should be issued to, based on a segment-to-node mapping kept in broker memory.
The segment-to-node mapping can take gigabytes of memory, so it would be wasteful to keep it on every node.
- ClickHouse: any node can be the "entry point" for a query
- ClickHouse and Pinot return a partial result when a few subqueries fail; Druid fails the whole query (a broker scatter/gather sketch follows below)
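An illustrative sketch of broker-style scatter/gather with a segment-to-node map and optional partial results. All names and the routing logic are simplified assumptions, not the actual Druid/Pinot code:

```python
# Hypothetical segment-to-node mapping, kept only in broker memory; for large
# clusters this map alone can take gigabytes, which is why it is not
# replicated to every node.
segment_to_nodes = {
    "seg-a": ["hist-1", "hist-2"],
    "seg-b": ["hist-2", "hist-3"],
    "seg-c": ["hist-3", "hist-1"],
}

def query_node(node, segments):
    """Placeholder for a subquery sent to one query processing node."""
    if node == "hist-3":                  # simulate a failing node
        raise RuntimeError("node down")
    return sum(len(s) for s in segments)  # dummy partial aggregate

def broker_query(segment_to_nodes, allow_partial=True):
    # Group segments by the first replica of each (simplified routing).
    plan = {}
    for seg, nodes in segment_to_nodes.items():
        plan.setdefault(nodes[0], []).append(seg)

    total, failed = 0, []
    for node, segs in plan.items():
        try:
            total += query_node(node, segs)
        except RuntimeError:
            failed.append(node)
            if not allow_partial:         # Druid-like behavior: fail the query
                raise
    return total, failed                  # Pinot/ClickHouse-like: partial result

print(broker_query(segment_to_nodes))     # partial result plus the failed nodes
```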
Differences between Druid and Pinot
- Segment management
- Druid: segment metadata is persisted in ZooKeeper and MySQL. ZooKeeper keeps the mapping from segment id to the list of query processing nodes on which the segment is loaded; MySQL keeps extended metadata, such as the size of the segment and the list of dimensions and metrics in its data.
- Pinot: relies on Curator for communication with ZooKeeper and on the Helix framework for segment and cluster management logic.
- Predicate pushdown
- Pinot: if ingested data is partitioned in Kafka by some dimension key, queries that filter on that dimension can be pruned by the broker node upfront, so fewer segments and query processing nodes are hit.
- Druid: supported for batch data via key-based partitioning, but not for realtime data (a pruning sketch follows below)
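A small illustrative sketch of broker-side segment pruning when data was partitioned in Kafka by a dimension key; the segment metadata layout, the dimension name, and the hashing scheme are assumptions made for the example:

```python
# Hypothetical segments annotated with the Kafka partition they were built
# from, because ingestion was partitioned by a dimension key (customer_id).
segments = {
    "seg-1": {"customer_id_partition": 0},
    "seg-2": {"customer_id_partition": 1},
    "seg-3": {"customer_id_partition": 2},
}

NUM_PARTITIONS = 3

def prune_segments(segments, customer_id):
    """Broker-side pruning: only segments whose Kafka partition could
    contain this customer_id need to be queried."""
    target = hash(customer_id) % NUM_PARTITIONS
    return [seg for seg, meta in segments.items()
            if meta["customer_id_partition"] == target]

print(prune_segments(segments, customer_id=42))  # only one segment is hit
```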
- Pluggability
- Druid:
- deep storage: HDFS, Cassandra, Amazon S3, Google Cloud Storage, Azure Blob Storage
- realtime: Kafka, RabbitMQ, Samza, Flink, Spark, Storm
- sink: Druid, Graphite, Ambari, StatsD, Kafka
- Pinot: HDFS/Amazon S3, Kafka
- Data Format/Query Execution Engine
- Pinot's format and engine are better optimized:
- Compression with bit granularity (Druid compresses with byte granularity; see the packing sketch after this list)
- The inverted index is optional per column; in Druid it is obligatory for every column
- Min and max values of numeric columns are recorded per segment
- Out-of-the-box support for data sorting
- A more optimized format is used for multi-value columns
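A rough illustration of why bit-granularity packing (Pinot) beats byte-granularity packing (Druid) for low-cardinality, dictionary-encoded columns; the sizing model is a deliberately simplified assumption, not either system's real on-disk format:

```python
import math

def packed_size_bits(values, bit_granularity=True):
    """Size of a dictionary-encoded column if each value id is stored with
    bit granularity (Pinot-style) versus rounded up to whole bytes
    (Druid-style). Purely illustrative."""
    cardinality = len(set(values))
    bits_per_value = max(1, math.ceil(math.log2(cardinality)))
    if not bit_granularity:
        bits_per_value = math.ceil(bits_per_value / 8) * 8  # round up to bytes
    return bits_per_value * len(values)

column = [i % 5 for i in range(1_000_000)]  # low-cardinality column (5 values)
print(packed_size_bits(column, bit_granularity=True))   # 3 bits per value
print(packed_size_bits(column, bit_granularity=False))  # 8 bits per value
```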
- Segment Assignment(Balancing) Algorithm
- Druid: takes the segment's table and time into account and computes a final placement score; reported to give a 30-40% improvement
- Pinot: assigns the segment to the node with the fewest segments loaded at the moment (both approaches are sketched below)
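A sketch of the two assignment strategies under simplified assumptions; the scoring function is made up for illustration and is not Druid's actual formula:

```python
import datetime as dt

nodes = {"hist-1": [], "hist-2": [], "hist-3": []}  # node -> loaded segments

def assign_pinot_style(nodes):
    """Pinot-style: pick the node with the fewest segments loaded right now."""
    return min(nodes, key=lambda n: len(nodes[n]))

def assign_druid_style(nodes, segment_table, segment_time):
    """Druid-style (simplified): penalize placing segments of the same table
    and of nearby time intervals on the same node, then pick the best score."""
    def score(node):
        penalty = 0.0
        for table, ts in nodes[node]:
            if table == segment_table:
                penalty += 1.0
            days_apart = abs((ts - segment_time).days)
            penalty += 1.0 / (1.0 + days_apart)  # closer in time -> larger penalty
        return penalty
    return min(nodes, key=score)

node = assign_druid_style(nodes, "clicks", dt.date(2018, 2, 1))
nodes[node].append(("clicks", dt.date(2018, 2, 1)))
print(node)
```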
- Fault tolerance
- Pinot: returns a partial result when some subqueries fail; Druid fails the whole query
- Tiering
- Druid: supports tiers for older and newer data; nodes for older data have a much lower "CPU/RAM resources per loaded segment" ratio (a simple tiering rule is sketched below).
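A toy sketch of an age-based tiering rule of the kind described above; the tier names and the 30-day threshold are arbitrary assumptions:

```python
import datetime as dt

# Hypothetical tiers: "hot" nodes have lots of CPU/RAM per loaded segment,
# "cold" nodes have big disks but far fewer resources per segment.
TIER_RULES = [
    ("hot",  dt.timedelta(days=30)),  # data newer than 30 days stays hot
    ("cold", dt.timedelta.max),       # everything older goes to cold nodes
]

def tier_for_segment(segment_end_time, now=None):
    """Return the tier a segment should live on, based on its age."""
    now = now or dt.datetime.now()
    age = now - segment_end_time
    for tier, max_age in TIER_RULES:
        if age <= max_age:
            return tier
    return TIER_RULES[-1][0]

print(tier_for_segment(dt.datetime(2018, 3, 1), now=dt.datetime(2018, 3, 10)))  # hot
print(tier_for_segment(dt.datetime(2017, 1, 1), now=dt.datetime(2018, 3, 10)))  # cold
```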