Comparing Open-Source OLAP Systems for Big Data: ClickHouse, Druid, and Pinot

The following notes are a summary of this blog post:
https://medium.com/@leventov/comparison-of-the-open-source-olap-systems-for-big-data-clickhouse-druid-and-pinot-8e042a5ed1c7

Similarities Between ClickHouse, Druid, and Pinot:

  1. Coupled architecture: query processing and data storage live on the same nodes (no compute/storage separation)
  2. Fast queries:
    a. Each has its own storage format with indexes, tightly integrated with its query processing engine
    b. Data is distributed relatively statically between the nodes
  3. No point updates and deletes:
    a. Enables more efficient columnar compression and more aggressive indexes
    b. ClickHouse does support updates and deletes, but as asynchronous batch mutations rather than point operations
  4. Big-data-style ingestion: both realtime data from Kafka and batch data
  5. Proven at large scale: tens of thousands of CPU cores / thousands of machines
  6. All three are still relatively immature

Differences Between ClickHouse and Druid/Pinot:

  1. Data Management
  • Druid/Pinot:
    a. Data is organized into segments (each covering a specific time range) and persisted in deep storage (e.g. HDFS)
    b. A special master node is responsible for:
    - assigning segments to nodes
    - moving segments between nodes
    c. Metadata is persisted in ZooKeeper/MySQL (Druid) or managed via Helix (Pinot)
    d. Data tiering: cold data can be moved to servers with relatively large disks but less memory and CPU, which can significantly reduce the cost of running a large Druid cluster
  • ClickHouse:
    a. No segments and no deep storage; the nodes themselves are responsible for:
    - query processing
    - persistence/durability of the data
    b. No central authority or metadata server; all nodes are equal
    c. Partitioned (distributed) tables include "weights" for distributing newly written data, and the weights are managed manually
  • Comparison: when data grows large, a ClickHouse table must be partitioned across more nodes, and the query amplification factor grows as large as the partitioning factor, so ClickHouse touches more nodes per query than Druid/Pinot. Users may therefore need to build multiple "subclusters" to avoid this. ClickHouse does not rebalance data automatically; operators must manually adjust the node weights of a partitioned table.
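The manually managed shard "weights" above can be pictured as weight-proportional routing of newly inserted rows. A minimal sketch with invented weights (the third shard is hypothetically newer and empty, so it gets a higher weight to fill up faster):

```python
import random

def pick_shard(weights, rng=random.Random(42)):
    """Pick a shard index with probability proportional to its weight,
    mimicking how a distributed table spreads newly written rows across
    shards according to manually configured per-shard weights."""
    total = sum(weights)
    r = rng.uniform(0, total)
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r < acc:
            return i
    return len(weights) - 1  # guard against floating-point edge cases

# A new, empty shard gets weight 3 so it catches up with the older shards.
weights = [1, 1, 3]
counts = [0, 0, 0]
for _ in range(10_000):
    counts[pick_shard(weights)] += 1
print(counts)  # roughly proportional to the weights: shard 2 gets ~60%
```

Rebalancing then amounts to an operator raising or lowering these weights by hand; there is no automatic process watching shard sizes.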
  2. Data Replication
  • Druid/Pinot: the unit of replication is a single segment.
    a. Segments are replicated in the deep storage layer
    b. Each segment is also loaded on two different query processing nodes
    c. The master server is responsible for data recovery
  • ClickHouse: the unit of replication is a table partition on a server.
    a. Replication is "static and specific": servers know that they are replicas of each other
    b. ZooKeeper is used for replication management; it is not needed for a single-node ClickHouse deployment
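The contrast between the two replication units can be illustrated with two hypothetical cluster mappings (all node, shard, and segment names here are invented):

```python
# ClickHouse: replication is configured statically per shard. Each server
# knows up front exactly which other servers are its replicas.
clickhouse_cluster = {
    "shard_1": ["ch-node-1", "ch-node-2"],  # these two replicate each other
    "shard_2": ["ch-node-3", "ch-node-4"],
}

# Druid/Pinot: the unit of replication is a segment. The master/controller
# may load any segment onto any pair of nodes, so the mapping is dynamic
# and per-segment, not per-server.
segment_assignment = {
    "events_2024-01-01": ["hist-3", "hist-7"],
    "events_2024-01-02": ["hist-1", "hist-7"],
}
```

In the first layout a failed server is recovered from its fixed partner; in the second, the master re-loads the lost segments from deep storage onto any available nodes.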
  3. Data Ingestion
  • Druid/Pinot: batch data is ingested via Hadoop/Spark, realtime data via dedicated realtime nodes
  • ClickHouse: accepts batches of rows directly and merges small row sets into larger ones in the background
  • Comparison: Druid/Pinot ingestion is heavyweight; ClickHouse is simpler, but batching must be done in front of ClickHouse itself
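The "batching in front of ClickHouse" responsibility can be sketched as a small accumulator that flushes rows in large chunks; the `insert` callback stands in for a real client call, and the thresholds are arbitrary:

```python
import time

class Batcher:
    """Accumulate rows and flush them in large batches, since ClickHouse
    prefers infrequent large inserts over many small ones."""

    def __init__(self, insert, max_rows=100_000, max_age_s=1.0):
        self.insert = insert          # callback receiving a list of rows
        self.max_rows = max_rows      # flush when this many rows are queued
        self.max_age_s = max_age_s    # ... or when the batch is this old
        self.rows = []
        self.first_ts = None

    def add(self, row):
        if self.first_ts is None:
            self.first_ts = time.monotonic()
        self.rows.append(row)
        if (len(self.rows) >= self.max_rows
                or time.monotonic() - self.first_ts >= self.max_age_s):
            self.flush()

    def flush(self):
        if self.rows:
            self.insert(self.rows)
            self.rows, self.first_ts = [], None

batches = []
b = Batcher(batches.append, max_rows=3)
for i in range(7):
    b.add({"id": i})
b.flush()  # flush the final partial batch
print([len(batch) for batch in batches])  # → [3, 3, 1]
```

In Druid/Pinot this buffering lives inside the system (realtime nodes build segments); with ClickHouse it is the writer's job.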
  4. Query Execution
  • Druid/Pinot: brokers determine which query processing nodes subqueries should be issued to, based on a segment-to-node mapping kept in broker memory.
    This mapping can take gigabytes of memory, so it would be wasteful to replicate it on all nodes
  • ClickHouse: every node can be the "entry point" for a query
  • ClickHouse and Pinot return a partial result when a few subqueries fail; Druid fails the whole query
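The scatter-gather step and the partial-result behavior can be sketched as follows; the node names and the toy `run` function are invented for illustration:

```python
def scatter_gather(subqueries, run, allow_partial=True):
    """Fan a query out to nodes and merge what comes back.
    allow_partial=True models ClickHouse/Pinot (return what succeeded);
    allow_partial=False models Druid (fail the whole query)."""
    results, failed = [], []
    for node, sub in subqueries:
        try:
            results.append(run(node, sub))
        except Exception:
            failed.append(node)
    if failed and not allow_partial:
        raise RuntimeError(f"subqueries failed on nodes: {failed}")
    return results, failed

def run(node, sub):
    if node == "node-2":            # simulate one node being down
        raise IOError("node down")
    return sub * 10

subs = [("node-1", 1), ("node-2", 2), ("node-3", 3)]
print(scatter_gather(subs, run))    # → ([10, 30], ['node-2'])
```

With `allow_partial=False` the same call raises instead of returning the two successful subquery results.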

Differences between Druid and Pinot

  1. Segment Management
  • Druid: metadata is persisted in ZooKeeper and MySQL; ZooKeeper keeps the mapping from segment id to the list of query processing nodes on which the segment is loaded, while MySQL keeps extended metadata such as the size of the segment and the list of dimensions and metrics in its data
  • Pinot: relies on Curator for communication with ZooKeeper, and on the segment and cluster management logic of the Helix framework
  2. Predicate Pushdown
  • Pinot: if ingested data is partitioned in Kafka by some dimension key, queries filtering on that dimension can be pruned by the broker up front, so fewer segments and query processing nodes are hit
  • Druid: supports this for batch data (key-based partitioning) but not for realtime data
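The pruning idea can be sketched with a stable hash shared by producer and broker: rows are keyed by a dimension into Kafka partitions, segments remember their source partition, and the broker skips segments whose partition cannot contain the filter value. Partition count and segment layout here are invented:

```python
from zlib import crc32

N_PARTITIONS = 8

def partition_for(key: str) -> int:
    # Stable hash so the producer and the broker agree on the mapping
    # (Python's built-in hash() is salted per process, so avoid it here).
    return crc32(key.encode()) % N_PARTITIONS

# One hypothetical segment per Kafka partition; real clusters have many.
segments = [{"name": f"seg-{p}", "partition": p} for p in range(N_PARTITIONS)]

def prune(segments, filter_value):
    """Broker-side pruning: keep only segments from the one partition
    that can contain rows matching the equality filter."""
    p = partition_for(filter_value)
    return [s for s in segments if s["partition"] == p]

hit = prune(segments, "customer-42")
print(len(hit), "of", len(segments), "segments queried")  # 1 of 8
```

Without key-based partitioning, an equality filter on this dimension would have to fan out to all eight segments.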
  3. Pluggability (Druid is more pluggable)
  • Druid:
    • deep storage: HDFS, Cassandra, Amazon S3, Google Cloud Storage, Azure Blob Storage
    • realtime: Kafka, RabbitMQ, Samza, Flink, Spark, Storm
    • sink: Druid, Graphite, Ambari, StatsD, Kafka
  • Pinot: deep storage only via HDFS or Amazon S3; realtime ingestion only from Kafka
  4. Data Format / Query Execution Engine
  • Pinot's format is better optimized:
    • Compression with bit granularity, versus byte granularity in Druid
    • The inverted index is optional per column; in Druid it is obligatory
    • Min and max values of numeric columns are recorded per segment
    • Out-of-the-box support for data sorting
    • A more optimized format is used for multi-value columns
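The first point, bit- versus byte-granular compression, can be illustrated with a toy bit-packer: values with a maximum of 7 need only 3 bits each, where byte granularity would spend 8 bits per value. The values are arbitrary examples:

```python
def bits_needed(max_value: int) -> int:
    """Smallest bit width that can represent every value up to max_value."""
    return max(1, max_value.bit_length())

def bit_pack(values):
    """Pack small non-negative integers with bit granularity (Pinot-style),
    instead of rounding each value up to whole bytes (Druid-style)."""
    width = bits_needed(max(values))
    packed = 0
    for i, v in enumerate(values):
        packed |= v << (i * width)
    return packed, width

def bit_unpack(packed, width, count):
    mask = (1 << width) - 1
    return [(packed >> (i * width)) & mask for i in range(count)]

values = [3, 7, 0, 5, 2]              # max is 7 → 3 bits per value
packed, width = bit_pack(values)
assert bit_unpack(packed, width, len(values)) == values
print(width, "bits per value vs 8 bits with byte granularity")
```

The per-segment min/max recording from the list above plays a similar role at query time: a segment whose recorded range cannot match a filter is skipped without being scanned.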
  5. Segment Assignment (Balancing) Algorithm
  • Druid: takes the segment's table and time interval into account and computes a final score for each candidate node, reportedly yielding a 30–40% improvement in query speed
  • Pinot: simply picks the node with the fewest total segments loaded at the moment
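The two strategies can be sketched side by side. The Druid scoring below is a simplified stand-in (a penalty for co-locating same-table segments with adjacent time intervals), not the actual cost function; node and table names are invented:

```python
def assign_pinot(nodes, segment):
    """Pinot-style: load the new segment on the node that currently
    holds the fewest segments."""
    target = min(nodes, key=lambda n: len(nodes[n]))
    nodes[target].append(segment)
    return target

def assign_druid(nodes, segment, table_of, time_of):
    """Druid-style sketch: penalize nodes already holding segments of the
    same table with nearby time intervals, then break ties by count."""
    def score(node):
        penalty = sum(
            1 for s in nodes[node]
            if table_of(s) == table_of(segment)
            and abs(time_of(s) - time_of(segment)) <= 1
        )
        return (penalty, len(nodes[node]))
    target = min(nodes, key=score)
    nodes[target].append(segment)
    return target

table_of = lambda s: s[0]
time_of = lambda s: s[1]

# Pinot puts the next segment on the emptier node:
pinot_target = assign_pinot({"a": [1, 2], "b": [1]}, 3)
# Druid avoids co-locating ("events", day 2) next to ("events", day 1):
nodes = {"n1": [("events", 1)], "n2": []}
druid_target = assign_druid(nodes, ("events", 2), table_of, time_of)
print(pinot_target, druid_target)  # → b n2
```

Spreading adjacent intervals of one table across nodes is what reportedly buys the 30–40% speedup: queries over a time range then fan out across more CPUs instead of queueing on one node.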
  6. Fault Tolerance
  • Pinot: returns a partial result when some subqueries fail; Druid fails the whole query
  7. Tiering
  • Druid: supports tiers of older and newer data; nodes holding older data have a much lower ratio of CPU/RAM resources per loaded segment