Druid: Integration with Kafka

This article describes problems encountered when integrating Kafka with Druid, and how to resolve them.

1. Basic configuration

The basic configuration for using Kafka as a data source in Druid is not the focus of this article; it can be set up by following the official Druid documentation:
http://druid.io/docs/latest/ingestion/stream-ingestion.html

2. Data schema configuration

  • Renaming dimensions

The configuration given in the official documentation is as follows:
http://druid.io/docs/latest/querying/dimensionspecs.html

{
  "type" : "default",
  "dimension" : <dimension>,
  "outputName": <output_name>,
  "outputType": <"STRING"|"LONG"|"FLOAT">
}

However, after configuring it this way, submitting the task through tranquility fails with the following error:

Caused by: java.lang.NullPointerException: Dimension name cannot be null.
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:229) ~[com.google.guava.guava-16.0.1.jar:na]
    at io.druid.data.input.impl.DimensionSchema.<init>(DimensionSchema.java:78) ~[io.druid.druid-api-0.9.1.jar:0.9.1]
    at io.druid.data.input.impl.StringDimensionSchema.<init>(StringDimensionSchema.java:38) ~[io.druid.druid-api-0.9.1.jar:0.9.1]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) ~[na:1.8.0_121]
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62) ~[na:1.8.0_121]
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45) ~[na:1.8.0_121]
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423) ~[na:1.8.0_121]
    at com.fasterxml.jackson.databind.introspect.AnnotatedConstructor.call(AnnotatedConstructor.java:125) ~[com.fasterxml.jackson.core.jackson-databind-2.4.6.jar:2.4.6]
    at com.fasterxml.jackson.databind.deser.std.StdValueInstantiator.createFromObjectWith(StdValueInstantiator.java:230) ~[com.fasterxml.jackson.core.jackson-databind-2.4.6.jar:2.4.6]
    ... 52 common frames omitted

The code that throws this error can be found in the Druid source, in the DimensionSchema class:

  protected DimensionSchema(String name, MultiValueHandling multiValueHandling)
  {
    this.name = Preconditions.checkNotNull(name, "Dimension name cannot be null.");
    this.multiValueHandling = multiValueHandling == null ? MultiValueHandling.ofDefault() : multiValueHandling;
  }

As we can see, when the object is constructed the name passed in is null, so this precondition check throws the exception.

DimensionSchema objects are constructed through Jackson annotations. In LongDimensionSchema, for example, name is bound to the JSON field whose key is name:

  @JsonCreator
  public LongDimensionSchema(
      @JsonProperty("name") String name
  )
  {
    super(name, null);
  }

Therefore, the correct configuration is as follows:

{
  "type" : "default",
  "name" : <dimension>,
  "outputName": <output_name>,
  "outputType": <"STRING"|"LONG"|"FLOAT">
}
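
To make the placement concrete, an entry of this form goes inside the dimensionsSpec of the dataSchema in the tranquility spec. The sketch below is only illustrative: the dimension names user_id and uid are hypothetical placeholders rather than values from the author's setup.

"dimensionsSpec" : {
  "dimensions" : [
    {
      "type" : "default",
      "name" : "user_id",
      "outputName" : "uid",
      "outputType" : "STRING"
    }
  ]
}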

3. Indexing failures

  • Kafka messages being dropped

When Kafka messages are being dropped, the logs look like the following, with receivedCount roughly equal to droppedCount:

2018-08-29 08:21:29,951 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=63030, sentCount=0, droppedCount=63030, unparseableCount=0}} pending messages in 2ms and committed offsets in 961ms.
2018-08-29 08:21:47,122 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=253279, sentCount=0, droppedCount=253279, unparseableCount=0}} pending messages in 0ms and committed offsets in 2171ms.
2018-08-29 08:22:04,277 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=344454, sentCount=0, droppedCount=344454, unparseableCount=0}} pending messages in 0ms and committed offsets in 2153ms.

This usually happens because the timestamps of the data in Kafka fall outside the configured windowPeriod.
http://druid.io/docs/latest/ingestion/stream-push.html#segmentgranularity-and-windowperiod
The default windowPeriod is PT10M, i.e. 10 minutes, so data with timestamps more than 10 minutes behind or ahead of the current time will be dropped.
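
If the data in Kafka genuinely lags behind wall-clock time, the windowPeriod can be widened in the tuningConfig of the tranquility spec. The following is only a minimal sketch, assuming a 30-minute window is acceptable; the other tuning values are illustrative defaults, not the author's settings:

"tuningConfig" : {
  "type" : "realtime",
  "windowPeriod" : "PT30M",
  "intermediatePersistPeriod" : "PT10M",
  "maxRowsInMemory" : 100000
}

Note that a wider windowPeriod keeps realtime indexing tasks open longer and delays segment handoff, so it trades resource usage for tolerance of late data.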

Let's compare the latest offset in Kafka with the consumed offset recorded in the zookeeper used by Druid:

[root@zhouwei-worker-dev001-bjdx kafka_2.10-0.8.2.0]# ./bin/kafka-run-class.sh kafka.tools.GetOffsetShell --broker-list <kafka-server-ip>:9092 --topic o_qixiaoQuery --time -1
o_qixiaoQuery:0:3084948235
...
[zk: <zk-server-ip>:2181(CONNECTED) 11] get /consumers/tranquility-kafka/offsets/o_qixiaoQuery/0 
3083421225
cZxid = 0x322fabedf
ctime = Mon Jun 11 16:54:05 CST 2018
mZxid = 0x331843098
mtime = Wed Aug 29 20:25:06 CST 2018
pZxid = 0x322fabedf
cversion = 0
dataVersion = 271306
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 10
numChildren = 0

The two differ by roughly 1.5 million messages (3084948235 - 3083421225 = 1527010), which shows that the data currently being consumed lags noticeably behind the current time.

This raises a problem: when a tranquility-submitted task exits unexpectedly, the zookeeper used by Druid holds the last Kafka offset read before the exit. If the task stays down for more than 10 minutes, resubmitting it causes part of the data to be lost, as in the logs below, where some of the data read in the first few flushes is dropped:

2018-08-29 08:32:10,378 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=315131, sentCount=314436, droppedCount=695, unparseableCount=0}} pending messages in 6ms and committed offsets in 2900ms.
2018-08-29 08:32:27,683 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=334262, sentCount=334149, droppedCount=113, unparseableCount=0}} pending messages in 10ms and committed offsets in 2247ms.
2018-08-29 08:32:45,839 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=385590, sentCount=385588, droppedCount=2, unparseableCount=0}} pending messages in 0ms and committed offsets in 3156ms.
2018-08-29 08:33:02,954 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=391031, sentCount=391031, droppedCount=0, unparseableCount=0}} pending messages in 38ms and committed offsets in 2060ms.

This means the tranquility task must be kept running continuously; if it exits unexpectedly and is not restarted promptly, data will be lost.

Even with the above problem resolved, data loss is still possible. Tranquility logs the following:

2018-08-29 23:04:37,428 [Hashed wheel timer #1] INFO  c.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2018-08-29T23:04:37.428Z","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:overlord/report_qixiao_tracking_event_count_rt","data":{"exceptionType":"com.twitter.finagle.NoBrokersAvailableException","exceptionStackTrace":"com.twitter.finagle.NoBrokersAvailableException: No hosts are available for disco!firehose:druid:overlord:report_qixiao_tracking_event_count_rt-023-0000-0000, Dtab.base=[], Dtab.local=[]\n\tat com.twitter.finagle.NoStacktrace(Unknown Source)\n","timestamp":"2018-08-29T23:00:00.000Z","beams":"MergingPartitioningBeam(DruidBeam(interval = 2018-08-29T23:00:00.000Z/2018-08-30T00:00:00.000Z, partition = 0, tasks = [index_realtime_report_qixiao_tracking_event_count_rt_2018-08-29T23:00:00.000Z_0_0/report_qixiao_tracking_event_count_rt-023-0000-0000]))","eventCount":4,"exceptionMessage":"No hosts are available for disco!firehose:druid:overlord:report_qixiao_tracking_event_count_rt-023-0000-0000, Dtab.base=[], Dtab.local=[]"}}]
2018-08-29 23:05:24,257 [Hashed wheel timer #1] WARN  c.m.tranquility.beam.ClusteredBeam - Emitting alert: [anomaly] Failed to propagate events: druid:overlord/report_qixiao_tracking_event_count_rt
{
  "eventCount" : 3,
  "timestamp" : "2018-08-29T23:00:00.000Z",
  "beams" : "MergingPartitioningBeam(DruidBeam(interval = 2018-08-29T23:00:00.000Z/2018-08-30T00:00:00.000Z, partition = 0, tasks = [index_realtime_report_qixiao_tracking_event_count_rt_2018-08-29T23:00:00.000Z_0_0/report_qixiao_tracking_event_count_rt-023-0000-0000]))"
}
com.twitter.finagle.NoBrokersAvailableException: No hosts are available for disco!firehose:druid:overlord:report_qixiao_tracking_event_count_rt-023-0000-0000, Dtab.base=[], Dtab.local=[]
        at com.twitter.finagle.NoStacktrace(Unknown Source) ~[na:na]
2018-08-29 23:05:24,258 [Hashed wheel timer #1] INFO  c.metamx.emitter.core.LoggingEmitter - Event [{"feed":"alerts","timestamp":"2018-08-29T23:05:24.258Z","service":"tranquility","host":"localhost","severity":"anomaly","description":"Failed to propagate events: druid:overlord/report_qixiao_tracking_event_count_rt","data":{"exceptionType":"com.twitter.finagle.NoBrokersAvailableException","exceptionStackTrace":"com.twitter.finagle.NoBrokersAvailableException: No hosts are available for disco!firehose:druid:overlord:report_qixiao_tracking_event_count_rt-023-0000-0000, Dtab.base=[], Dtab.local=[]\n\tat com.twitter.finagle.NoStacktrace(Unknown Source)\n","timestamp":"2018-08-29T23:00:00.000Z","beams":"MergingPartitioningBeam(DruidBeam(interval = 2018-08-29T23:00:00.000Z/2018-08-30T00:00:00.000Z, partition = 0, tasks = [index_realtime_report_qixiao_tracking_event_count_rt_2018-08-29T23:00:00.000Z_0_0/report_qixiao_tracking_event_count_rt-023-0000-0000]))","eventCount":3,"exceptionMessage":"No hosts are available for disco!firehose:druid:overlord:report_qixiao_tracking_event_count_rt-023-0000-0000, Dtab.base=[], Dtab.local=[]"}}]
2018-08-29 23:05:24,531 [KafkaConsumer-CommitThread] INFO  c.m.tranquility.kafka.KafkaConsumer - Flushed {o_qixiaoQuery={receivedCount=7456, sentCount=7441, droppedCount=15, unparseableCount=0}} pending messages in 71899ms and committed offsets in 273ms.

This usually happens because the indexing service lacks the capacity to keep up, causing data from Kafka to be dropped. In that case more indexing service nodes need to be added.
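
Once additional worker capacity is available, the per-dataSource properties in the tranquility configuration can also spread ingestion across more parallel indexing tasks. The sketch below is an illustration under assumed values; the partition and replicant counts are not the author's:

"properties" : {
  "task.partitions" : "2",
  "task.replicants" : "2",
  "topicPattern" : "o_qixiaoQuery"
}

Increasing task.partitions creates more parallel realtime tasks (which in turn need free worker slots on the indexing service), while task.replicants adds redundancy rather than throughput.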
