How Flink Saves Kafka Offsets

Flink manages Kafka offsets in one of two ways:
1. Checkpointing disabled: offsets are handled entirely by the Kafka client's own commit API.
2. Checkpointing enabled: when a checkpoint completes, the offsets are committed to Kafka or ZooKeeper.
This article covers only the second case, checkpointing enabled.
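For reference, the sketch below shows roughly the setup the rest of the article assumes (topic name, broker address, and group id are placeholder values): checkpointing is enabled, and the consumer keeps its default commit-on-checkpoint behavior, which selects OffsetCommitMode.ON_CHECKPOINTS.

```java
// Sketch only: requires the flink-streaming-java and flink-connector-kafka
// dependencies; values marked "assumed" are illustrative placeholders.
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class OffsetCommitSetup {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint every 60s -> "Checkpointing enabled"

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // assumed broker
        props.setProperty("group.id", "demo-group");              // assumed group

        FlinkKafkaConsumer<String> consumer =
                new FlinkKafkaConsumer<>("demo-topic", new SimpleStringSchema(), props);
        // true is the default; together with enabled checkpointing this yields
        // OffsetCommitMode.ON_CHECKPOINTS, the mode analyzed below
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("offset-commit-demo");
    }
}
```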

notifyCheckpointComplete in FlinkKafkaConsumerBase

    // invoked when a checkpoint has completed
    @Override
    public final void notifyCheckpointComplete(long checkpointId) throws Exception {
        if (!running) {
            LOG.debug("notifyCheckpointComplete() called on closed source");
            return;
        }

        final AbstractFetcher<?, ?> fetcher = this.kafkaFetcher;
        if (fetcher == null) {
            LOG.debug("notifyCheckpointComplete() called on uninitialized source");
            return;
        }

        if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS) {
            // only one commit operation must be in progress
            if (LOG.isDebugEnabled()) {
                LOG.debug("Committing offsets to Kafka/ZooKeeper for checkpoint " + checkpointId);
            }

            try {
                final int posInMap = pendingOffsetsToCommit.indexOf(checkpointId);
                if (posInMap == -1) {
                    LOG.warn("Received confirmation for unknown checkpoint id {}", checkpointId);
                    return;
                }

                @SuppressWarnings("unchecked")
                Map<KafkaTopicPartition, Long> offsets =
                    (Map<KafkaTopicPartition, Long>) pendingOffsetsToCommit.remove(posInMap);

                // remove older checkpoints in map
                for (int i = 0; i < posInMap; i++) {
                    pendingOffsetsToCommit.remove(0);
                }

                if (offsets == null || offsets.size() == 0) {
                    LOG.debug("Checkpoint state was empty.");
                    return;
                }

                // commit the offsets through the kafkaFetcher
                fetcher.commitInternalOffsetsToKafka(offsets, offsetCommitCallback);
            } catch (Exception e) {
                if (running) {
                    throw e;
                }
                // else ignore exception if we are no longer running
            }
        }
    }
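The bookkeeping above can be summarized in a simplified sketch. Flink's actual `pendingOffsetsToCommit` is a commons-collections LinkedMap indexed by position; here a LinkedHashMap keyed by checkpoint id (with String standing in for the offsets map) illustrates the same idea: snapshots append entries in order, and when a checkpoint completes, its entry is removed along with every older one, since a completed checkpoint subsumes its predecessors.

```java
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of pendingOffsetsToCommit (not Flink's actual class).
public class PendingCommits {
    // insertion order == checkpoint order, as in FlinkKafkaConsumerBase
    private final LinkedHashMap<Long, String> pending = new LinkedHashMap<>();

    // called on snapshotState: remember the offsets for this checkpoint
    void onSnapshot(long checkpointId, String offsets) {
        pending.put(checkpointId, offsets);
    }

    // called on notifyCheckpointComplete: return the offsets to commit,
    // dropping every older pending checkpoint; null for unknown ids
    String onComplete(long checkpointId) {
        if (!pending.containsKey(checkpointId)) {
            return null; // "Received confirmation for unknown checkpoint id"
        }
        String offsets = pending.remove(checkpointId);
        // remove older checkpoints, mirroring the for-loop over posInMap
        Iterator<Map.Entry<Long, String>> it = pending.entrySet().iterator();
        while (it.hasNext() && it.next().getKey() < checkpointId) {
            it.remove();
        }
        return offsets;
    }
}
```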

Following the call into kafkaFetcher:

@Override
    protected void doCommitInternalOffsetsToKafka(
        Map<KafkaTopicPartition, Long> offsets,
        @Nonnull KafkaCommitCallback commitCallback) throws Exception {

        @SuppressWarnings("unchecked")
        List<KafkaTopicPartitionState<TopicPartition>> partitions = subscribedPartitionStates();

        Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>(partitions.size());

        for (KafkaTopicPartitionState<TopicPartition> partition : partitions) {
            Long lastProcessedOffset = offsets.get(partition.getKafkaTopicPartition());
            if (lastProcessedOffset != null) {
                checkState(lastProcessedOffset >= 0, "Illegal offset value to commit");

                // committed offsets through the KafkaConsumer need to be 1 more than the last processed offset.
                // This does not affect Flink's checkpoints/saved state.
                long offsetToCommit = lastProcessedOffset + 1;

                offsetsToCommit.put(partition.getKafkaPartitionHandle(), new OffsetAndMetadata(offsetToCommit));
                partition.setCommittedOffset(offsetToCommit);
            }
        }

        // record the work to be committed by the main consumer thread and make sure the consumer notices that
        consumerThread.setOffsetsToCommit(offsetsToCommit, commitCallback);
    }
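The `+ 1` in the loop above reflects Kafka's convention that a committed offset means "the next offset to read", not "the last offset processed". A minimal illustration of that convention (the helper name is made up for this sketch):

```java
// Illustrates the +1 convention in doCommitInternalOffsetsToKafka: after fully
// processing the record at offset N, the consumer commits N + 1, so a restart
// resumes at the first unprocessed record.
public class OffsetPlusOne {
    static long toCommit(long lastProcessedOffset) {
        if (lastProcessedOffset < 0) {
            throw new IllegalStateException("Illegal offset value to commit");
        }
        return lastProcessedOffset + 1;
    }

    public static void main(String[] args) {
        System.out.println(toCommit(41)); // prints 42
    }
}
```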

We can see it calls consumerThread.setOffsetsToCommit:

void setOffsetsToCommit(
            Map<TopicPartition, OffsetAndMetadata> offsetsToCommit,
            @Nonnull KafkaCommitCallback commitCallback) {

        // record the work to be committed by the main consumer thread and make sure the consumer notices that
        /*
         * If getAndSet returns a non-null value, the KafkaConsumerThread has fallen
         * behind: the new offsets overwrite the old, not-yet-committed ones.
         * Once this runs, the consumer thread's consumer.commitAsync() picks the value up.
         *
         * This is the key step: it assigns nextOffsetsToCommit directly.
         * nextOffsetsToCommit is an AtomicReference, so the reference swap is atomic.
         */
        if (nextOffsetsToCommit.getAndSet(Tuple2.of(offsetsToCommit, commitCallback)) != null) {
            log.warn("Committing offsets to Kafka takes longer than the checkpoint interval. " +
                    "Skipping commit of previous offsets because newer complete checkpoint offsets are available. " +
                    "This does not compromise Flink's checkpoint integrity.");
        }

        // if the consumer is blocked in a poll() or handover operation, wake it up to commit soon
        handover.wakeupProducer();

        synchronized (consumerReassignmentLock) {
            if (consumer != null) {
                consumer.wakeup();
            } else {
                // the consumer is currently isolated for partition reassignment;
                // set this flag so that the wakeup state is restored once the reassignment is complete
                hasBufferedWakeup = true;
            }
        }
    }
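The AtomicReference.getAndSet behavior that the warning branch relies on can be demonstrated in isolation (String stands in for the offsets/callback tuple):

```java
import java.util.concurrent.atomic.AtomicReference;

// Demonstrates the getAndSet semantics used by setOffsetsToCommit: the swap is
// atomic, and a non-null return value means the consumer thread never drained
// the previous offsets, which are therefore skipped.
public class GetAndSetDemo {
    public static void main(String[] args) {
        AtomicReference<String> nextOffsetsToCommit = new AtomicReference<>();

        // first checkpoint: the slot was empty, so getAndSet returns null (no warning)
        String prev1 = nextOffsetsToCommit.getAndSet("offsets-for-checkpoint-1");

        // second checkpoint arrives before the consumer thread drained the slot:
        // getAndSet returns the old value, which is overwritten and never committed
        String prev2 = nextOffsetsToCommit.getAndSet("offsets-for-checkpoint-2");

        System.out.println(prev1 == null); // true
        System.out.println(prev2);         // offsets-for-checkpoint-1
    }
}
```

Skipping the overwritten offsets is safe because the newer checkpoint's offsets include all progress the older ones represented, which is exactly what the warning message states.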

Now that nextOffsetsToCommit holds a value, let's look at the run method of KafkaConsumerThread:

@Override
    public void run() {
        // early exit check
        if (!running) {
            return;
        }

        // ......
            // main fetch loop
            while (running) {

                // check if there is something to commit
                // commitInProgress defaults to false
                if (!commitInProgress) {
                    // get and reset the work-to-be committed, so we don't repeatedly commit the same
                    // setOffsetsToCommit has already stored a value into nextOffsetsToCommit,
                    // so commitOffsetsAndCallback is non-null here
                    final Tuple2<Map<TopicPartition, OffsetAndMetadata>, KafkaCommitCallback> commitOffsetsAndCallback =
                            nextOffsetsToCommit.getAndSet(null);

                    if (commitOffsetsAndCallback != null) {
                        log.debug("Sending async offset commit request to Kafka broker");

                        // also record that a commit is already in progress
                        // the order here matters! first set the flag, then send the commit command.
                        commitInProgress = true;
                        consumer.commitAsync(commitOffsetsAndCallback.f0, new CommitCallback(commitOffsetsAndCallback.f1));
                    }
                }

                // ......
    }

That completes the offset update. As we can clearly see, when a checkpoint completes, the commit path is invoked and the Kafka offsets are committed to the Kafka broker.
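Putting the two sides together, the handshake between notifyCheckpointComplete and the fetch loop can be sketched in a single-threaded model (String again stands in for the offsets map; fakeCommitAsync is a made-up stand-in for consumer.commitAsync):

```java
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicReference;

// Simplified sketch of the single-in-flight commit pattern in
// KafkaConsumerThread.run(): commitInProgress ensures at most one async commit
// is outstanding; the flag is set BEFORE the commit is issued and cleared by
// the completion callback.
public class CommitLoopSketch {
    final AtomicReference<String> nextOffsetsToCommit = new AtomicReference<>();
    final AtomicBoolean commitInProgress = new AtomicBoolean(false);

    // one iteration of the fetch loop's "check if there is something to commit"
    boolean maybeCommit() {
        if (commitInProgress.get()) {
            return false; // previous commit still outstanding: do nothing
        }
        String work = nextOffsetsToCommit.getAndSet(null); // drain the slot
        if (work == null) {
            return false; // nothing was published by notifyCheckpointComplete
        }
        commitInProgress.set(true); // order matters: set the flag first...
        fakeCommitAsync(work);      // ...then send the commit command
        return true;
    }

    private void fakeCommitAsync(String offsets) {
        // stands in for consumer.commitAsync(offsets, callback); the real
        // callback resets commitInProgress once the broker acknowledges
        commitInProgress.set(false);
    }
}
```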
