A Look at Flink KeyedStream's intervalJoin Operation

This article takes a look at the intervalJoin operation of Flink's KeyedStream.

Example

DataStream<Integer> orangeStream = ...
DataStream<Integer> greenStream = ...

orangeStream
    .keyBy(<KeySelector>)
    .intervalJoin(greenStream.keyBy(<KeySelector>))
    .between(Time.milliseconds(-2), Time.milliseconds(1))
    .process(new ProcessJoinFunction<Integer, Integer, String>() {

        @Override
        public void processElement(Integer left, Integer right, Context ctx, Collector<String> out) {
            out.collect(left + "," + right);
        }
    });
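
For completeness, here is a minimal runnable sketch built around the example above. Since between only supports event time (see the source walk-through below), the environment's time characteristic is switched to EventTime and timestamps/watermarks are assigned before the join. The constant key selector, the use of an element's value as its event timestamp, and the sample values are illustrative assumptions, not part of the original example.

import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.functions.timestamps.AscendingTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class IntervalJoinExample {

    // route every element to the same key so that all elements can be joined with each other
    private static final KeySelector<Integer, Integer> CONSTANT_KEY = new KeySelector<Integer, Integer>() {
        @Override
        public Integer getKey(Integer value) {
            return 0;
        }
    };

    // treat the element value itself as its event timestamp (in milliseconds)
    private static DataStream<Integer> withTimestamps(DataStream<Integer> stream) {
        return stream.assignTimestampsAndWatermarks(new AscendingTimestampExtractor<Integer>() {
            @Override
            public long extractAscendingTimestamp(Integer element) {
                return element;
            }
        });
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        // between(...) only supports event time, so switch the time characteristic first
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<Integer> orangeStream = withTimestamps(env.fromElements(1000, 3000, 5000));
        DataStream<Integer> greenStream = withTimestamps(env.fromElements(999, 3001, 7000));

        orangeStream
            .keyBy(CONSTANT_KEY)
            .intervalJoin(greenStream.keyBy(CONSTANT_KEY))
            // join green elements whose timestamp lies in [orangeTs - 2ms, orangeTs + 1ms]
            .between(Time.milliseconds(-2), Time.milliseconds(1))
            .process(new ProcessJoinFunction<Integer, Integer, String>() {
                @Override
                public void processElement(Integer left, Integer right, Context ctx, Collector<String> out) {
                    out.collect(left + "," + right);
                }
            })
            .print();

        env.execute("interval join example");
    }
}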

KeyedStream.intervalJoin

flink-streaming-java_2.11-1.7.0-sources.jar!/org/apache/flink/streaming/api/datastream/KeyedStream.java

@Public
public class KeyedStream<T, KEY> extends DataStream<T> {
    //......

    @PublicEvolving
    public <T1> IntervalJoin<T, T1, KEY> intervalJoin(KeyedStream<T1, KEY> otherStream) {
        return new IntervalJoin<>(this, otherStream);
    }

    //......
}
  • KeyedStream's intervalJoin creates and returns an IntervalJoin

IntervalJoin

flink-streaming-java_2.11-1.7.0-sources.jar!/org/apache/flink/streaming/api/datastream/KeyedStream.java

    @PublicEvolving
    public static class IntervalJoin<T1, T2, KEY> {

        private final KeyedStream<T1, KEY> streamOne;
        private final KeyedStream<T2, KEY> streamTwo;

        IntervalJoin(
                KeyedStream<T1, KEY> streamOne,
                KeyedStream<T2, KEY> streamTwo
        ) {
            this.streamOne = checkNotNull(streamOne);
            this.streamTwo = checkNotNull(streamTwo);
        }

        @PublicEvolving
        public IntervalJoined<T1, T2, KEY> between(Time lowerBound, Time upperBound) {

            TimeCharacteristic timeCharacteristic =
                streamOne.getExecutionEnvironment().getStreamTimeCharacteristic();

            if (timeCharacteristic != TimeCharacteristic.EventTime) {
                throw new UnsupportedTimeCharacteristicException("Time-bounded stream joins are only supported in event time");
            }

            checkNotNull(lowerBound, "A lower bound needs to be provided for a time-bounded join");
            checkNotNull(upperBound, "An upper bound needs to be provided for a time-bounded join");

            return new IntervalJoined<>(
                streamOne,
                streamTwo,
                lowerBound.toMilliseconds(),
                upperBound.toMilliseconds(),
                true,
                true
            );
        }
    }
  • IntervalJoin provides the between operation, which sets the interval's lowerBound and upperBound; note that between throws an UnsupportedTimeCharacteristicException whenever the time characteristic is not TimeCharacteristic.EventTime. The between operation creates and returns an IntervalJoined.
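
With the default inclusive bounds, a right-side (green) element joins a left-side (orange) element of the same key exactly when the predicate below holds. This helper is only an illustration of the between semantics, not part of the Flink API:

// illustration of the window defined by between(lowerBound, upperBound) with the default
// inclusive bounds; not part of the Flink API
public final class BetweenSemantics {

    // a right-side element joins a left-side element of the same key iff its timestamp falls
    // into [leftTimestamp + lowerBound, leftTimestamp + upperBound]
    public static boolean joins(long leftTimestamp, long rightTimestamp,
                                long lowerBoundMillis, long upperBoundMillis) {
        return rightTimestamp >= leftTimestamp + lowerBoundMillis
            && rightTimestamp <= leftTimestamp + upperBoundMillis;
    }
}

For the bounds used in the example, between(Time.milliseconds(-2), Time.milliseconds(1)), a green element therefore matches an orange element when its timestamp lies in [orangeTs - 2, orangeTs + 1].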

IntervalJoined

flink-streaming-java_2.11-1.7.0-sources.jar!/org/apache/flink/streaming/api/datastream/KeyedStream.java

    @PublicEvolving
    public static class IntervalJoined<IN1, IN2, KEY> {

        private final KeyedStream<IN1, KEY> left;
        private final KeyedStream<IN2, KEY> right;

        private final long lowerBound;
        private final long upperBound;

        private final KeySelector<IN1, KEY> keySelector1;
        private final KeySelector<IN2, KEY> keySelector2;

        private boolean lowerBoundInclusive;
        private boolean upperBoundInclusive;

        public IntervalJoined(
                KeyedStream<IN1, KEY> left,
                KeyedStream<IN2, KEY> right,
                long lowerBound,
                long upperBound,
                boolean lowerBoundInclusive,
                boolean upperBoundInclusive) {

            this.left = checkNotNull(left);
            this.right = checkNotNull(right);

            this.lowerBound = lowerBound;
            this.upperBound = upperBound;

            this.lowerBoundInclusive = lowerBoundInclusive;
            this.upperBoundInclusive = upperBoundInclusive;

            this.keySelector1 = left.getKeySelector();
            this.keySelector2 = right.getKeySelector();
        }

        @PublicEvolving
        public IntervalJoined<IN1, IN2, KEY> upperBoundExclusive() {
            this.upperBoundInclusive = false;
            return this;
        }

        @PublicEvolving
        public IntervalJoined<IN1, IN2, KEY> lowerBoundExclusive() {
            this.lowerBoundInclusive = false;
            return this;
        }

        @PublicEvolving
        public <OUT> SingleOutputStreamOperator<OUT> process(ProcessJoinFunction<IN1, IN2, OUT> processJoinFunction) {
            Preconditions.checkNotNull(processJoinFunction);

            final TypeInformation<OUT> outputType = TypeExtractor.getBinaryOperatorReturnType(
                processJoinFunction,
                ProcessJoinFunction.class,
                0,
                1,
                2,
                TypeExtractor.NO_INDEX,
                left.getType(),
                right.getType(),
                Utils.getCallLocationName(),
                true
            );

            return process(processJoinFunction, outputType);
        }

        @PublicEvolving
        public <OUT> SingleOutputStreamOperator<OUT> process(
                ProcessJoinFunction<IN1, IN2, OUT> processJoinFunction,
                TypeInformation<OUT> outputType) {
            Preconditions.checkNotNull(processJoinFunction);
            Preconditions.checkNotNull(outputType);

            final ProcessJoinFunction<IN1, IN2, OUT> cleanedUdf = left.getExecutionEnvironment().clean(processJoinFunction);

            final IntervalJoinOperator<KEY, IN1, IN2, OUT> operator =
                new IntervalJoinOperator<>(
                    lowerBound,
                    upperBound,
                    lowerBoundInclusive,
                    upperBoundInclusive,
                    left.getType().createSerializer(left.getExecutionConfig()),
                    right.getType().createSerializer(right.getExecutionConfig()),
                    cleanedUdf
                );

            return left
                .connect(right)
                .keyBy(keySelector1, keySelector2)
                .transform("Interval Join", outputType, operator);
        }
    }
  • IntervalJoined treats both lowerBound and upperBound as inclusive by default; lowerBoundExclusive and upperBoundExclusive can each be called to make the corresponding bound exclusive. IntervalJoined provides the process operation, which takes a ProcessJoinFunction; internally, process creates an IntervalJoinOperator and then executes left.connect(right).keyBy(keySelector1, keySelector2).transform("Interval Join", outputType, operator), returning a SingleOutputStreamOperator (in the example above, left is orangeStream and right is greenStream).
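
As a hedged sketch of how the exclusive variants fit into the fluent API, the method below (its name and parameters are illustrative, not part of Flink) joins the two keyed streams from the earlier example with both bounds made exclusive:

import org.apache.flink.streaming.api.datastream.KeyedStream;
import org.apache.flink.streaming.api.datastream.SingleOutputStreamOperator;
import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.util.Collector;

public class ExclusiveBoundsExample {

    // joins green elements with orangeTs - 2ms < greenTs < orangeTs + 1ms,
    // i.e. both bounds exclusive instead of the default inclusive behaviour
    public static SingleOutputStreamOperator<String> joinExclusive(
            KeyedStream<Integer, Integer> orange,
            KeyedStream<Integer, Integer> green) {

        return orange
            .intervalJoin(green)
            .between(Time.milliseconds(-2), Time.milliseconds(1))
            .lowerBoundExclusive()
            .upperBoundExclusive()
            .process(new ProcessJoinFunction<Integer, Integer, String>() {
                @Override
                public void processElement(Integer left, Integer right, Context ctx, Collector<String> out) {
                    out.collect(left + "," + right);
                }
            });
    }
}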

ProcessJoinFunction

flink-streaming-java_2.11-1.7.0-sources.jar!/org/apache/flink/streaming/api/functions/co/ProcessJoinFunction.java

@PublicEvolving
public abstract class ProcessJoinFunction<IN1, IN2, OUT> extends AbstractRichFunction {

    private static final long serialVersionUID = -2444626938039012398L;

    public abstract void processElement(IN1 left, IN2 right, Context ctx, Collector<OUT> out) throws Exception;

    public abstract class Context {

        public abstract long getLeftTimestamp();

        public abstract long getRightTimestamp();

        public abstract long getTimestamp();

        public abstract <X> void output(OutputTag<X> outputTag, X value);
    }
}
  • ProcessJoinFunction extends AbstractRichFunction; it declares the abstract processElement method and defines its own Context class, which declares four abstract methods: getLeftTimestamp, getRightTimestamp, getTimestamp, and output.
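
As a hedged illustration of these Context methods, the sketch below implements a ProcessJoinFunction that tags each joined pair with its result timestamp and routes pairs whose left and right timestamps are far apart to a side output. The class name, the OutputTag, and the one-second threshold are assumptions made for the example:

import org.apache.flink.streaming.api.functions.co.ProcessJoinFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class TimestampAwareJoinFunction extends ProcessJoinFunction<Integer, Integer, String> {

    // joined pairs whose timestamps are more than one second apart go to this side output
    public static final OutputTag<String> LARGE_GAP = new OutputTag<String>("large-gap") {};

    @Override
    public void processElement(Integer left, Integer right, Context ctx, Collector<String> out) {
        long leftTs = ctx.getLeftTimestamp();
        long rightTs = ctx.getRightTimestamp();
        // getTimestamp() is the timestamp of the emitted record, i.e. max(leftTs, rightTs)
        String joined = left + "," + right + " @ " + ctx.getTimestamp();

        if (Math.abs(leftTs - rightTs) > 1000L) {
            ctx.output(LARGE_GAP, joined);
        } else {
            out.collect(joined);
        }
    }
}

The side output can then be read from the SingleOutputStreamOperator returned by process via getSideOutput(TimestampAwareJoinFunction.LARGE_GAP).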

IntervalJoinOperator

flink-streaming-java_2.11-1.7.0-sources.jar!/org/apache/flink/streaming/api/operators/co/IntervalJoinOperator.java

@Internal
public class IntervalJoinOperator<K, T1, T2, OUT>
        extends AbstractUdfStreamOperator<OUT, ProcessJoinFunction<T1, T2, OUT>>
        implements TwoInputStreamOperator<T1, T2, OUT>, Triggerable<K, String> {

    private static final long serialVersionUID = -5380774605111543454L;

    private static final Logger logger = LoggerFactory.getLogger(IntervalJoinOperator.class);

    private static final String LEFT_BUFFER = "LEFT_BUFFER";
    private static final String RIGHT_BUFFER = "RIGHT_BUFFER";
    private static final String CLEANUP_TIMER_NAME = "CLEANUP_TIMER";
    private static final String CLEANUP_NAMESPACE_LEFT = "CLEANUP_LEFT";
    private static final String CLEANUP_NAMESPACE_RIGHT = "CLEANUP_RIGHT";

    private final long lowerBound;
    private final long upperBound;

    private final TypeSerializer<T1> leftTypeSerializer;
    private final TypeSerializer<T2> rightTypeSerializer;

    private transient MapState<Long, List<BufferEntry<T1>>> leftBuffer;
    private transient MapState<Long, List<BufferEntry<T2>>> rightBuffer;

    private transient TimestampedCollector<OUT> collector;
    private transient ContextImpl context;

    private transient InternalTimerService<String> internalTimerService;

    public IntervalJoinOperator(
            long lowerBound,
            long upperBound,
            boolean lowerBoundInclusive,
            boolean upperBoundInclusive,
            TypeSerializer<T1> leftTypeSerializer,
            TypeSerializer<T2> rightTypeSerializer,
            ProcessJoinFunction<T1, T2, OUT> udf) {

        super(Preconditions.checkNotNull(udf));

        Preconditions.checkArgument(lowerBound <= upperBound,
            "lowerBound <= upperBound must be fulfilled");

        // Move buffer by +1 / -1 depending on inclusiveness in order not needing
        // to check for inclusiveness later on
        this.lowerBound = (lowerBoundInclusive) ? lowerBound : lowerBound + 1L;
        this.upperBound = (upperBoundInclusive) ? upperBound : upperBound - 1L;

        this.leftTypeSerializer = Preconditions.checkNotNull(leftTypeSerializer);
        this.rightTypeSerializer = Preconditions.checkNotNull(rightTypeSerializer);
    }

    @Override
    public void open() throws Exception {
        super.open();

        collector = new TimestampedCollector<>(output);
        context = new ContextImpl(userFunction);
        internalTimerService =
            getInternalTimerService(CLEANUP_TIMER_NAME, StringSerializer.INSTANCE, this);
    }

    @Override
    public void initializeState(StateInitializationContext context) throws Exception {
        super.initializeState(context);

        this.leftBuffer = context.getKeyedStateStore().getMapState(new MapStateDescriptor<>(
            LEFT_BUFFER,
            LongSerializer.INSTANCE,
            new ListSerializer<>(new BufferEntrySerializer<>(leftTypeSerializer))
        ));

        this.rightBuffer = context.getKeyedStateStore().getMapState(new MapStateDescriptor<>(
            RIGHT_BUFFER,
            LongSerializer.INSTANCE,
            new ListSerializer<>(new BufferEntrySerializer<>(rightTypeSerializer))
        ));
    }

    @Override
    public void processElement1(StreamRecord<T1> record) throws Exception {
        processElement(record, leftBuffer, rightBuffer, lowerBound, upperBound, true);
    }

    @Override
    public void processElement2(StreamRecord<T2> record) throws Exception {
        processElement(record, rightBuffer, leftBuffer, -upperBound, -lowerBound, false);
    }

    @SuppressWarnings("unchecked")
    private <THIS, OTHER> void processElement(
            final StreamRecord<THIS> record,
            final MapState<Long, List<IntervalJoinOperator.BufferEntry<THIS>>> ourBuffer,
            final MapState<Long, List<IntervalJoinOperator.BufferEntry<OTHER>>> otherBuffer,
            final long relativeLowerBound,
            final long relativeUpperBound,
            final boolean isLeft) throws Exception {

        final THIS ourValue = record.getValue();
        final long ourTimestamp = record.getTimestamp();

        if (ourTimestamp == Long.MIN_VALUE) {
            throw new FlinkException("Long.MIN_VALUE timestamp: Elements used in " +
                    "interval stream joins need to have timestamps meaningful timestamps.");
        }

        if (isLate(ourTimestamp)) {
            return;
        }

        addToBuffer(ourBuffer, ourValue, ourTimestamp);

        for (Map.Entry<Long, List<BufferEntry<OTHER>>> bucket: otherBuffer.entries()) {
            final long timestamp  = bucket.getKey();

            if (timestamp < ourTimestamp + relativeLowerBound ||
                    timestamp > ourTimestamp + relativeUpperBound) {
                continue;
            }

            for (BufferEntry<OTHER> entry: bucket.getValue()) {
                if (isLeft) {
                    collect((T1) ourValue, (T2) entry.element, ourTimestamp, timestamp);
                } else {
                    collect((T1) entry.element, (T2) ourValue, timestamp, ourTimestamp);
                }
            }
        }

        long cleanupTime = (relativeUpperBound > 0L) ? ourTimestamp + relativeUpperBound : ourTimestamp;
        if (isLeft) {
            internalTimerService.registerEventTimeTimer(CLEANUP_NAMESPACE_LEFT, cleanupTime);
        } else {
            internalTimerService.registerEventTimeTimer(CLEANUP_NAMESPACE_RIGHT, cleanupTime);
        }
    }

    private boolean isLate(long timestamp) {
        long currentWatermark = internalTimerService.currentWatermark();
        return currentWatermark != Long.MIN_VALUE && timestamp < currentWatermark;
    }

    private void collect(T1 left, T2 right, long leftTimestamp, long rightTimestamp) throws Exception {
        final long resultTimestamp = Math.max(leftTimestamp, rightTimestamp);

        collector.setAbsoluteTimestamp(resultTimestamp);
        context.updateTimestamps(leftTimestamp, rightTimestamp, resultTimestamp);

        userFunction.processElement(left, right, context, collector);
    }

    @Override
    public void onEventTime(InternalTimer<K, String> timer) throws Exception {

        long timerTimestamp = timer.getTimestamp();
        String namespace = timer.getNamespace();

        logger.trace("onEventTime @ {}", timerTimestamp);

        switch (namespace) {
            case CLEANUP_NAMESPACE_LEFT: {
                long timestamp = (upperBound <= 0L) ? timerTimestamp : timerTimestamp - upperBound;
                logger.trace("Removing from left buffer @ {}", timestamp);
                leftBuffer.remove(timestamp);
                break;
            }
            case CLEANUP_NAMESPACE_RIGHT: {
                long timestamp = (lowerBound <= 0L) ? timerTimestamp + lowerBound : timerTimestamp;
                logger.trace("Removing from right buffer @ {}", timestamp);
                rightBuffer.remove(timestamp);
                break;
            }
            default:
                throw new RuntimeException("Invalid namespace " + namespace);
        }
    }

    @Override
    public void onProcessingTime(InternalTimer<K, String> timer) throws Exception {
        // do nothing.
    }

    //......
}
  • IntervalJoinOperator extends the abstract class AbstractUdfStreamOperator and implements the TwoInputStreamOperator and Triggerable interfaces
  • IntervalJoinOperator overrides the open and initializeState methods of AbstractUdfStreamOperator (defined by StreamOperator): in open it creates an InternalTimerService, passing itself as the Triggerable argument (it implements the Triggerable interface); in initializeState it creates the two MapStates leftBuffer and rightBuffer
  • IntervalJoinOperator implements the processElement1 and processElement2 methods defined by the TwoInputStreamOperator interface (other methods of that interface are implemented in AbstractStreamOperator, the parent of AbstractUdfStreamOperator); both processElement1 and processElement2 delegate to processElement, differing only in the relativeLowerBound, relativeUpperBound, and isLeft arguments and in the order in which leftBuffer and rightBuffer are passed
  • The processElement method implements the time-matching logic of intervalJoin. It obtains the currentWatermark from internalTimerService and checks whether the element is late; if so it returns immediately, otherwise it continues. The element's value is added to ourBuffer (leftBuffer for processElement1, rightBuffer for processElement2). It then iterates over every element in otherBuffer and checks whether its timestamp satisfies ourTimestamp + relativeLowerBound <= timestamp <= ourTimestamp + relativeUpperBound; entries that do not match are skipped, while matching entries are passed to the collect method (collect calls userFunction.processElement, i.e. the processElement method of the user-defined ProcessJoinFunction). Finally it computes the cleanupTime and calls internalTimerService.registerEventTimeTimer to register a timer that cleans up the element; see the sketch after this list
  • IntervalJoinOperator implements the onEventTime and onProcessingTime methods defined by the Triggerable interface; onProcessingTime does nothing, while onEventTime removes elements from leftBuffer or rightBuffer based on the timer's timestamp
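
To make this bookkeeping easier to follow, here is a small standalone sketch (an illustration of the arithmetic in IntervalJoinOperator, not the operator itself) covering the inward shift of exclusive bounds done in the constructor, the lateness check, and the cleanup timestamp that processElement registers; recall that processElement1 uses the relative bounds [lowerBound, upperBound] against the right buffer, while processElement2 mirrors them to [-upperBound, -lowerBound] against the left buffer.

// illustration of the arithmetic inside IntervalJoinOperator; not part of the Flink API
public final class IntervalJoinArithmetic {

    private IntervalJoinArithmetic() {
    }

    // constructor: exclusive bounds are shifted inward by one millisecond so that all later
    // comparisons can be made inclusively
    public static long normalizedLowerBound(long lowerBound, boolean lowerBoundInclusive) {
        return lowerBoundInclusive ? lowerBound : lowerBound + 1L;
    }

    public static long normalizedUpperBound(long upperBound, boolean upperBoundInclusive) {
        return upperBoundInclusive ? upperBound : upperBound - 1L;
    }

    // isLate: an element is dropped once its timestamp falls below the current event-time watermark
    public static boolean isLate(long timestamp, long currentWatermark) {
        return currentWatermark != Long.MIN_VALUE && timestamp < currentWatermark;
    }

    // processElement: the event time at which the element's cleanup timer is registered
    public static long cleanupTime(long ourTimestamp, long relativeUpperBound) {
        return relativeUpperBound > 0L ? ourTimestamp + relativeUpperBound : ourTimestamp;
    }
}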

Summary

  • Flink's intervalJoin operation requires KeyedStreams and only works with TimeCharacteristic.EventTime. KeyedStream's intervalJoin creates and returns an IntervalJoin; IntervalJoin provides the between operation, which sets the interval's lowerBound and upperBound and creates and returns an IntervalJoined
  • IntervalJoined provides the process operation, which takes a ProcessJoinFunction; internally, process creates an IntervalJoinOperator and then executes left.connect(right).keyBy(keySelector1, keySelector2).transform("Interval Join", outputType, operator), returning a SingleOutputStreamOperator
  • IntervalJoinOperator extends the abstract class AbstractUdfStreamOperator and implements the TwoInputStreamOperator and Triggerable interfaces. It overrides the open and initializeState methods of AbstractUdfStreamOperator (defined by StreamOperator): in open it creates an InternalTimerService, passing itself as the Triggerable argument, and in initializeState it creates the two MapStates leftBuffer and rightBuffer. It implements processElement1 and processElement2 of the TwoInputStreamOperator interface; both delegate to processElement, differing only in the relativeLowerBound, relativeUpperBound, and isLeft arguments and in the order in which leftBuffer and rightBuffer are passed
  • IntervalJoinOperator's processElement method implements the time-matching logic of intervalJoin. It first checks whether the element is late and returns immediately if so; otherwise it adds the element to its own buffer and then iterates over every element in the other buffer, checking whether its timestamp satisfies ourTimestamp + relativeLowerBound <= timestamp <= ourTimestamp + relativeUpperBound. Entries that do not match are skipped, while matching entries are passed to the collect method (collect calls userFunction.processElement, i.e. the processElement method of the user-defined ProcessJoinFunction). Finally it computes the cleanupTime and calls internalTimerService.registerEventTimeTimer to register a timer that cleans up the element
  • IntervalJoinOperator implements the onEventTime and onProcessingTime methods defined by the Triggerable interface; onProcessingTime does nothing, while onEventTime removes elements from leftBuffer or rightBuffer based on the timer's timestamp, as sketched below
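
As a final illustration (again only a sketch of the arithmetic, not the operator itself), the buffer key that onEventTime removes is recovered by undoing the shift that was applied when the cleanup timer was registered in processElement:

// illustration of the buffer-removal timestamps used in onEventTime; not part of the Flink API
public final class CleanupTimestamps {

    private CleanupTimestamps() {
    }

    // CLEANUP_NAMESPACE_LEFT: a left element's timer was registered at ourTimestamp + upperBound
    // when upperBound > 0, otherwise at ourTimestamp itself
    public static long leftBufferKeyToRemove(long timerTimestamp, long upperBound) {
        return upperBound <= 0L ? timerTimestamp : timerTimestamp - upperBound;
    }

    // CLEANUP_NAMESPACE_RIGHT: a right element's timer was registered at ourTimestamp - lowerBound
    // when lowerBound < 0 (its relativeUpperBound is -lowerBound), otherwise at ourTimestamp itself
    public static long rightBufferKeyToRemove(long timerTimestamp, long lowerBound) {
        return lowerBound <= 0L ? timerTimestamp + lowerBound : timerTimestamp;
    }
}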

doc
