Core methods of the MySqlSourceEnumerator class
The start method
@Override
public void start() {
    // If the startup mode is INITIAL, splitAssigner is a MySqlHybridSplitAssigner;
    // otherwise it is a MySqlBinlogSplitAssigner.
    // Start the SplitAssigner.
    splitAssigner.open();
    // When the CDC job is restarted after new tables were added for synchronization:
    // once the snapshot of the newly added tables has been fully assigned,
    // request the readers to update the binlog split state.
    requestBinlogSplitUpdateIfNeed();
    // Periodically invoke the syncWithReaders method.
    this.context.callAsync(
            this::getRegisteredReader,
            this::syncWithReaders,
            CHECK_EVENT_INTERVAL,
            CHECK_EVENT_INTERVAL);
}
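The contract of `context.callAsync(callable, handler, initialDelay, period)` is that the callable's result, or any error it throws, is passed to the two-argument handler, which is why `syncWithReaders(int[] subtaskIds, Throwable t)` takes both a result and a `Throwable`. A minimal sketch of that contract (hypothetical class, running the callable synchronously once instead of on a timer):

```java
import java.util.concurrent.Callable;
import java.util.function.BiConsumer;

// Hypothetical sketch of the callAsync contract: run the callable and hand its
// result, or the thrown error, to the handler (here once and synchronously,
// whereas the real enumerator context schedules it periodically).
class CallAsyncSketch {
    static <T> void callOnce(Callable<T> callable, BiConsumer<T, Throwable> handler) {
        try {
            T result = callable.call();
            handler.accept(result, null);   // success: result set, no error
        } catch (Throwable t) {
            handler.accept(null, t);        // failure: no result, error set
        }
    }
}
```

This mirrors why `syncWithReaders` first checks `t != null` before touching the subtask ids.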
/**
 * 1. Check whether the snapshot of the newly added tables has been fully assigned.
 * 2. If so, iterate over the registered readers.
 * 3. Send a binlog-split update request event to each reader.
 */
private void requestBinlogSplitUpdateIfNeed() {
    if (isNewlyAddedAssigningSnapshotFinished(splitAssigner.getAssignerStatus())) {
        for (int subtaskId : getRegisteredReader()) {
            LOG.info(
                    "The enumerator requests subtask {} to update the binlog split after newly added table.",
                    subtaskId);
            context.sendEventToSourceReader(subtaskId, new BinlogSplitUpdateRequestEvent());
        }
    }
}
/**
 * When the SourceEnumerator restores, or the communication between the
 * SourceEnumerator and a SourceReader fails, some notification events may be
 * missed. Tell all SourceReaders to report their finished but unacknowledged splits.
 */
private void syncWithReaders(int[] subtaskIds, Throwable t) {
    if (t != null) {
        throw new FlinkRuntimeException("Failed to list obtain registered readers due to:", t);
    }
    // when the SourceEnumerator restores or the communication failed between
    // SourceEnumerator and SourceReader, it may miss some notification events.
    // tell all SourceReader(s) to report their finished but unacked splits.
    if (splitAssigner.waitingForFinishedSplits()) {
        for (int subtaskId : subtaskIds) {
            context.sendEventToSourceReader(
                    subtaskId, new FinishedSnapshotSplitsRequestEvent());
        }
    }
    requestBinlogSplitUpdateIfNeed();
}
The splitAssigner.open() method will be analyzed later.
The handleSplitRequest method
Receives split requests from readers and assigns splits to them via assignSplits. Note that assignSplits is also invoked from the notifyCheckpointComplete checkpoint callback: once a checkpoint completes after all snapshot splits have been assigned, the snapshot phase is considered finished and the enumerator can start dispatching the BinlogSplit.
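The assigner choice mentioned in the comments of the start method (hybrid vs. binlog-only) can be sketched as a simple factory. This is a hypothetical simplification; the enum values and class names follow Flink CDC naming, but the real selection also considers restored checkpoint state:

```java
// Hypothetical sketch: StartupMode.INITIAL means snapshot phase first, then
// binlog (hybrid assigner); any other startup mode reads the binlog only.
enum StartupMode { INITIAL, EARLIEST_OFFSET, LATEST_OFFSET }

class AssignerChoiceSketch {
    static String chooseAssigner(StartupMode mode) {
        return mode == StartupMode.INITIAL
                ? "MySqlHybridSplitAssigner"   // full snapshot + binlog
                : "MySqlBinlogSplitAssigner";  // binlog only
    }
}
```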
@Override
public void handleSplitRequest(int subtaskId, @Nullable String requesterHostname) {
    if (!context.registeredReaders().containsKey(subtaskId)) {
        // reader failed between sending the request and now. skip this request.
        return;
    }
    // Receive the split request from the reader and record its subtask id in the
    // readersAwaitingSplit TreeSet, i.e. the set of readers waiting for a split.
    readersAwaitingSplit.add(subtaskId);
    // Assign splits to the waiting readers; once a split is assigned, the reader
    // is removed from the readersAwaitingSplit set.
    assignSplits();
}
private void assignSplits() {
    final Iterator<Integer> awaitingReader = readersAwaitingSplit.iterator();
    while (awaitingReader.hasNext()) {
        int nextAwaiting = awaitingReader.next();
        // if the reader that requested another split has failed in the meantime, remove
        // it from the list of waiting readers
        if (!context.registeredReaders().containsKey(nextAwaiting)) {
            awaitingReader.remove();
            continue;
        }
        // If the snapshot (full) phase has finished, decide whether to close idle
        // readers according to the closeIdleReaders option.
        /*
         * The incremental snapshot framework of Flink CDC has two main phases:
         * the snapshot (full) phase and the incremental phase, and their
         * parallelism differs. The snapshot phase supports multiple parallel
         * subtasks to speed up reading large amounts of data, while the
         * incremental phase reads the binlog change log with a single subtask to
         * guarantee event order and correctness. After the snapshot phase ends,
         * only one subtask is needed, so many readers become idle and waste
         * resources. Since version 2.4, incremental snapshot connectors support
         * an option to automatically close these idle readers.
         */
        if (splitAssigner.isStreamSplitAssigned()
                && sourceConfig.isCloseIdleReaders()
                && noMoreSnapshotSplits()
                && (binlogSplitTaskId != null && !binlogSplitTaskId.equals(nextAwaiting))) {
            // close idle readers when snapshot phase finished.
            // Signal the reader to close and release its resources.
            context.signalNoMoreSplits(nextAwaiting);
            awaitingReader.remove();
            LOG.info("Close idle reader of subtask {}", nextAwaiting);
            continue;
        }
        // TODO: to be analyzed
        Optional<MySqlSplit> split = splitAssigner.getNext();
        if (split.isPresent()) {
            final MySqlSplit mySqlSplit = split.get();
            // Assign the MySqlSplit to the reader.
            context.assignSplit(mySqlSplit, nextAwaiting);
            if (mySqlSplit instanceof MySqlBinlogSplit) {
                this.binlogSplitTaskId = nextAwaiting;
            }
            awaitingReader.remove();
            LOG.info("The enumerator assigns split {} to subtask {}", mySqlSplit, nextAwaiting);
        } else {
            // there is no available splits by now, skip assigning
            requestBinlogSplitUpdateIfNeed();
            break;
        }
    }
}
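Because readersAwaitingSplit is a TreeSet, pending requests are always served in ascending subtask-id order, and a reader that cannot be served yet simply stays in the set until assignSplits is triggered again (by another request or by a checkpoint callback). A simplified, hypothetical sketch of that queueing behavior (splits modeled as strings, no failure handling):

```java
import java.util.*;

// Hypothetical sketch of the enumerator's waiting-reader queue: a TreeSet keeps
// readers sorted by subtask id, so assignSplits() always serves the
// lowest-numbered pending reader first.
class AssignSketch {
    final TreeSet<Integer> readersAwaitingSplit = new TreeSet<>();
    final Deque<String> splits = new ArrayDeque<>();           // stands in for the split assigner
    final Map<String, Integer> assignments = new LinkedHashMap<>();

    void handleSplitRequest(int subtaskId) {
        readersAwaitingSplit.add(subtaskId);
        assignSplits();
    }

    void assignSplits() {
        Iterator<Integer> awaiting = readersAwaitingSplit.iterator();
        while (awaiting.hasNext()) {
            int next = awaiting.next();
            String split = splits.poll();      // stands in for splitAssigner.getNext()
            if (split == null) {
                break;                         // no split available: reader keeps waiting
            }
            assignments.put(split, next);      // stands in for context.assignSplit(...)
            awaiting.remove();
        }
    }
}
```

For example, if subtasks 3 and 1 request splits while none are available, and splits later become available when assignSplits is re-triggered, subtask 1 is served before subtask 3.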
The handleSourceEvent method
Handles SourceEvents, the custom events passed back and forth between the SplitEnumerator and the SourceReaders. This mechanism can be used to perform complex coordination tasks.
Event types handled:
- FinishedSnapshotSplitsReportEvent: a reader has finished reading a snapshot chunk
- BinlogSplitMetaRequestEvent: a reader requests the binlog split metadata
- BinlogSplitUpdateAckEvent: ACK confirming that the binlog split has been updated
- LatestFinishedSplitsNumberRequestEvent: request for the current number of finished snapshot chunks
- BinlogSplitAssignedEvent: a reader has been assigned the BinlogSplit
@Override
public void handleSourceEvent(int subtaskId, SourceEvent sourceEvent) {
    // A FinishedSnapshotSplitsReportEvent was received from a reader.
    if (sourceEvent instanceof FinishedSnapshotSplitsReportEvent) {
        LOG.info(
                "The enumerator under {} receives finished split offsets {} from subtask {}.",
                splitAssigner.getAssignerStatus(),
                sourceEvent,
                subtaskId);
        FinishedSnapshotSplitsReportEvent reportEvent =
                (FinishedSnapshotSplitsReportEvent) sourceEvent;
        Map<String, BinlogOffset> finishedOffsets = reportEvent.getFinishedOffsets();
        // During the snapshot phase, record the binlog offset at which each
        // reader finished its chunk.
        splitAssigner.onFinishedSplits(finishedOffsets);
        requestBinlogSplitUpdateIfNeed();
        // send acknowledge event
        FinishedSnapshotSplitsAckEvent ackEvent =
                new FinishedSnapshotSplitsAckEvent(new ArrayList<>(finishedOffsets.keySet()));
        context.sendEventToSourceReader(subtaskId, ackEvent);
    } else if (sourceEvent instanceof BinlogSplitMetaRequestEvent) { // handle a BinlogSplitMetaRequestEvent
        LOG.debug(
                "The enumerator receives request for binlog split meta from subtask {}.",
                subtaskId);
        sendBinlogMeta(subtaskId, (BinlogSplitMetaRequestEvent) sourceEvent);
    } else if (sourceEvent instanceof BinlogSplitUpdateAckEvent) {
        LOG.info(
                "The enumerator receives event that the binlog split has been updated from subtask {}. ",
                subtaskId);
        splitAssigner.onBinlogSplitUpdated();
    } else if (sourceEvent instanceof LatestFinishedSplitsNumberRequestEvent) {
        LOG.info(
                "The enumerator receives request from subtask {} for the latest finished splits number after added newly tables. ",
                subtaskId);
        // Tell the reader the current number of finished snapshot split infos.
        handleLatestFinishedSplitNumberRequest(subtaskId);
    } else if (sourceEvent instanceof BinlogSplitAssignedEvent) { // handle a BinlogSplitAssignedEvent
        LOG.info(
                "The enumerator receives notice from subtask {} for the binlog split assignment. ",
                subtaskId);
        binlogSplitTaskId = subtaskId;
    }
}
This covers the core methods of the MySqlSourceEnumerator class and their roles. But where do the chunk splitting of the snapshot phase and the assignment of MySqlSplits to readers actually happen?
- Chunk splitting: implemented in splitAssigner.open(), called from the start method
- MySqlSplit assignment: handleSplitRequest() -> assignSplits() -> splitAssigner.getNext()
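The overall assignment order produced by the hybrid flow above can be sketched as a tiny state machine: getNext() hands out snapshot chunks until they are exhausted, then a single binlog split, then nothing. This is a hypothetical simplification (splits as strings; the real assigner also waits for the finished-offset reports before releasing the binlog split):

```java
import java.util.*;

// Hypothetical sketch of the hybrid assignment order: all snapshot chunks
// first, then exactly one binlog split that ends the snapshot phase.
class HybridSketch {
    final Deque<String> snapshotSplits;
    boolean binlogSplitAssigned = false;

    HybridSketch(List<String> chunks) {
        this.snapshotSplits = new ArrayDeque<>(chunks);
    }

    Optional<String> getNext() {
        if (!snapshotSplits.isEmpty()) {
            return Optional.of(snapshotSplits.poll());   // snapshot (full) phase
        }
        if (!binlogSplitAssigned) {
            binlogSplitAssigned = true;
            return Optional.of("binlog-split");          // hand over to the stream phase
        }
        return Optional.empty();                         // one reader already owns the binlog split
    }
}
```

The empty Optional at the end corresponds to the `else` branch of assignSplits: remaining readers get no split and are candidates for the idle-reader shutdown.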