flink-connector-kafka consumer checkpoint source code analysis
Please credit the original source when reposting: http://www.cnblogs.com/dongxiao-yang/p/7700600.html
The earlier article 《flink-connector-kafka consumer的topic分区分配源码》 mentioned that when the flink-connector-kafka consumer is initialized there are three offset commit modes: KAFKA_PERIODIC, DISABLED, and ON_CHECKPOINTS.
ON_CHECKPOINTS means that offsets are actively committed to Kafka after Flink completes a checkpoint. This article analyzes how the flink-connector-kafka source code uses the checkpoint mechanism to implement offset restore and commit.
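For context, which mode a job ends up in depends on whether checkpointing is enabled and on the consumer's own settings: roughly, checkpointing enabled plus setCommitOffsetsOnCheckpoints(true) yields ON_CHECKPOINTS, no checkpointing plus enable.auto.commit=true yields KAFKA_PERIODIC, and everything else falls back to DISABLED. A minimal usage sketch (the broker address, group id, and topic name are made up for illustration):

import java.util.Properties;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer09;
import org.apache.flink.streaming.util.serialization.SimpleStringSchema;

public class KafkaCommitModeExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(5000); // checkpoint every 5 seconds

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // hypothetical brokers
        props.setProperty("group.id", "demo-group");              // hypothetical group id

        FlinkKafkaConsumer09<String> consumer =
                new FlinkKafkaConsumer09<>("demo-topic", new SimpleStringSchema(), props); // hypothetical topic
        // with checkpointing enabled this yields OffsetCommitMode.ON_CHECKPOINTS;
        // without checkpointing, enable.auto.commit=true would give KAFKA_PERIODIC instead
        consumer.setCommitOffsetsOnCheckpoints(true);

        env.addSource(consumer).print();
        env.execute("commit-mode-demo");
    }
}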
The base class of the flink consumer implementation, FlinkKafkaConsumerBase, is declared as follows. It implements three checkpoint-related interfaces: CheckpointedFunction, CheckpointedRestoring<HashMap<KafkaTopicPartition, Long>>, and CheckpointListener. According to the official documentation, CheckpointedRestoring's restoreState() method has been superseded by CheckpointedFunction's initializeState(), so we focus on three method implementations:
1. initializeState(): called when an instance is initialized or recovered
2. snapshotState(): called each time a checkpoint is created
3. notifyCheckpointComplete(): called each time a checkpoint completes
public abstract class FlinkKafkaConsumerBase<T> extends RichParallelSourceFunction<T> implements
        CheckpointListener,
        ResultTypeQueryable<T>,
        CheckpointedFunction,
        CheckpointedRestoring<HashMap<KafkaTopicPartition, Long>> {
initializeState
@Override
public final void initializeState(FunctionInitializationContext context) throws Exception {
    // we might have been restored via restoreState() which restores from legacy operator state
    if (!restored) {
        restored = context.isRestored();
    }

    OperatorStateStore stateStore = context.getOperatorStateStore();
    offsetsStateForCheckpoint = stateStore.getSerializableListState(DefaultOperatorStateBackend.DEFAULT_OPERATOR_STATE_NAME);

    if (context.isRestored()) {
        if (restoredState == null) {
            restoredState = new HashMap<>();
            for (Tuple2<KafkaTopicPartition, Long> kafkaOffset : offsetsStateForCheckpoint.get()) {
                restoredState.put(kafkaOffset.f0, kafkaOffset.f1);
            }

            LOG.info("Setting restore state in the FlinkKafkaConsumer.");
            if (LOG.isDebugEnabled()) {
                LOG.debug("Using the following offsets: {}", restoredState);
            }
        }
    } else {
        LOG.info("No restore state for FlinkKafkaConsumer.");
    }
}
The logic of this method is straightforward: when a task is recovered, the previously stored ListState<Tuple2<KafkaTopicPartition, Long>> state is deserialized from the stateStore and placed into the restoredState variable, which the open() method then uses directly to restore the corresponding partitions and their starting offsets.
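To make that hand-off concrete, here is a condensed sketch of the idea (not the verbatim open() source; seedStartOffsets is a hypothetical helper name): partitions present in restoredState resume exactly at their checkpointed offsets.

import java.util.List;
import java.util.Map;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;

// Hypothetical helper illustrating how open() consumes restoredState:
// partitions present in the restored snapshot resume at their checkpointed
// offsets; the rest are left to the configured startup mode.
private static void seedStartOffsets(
        List<KafkaTopicPartition> assignedPartitions,
        Map<KafkaTopicPartition, Long> restoredState,
        Map<KafkaTopicPartition, Long> subscribedPartitionsToStartOffsets) {
    for (KafkaTopicPartition partition : assignedPartitions) {
        if (restoredState != null && restoredState.containsKey(partition)) {
            subscribedPartitionsToStartOffsets.put(partition, restoredState.get(partition));
        }
        // otherwise open() falls back to the startup mode (EARLIEST/LATEST/GROUP_OFFSETS)
    }
}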
snapshotState
@Override
public final void snapshotState(FunctionSnapshotContext context) throws Exception {
    if (!running) {
        LOG.debug("snapshotState() called on closed source");
    } else {
        offsetsStateForCheckpoint.clear();

        final AbstractFetcher<?, ?> fetcher = this.kafkaFetcher;
        if (fetcher == null) {
            // the fetcher has not yet been initialized, which means we need to return the
            // originally restored offsets or the assigned partitions
            for (Map.Entry<KafkaTopicPartition, Long> subscribedPartition : subscribedPartitionsToStartOffsets.entrySet()) {
                offsetsStateForCheckpoint.add(Tuple2.of(subscribedPartition.getKey(), subscribedPartition.getValue()));
            }

            if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS) {
                // the map cannot be asynchronously updated, because only one checkpoint call can happen
                // on this function at a time: either snapshotState() or notifyCheckpointComplete()
                pendingOffsetsToCommit.put(context.getCheckpointId(), restoredState);
            }
        } else {
            HashMap<KafkaTopicPartition, Long> currentOffsets = fetcher.snapshotCurrentState();

            if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS) {
                // the map cannot be asynchronously updated, because only one checkpoint call can happen
                // on this function at a time: either snapshotState() or notifyCheckpointComplete()
                pendingOffsetsToCommit.put(context.getCheckpointId(), currentOffsets);
            }

            for (Map.Entry<KafkaTopicPartition, Long> kafkaTopicPartitionLongEntry : currentOffsets.entrySet()) {
                offsetsStateForCheckpoint.add(
                        Tuple2.of(kafkaTopicPartitionLongEntry.getKey(), kafkaTopicPartitionLongEntry.getValue()));
            }
        }

        if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS) {
            // truncate the map of pending offsets to commit, to prevent infinite growth
            while (pendingOffsetsToCommit.size() > MAX_NUM_PENDING_CHECKPOINTS) {
                pendingOffsetsToCommit.remove(0);
            }
        }
    }
}
To create a checkpoint, snapshotState() writes each current KafkaTopicPartition together with the offset consumed so far into the offsetsStateForCheckpoint state object, and also records the current checkpoint id with its offsets in the pendingOffsetsToCommit LinkedMap. The current offsets come from one of two places: before the fetcher has been initialized (the fetcher == null branch) the snapshot is taken from subscribedPartitionsToStartOffsets, with restoredState queued for commit; once the fetcher is running they come from fetcher.snapshotCurrentState().
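Note that pendingOffsetsToCommit is an org.apache.commons.collections.map.LinkedMap, a map that preserves insertion order; that is what makes the index-based pruning in notifyCheckpointComplete below possible. A small demonstration of the operations the consumer relies on (the checkpoint ids 100 and 101 are made up):

import java.util.HashMap;
import org.apache.commons.collections.map.LinkedMap;

// Checkpoint ids are inserted in increasing order, so a key's position in the
// LinkedMap mirrors the checkpoint's age.
LinkedMap pendingOffsetsToCommit = new LinkedMap();
pendingOffsetsToCommit.put(100L, new HashMap<>()); // snapshotState() for checkpoint 100
pendingOffsetsToCommit.put(101L, new HashMap<>()); // snapshotState() for checkpoint 101

int posInMap = pendingOffsetsToCommit.indexOf(101L);      // -> 1
Object offsets = pendingOffsetsToCommit.remove(posInMap); // take checkpoint 101's offsets
for (int i = 0; i < posInMap; i++) {
    pendingOffsetsToCommit.remove(0);                     // drop the stale checkpoint 100 entry
}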
notifyCheckpointComplete
@Override
public final void notifyCheckpointComplete(long checkpointId) throws Exception {
    if (!running) {
        LOG.debug("notifyCheckpointComplete() called on closed source");
        return;
    }

    final AbstractFetcher<?, ?> fetcher = this.kafkaFetcher;
    if (fetcher == null) {
        LOG.debug("notifyCheckpointComplete() called on uninitialized source");
        return;
    }

    if (offsetCommitMode == OffsetCommitMode.ON_CHECKPOINTS) {
        // only one commit operation must be in progress
        if (LOG.isDebugEnabled()) {
            LOG.debug("Committing offsets to Kafka/ZooKeeper for checkpoint " + checkpointId);
        }

        try {
            final int posInMap = pendingOffsetsToCommit.indexOf(checkpointId);
            if (posInMap == -1) {
                LOG.warn("Received confirmation for unknown checkpoint id {}", checkpointId);
                return;
            }

            @SuppressWarnings("unchecked")
            HashMap<KafkaTopicPartition, Long> offsets =
                    (HashMap<KafkaTopicPartition, Long>) pendingOffsetsToCommit.remove(posInMap);

            // remove older checkpoints in map
            for (int i = 0; i < posInMap; i++) {
                pendingOffsetsToCommit.remove(0);
            }

            if (offsets == null || offsets.size() == 0) {
                LOG.debug("Checkpoint state was empty.");
                return;
            }

            fetcher.commitInternalOffsetsToKafka(offsets, offsetCommitCallback);
        } catch (Exception e) {
            if (running) {
                throw e;
            }
            // else ignore exception if we are no longer running
        }
    }
}
notifyCheckpointComplete is invoked after a checkpoint completes and, in the ON_CHECKPOINTS mode, commits offsets to the Kafka cluster. The method receives the id of the completed checkpoint and looks it up in the pendingOffsetsToCommit map described above. If no matching index is found, it simply returns. Otherwise it removes the snapshot entry at that index and also removes all older entries before it (safe because, as noted earlier, checkpoint ids are inserted in increasing time order). Finally it commits the offsets belonging to the completed checkpoint via commitInternalOffsetsToKafka. That method is declared abstract because the APIs differ across Kafka versions, so each version-specific consumer supplies its own implementation; currently both the kafka09 and kafka010 connectors use the commitInternalOffsetsToKafka implementation in Kafka09Fetcher.
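To illustrate the idea, here is a condensed sketch of what such an implementation does (simplified, not the verbatim Kafka09Fetcher source; toKafkaCommitFormat is a hypothetical helper name): the checkpointed (KafkaTopicPartition, offset) pairs are translated into the Kafka client's commit format before being handed to the consumer thread for an asynchronous commit.

import java.util.HashMap;
import java.util.Map;
import org.apache.flink.streaming.connectors.kafka.internals.KafkaTopicPartition;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// Simplified sketch of the commit translation (hypothetical helper, not verbatim source).
static Map<TopicPartition, OffsetAndMetadata> toKafkaCommitFormat(Map<KafkaTopicPartition, Long> offsets) {
    Map<TopicPartition, OffsetAndMetadata> offsetsToCommit = new HashMap<>(offsets.size());
    for (Map.Entry<KafkaTopicPartition, Long> entry : offsets.entrySet()) {
        TopicPartition tp = new TopicPartition(entry.getKey().getTopic(), entry.getKey().getPartition());
        // Kafka expects the offset of the next record to read, hence the +1
        offsetsToCommit.put(tp, new OffsetAndMetadata(entry.getValue() + 1));
    }
    // the fetcher then hands this map to its consumer thread, which eventually
    // calls KafkaConsumer#commitAsync(offsetsToCommit, callback)
    return offsetsToCommit;
}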
References:
http://blog.csdn.net/yanghua_kobe/article/details/51503885