Fixing kafka.common.OffsetOutOfRangeException when consuming again after Kafka's periodic log cleanup

Environment:
kafka 0.10
spark 2.1.0
zookeeper 3.4.5-cdh5.14.0
On our company's Aliyun test machine, consumption was paused over the National Day holiday. After the holiday, resuming consumption of the topic under the same consumer group with Spark Streaming failed with kafka.common.OffsetOutOfRangeException, an error like the following:
(A similar report from another user, apparently on a different Kafka client, describes the same class of failure:)
"As I regularly kill the servers running Kafka and the producers feeding it (yes, just for fun), things sometimes go a bit crazy; not entirely sure why, but I got the error: kafka.common.OffsetOutOfRangeError: FetchResponse(topic='my_messages', partition=0, error=1, highwaterMark=-1, messages=)
To fix it I added the 'seek' setting: consumer.seek(0,2)"
Why this happens:
Kafka deletes old log segments on a schedule (log retention).
When a job starts and the topic has already been consumed under this consumer group, the last committed offset is stored in ZooKeeper, and we normally read that offset and resume from where the previous run stopped. But because Kafka cleans its logs periodically (say, once a day), an offset committed the day before yesterday may point at data that has since been deleted. When you resume from that offset, it is no longer within the partition's valid range, so the consumer throws OffsetOutOfRangeException.
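Before looking at the full fix, here is a minimal, self-contained sketch of the check this boils down to: compare the offset stored in ZooKeeper against the partition's current valid range. This is not the article's code; the broker address, topic, group id and sample offset are hypothetical placeholders, and it assumes a consumer client that provides beginningOffsets/endOffsets (Kafka client 0.10.1 or later).

import java.util.Collections
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition

object OffsetRangeCheck {
  def main(args: Array[String]): Unit = {
    val props = new java.util.Properties()
    props.put("bootstrap.servers", "localhost:9092") // hypothetical broker
    props.put("group.id", "demo-group")              // hypothetical group
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")

    val consumer = new KafkaConsumer[String, String](props)
    val tp = new TopicPartition("demo-topic", 0)     // hypothetical topic/partition
    val zkOffset: Long = 12345L                      // offset read back from ZooKeeper

    // Ask the broker for the partition's valid offset range.
    val earliest: Long = consumer.beginningOffsets(Collections.singletonList(tp)).get(tp)
    val latest: Long = consumer.endOffsets(Collections.singletonList(tp)).get(tp)
    consumer.close()

    // The stored offset is only usable if it still falls inside [earliest, latest].
    val safeOffset = if (zkOffset < earliest || zkOffset > latest) earliest else zkOffset
    println(s"earliest=$earliest latest=$latest zk=$zkOffset -> start from $safeOffset")
  }
}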
The fix:
When we find zk_offset < earliest_offset (or zk_offset > latest_offset), correct zk_offset to a valid value.
The earlier, complete version of the code is here:
https://www.cnblogs.com/niutao/p/10547831.html
The key code after the fix:
/**
 * Get the earliest offsets.
 * Returns the earliest (lowest) available offsets, taking new partitions into account.
 *
 * @param kafkaParams Kafka client configuration
 * @param topics      topics whose offsets are needed
 */
def getEarliestOffsets(kafkaParams: Map[String, Object], topics: Iterable[String]): Map[TopicPartition, Long] = {
  val newKafkaParams = mutable.Map[String, Object]()
  newKafkaParams ++= kafkaParams
  newKafkaParams.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
  val consumer: KafkaConsumer[String, Array[Byte]] = new KafkaConsumer[String, Array[Byte]](newKafkaParams)
  consumer.subscribe(topics)
  val notOffsetTopicPartition = mutable.Set[TopicPartition]()
  try {
    consumer.poll(0)
  } catch {
    case ex: NoOffsetForPartitionException =>
      log.warn(s"consumer topic partition offset not found: ${ex.partition()}")
      notOffsetTopicPartition.add(ex.partition())
  }
  val parts = consumer.assignment().toSet
  consumer.pause(parts)
  consumer.seekToBeginning(parts)
  val offsets = parts.map(tp => tp -> consumer.position(tp)).toMap
  consumer.unsubscribe()
  consumer.close()
  offsets
}
/**
 * Get the latest offsets.
 * Returns the latest (highest) available offsets, taking new partitions into account.
 *
 * @param kafkaParams Kafka client configuration
 * @param topics      topics whose offsets are needed
 */
def getLatestOffsets(kafkaParams: Map[String, Object], topics: Iterable[String]): Map[TopicPartition, Long] = {
  val newKafkaParams = mutable.Map[String, Object]()
  newKafkaParams ++= kafkaParams
  newKafkaParams.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
  val consumer: KafkaConsumer[String, Array[Byte]] = new KafkaConsumer[String, Array[Byte]](newKafkaParams)
  consumer.subscribe(topics)
  val notOffsetTopicPartition = mutable.Set[TopicPartition]()
  try {
    consumer.poll(0)
  } catch {
    case ex: NoOffsetForPartitionException =>
      log.warn(s"consumer topic partition offset not found: ${ex.partition()}")
      notOffsetTopicPartition.add(ex.partition())
  }
  val parts = consumer.assignment().toSet
  consumer.pause(parts)
  consumer.seekToEnd(parts)
  val offsets = parts.map(tp => tp -> consumer.position(tp)).toMap
  consumer.unsubscribe()
  consumer.close()
  offsets
}
val earliestOffsets = getEarliestOffsets(kafkaParams, topics)
val latestOffsets = getLatestOffsets(kafkaParams, topics)
for ((k, v) <- topicPartOffsetMap.toMap) {
  val current = v
  val earliest = earliestOffsets.get(k).get
  val latest = latestOffsets.get(k).get
  if (current > latest || current < earliest) {
    log.warn("correcting offset: " + current + " -> " + earliest)
    topicPartOffsetMap.put(k, earliest)
  }
}
The complete code, ready to use as-is:
import kafka.utils.{ZKGroupTopicDirs, ZkUtils}
import org.apache.kafka.clients.consumer.{Consumer, ConsumerConfig, ConsumerRecord, KafkaConsumer, NoOffsetForPartitionException}
import org.apache.kafka.common.TopicPartition
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, HasOffsetRanges, KafkaUtils}
import org.slf4j.LoggerFactory

import scala.collection.JavaConversions._
import scala.collection.mutable
import scala.reflect.ClassTag
import scala.util.Try
/**
 * Utility class for Kafka connections and offset management.
 *
 * @param zkHosts     ZooKeeper address
 * @param kafkaParams Kafka client parameters
 */
class KafkaManager(zkHosts: String, kafkaParams: Map[String, Object]) extends Serializable {
  // Logback logger obtained through the slf4j facade
  @transient private lazy val log = LoggerFactory.getLogger(getClass)
  // Objects needed to build the ZkUtils instance
  val (zkClient, zkConnection) = ZkUtils.createZkClientAndConnection(zkHosts, 10000, 10000)
  // zkClient.setZkSerializer(new MyZkSerializer())
  // ZkUtils instance used to access ZooKeeper
  val zkUtils = new ZkUtils(zkClient, zkConnection, false)
  /**
   * Get the earliest offsets.
   * Returns the earliest (lowest) available offsets, taking new partitions into account.
   *
   * @param kafkaParams Kafka client configuration
   * @param topics      topics whose offsets are needed
   */
  def getEarliestOffsets(kafkaParams: Map[String, Object], topics: Iterable[String]): Map[TopicPartition, Long] = {
    val newKafkaParams = mutable.Map[String, Object]()
    newKafkaParams ++= kafkaParams
    newKafkaParams.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
    val consumer: KafkaConsumer[String, Array[Byte]] = new KafkaConsumer[String, Array[Byte]](newKafkaParams)
    consumer.subscribe(topics)
    val notOffsetTopicPartition = mutable.Set[TopicPartition]()
    try {
      consumer.poll(0)
    } catch {
      case ex: NoOffsetForPartitionException =>
        log.warn(s"consumer topic partition offset not found: ${ex.partition()}")
        notOffsetTopicPartition.add(ex.partition())
    }
    val parts = consumer.assignment().toSet
    consumer.pause(parts)
    consumer.seekToBeginning(parts)
    val offsets = parts.map(tp => tp -> consumer.position(tp)).toMap
    consumer.unsubscribe()
    consumer.close()
    offsets
  }
  /**
   * Get the latest offsets.
   * Returns the latest (highest) available offsets, taking new partitions into account.
   *
   * @param kafkaParams Kafka client configuration
   * @param topics      topics whose offsets are needed
   */
  def getLatestOffsets(kafkaParams: Map[String, Object], topics: Iterable[String]): Map[TopicPartition, Long] = {
    val newKafkaParams = mutable.Map[String, Object]()
    newKafkaParams ++= kafkaParams
    newKafkaParams.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "latest")
    val consumer: KafkaConsumer[String, Array[Byte]] = new KafkaConsumer[String, Array[Byte]](newKafkaParams)
    consumer.subscribe(topics)
    val notOffsetTopicPartition = mutable.Set[TopicPartition]()
    try {
      consumer.poll(0)
    } catch {
      case ex: NoOffsetForPartitionException =>
        log.warn(s"consumer topic partition offset not found: ${ex.partition()}")
        notOffsetTopicPartition.add(ex.partition())
    }
    val parts = consumer.assignment().toSet
    consumer.pause(parts)
    consumer.seekToEnd(parts)
    val offsets = parts.map(tp => tp -> consumer.position(tp)).toMap
    consumer.unsubscribe()
    consumer.close()
    offsets
  }
  /**
   * Get the consumer's current offsets.
   *
   * @param consumer   the consumer
   * @param partitions topic partitions
   * @return
   */
  def getCurrentOffsets(consumer: Consumer[_, _], partitions: Set[TopicPartition]): Map[TopicPartition, Long] = {
    partitions.map(tp => tp -> consumer.position(tp)).toMap
  }
  /**
   * Read the Kafka offsets of a consumer group from ZooKeeper.
   *
   * @param topics  Kafka topics
   * @param groupId Kafka group ID
   * @return a Map[TopicPartition, Long] with the offset of every partition of every topic;
   *         partitions that have never been consumed fall back to the earliest available offset
   */
  def readOffsets(topics: Seq[String], groupId: String): Map[TopicPartition, Long] = {
    val topicPartOffsetMap = collection.mutable.HashMap.empty[TopicPartition, Long]
    val partitionMap = zkUtils.getPartitionsForTopics(topics)
    // /consumers/<groupId>/offsets/<topic>/
    partitionMap.foreach(topicPartitions => {
      val zkGroupTopicDirs = new ZKGroupTopicDirs(groupId, topicPartitions._1)
      topicPartitions._2.foreach(partition => {
        val offsetPath = zkGroupTopicDirs.consumerOffsetDir + "/" + partition
        val tryGetKafkaOffset = Try {
          val offsetStatTuple = zkUtils.readData(offsetPath)
          if (offsetStatTuple != null) {
            log.info("Kafka offset lookup: topic:{}, partition:{}, offset:{}, ZK path:{}", Seq[AnyRef](topicPartitions._1, partition.toString, offsetStatTuple._1, offsetPath): _*)
            topicPartOffsetMap.put(new TopicPartition(topicPartitions._1, Integer.valueOf(partition)), offsetStatTuple._1.toLong)
          }
        }
        if (tryGetKafkaOffset.isFailure) {
          // http://kafka.apache.org/0110/javadoc/index.html?org/apache/kafka/clients/consumer/KafkaConsumer.html
          val consumer = new KafkaConsumer[String, Object](kafkaParams)
          val partitionList = List(new TopicPartition(topicPartitions._1, partition))
          consumer.assign(partitionList)
          val minAvailableOffset = consumer.beginningOffsets(partitionList).values.head
          consumer.close()
          log.warn("Kafka offset lookup: no previous ZK node ({}), topic:{}, partition:{}, ZK path:{}, using the earliest available offset:{}", Seq[AnyRef](tryGetKafkaOffset.failed.get.getMessage, topicPartitions._1, partition.toString, offsetPath, minAvailableOffset): _*)
          topicPartOffsetMap.put(new TopicPartition(topicPartitions._1, Integer.valueOf(partition)), minAvailableOffset)
        }
      })
    })
    // Handle data that was deleted or expired before it could be consumed
    // ("Offsets out of range with no configured reset policy for partition"):
    // clamp any stored offset that fell outside [earliest, latest] back to the earliest offset.
    val earliestOffsets = getEarliestOffsets(kafkaParams, topics)
    val latestOffsets = getLatestOffsets(kafkaParams, topics)
    for ((k, v) <- topicPartOffsetMap.toMap) {
      val current = v
      val earliest = earliestOffsets.get(k).get
      val latest = latestOffsets.get(k).get
      if (current > latest || current < earliest) {
        log.warn("correcting offset: " + current + " -> " + earliest)
        topicPartOffsetMap.put(k, earliest)
      }
    }
    topicPartOffsetMap.toMap
  }
  /**
   * Wraps createDirectStream with Kafka offset support and creates the Kafka streaming DStream.
   *
   * @param ssc    Spark Streaming context
   * @param topics Kafka topics
   * @tparam K Kafka message key type
   * @tparam V Kafka message value type
   * @return the Kafka streaming DStream
   */
  def createDirectStream[K: ClassTag, V: ClassTag](ssc: StreamingContext, topics: Seq[String]): InputDStream[ConsumerRecord[K, V]] = {
    val groupId = kafkaParams("group.id").toString
    val storedOffsets: Map[TopicPartition, Long] = readOffsets(topics, groupId)
    // val storedOffsets: Map[TopicPartition, Long] = getCurrentOffset(kafkaParams, topics)
    log.info("Kafka offset summary (format: (topic, partition, offset)): {}", storedOffsets.map(off => (off._1.topic, off._1.partition(), off._2)))
    val kafkaStream = KafkaUtils.createDirectStream[K, V](ssc, PreferConsistent, ConsumerStrategies.Subscribe[K, V](topics, kafkaParams, storedOffsets))
    kafkaStream
  }
  /**
   * Persist the consumed Kafka offsets back to ZooKeeper.
   *
   * @param rdd            the Kafka RDD from Spark Streaming, RDD[ConsumerRecord[K, V]]
   * @param storeEndOffset true = store the end offset, false = store the start offset
   */
  def persistOffsets[K, V](rdd: RDD[ConsumerRecord[K, V]], storeEndOffset: Boolean = true): Unit = {
    val groupId = kafkaParams("group.id").toString
    val offsetsList = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
    offsetsList.foreach(or => {
      val zkGroupTopicDirs = new ZKGroupTopicDirs(groupId, or.topic)
      val offsetPath = zkGroupTopicDirs.consumerOffsetDir + "/" + or.partition
      val offsetVal = if (storeEndOffset) or.untilOffset else or.fromOffset
      zkUtils.updatePersistentPath(offsetPath, offsetVal + "" /*, JavaConversions.bufferAsJavaList(acls)*/)
      log.debug("Persisted Kafka offset: topic:{}, partition:{}, offset:{}, ZK path:{}", Seq[AnyRef](or.topic, or.partition.toString, offsetVal.toString, offsetPath): _*)
    })
  }
}
That is the complete Kafka offset management code.
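For completeness, here is a minimal sketch (not part of the original code) of how this KafkaManager might be wired into a Spark Streaming driver. The ZooKeeper and broker addresses, topic name, group id and batch interval are hypothetical placeholders.

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object KafkaManagerDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("KafkaManagerDemo").setMaster("local[2]")
    val ssc = new StreamingContext(conf, Seconds(10))

    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092",                 // hypothetical brokers
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "demo-group",                              // hypothetical group
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )
    val topics = Seq("demo-topic")                             // hypothetical topic

    val km = new KafkaManager("localhost:2181", kafkaParams)   // hypothetical ZooKeeper

    // Offsets are read from ZooKeeper (and corrected if out of range) before the stream starts.
    val stream = km.createDirectStream[String, String](ssc, topics)
    stream.foreachRDD { rdd =>
      // Process the batch first, then persist the end offsets back to ZooKeeper.
      println(s"records in batch: ${rdd.count()}")
      km.persistOffsets(rdd)
    }

    ssc.start()
    ssc.awaitTermination()
  }
}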