1. Install kafkacat

Ubuntu

apt-get install kafkacat

CentOS

Install the dependency:

yum install librdkafka-devel

Download the source from GitHub (https://github.com/edenhill/kafkacat)

Build the source on CentOS:

./configure <usual-configure-options>
make
sudo make install
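
To verify the install (a quick check, assuming a Kafka broker is already running on the default local port 9092), list the cluster metadata:

kafkacat -b localhost:9092 -L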

2. Watch the target topic data

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}

There is only one record in the Kafka topic connect-offsets.
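
The key is a JSON array of [connector name, source partition] and the value is the source offset. If you also want to see which partition of connect-offsets each record sits in, kafkacat's format string can print it (a sketch, assuming the same local broker):

kafkacat -b localhost:9092 -t connect-offsets -C -e -f 'partition %p, offset %o: key=%k, value=%s\n'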

3. Dump the record from the topic

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ kafkacat -b localhost:9092 -t connect-offsets -C -K# -o-
% Reached end of topic connect-offsets [...] at offset ...
(kafkacat prints one "% Reached end of topic" line per partition of connect-offsets; the partition and offset numbers are omitted here)
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}

The value:

["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}

is what we want!
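
If you only need the latest record rather than a full replay, kafkacat can start one message before the end of each partition (a sketch; -o -1 is a relative offset from the end and -e exits once the end is reached):

kafkacat -b localhost:9092 -t connect-offsets -C -K# -o -1 -e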

4. Use the value obtained in step 3 as a template and send it to the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ echo '["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1}' | \
> kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#

Here, we modify the incrementing value from 1005 to 1.
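
Alternatively, instead of rewinding the offset you can clear it completely by producing a tombstone: since -Z sends an empty value as NULL, a key with no value deletes the stored offset for that connector (an aside, assuming the connector is stopped first):

echo '["jdbc_source_inventory_customers",{"query":"query"}]#' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#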

For timestamp+incrementing mode:

echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":0}' | \
kafkacat -b localhost: -t connect-offsets -P -Z -K#
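
Here timestamp is epoch milliseconds, so 0 makes the connector re-read everything. To resume from a specific point in time instead, the value can be computed with GNU date (a sketch; the cutoff date is only an example):

TS=$(date -d '2019-01-01' +%s%3N)   # epoch milliseconds
echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":'$TS'}' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#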

5. Watch the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1}

We can see there are now two values with the same key in the topic. Because connect-offsets is a compacted topic, Kafka Connect takes the latest value per key as the current offset, so the connector will resume from incrementing=1.
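
The connector only reads its offset when its task starts, so restart it for the new value to take effect. A sketch using the Kafka Connect REST API (assuming a worker on the default port 8083 and the connector name used above):

curl -X POST http://localhost:8083/connectors/jdbc_source_inventory_customers/restart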

Reference

https://docs.confluent.io/current/app-development/kafkacat-usage.html
