1. Install kafkacat

Ubuntu

apt-get install kafkacat
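
Note: upstream kafkacat has since been renamed to kcat, so on newer Ubuntu/Debian releases the package may be called kcat instead:

apt-get install kcat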

CentOS

install the dependency

yum install librdkafka-devel

download the source from GitHub (https://github.com/edenhill/kafkacat)

build the source on CentOS

./configure <usual-configure-options>
make
sudo make install
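
To sanity-check the build, list the cluster metadata (assuming a broker is listening on localhost:9092; adjust to your environment):

kafkacat -b localhost:9092 -L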

2. Watch the target topic data

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}

There is only one record in the Kafka topic connect-offsets.
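
Every record in connect-offsets has the same shape: the key is a JSON array holding the source connector's name and its source partition, and the value is that connector's current source offset, i.e. schematically:

["<connector-name>",{<source-partition>}]   {<source-offset>}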

3. Dump the record from the topic

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ kafkacat -b localhost:9092 -t connect-offsets -C -K# -o-1
% Reached end of topic connect-offsets [0] at offset 0
% Reached end of topic connect-offsets [1] at offset 0
...
["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}
...
% Reached end of topic connect-offsets [24] at offset 0

kafkacat prints one "Reached end" marker per partition (connect-offsets has 25 partitions by default); most of them are elided above.

The value:

["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1005}

is what we want. With -K#, kafkacat prints each record as key#value: everything before the # is the record key, and everything after it is the stored offset.

4. Use the value from step 3 as a template and send it to the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ echo '["jdbc_source_inventory_customers",{"query":"query"}]#{"incrementing":1}' | \
> kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#

Here, we changed the incrementing value from 1005 to 1, so on its next start the connector will re-fetch every row whose incrementing column is greater than 1.
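
A related trick: since -Z sends an empty value as NULL, producing the key with nothing after the # writes a tombstone, which removes the stored offset for the connector altogether (connector name as above):

echo '["jdbc_source_inventory_customers",{"query":"query"}]#' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#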

For a timestamp+incrementing connector, the value carries all three offset fields (timestamp is epoch milliseconds, timestamp_nanos the sub-millisecond remainder):

echo '["jdbc_source_inventory_orders",{"query":"query"}]#{"timestamp_nanos":0,"incrementing":0,"timestamp":0}' | \
kafkacat -b localhost:9092 -t connect-offsets -P -Z -K#
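
Note: it is safest to pause (or delete) the connector before producing a new offset, otherwise the running task may commit its current offset right over the one we just wrote. Assuming the Connect worker's REST API listens on localhost:8083:

curl -X PUT http://localhost:8083/connectors/jdbc_source_inventory_customers/pause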

5. Watch the topic again

lenmom@M1701:~/workspace/software/confluent-community-5.1.0-2.11$ bin/kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic connect-offsets --property print.key=true
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1005}
["jdbc_source_inventory_customers",{"query":"query"}] {"incrementing":1}

We can see there are now two values with the same key in the topic. Since connect-offsets is a log-compacted topic, the Connect worker only uses the latest value per key, so the connector will pick up incrementing = 1.
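
A source task only reads its offset when it starts, so for the new value to take effect, resume the connector and restart its task (connector name and worker address are assumptions, as above):

curl -X PUT http://localhost:8083/connectors/jdbc_source_inventory_customers/resume
curl -X POST http://localhost:8083/connectors/jdbc_source_inventory_customers/tasks/0/restart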

Reference

https://docs.confluent.io/current/app-development/kafkacat-usage.html
