Exploring Kafka High Availability
As is well known, a Kafka topic's --replication-factor and partition count are what keep the service highly available.
Problem discovery
During recent operations, however, on a 3-broker cluster whose topics all had 3 replicas and 3 partitions, a single broker failure left the entire cluster unusable: consumers could no longer fetch messages. Why?
After digging through a lot of material, the culprit turned out to be Kafka's own internal topic, __consumer_offsets.
Problem analysis
In recent Kafka versions, consumer offsets are committed to an internal topic named __consumer_offsets rather than to ZooKeeper. Because Kafka creates this topic automatically, its replication factor is taken from the default in the broker configuration file, which on this cluster was 1.
Inspecting its replica and partition layout:
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:1 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 2 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 3 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 4 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 5 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 6 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 8 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 9 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 10 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 11 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 12 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 13 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 14 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 15 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 16 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 17 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 18 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 19 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 20 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 21 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 22 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 23 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 24 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 25 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 26 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 27 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 28 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 29 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 30 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 31 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 32 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 33 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 34 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 35 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 36 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 37 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 38 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 39 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 40 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 41 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 42 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 43 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 44 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 45 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 46 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 47 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 48 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1 Isr: 1
50 partitions, each with a single replica.
Those 50 partitions are spread across the 3 brokers, so when one broker dies, every partition hosted on it becomes unavailable; consumer groups whose offsets are stored in those partitions can no longer look up their committed positions, and consumption stops.
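Which of the 50 partitions stores a given group's offsets is decided by hashing the group id; in the broker's Java code this is roughly abs(groupId.hashCode) % 50. A minimal Python sketch of that mapping (the hashCode emulation is my own, written for illustration, not code taken from Kafka):

```python
def java_string_hashcode(s: str) -> int:
    """Emulate Java's String.hashCode() with signed 32-bit overflow."""
    h = 0
    for ch in s:
        h = (31 * h + ord(ch)) & 0xFFFFFFFF
    return h - 0x100000000 if h >= 0x80000000 else h

def offsets_partition(group_id: str, num_partitions: int = 50) -> int:
    """Partition of __consumer_offsets that stores this group's commits."""
    return abs(java_string_hashcode(group_id)) % num_partitions

print(offsets_partition("a"))  # hashCode("a") == 97, so 97 % 50 == 47
```

If the partition this returns lived only on the dead broker, that group's committed offsets are unreachable, even though the group's data topic may be fully replicated.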
Problem resolution
Option 1
Since the cluster is already serving traffic, the change has to be applied online.
First, prepare a JSON file describing the desired replica assignment:
vim /data/vfan/consumer.json
{
  "version": 1,
  "partitions": [
    {"topic": "__consumer_offsets", "partition": 0, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 1, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 2, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 3, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 4, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 5, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 6, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 7, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 8, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 9, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 10, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 11, "replicas": [3, 1, 2]},
    {"topic": "__consumer_offsets", "partition": 12, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 13, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 14, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 15, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 16, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 17, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 18, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 19, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 20, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 21, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 22, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 23, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 24, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 25, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 26, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 27, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 28, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 29, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 30, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 31, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 32, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 33, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 34, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 35, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 36, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 37, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 38, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 39, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 40, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 41, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 42, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 43, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 44, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 45, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 46, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 47, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 48, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 49, "replicas": [3, 2, 1]}
  ]
}
The broker ids in each replicas list can be chosen freely, but a single partition's replica list must not repeat a broker.
Run the reassignment:
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file /data/vfan/consumer.json --execute
Check whether the reassignment has finished:
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file /data/vfan/consumer.json --verify
Inspect the result:
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 1 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 2 Leader: 3 Replicas: 3,2,1 Isr: 2,3,1
Topic: __consumer_offsets Partition: 3 Leader: 1 Replicas: 1,2,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 4 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 5 Leader: 3 Replicas: 3,2,1 Isr: 2,3,1
Topic: __consumer_offsets Partition: 6 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 7 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 8 Leader: 3 Replicas: 3,2,1 Isr: 2,1,3
Topic: __consumer_offsets Partition: 9 Leader: 1 Replicas: 1,2,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 10 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 11 Leader: 3 Replicas: 3,1,2 Isr: 2,3,1
Topic: __consumer_offsets Partition: 12 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 13 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 14 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 15 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 16 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 17 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 18 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 19 Leader: 3 Replicas: 3,2,1 Isr: 1,3,2
Topic: __consumer_offsets Partition: 20 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 21 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 22 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 23 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 24 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 25 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 26 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 27 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 28 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 29 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 30 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 31 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 32 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 33 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 34 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 35 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 36 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 37 Leader: 3 Replicas: 3,2,1 Isr: 1,3,2
Topic: __consumer_offsets Partition: 38 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 39 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 40 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 41 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 42 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 43 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 44 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 45 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 46 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 47 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 48 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 49 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Each partition now has three replicas distributed across the three brokers: the offsets topic is highly available.
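To confirm nothing was left under-replicated after the move, the describe output can be checked mechanically instead of by eye. A small helper I sketched for this; it parses the kafka-topics.sh line format shown above (the field labels come from that output, not from any official API):

```python
def under_replicated(describe_output: str, expected_rf: int = 3) -> list:
    """Return partition ids whose replica list or ISR is smaller than expected_rf."""
    bad = []
    for line in describe_output.splitlines():
        parts = line.split()
        if "Partition:" not in parts:
            continue  # skip the summary/header line
        partition = int(parts[parts.index("Partition:") + 1])
        replicas = parts[parts.index("Replicas:") + 1].split(",")
        isr = parts[parts.index("Isr:") + 1].split(",")
        if len(set(replicas)) < expected_rf or len(isr) < expected_rf:
            bad.append(partition)
    return bad

sample = """Topic:__consumer_offsets PartitionCount:2 ReplicationFactor:3 Configs:
Topic: __consumer_offsets Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 1 Leader: 2 Replicas: 2 Isr: 2"""
print(under_replicated(sample))  # prints [1]: partition 1 still has one replica
```

Feed it the full output of the --describe command; an empty list means every partition meets the target replication factor.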
Option 2
Alternatively, before Kafka is started for the first time, set the defaults for auto-created topics in the broker configuration (server.properties):
# Partition count for auto-created topics
num.partitions=3
# Replication factor for auto-created topics
default.replication.factor=3
# Replication factor for the internal __consumer_offsets topic (was 1 on this cluster)
offsets.topic.replication.factor=3
Once configured, start ZooKeeper and Kafka, then test producing and consuming:
## Produce
./kafka-console-producer.sh --broker-list localhost:9092 --topic test
## Consume; --from-beginning reads the topic from the start
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
Check the partition and replica counts of the auto-created topics:
## test
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
Topic:test PartitionCount:3 ReplicationFactor:3 Configs:
## __consumer_offsets
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
The auto-created topics are highly available as well.