Exploring Kafka high availability

As everyone knows, a Kafka topic relies on its --replication-factor and --partitions settings to keep the service highly available.
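
For reference, such a topic is created like this (the topic name my-topic is just an example; host and port follow the rest of this post):

## Create a topic with 3 partitions, each replicated to 3 brokers (example topic name)
./kafka-topics.sh --zookeeper localhost:2181 --create --topic my-topic --partitions 3 --replication-factor 3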

Problem discovery

But in a recent operations incident, on a 3-broker cluster whose topics all had 3 replicas and 3 partitions, a single broker going down left the entire cluster unusable: consumers could no longer read any messages. Why?

After digging through a lot of material, it turned out to be the work of Kafka's own internal topic, __consumer_offsets.

Problem analysis

In recent Kafka versions, consumer offsets are committed to an internal topic named __consumer_offsets. Because Kafka creates this topic automatically, its replication factor is whatever default is specified in the broker configuration file, which is usually 1.
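
The relevant broker setting is offsets.topic.replication.factor; a quick way to see what your brokers will use (the config path here is relative to the Kafka bin directory and is an assumption — adjust it to your layout):

## Show the configured replication factor for the internal offsets topic
grep offsets.topic.replication.factor ../config/server.properties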

Describing the topic typically shows a layout like this:

./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:1 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 1 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 2 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 3 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 4 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 5 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 6 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 7 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 8 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 9 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 10 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 11 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 12 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 13 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 14 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 15 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 16 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 17 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 18 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 19 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 20 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 21 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 22 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 23 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 24 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 25 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 26 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 27 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 28 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 29 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 30 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 31 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 32 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 33 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 34 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 35 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 36 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 37 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 38 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 39 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 40 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 41 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 42 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 43 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 44 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 45 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 46 Leader: 1 Replicas: 1 Isr: 1
Topic: __consumer_offsets Partition: 47 Leader: 2 Replicas: 2 Isr: 2
Topic: __consumer_offsets Partition: 48 Leader: 3 Replicas: 3 Isr: 3
Topic: __consumer_offsets Partition: 49 Leader: 1 Replicas: 1 Isr: 1

50 partitions, each with a single replica.

Those 50 partitions are spread across the 3 brokers, so when one broker dies, every partition hosted on that broker stops working. Consumer groups whose offsets live in those partitions can no longer look up their committed offsets, and therefore cannot consume normally.
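
Depending on the Kafka version, kafka-topics.sh can list exactly which partitions have lost their leader, which is a quick way to confirm this failure mode:

## Show only partitions whose leader is currently unavailable
./kafka-topics.sh --zookeeper localhost:2181 --describe --unavailable-partitions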

Fixing the problem

Method 1

Since Kafka is already serving production traffic, the only option is to change the assignment online.

First, prepare a JSON file describing the desired replica assignment for each partition:

vim /data/vfan/consumer.json

{
  "version": 1,
  "partitions": [
    {"topic": "__consumer_offsets", "partition": 0,  "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 1,  "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 2,  "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 3,  "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 4,  "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 5,  "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 6,  "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 7,  "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 8,  "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 9,  "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 10, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 11, "replicas": [3, 1, 2]},
    {"topic": "__consumer_offsets", "partition": 12, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 13, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 14, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 15, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 16, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 17, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 18, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 19, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 20, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 21, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 22, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 23, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 24, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 25, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 26, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 27, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 28, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 29, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 30, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 31, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 32, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 33, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 34, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 35, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 36, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 37, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 38, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 39, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 40, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 41, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 42, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 43, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 44, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 45, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 46, "replicas": [3, 2, 1]},
    {"topic": "__consumer_offsets", "partition": 47, "replicas": [1, 2, 3]},
    {"topic": "__consumer_offsets", "partition": 48, "replicas": [2, 1, 3]},
    {"topic": "__consumer_offsets", "partition": 49, "replicas": [3, 2, 1]}
  ]
}
The broker ids in each replicas list can be laid out however you like, but a single partition must not list the same broker twice.
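
Typing 50 entries by hand is tedious; here is a minimal sketch that generates an equivalent plan with a shell loop (it assumes broker ids 1, 2 and 3 and uses a plain round-robin layout, so the replica ordering may differ slightly from the hand-written file above):

## Generate a round-robin reassignment plan for partitions 0-49 of __consumer_offsets
{
  echo '{"version": 1, "partitions": ['
  for p in $(seq 0 49); do
    case $((p % 3)) in
      0) r='1,2,3' ;;
      1) r='2,1,3' ;;
      2) r='3,2,1' ;;
    esac
    sep=','; [ "$p" -eq 49 ] && sep=''
    echo "  {\"topic\": \"__consumer_offsets\", \"partition\": $p, \"replicas\": [$r]}$sep"
  done
  echo ']}'
} > /data/vfan/consumer.json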

Run the reassignment:

./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file /data/vfan/consumer.json --execute

Verify that the reassignment has completed:

./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file /data/vfan/consumer.json --verify

Check the result:

./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 1 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 2 Leader: 3 Replicas: 3,2,1 Isr: 2,3,1
Topic: __consumer_offsets Partition: 3 Leader: 1 Replicas: 1,2,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 4 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 5 Leader: 3 Replicas: 3,2,1 Isr: 2,3,1
Topic: __consumer_offsets Partition: 6 Leader: 1 Replicas: 1,2,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 7 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 8 Leader: 3 Replicas: 3,2,1 Isr: 2,1,3
Topic: __consumer_offsets Partition: 9 Leader: 1 Replicas: 1,2,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 10 Leader: 2 Replicas: 2,1,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 11 Leader: 3 Replicas: 3,1,2 Isr: 2,3,1
Topic: __consumer_offsets Partition: 12 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 13 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: __consumer_offsets Partition: 14 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 15 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 16 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 17 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 18 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 19 Leader: 3 Replicas: 3,2,1 Isr: 1,3,2
Topic: __consumer_offsets Partition: 20 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 21 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 22 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 23 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 24 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 25 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 26 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 27 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 28 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 29 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 30 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 31 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 32 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 33 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 34 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 35 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 36 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 37 Leader: 3 Replicas: 3,2,1 Isr: 1,3,2
Topic: __consumer_offsets Partition: 38 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 39 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 40 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 41 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 42 Leader: 2 Replicas: 2,1,3 Isr: 3,1,2
Topic: __consumer_offsets Partition: 43 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 44 Leader: 1 Replicas: 1,2,3 Isr: 2,3,1
Topic: __consumer_offsets Partition: 45 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 46 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3
Topic: __consumer_offsets Partition: 47 Leader: 1 Replicas: 1,2,3 Isr: 2,1,3
Topic: __consumer_offsets Partition: 48 Leader: 2 Replicas: 2,1,3 Isr: 3,2,1
Topic: __consumer_offsets Partition: 49 Leader: 3 Replicas: 3,2,1 Isr: 1,2,3

Every partition now has three replicas spread across the three brokers, so the offsets topic is highly available.
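
One optional follow-up: after a reassignment, or after a failed broker rejoins, partition leadership can stay skewed toward a subset of brokers until the automatic leader rebalance kicks in (auto.leader.rebalance.enable is on by default). On the ZooKeeper-based versions this post targets, the election can also be triggered by hand, roughly like this:

## Move leadership back to the preferred (first-listed) replica of each partition
./kafka-preferred-replica-election.sh --zookeeper localhost:2181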

Method 2

Before Kafka is started for the first time, set the defaults used when topics are created automatically. Note that these settings only affect topics created after the change; a cluster that has already created __consumer_offsets with one replica still needs the reassignment from Method 1.

num.partitions=3                        # partitions for a topic that is auto-created
default.replication.factor=3            # replication factor for a topic that is auto-created
offsets.topic.replication.factor=3      # replication factor for the internal __consumer_offsets topic (shipped default is 1)

With these in place, start ZooKeeper and Kafka, then test producing and consuming:

## Produce
./kafka-console-producer.sh --broker-list localhost:9092 --topic test

## Consume; --from-beginning reads from the start of the log
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

Check the partition and replica counts of the automatically created topics:

## test
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
Topic:test PartitionCount:3 ReplicationFactor:3 Configs:

## __consumer_offsets
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer

The automatically created topics are now highly available as well.
