MetadataCache Updates
When is the MetadataCache updated?
The cache is updated by the MetadataCache.updateCache method. The rest of this post uses stack traces to show which events trigger that method and how the update travels from the controller to each broker.
Initiating thread: controller-event-thread
During controller election
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/controller/KafkaController | sendUpdateMetadataRequest | 1043 |
| kafka/controller/KafkaController | onControllerFailover | 288 |
| kafka/controller/KafkaController | elect | 1658 |
| kafka/controller/KafkaController$Startup$ | process | 1581 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply$mcV$sp | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/metrics/KafkaTimer | time | 32 |
| kafka/controller/ControllerEventManager$ControllerEventThread | doWork | 64 |
| kafka/utils/ShutdownableThread | run | 70 |
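Every trigger traced in this post bottoms out in KafkaController.sendUpdateMetadataRequest (line 1043 in the stack traces). Roughly paraphrased, with error handling omitted and details possibly differing across versions, it batches the metadata for the target brokers and hands the batch off for sending:

// KafkaController.scala -- rough paraphrase for orientation, not the verbatim source
def sendUpdateMetadataRequest(brokers: Seq[Int], partitions: Set[TopicPartition] = Set.empty[TopicPartition]) {
  brokerRequestBatch.newBatch()
  brokerRequestBatch.addUpdateMetadataRequestForBrokers(brokers, partitions)
  brokerRequestBatch.sendRequestsToBrokers(epoch)
}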
Election happens when the controller starts up, and that startup action is itself modeled as an event:
// KafkaController.scala
case object Startup extends ControllerEvent {

  def state = ControllerState.ControllerChange

  override def process(): Unit = {
    registerSessionExpirationListener()
    registerControllerChangeListener()
    elect()
  }
}
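The Startup case object above, together with the stack traces, shows the pattern the controller uses everywhere: callers put ControllerEvents onto a queue, and a single controller-event-thread takes them off and calls process(). A minimal, self-contained sketch of that pattern (MiniEvent and MiniEventManager are invented names for this illustration, not Kafka classes):

// Minimal illustration of the single-threaded event-loop pattern behind controller-event-thread.
import java.util.concurrent.LinkedBlockingQueue

trait MiniEvent { def process(): Unit }

class MiniEventManager {
  private val queue = new LinkedBlockingQueue[MiniEvent]()

  def put(event: MiniEvent): Unit = queue.put(event)

  // analogous to ControllerEventThread.doWork: take one event at a time and process it
  private val thread = new Thread(new Runnable {
    override def run(): Unit =
      try {
        while (true) {
          val event = queue.take() // blocks until an event is available
          event.process()          // every event is processed on this single thread
        }
      } catch {
        case _: InterruptedException => // shutdown requested
      }
  }, "mini-controller-event-thread")
  thread.setDaemon(true)
  thread.start()

  def shutdown(): Unit = thread.interrupt()
}

// usage:
//   val manager = new MiniEventManager
//   manager.put(new MiniEvent { override def process(): Unit = println("Startup processed") })

Because there is only one event thread, events such as Startup, BrokerChange and TopicChange never race with each other on the controller side.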
When a broker starts up
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/controller/KafkaController | sendUpdateMetadataRequest | 1043 |
| kafka/controller/KafkaController | onBrokerStartup | 387 |
| kafka/controller/KafkaController$BrokerChange | process | 1208 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply$mcV$sp | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/metrics/KafkaTimer | time | 32 |
| kafka/controller/ControllerEventManager$ControllerEventThread | doWork | 64 |
| kafka/utils/ShutdownableThread | run | 70 |
When a topic is deleted
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/controller/KafkaController | sendUpdateMetadataRequest | 1043 |
| kafka/controller/TopicDeletionManager | kafka$controller$TopicDeletionManager$$onTopicDeletion | 268 |
| kafka/controller/TopicDeletionManager$$anonfun$resumeDeletions$2 | apply | 333 |
| kafka/controller/TopicDeletionManager$$anonfun$resumeDeletions$2 | apply | 333 |
| scala/collection/immutable/Set$Set1 | foreach | 94 |
| kafka/controller/TopicDeletionManager | resumeDeletions | 333 |
| kafka/controller/TopicDeletionManager | enqueueTopicsForDeletion | 110 |
| kafka/controller/KafkaController$TopicDeletion | process | 1280 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply$mcV$sp | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/metrics/KafkaTimer | time | 32 |
| kafka/controller/ControllerEventManager$ControllerEventThread | doWork | 64 |
| kafka/utils/ShutdownableThread | run | 70 |
When a topic is created or modified
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/controller/ControllerBrokerRequestBatch | updateMetadataRequestBrokerSet | 291 |
| kafka/controller/ControllerBrokerRequestBatch | newBatch | 294 |
| kafka/controller/PartitionStateMachine | handleStateChanges | 105 |
| kafka/controller/KafkaController | onNewPartitionCreation | 499 |
| kafka/controller/KafkaController | onNewTopicCreation | 485 |
| kafka/controller/KafkaController$TopicChange | process | 1237 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply$mcV$sp | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/metrics/KafkaTimer | time | 32 |
| kafka/controller/ControllerEventManager$ControllerEventThread | doWork | 64 |
| kafka/utils/ShutdownableThread | run | 70 |
Topic creation is handled by taking the event off a queue and then processing it.
The queue is kafka.controller.ControllerEventManager.queue.
The enqueueing path is shown below; at its core it is a watch on the children of a ZooKeeper path:
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/controller/ControllerEventManager | put | 44 |
| kafka/controller/TopicChangeListener | handleChildChange | 1712 |
| org/I0Itec/zkclient/ZkClient$10 | run | 848 |
| org/I0Itec/zkclient/ZkEventThread | run | 85 |
The listener is registered as follows:
// class KafkaController
private def registerTopicChangeListener() = {
  zkUtils.subscribeChildChanges(BrokerTopicsPath, topicChangeListener)
}
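Putting the two together: subscribeChildChanges registers the listener, and ZkClient's event thread later invokes handleChildChange, which wraps the change in an event and puts it on the controller's queue. A simplified sketch of such a listener, using zkclient's IZkChildListener interface (TopicsChangedEvent and the plain queue below are stand-ins for Kafka's TopicChange event and ControllerEventManager.put):

// Sketch: a ZkClient child listener that turns a child change into a queued event.
import java.util.concurrent.LinkedBlockingQueue
import org.I0Itec.zkclient.IZkChildListener
import scala.collection.JavaConverters._

case class TopicsChangedEvent(children: Set[String])

class SketchTopicChangeListener(eventQueue: LinkedBlockingQueue[TopicsChangedEvent]) extends IZkChildListener {
  // invoked by the ZkEventThread whenever the children of the watched path change
  override def handleChildChange(parentPath: String, currentChilds: java.util.List[String]): Unit = {
    val children = Option(currentChilds).map(_.asScala.toSet).getOrElse(Set.empty[String])
    eventQueue.put(TopicsChangedEvent(children))
  }
}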
As an aside, six places subscribe to child-node changes in ZooKeeper:
- DynamicConfigManager.startup
- registerTopicChangeListener
- registerIsrChangeNotificationListener
- registerTopicDeletionListener
- registerBrokerChangeListener
- registerLogDirEventNotificationListener
Handling the topic-creation event:
// ControllerChannelManager.scala  class ControllerBrokerRequestBatch
def sendRequestsToBrokers(controllerEpoch: Int) {
  // .......
  val updateMetadataRequest = {
    val liveBrokers = if (updateMetadataRequestVersion == 0) {
      // .......
    } else {
      controllerContext.liveOrShuttingDownBrokers.map { broker =>
        val endPoints = broker.endPoints.map { endPoint =>
          new UpdateMetadataRequest.EndPoint(endPoint.host, endPoint.port, endPoint.securityProtocol, endPoint.listenerName)
        }
        new UpdateMetadataRequest.Broker(broker.id, endPoints.asJava, broker.rack.orNull)
      }
    }
    new UpdateMetadataRequest.Builder(updateMetadataRequestVersion, controllerId, controllerEpoch, partitionStates.asJava,
      liveBrokers.asJava)
  }
  updateMetadataRequestBrokerSet.foreach { broker =>
    controller.sendRequest(broker, ApiKeys.UPDATE_METADATA, updateMetadataRequest, null)
  }
  // .......
}
Topic creation: following the metadata update further
The request is built, wrapped as a queue item, and placed on the send queue, where it waits for the send thread to send it.
The code that builds and enqueues the request:
// ControllerChannelManager
def sendRequest(brokerId: Int, apiKey: ApiKeys, request: AbstractRequest.Builder[_ <: AbstractRequest],
                callback: AbstractResponse => Unit = null) {
  brokerLock synchronized {
    val stateInfoOpt = brokerStateInfo.get(brokerId)
    stateInfoOpt match {
      case Some(stateInfo) =>
        stateInfo.messageQueue.put(QueueItem(apiKey, request, callback))
      case None =>
        warn("Not sending request %s to broker %d, since it is offline.".format(request, brokerId))
    }
  }
}
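For reference, the QueueItem that sendRequest puts on each broker's messageQueue is just the API key, the request builder and an optional callback bundled together. Its approximate shape, inferred from the sendRequest signature above (a sketch; the real definition lives in ControllerChannelManager.scala):

// Approximate shape of the per-broker queue element (a sketch, not the verbatim definition)
import org.apache.kafka.common.protocol.ApiKeys
import org.apache.kafka.common.requests.{AbstractRequest, AbstractResponse}

case class QueueItem(apiKey: ApiKeys,
                     request: AbstractRequest.Builder[_ <: AbstractRequest],
                     callback: AbstractResponse => Unit)

Each broker gets its own messageQueue, drained by its own RequestSendThread, so a slow or offline broker does not block metadata propagation to the others.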
The call stack for sendRequest:
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/controller/ControllerChannelManager | sendRequest | 81 |
| kafka/controller/KafkaController | sendRequest | 662 |
| kafka/controller/ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2 | apply | 405 |
| kafka/controller/ControllerBrokerRequestBatch$$anonfun$sendRequestsToBrokers$2 | apply | 405 |
| scala/collection/mutable/HashMap$$anonfun$foreach$1 | apply | 130 |
| scala/collection/mutable/HashMap$$anonfun$foreach$1 | apply | 130 |
| scala/collection/mutable/HashTable$class | foreachEntry | 241 |
| scala/collection/mutable/HashMap | foreachEntry | 40 |
| scala/collection/mutable/HashMap | foreach | 130 |
| kafka/controller/ControllerBrokerRequestBatch | sendRequestsToBrokers | 502 |
| kafka/controller/PartitionStateMachine | handleStateChanges | 105 |
| kafka/controller/KafkaController | onNewPartitionCreation | 499 |
| kafka/controller/KafkaController | onNewTopicCreation | 485 |
| kafka/controller/KafkaController$TopicChange | process | 1237 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply$mcV$sp | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/controller/ControllerEventManager$ControllerEventThread$$anonfun$doWork$1 | apply | 53 |
| kafka/metrics/KafkaTimer | time | 32 |
| kafka/controller/ControllerEventManager$ControllerEventThread | doWork | 64 |
| kafka/utils/ShutdownableThread | run | 70 |
The send thread then sends the request; the code is as follows:
// ControllerChannelManager.scala  class RequestSendThread
override def doWork(): Unit = {

  def backoff(): Unit = CoreUtils.swallowTrace(Thread.sleep(100))

  val QueueItem(apiKey, requestBuilder, callback) = queue.take()
  //...
  while (isRunning.get() && !isSendSuccessful) {
    // if a broker goes down for a long time, then at some point the controller's zookeeper listener will trigger a
    // removeBroker which will invoke shutdown() on this thread. At that point, we will stop retrying.
    try {
      if (!brokerReady()) {
        isSendSuccessful = false
        backoff()
      }
      else {
        val clientRequest = networkClient.newClientRequest(brokerNode.idString, requestBuilder,
          time.milliseconds(), true)
        clientResponse = NetworkClientUtils.sendAndReceive(networkClient, clientRequest, time)
        isSendSuccessful = true
      }
    } catch {
      case e: Throwable => // if the send was not successful, reconnect to broker and resend the message
        warn(("Controller %d epoch %d fails to send request %s to broker %s. " +
          "Reconnecting to broker.").format(controllerId, controllerContext.epoch,
          requestBuilder.toString, brokerNode.toString), e)
        networkClient.close(brokerNode.idString)
        isSendSuccessful = false
        backoff()
    }
  }
  // ......
}
The handling thread on the receiving broker
| CLASS_NAME | METHOD_NAME | LINE_NUM |
| --- | --- | --- |
| kafka/server/MetadataCache | kafka$server$MetadataCache$$addOrUpdatePartitionInfo | 150 |
| kafka/utils/CoreUtils$ | inLock | 219 |
| kafka/utils/CoreUtils$ | inWriteLock | 225 |
| kafka/server/MetadataCache | updateCache | 184 |
| kafka/server/ReplicaManager | maybeUpdateMetadataCache | 988 |
| kafka/server/KafkaApis | handleUpdateMetadataRequest | 212 |
| kafka/server/KafkaApis | handle | 142 |
| kafka/server/KafkaRequestHandler | run | 72 |
Thread: kafka-request-handler-5
Reads and writes of the cache are made thread-safe by the partitionMetadataLock read-write lock. The metadata itself was already assembled in the request built on the controller; on this path the cache also refreshes its view of live brokers, among other things.
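Concretely, updateCache runs under the write lock (the inWriteLock frame in the trace above), while read paths take the read lock, so readers always see a consistent snapshot. A self-contained toy sketch of that pattern (an illustration only, not Kafka's MetadataCache; the inReadLock/inWriteLock helpers mirror what kafka.utils.CoreUtils provides):

// Toy illustration of the read-write-lock pattern used to guard the metadata cache.
import java.util.concurrent.locks.{ReadWriteLock, ReentrantReadWriteLock}

object ToyMetadataCache {
  private val lock: ReadWriteLock = new ReentrantReadWriteLock()
  private var aliveBrokers: Map[Int, String] = Map.empty // brokerId -> host, standing in for real metadata

  private def inReadLock[T](body: => T): T = {
    lock.readLock().lock()
    try body finally lock.readLock().unlock()
  }

  private def inWriteLock[T](body: => T): T = {
    lock.writeLock().lock()
    try body finally lock.writeLock().unlock()
  }

  // analogous to updateCache: applied under the write lock so readers never see a half-applied update
  def update(brokers: Map[Int, String]): Unit = inWriteLock { aliveBrokers = brokers }

  // analogous to the read paths (e.g. serving topic metadata): taken under the read lock
  def getAliveBrokers: Map[Int, String] = inReadLock { aliveBrokers }
}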
Still to be covered: leader changes, ISR changes, and other triggers.