Open Sourcing Kafka Monitor
https://engineering.linkedin.com/blog/2016/05/open-sourcing-kafka-monitor
https://github.com/linkedin/kafka-monitor
https://github.com/Microsoft/Availability-Monitor-for-Kafka
Design Overview
Kafka Monitor makes it easy to develop and execute long-running, Kafka-specific system tests in real clusters, and to monitor the SLAs of existing Kafka deployments as defined by users.
Developers can create new tests by composing reusable modules that emulate various scenarios (e.g. GC pauses, broker hard-kills, rolling bounces, disk failures) and collect metrics; users can run Kafka Monitor tests that execute these scenarios on a user-defined schedule against a test cluster or production cluster, and validate that Kafka still functions as expected under them. To achieve these goals, Kafka Monitor is modeled as a manager for a collection of tests and services.
A given Kafka Monitor instance runs in a single Java process and can spawn multiple tests and services within that process. The diagram in the original post illustrates the relationship between services, tests, and a Kafka Monitor instance, as well as how Kafka Monitor interacts with a Kafka cluster and the user.
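The structure above can be pictured as a simple composition: a test wires together a set of services and runs them inside one JVM alongside other tests. The sketch below is a hypothetical illustration of that idea; the interface and class names (Service, Test) are invented for clarity and are not Kafka Monitor's actual API.

```java
// Hypothetical sketch of the test/service composition described above.
// These interfaces are illustrative only; Kafka Monitor's real classes differ.
import java.util.List;

interface Service {
    void start();   // e.g. begin producing, consuming, or bouncing brokers
    void stop();
}

class Test {
    private final List<Service> services;

    Test(List<Service> services) {
        this.services = services;
    }

    void run() {
        services.forEach(Service::start);   // all services share one JVM
    }

    void shutdown() {
        services.forEach(Service::stop);
    }
}

// A single Kafka Monitor instance (one Java process) can then hold several
// such Test objects, each targeting a different cluster or scenario.
```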

What makes this platform interesting is that it is not just a monitoring tool:
it also includes a complete test framework, so arbitrary tests can be defined; each test is composed of various services, i.e. reusable components, such as:
- Produce service, which produces messages to Kafka and measures metrics such as produce rate and availability.
- Consume service, which consumes messages from Kafka and measures metrics including message loss rate, message duplicate rate and end-to-end latency. This service depends on the produce service to provide messages that embed a message sequence number and timestamp.
- Broker bounce service, which bounces a given broker at some pre-defined schedule.
Combining the three services above yields a test that exercises broker bounces.
Likewise, by chaining two Kafka Monitor instances (one per cluster), replication between multiple datacenters, such as a mirroring pipeline between clusters, can be tested.
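To make the produce/consume pairing concrete, the following sketch shows how a monitor message can embed a sequence number and a send timestamp, and how the consuming side can derive end-to-end latency, loss, and duplicates from them. It uses the standard Apache Kafka Java client, but the topic name, payload format, and bookkeeping logic are illustrative assumptions rather than Kafka Monitor's actual implementation.

```java
// Illustrative only: embed a sequence number and send timestamp in each
// message, then check them on the consumer side. The topic name, payload
// format, and bookkeeping are assumptions, not Kafka Monitor's real code.
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class MonitorSketch {
    static final String TOPIC = "kafka-monitor-topic";   // assumed topic name

    // props must contain bootstrap.servers and String (de)serializers;
    // a real produce service would keep one long-lived producer.
    static void produceOne(Properties props, long seq) {
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String payload = seq + ":" + System.currentTimeMillis();
            producer.send(new ProducerRecord<>(TOPIC, payload));
        }
    }

    static void consumeOnce(Properties props) {
        long expectedSeq = 0;
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList(TOPIC));
            for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                String[] parts = record.value().split(":");
                long seq = Long.parseLong(parts[0]);
                long latencyMs = System.currentTimeMillis() - Long.parseLong(parts[1]);
                if (seq < expectedSeq) {
                    System.out.println("duplicate message " + seq);
                } else {
                    if (seq > expectedSeq) {
                        System.out.println((seq - expectedSeq) + " message(s) lost before " + seq);
                    }
                    expectedSeq = seq + 1;
                }
                System.out.println("end-to-end latency: " + latencyMs + " ms");
            }
        }
    }
}
```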
Kafka Monitor Usage at LinkedIn
Monitoring Kafka Cluster Deployments
In early 2016 we deployed Kafka Monitor to monitor availability and end-to-end latency of every Kafka cluster at LinkedIn. The project wiki goes into the details of how these metrics are measured. These basic but critical metrics have been extremely useful for actively monitoring the SLAs of our Kafka cluster deployments.
Validate Client Libraries Using End-to-End Workflows
As an earlier blog post explains, we have a client library that wraps the vanilla Apache Kafka producer and consumer to provide features that are not available in Apache Kafka, such as Avro encoding, auditing, and support for large messages. We also have a REST client that allows non-Java applications to produce to and consume from Kafka. It is important to validate the functionality of these client libraries with each new Kafka release. Kafka Monitor allows users to plug in custom client libraries to be used in its end-to-end workflow. We have deployed Kafka Monitor instances that use our wrapper client and REST client in tests, to validate that their performance and functionality meet the requirements for every new release of these client libraries and Apache Kafka.
Certify New Internal Releases of Apache Kafka
We generally run off Apache Kafka trunk and cut a new internal release every quarter or so to pick up new features from Apache Kafka. A significant benefit of running off trunk is that deploying these builds in LinkedIn's production clusters has often surfaced problems in Apache Kafka trunk early enough for them to be fixed before official Apache Kafka releases.
Given the risk of running off Apache Kafka trunk, we take extra care to certify every internal release in a test cluster, which accepts traffic mirrored from production cluster(s), for a few weeks before deploying the new release in production. For example, we do rolling bounces or hard-kill brokers while checking JMX metrics to verify that there is exactly one controller and no offline partitions, in order to validate Kafka's availability under failover scenarios. In the past these steps were manual, which was very time-consuming and did not scale well with the number of events and types of scenarios we want to test. We are switching to Kafka Monitor to automate this process and cover more failover scenarios on a continual basis.
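The controller and offline-partition checks mentioned above correspond to standard Kafka broker JMX metrics. The sketch below shows one way such a check could be automated over a remote JMX connection; the host, port, and reporting style are assumptions for illustration, while the MBean names themselves (ActiveControllerCount and OfflinePartitionsCount under kafka.controller:type=KafkaController) are the ones Kafka brokers expose.

```java
// Minimal sketch: read the two broker health metrics described above over JMX.
// The host, port, and reporting style are illustrative assumptions.
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ControllerHealthCheck {
    public static void main(String[] args) throws Exception {
        // Assumes the broker exposes JMX on port 9999 (e.g. started with JMX_PORT=9999).
        JMXServiceURL url = new JMXServiceURL(
            "service:jmx:rmi:///jndi/rmi://broker-host:9999/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection conn = connector.getMBeanServerConnection();

            int activeControllers = ((Number) conn.getAttribute(
                new ObjectName("kafka.controller:type=KafkaController,name=ActiveControllerCount"),
                "Value")).intValue();
            int offlinePartitions = ((Number) conn.getAttribute(
                new ObjectName("kafka.controller:type=KafkaController,name=OfflinePartitionsCount"),
                "Value")).intValue();

            // Summed across all brokers, ActiveControllerCount should be exactly 1;
            // OfflinePartitionsCount should be 0 on the controller.
            System.out.println("ActiveControllerCount=" + activeControllers
                + " OfflinePartitionsCount=" + offlinePartitions);
        }
    }
}
```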