Common Kafka operations commands
List all topics:
bin/kafka-topics.sh --zookeeper localhost:2181 --list
Note: this simply reads the child nodes of /brokers/topics in ZooKeeper and prints them.
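For reference, the same information can be read straight from ZooKeeper with its own CLI (a minimal sketch; zkCli.sh ships with the ZooKeeper installation, not with Kafka, so run it from the ZooKeeper directory):
bin/zkCli.sh -server localhost:2181
ls /brokers/topics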
Create a topic:
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic order_ledger_check_complete_test --partitions 4 --replication-factor 3
Note: in production we disable automatic topic creation (auto.create.topics.enable=false) and create topics by hand. --partitions and --replication-factor are both required: the former is a key parameter for consumer parallelism, the latter greatly improves topic availability. The replication factor defaults to 1, i.e. no redundancy, and must not exceed the number of brokers or the command fails. Topic-level configuration can also be supplied; such settings override the broker defaults and are stored in the ZooKeeper node /config/topics/[topic_name]. Related options: --alter, --config, --deleteConfig.
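As an illustration of a topic-level override at creation time (a sketch: retention.ms is just one example setting, substitute whatever your topic actually needs):
bin/kafka-topics.sh --zookeeper localhost:2181 --create --topic order_ledger_check_complete_test --partitions 4 --replication-factor 3 --config retention.ms=86400000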
Delete a topic:
bin/kafka-topics.sh --zookeeper localhost:2181 --topic payment_completed --delete
Note: deleting topics is generally not recommended on versions before 0.8.2.1 because of assorted bugs; current versions are more stable. The broker setting delete.topic.enable=true must also be switched on. The delete command itself only updates a ZooKeeper node; the actual deletion is carried out asynchronously by a dedicated thread (TopicDeletionManager).
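To check whether a pending delete has gone through, list the topics again; on 0.8.x the listing appends a "marked for deletion" note while the asynchronous delete is still in progress (behaviour may differ slightly between versions):
bin/kafka-topics.sh --zookeeper localhost:2181 --list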
Increase partitions:
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic topic_test --partitions 10
Note: the partition count can only be increased, never decreased. If the existing distribution strategy is hash-based, the key-to-partition mapping will change. Neither producers (which by default refresh their locally cached metadata every 10 minutes) nor consumers need a restart for the change to take effect.
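To confirm the new partition count and see where the added partitions were placed:
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic topic_test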
Add a broker:
Copy the installation package to the new server, set broker.id to a value that does not clash with any existing broker, and create the corresponding log and data directories. Note that existing partitions stay where they are; only newly created topics may be assigned to the new machine. To keep the cluster reasonably balanced, existing data has to be migrated by hand with the partition reassignment tool (steps 1-3 below), preferably moving non-leader partitions.
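Before the reassignment steps below, a minimal sketch of what must differ in the new broker's server.properties (the id and paths here are placeholders, pick values that fit your cluster):
# must be unique across the cluster
broker.id=4
# data directory, create it beforehand (can be a comma-separated list)
log.dirs=/data/kafka-logs
# same ZooKeeper ensemble as the existing brokers
zookeeper.connect=localhost:2181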
1.--generate
The proposed assignment can also be adjusted by hand before it is applied.
[hadoop@i-o1oh kafka_2.10-0.8.2.1]$ cat topics-to-move.json
{"version":1,
"partitions":[{"topic":"production_process_flow_test"},
{"topic":"production_process_flow_dev"}
}
[hadoop@i-o1oh kafka_2.10-0.8.2.1]$ bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file topics-to-move.json --broker-list "1,2" --generate
Current partition replica assignment
{"version":1,"partitions":[{"topic":"production_process_flow_test","partition":2,"replicas":[1,3]},{"topic":"production_process_flow_dev","partition":2,"replicas":[1,3]},{"topic":"production_process_flow_dev","partition":1,"replicas":[3,2]},{"topic":"production_process_flow_dev","partition":3,"replicas":[2,1]},{"topic":"production_process_flow_test","partition":3,"replicas":[2,1]},{"topic":"production_process_flow_test","partition":1,"replicas":[3,2]},{"topic":"production_process_flow_test","partition":0,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":0,"replicas":[2,1]}]}
Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"production_process_flow_test","partition":2,"replicas":[1,2]},{"topic":"production_process_flow_dev","partition":2,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":1,"replicas":[1,2]},{"topic":"production_process_flow_test","partition":3,"replicas":[2,1]},{"topic":"production_process_flow_test","partition":1,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":3,"replicas":[1,2]},{"topic":"production_process_flow_test","partition":0,"replicas":[1,2]},{"topic":"production_process_flow_dev","partition":0,"replicas":[2,1]}]}
2.--execute
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file a.json --execute
Current partition replica assignment
{"version":1,"partitions":[{"topic":"production_process_flow_test","partition":2,"replicas":[1,3]},{"topic":"production_process_flow_dev","partition":2,"replicas":[1,3]},{"topic":"production_process_flow_dev","partition":1,"replicas":[3,2]},{"topic":"production_process_flow_dev","partition":3,"replicas":[2,1]},{"topic":"production_process_flow_test","partition":3,"replicas":[2,1]},{"topic":"production_process_flow_test","partition":1,"replicas":[3,2]},{"topic":"production_process_flow_test","partition":0,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":0,"replicas":[2,1]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions {"version":1,"partitions":[{"topic":"production_process_flow_test","partition":1,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":3,"replicas":[1,2]},{"topic":"production_process_flow_test","partition":3,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":0,"replicas":[2,1]},{"topic":"production_process_flow_dev","partition":1,"replicas":[1,2]},{"topic":"production_process_flow_test","partition":0,"replicas":[1,2]},{"topic":"production_process_flow_test","partition":2,"replicas":[1,2]},{"topic":"production_process_flow_dev","partition":2,"replicas":[2,1]}]}
3.--verify
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file a.json --verify
Status of partition reassignment:
Reassignment of partition [production_process_flow_test,1] completed successfully
Reassignment of partition [production_process_flow_dev,3] completed successfully
Reassignment of partition [production_process_flow_test,3] completed successfully
Reassignment of partition [production_process_flow_dev,0] completed successfully
Reassignment of partition [production_process_flow_dev,1] completed successfully
Reassignment of partition [production_process_flow_test,0] completed successfully
Reassignment of partition [production_process_flow_test,2] completed successfully
Reassignment of partition [production_process_flow_dev,2] completed successfully
Decommissioning a machine falls into two cases:
1. Data intact: copy the data directory to the replacement machine, keep the Kafka installation and configuration unchanged, and simply start the broker.
2. Data incomplete: this case is more work. Describe all topics, and for every replica found on the machine being taken offline, migrate it with the partition reassignment tool (see the sketch after this list).
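A rough way to list every partition that still has a replica on the broker being retired (a sketch; 3 stands in for the retiring broker id, and the grep also matches a partition number 3, so verify the hits by eye):
bin/kafka-topics.sh --zookeeper localhost:2181 --describe | grep -w 3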
Increase or decrease replicas:
Simply edit the --reassignment-json-file to add or remove broker ids in the replica lists; the rest is handled by the partition reassignment tool's --execute and --verify steps.
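For example, to grow a partition from two replicas to three, the reassignment file would look roughly like this (topic, partition and broker ids are illustrative), after which the usual --execute and --verify steps apply:
{"version":1,"partitions":[{"topic":"production_process_flow_test","partition":0,"replicas":[1,2,3]}]}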
Consume messages:
watch --interval=2 bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic payment_completed_dev --from-beginning
Check consumer offsets:
bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper localhost:2181 --group group_order_ledger --topic payment_completed_dev
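The checker prints one row per partition; from memory of the 0.8 tool the columns are roughly the following (names may vary by version), where Lag = logSize - Offset is the backlog the group still has to consume:
Group    Topic    Pid    Offset    logSize    Lag    Owner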
Producer performance test:
bin/kafka-producer-perf-test.sh --broker-list kafka1.weibo.com:9092 --batch-size 1 --message-size 1024 --messages 10000 --sync --topics topic_test