Kafka -- scripts to automatically start ZooKeeper & Kafka
1) First, configure passwordless SSH login. Here I use the kafka host (192.168.56.151) as the machine that stores and runs the startup scripts.
[root@kafaka .ssh]# pwd    # generate an SSH key
/root/.ssh
[root@kafaka .ssh]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
a7:82:b2:ce:c2:e0:21:7d:4e:63:7c:03:d5:3c:98:25 root@kafaka
The key's randomart image is:
+--[ RSA 2048]----+
| E*. |
| +.+ |
| . . |
| . |
| . . . S . |
|o.. *.o o |
|= o=.o... |
|.+ o. . |
| o+ |
+-----------------+
[root@kafaka .ssh]# ls -l
total 8
-rw------- 1 root root 1675 Jul 13 20:12 id_rsa
-rw-r--r-- 1 root root 393 Jul 13 20:12 id_rsa.pub    # copy the public key to the other two machines
[root@kafaka .ssh]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.56.152
The authenticity of host '192.168.56.152 (192.168.56.152)' can't be established.
ECDSA key fingerprint is e6:c4:48:fa:0d:76:3e:2c:3b:60:e7:61:90:ad:9a:ee.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.56.152's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.56.152'"
and check to make sure that only the key(s) you wanted were added.

[root@kafaka .ssh]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.56.153
The authenticity of host '192.168.56.153 (192.168.56.153)' can't be established.
ECDSA key fingerprint is e6:c4:48:fa:0d:76:3e:2c:3b:60:e7:61:90:ad:9a:ee.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.56.153's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@192.168.56.153'"
and check to make sure that only the key(s) you wanted were added.

# Fix the permissions of the ~/.ssh directory and files on kafka2 & kafka3.
# If they are not fixed, passwordless login may fail.
# fix the permissions on kafka2
[root@kafaka2 .ssh]# chmod 700 ~/.ssh
[root@kafaka2 .ssh]# chmod 600 ~/.ssh/authorized_keys
# fix the permissions on kafka3
[root@kafaka3 .ssh]# chmod 700 ~/.ssh
[root@kafaka3 .ssh]# chmod 600 ~/.ssh/authorized_keys

# test passwordless login
[root@kafaka .ssh]# ssh 192.168.56.152
Last login: Tue Jul 13 20:04:08 2021 from 192.168.56.1
[root@kafaka2 ~]# exit
logout
Connection to 192.168.56.152 closed.
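With more target nodes, the copy step itself could also be scripted. A minimal sketch, using the same two target IPs as above (each node still prompts for its root password once):

for i in 192.168.56.152 192.168.56.153
do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@$i
done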
2) The startup scripts
2.1) ZooKeeper start/stop script
[root@kafaka ~]# cat zooman.sh
#! /bin/bash
case $1 in
"start"){
echo " --------Start 192.168.56.151 Zookeeper-------"
zkServer.sh start
for i in 192.168.56.152 192.168.56.153
do
echo " --------Start $i Zookeeper-------"
ssh $i "source /etc/profile; zkServer.sh start"
done
};;
"stop"){
echo " --------Stop 192.168.56.151 Zookeeper-------"
zkServer.sh stop
for i in 192.168.56.152 192.168.56.153
do
echo " --------Stop $i Zookeeper-------"
ssh $i "source /etc/profile; zkServer.sh stop"
done
};;
"status"){
echo " --------192.168.56.151 Zookeeper Status-------"
zkServer.sh status
for i in 192.168.56.152 192.168.56.153
do
echo " --------$i Zookeeper status-------"
ssh $i "source /etc/profile; zkServer.sh status"
done
};;
esac
Start ZooKeeper with zooman.sh start; once everything is up, zooman.sh status shows the state of each node.
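One small step not shown in the capture: the script file has to be made executable before the first run, e.g.:

chmod +x zooman.sh
./zooman.sh start      # starts ZooKeeper on all three nodes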
[root@kafaka ~]# ./zooman.sh status
--------192.168.56.151 Zookeeper Status-------
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.7.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
--------192.168.56.152 Zookeeper status-------
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.7.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
--------192.168.56.153 Zookeeper status-------
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /usr/local/apache-zookeeper-3.7.0-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower
2.2) Kafka start/stop script
[root@kafaka ~]# cat kfman.sh
#! /bin/bash
case $1 in
"start"){
echo " --------Start 192.168.56.151 Kafka Broker $j-------"
# 用于KafkaManager监控
export JMX_PORT=9988 && /usr/local/kafka_2.13-2.7.0/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-2.7.0/config/server.properties for i in 192.168.56.152 192.168.56.153
do
echo " --------Start $i Kafka Broker $j-------"
# 用于KafkaManager监控
ssh $i "export JMX_PORT=9988 && /usr/local/kafka_2.13-2.7.0/bin/kafka-server-start.sh -daemon /usr/local/kafka_2.13-2.7.0/config/server.properties "
done
};;
"stop"){
echo " --------Stop 192.168.56.151 Kafka-------"
/usr/local/kafka_2.13-2.7.0/bin/kafka-server-stop.sh stop
for i in 192.168.56.152 192.168.56.153
do
echo " --------Stop $i Kafka-------"
ssh $i "/usr/local/kafka_2.13-2.7.0/bin/kafka-server-stop.sh stop"
done
};;
esac
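The Kafka script has no status case. A minimal sketch of one, which would be added inside the case statement of kfman.sh before the esac; it assumes jps (from the JDK) is on the PATH of every node, and relies on the fact that a running broker JVM is listed by jps simply as "Kafka":

"status"){
    echo " --------192.168.56.151 Kafka status-------"
    jps | grep -w Kafka || echo "Kafka is not running"
    for i in 192.168.56.152 192.168.56.153
    do
        echo " --------$i Kafka status-------"
        # a running broker JVM shows up in jps as "Kafka"
        ssh $i "source /etc/profile; jps | grep -w Kafka || echo 'Kafka is not running'"
    done
};;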
3) Once everything is up, the registered brokers can be checked with zkCli.sh as follows.
(Note: because the brokers are started in daemon mode, each physical server runs only a single instance here, so I started just three brokers.
Whether more than one instance per machine is possible is something I have not fully worked out; if you know, please leave a reply and let me know.)
[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids
[0, 2, 4]
[zk: localhost:2181(CONNECTED) 3]
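In the same zkCli.sh session, the registration details of an individual broker (listener endpoint, JMX port, and so on) can also be inspected; for example, for the broker with id 0:

get /brokers/ids/0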
One final note: to save a machine, I keep the scripts on one of the cluster nodes themselves.
If they were placed on a separate machine, the scripts could be written more concisely, with no need to handle the local node and the remote nodes separately.
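For reference, a minimal sketch of that more concise variant, combining both scripts into one and assuming it runs on a separate machine with passwordless SSH to all three nodes (paths and IPs as above):

#! /bin/bash
HOSTS="192.168.56.151 192.168.56.152 192.168.56.153"
KAFKA_HOME=/usr/local/kafka_2.13-2.7.0
case $1 in
"start"){
    # bring up the ZooKeeper ensemble first, then the brokers
    for i in $HOSTS
    do
        echo " --------Start $i Zookeeper-------"
        ssh $i "source /etc/profile; zkServer.sh start"
    done
    for i in $HOSTS
    do
        echo " --------Start $i Kafka-------"
        # JMX_PORT is exported for Kafka Manager monitoring, as in the original script
        ssh $i "source /etc/profile; export JMX_PORT=9988 && $KAFKA_HOME/bin/kafka-server-start.sh -daemon $KAFKA_HOME/config/server.properties"
    done
};;
"stop"){
    # stop the brokers first, then the ZooKeeper ensemble
    for i in $HOSTS
    do
        echo " --------Stop $i Kafka-------"
        ssh $i "$KAFKA_HOME/bin/kafka-server-stop.sh"
    done
    for i in $HOSTS
    do
        echo " --------Stop $i Zookeeper-------"
        ssh $i "source /etc/profile; zkServer.sh stop"
    done
};;
esac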