Hadoop Ecosystem: Fully Distributed Kafka Deployment
Author: 尹正杰 (Yin Zhengjie)
Copyright notice: This is an original work. Reproduction without permission is prohibited and will be pursued legally.
This post walks through building a fully distributed Kafka cluster. It builds directly on the Kafka local-mode deployment described earlier (https://www.cnblogs.com/yinzhengjie/p/9209058.html).
I. Experiment environment
The experiment uses four servers in total (a quick ZooKeeper status check is sketched after this list):
1>. Management server (s101)
2>. Kafka node two (s102, ZooKeeper service already deployed)
3>. Kafka node three (s103, ZooKeeper service already deployed)
4>. Kafka node four (s104, ZooKeeper service already deployed)
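Before touching Kafka it is worth confirming that ZooKeeper is actually running on the three broker nodes. Below is a minimal sketch of such a check run from s101; it assumes passwordless SSH to s102-s104 and that zkServer.sh is reachable once /etc/profile is sourced (neither assumption is shown in this post's own steps):

#!/bin/bash
# Hypothetical pre-flight check, run from the management node s101.
# Assumes passwordless SSH and that zkServer.sh is on the PATH after sourcing /etc/profile.
for i in 102 103 104
do
    echo "========== s$i zookeeper status =========="
    ssh s$i "source /etc/profile ; zkServer.sh status"
done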
II. Fully distributed Kafka deployment
1>. Copy the extracted Kafka installation directory to the other nodes (s102, s103, s104)
[yinzhengjie@s101 data]$ more `which xrsync.sh`
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed an argument
if [ $# -lt 1 ];then
        echo "Please provide an argument";
        exit
fi

#Get the file path
file=$@

#Get the base name
filename=`basename $file`

#Get the parent directory
dirpath=`dirname $file`

#Get the absolute path
cd $dirpath
fullpath=`pwd -P`

#Sync the file to the other nodes (loop bounds 102-105 restored from the output below)
for (( i=102;i<=105;i++ ))
do
        #Turn the terminal green
        tput setaf 2
        echo =========== s$i %file ===========
        #Turn the terminal back to its normal grey-white color
        tput setaf 7
        #Run the remote copy
        rsync -lr $filename `whoami`@s$i:$fullpath
        #Check whether the command succeeded
        if [ $? -eq 0 ];then
                echo "Command executed successfully"
        fi
done
[yinzhengjie@s101 data]$
[yinzhengjie@s101 data]$ xrsync.sh /soft/kafka
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 data]$
[yinzhengjie@s101 data]$ xrsync.sh /soft/kafka_2.11-1.1.0/
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[yinzhengjie@s101 data]$
2>. Distribute the environment variables
[yinzhengjie@s101 data]$ su
Password:
[root@s101 data]# xrsync.sh /etc/profile
=========== s102 %file ===========
Command executed successfully
=========== s103 %file ===========
Command executed successfully
=========== s104 %file ===========
Command executed successfully
=========== s105 %file ===========
Command executed successfully
[root@s101 data]# exit
exit
[yinzhengjie@s101 data]$
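The post does not show what was appended to /etc/profile. Assuming the layout used throughout this series, with /soft/kafka pointing at the extracted kafka_2.11-1.1.0 directory, the distributed entries would look roughly like this:

# Assumed tail of /etc/profile (not shown in the original post).
# /soft/kafka is taken to be a link to /soft/kafka_2.11-1.1.0.
export KAFKA_HOME=/soft/kafka
export PATH=$PATH:$KAFKA_HOME/bin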
3>. Modify the Kafka configuration file on each zk node
[yinzhengjie@s102 ~]$ grep broker.id /soft/kafka/config/server.properties
broker.id=102
[yinzhengjie@s102 ~]$ grep listeners /soft/kafka/config/server.properties | grep -v ^#
listeners=PLAINTEXT://s102:9092
[yinzhengjie@s102 ~]$
Modify the s102 configuration file (/soft/kafka/config/server.properties): each broker needs a unique broker.id (102 here, matching the host number) and a listeners entry pointing at its own address.
[yinzhengjie@s103 ~]$ grep broker.id /soft/kafka/config/server.properties
broker.id=103
[yinzhengjie@s103 ~]$ grep listeners /soft/kafka/config/server.properties | grep -v ^#
listeners=PLAINTEXT://s103:9092
[yinzhengjie@s103 ~]$
Modify the s103 configuration file (/soft/kafka/config/server.properties) in the same way (broker.id=103).
[yinzhengjie@s104 ~]$ grep broker.id /soft/kafka/config/server.properties
broker.id=104
[yinzhengjie@s104 ~]$ grep listeners /soft/kafka/config/server.properties | grep -v ^#
listeners=PLAINTEXT://s104:9092
[yinzhengjie@s104 ~]$
Modify the s104 configuration file (/soft/kafka/config/server.properties) in the same way (broker.id=104). A scripted version of these per-node edits is sketched below.
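The three brokers differ only in broker.id and listeners. The following is a sketch of doing those edits with sed instead of by hand; it assumes the stock server.properties shipped with Kafka 2.11-1.1.0, and the zookeeper.connect value pointing at the s102-s104 ensemble is an assumption (the original post does not show that setting):

# Run on each broker node (s102, s103, s104); the host number doubles as the broker id.
HOST=$(hostname)   # e.g. s102
ID=${HOST#s}       # e.g. 102
CFG=/soft/kafka/config/server.properties
sed -i "s|^broker.id=.*|broker.id=$ID|" $CFG
sed -i "s|^#\?listeners=.*|listeners=PLAINTEXT://$HOST:9092|" $CFG
# zookeeper.connect is assumed here; the shipped default is localhost:2181.
sed -i "s|^zookeeper.connect=.*|zookeeper.connect=s102:2181,s103:2181,s104:2181|" $CFG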
4>. Open a ZooKeeper client and delete the old Kafka znodes, so that stale registration data left over from the earlier local-mode deployment does not interfere
[yinzhengjie@s104 ~]$ zkCli.sh
Connecting to localhost:2181
(client startup log trimmed: ZooKeeper 3.4.12, JDK 1.8.0_131, host s104)
2018-06-21 01:23:54,942 [myid:] - INFO  [main:ZooKeeper@441] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@277050dc
Welcome to ZooKeeper!
JLine support is enabled
2018-06-21 01:23:54,973 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1028] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2018-06-21 01:23:55,031 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@878] - Socket connection established to localhost/127.0.0.1:2181, initiating session
2018-06-21 01:23:55,049 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1302] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x6800003ae7350004, negotiated timeout = 30000

WATCHER::

WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[a, cluster, controller, brokers, zookeeper, yarn-leader-election, hadoop-ha, admin, isr_change_notification, log_dir_event_notification, controller_epoch, consumers, latest_producer_id_block, config, hbase]
[zk: localhost:2181(CONNECTED) 1] rmr /controller /brokers /admin /controller_epoch /consumers /latest_producer_id_block /config /isr_change_notification /cluster /log_dir_event_notification
[zk: localhost:2181(CONNECTED) 2]
5>. Start Kafka on s102 through s104
[yinzhengjie@s102 ~]$ kafka-server-start.sh -daemon /soft/kafka/config/server.properties
[yinzhengjie@s102 ~]$
[yinzhengjie@s103 ~]$ kafka-server-start.sh -daemon /soft/kafka/config/server.properties
[yinzhengjie@s103 ~]$
[yinzhengjie@s104 ~]$ kafka-server-start.sh -daemon /soft/kafka/config/server.properties
[yinzhengjie@s104 ~]$
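Whether all three brokers registered correctly can be checked from ZooKeeper; a quick sketch (the expected ids 102-104 assume the broker.id values used above):

# Run on any zk node; each healthy broker appears as an ephemeral znode under /brokers/ids.
zkCli.sh -server localhost:2181 ls /brokers/ids
# Expected output along the lines of: [102, 103, 104]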
6>. Create a topic
List the topics that already exist:
[yinzhengjie@s104 ~]$ kafka-topics.sh --zookeeper s102:2181 --list
yinzhengjie
[yinzhengjie@s104 ~]$
Create a topic:
[yinzhengjie@s104 ~]$ kafka-topics.sh --zookeeper s104:2181 --create --partitions 2 --replication-factor 1 --topic yzj
Created topic "yzj".
[yinzhengjie@s104 ~]$
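Once the topic exists, its partition-to-broker assignment can be inspected; a quick sketch using the same ZooKeeper address as above:

# Show leader, replicas and ISR for every partition of the new topic.
kafka-topics.sh --zookeeper s104:2181 --describe --topic yzj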
7>. Start a console producer on any zk node, pointing it at one of the brokers (for example s102)
[yinzhengjie@s102 ~]$ kafka-server-start.sh -daemon /soft/kafka/config/server.properties
[yinzhengjie@s102 ~]$ kafka-console-producer.sh --broker-list s102:9092 --topic yzj
>尹正杰到此一游!
>
8>. Start a console consumer on any zk node (for example, on s103)
[yinzhengjie@s103 ~]$ kafka-console-consumer.sh --zookeeper s102:2181 --topic yzj --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
尹正杰到此一游!
III. Writing a Kafka start/stop script ("/usr/local/bin/xkafka.sh"; remember to make it executable, and passwordless SSH key pairs must already be configured!)
[yinzhengjie@s101 ~]$ more /usr/local/bin/xkafka.sh
#!/bin/bash
#@author :yinzhengjie
#blog:http://www.cnblogs.com/yinzhengjie
#EMAIL:y1053419035@qq.com

#Check whether the user passed an argument
if [ $# -ne 1 ];then
    echo "Invalid argument. Usage: $0 {start|stop}"
    exit
fi

#Get the command entered by the user
cmd=$1

#Loop over the three Kafka brokers (s102-s104, matching the cluster layout above)
for (( i=102 ; i<=104 ; i++ )) ; do
    tput setaf 2
    echo ========== s$i $cmd ================
    tput setaf 7
    case $cmd in
        start)
            ssh s$i "source /etc/profile ; kafka-server-start.sh -daemon /soft/kafka/config/server.properties"
            echo s$i "service started"
            ;;
        stop)
            ssh s$i "source /etc/profile ; kafka-server-stop.sh"
            echo s$i "service stopped"
            ;;
        *)
            echo "Invalid argument. Usage: $0 {start|stop}"
            exit
            ;;
    esac
done
[yinzhengjie@s101 ~]$ sudo chmod a+x /usr/local/bin/xkafka.sh
[yinzhengjie@s101 ~]$ ll /usr/local/bin/xkafka.sh
-rwxr-xr-x 1 root root ... Jun ... /usr/local/bin/xkafka.sh
[yinzhengjie@s101 ~]$
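With the script in place and executable, the whole cluster can be driven from s101, for example:

# Start all three brokers from the management node, then stop them again.
xkafka.sh start
xkafka.sh stop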