hadoop kafka install multi-broker (7)
A multi-broker setup is Kafka's clustering model: several brokers cooperate to host and replicate the same topics.
First we make a config file for each of the brokers (on Windows use the copy command instead):
> cp config/server.properties config/server-1.properties
> cp config/server.properties config/server-2.properties
Now edit these new files and set the following properties:
config/server-1.properties:
broker.id=1
listeners=PLAINTEXT://:9093
log.dirs=/tmp/kafka-logs-1
config/server-2.properties:
broker.id=2
listeners=PLAINTEXT://:9094
log.dirs=/tmp/kafka-logs-2
The broker.id property is the unique and permanent name of each node in the cluster. We have to override the port and log directory only because we are running these all on the same machine and we want to keep the brokers from all trying to register on the same port or overwrite each other's data.
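If you prefer not to edit the two files by hand, the same three overrides can be applied with a small sed loop. This is just a convenience sketch, not part of the original guide; it assumes GNU sed (as on Ubuntu) and the stock server.properties layout, where the listeners line ships commented out:
for i in 1 2; do
  cp config/server.properties config/server-$i.properties
  # give each broker a unique id, port (9093/9094) and log directory
  sed -i -e "s|^broker.id=.*|broker.id=$i|" \
         -e "s|^#\?listeners=.*|listeners=PLAINTEXT://:$((9092 + i))|" \
         -e "s|^log.dirs=.*|log.dirs=/tmp/kafka-logs-$i|" \
         config/server-$i.properties
done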
We already have Zookeeper and our single node started, so we just need to start the two new nodes:
> bin/kafka-server-start.sh config/server-1.properties &
...
> bin/kafka-server-start.sh config/server-2.properties &
...
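Before creating a replicated topic it is worth a quick sanity check that all three brokers are actually up. One way (assuming a JDK is installed so jps is on the PATH) is to list the running Kafka JVMs; three lines should come back, one per broker:
> jps -l | grep kafka.Kafka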
Now create a new topic with a replication factor of three:
> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic
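As a version note: this walkthrough follows the quickstart for an older Kafka release, where the topic tools go through ZooKeeper. On Kafka 2.2 and later the same topic can be created by pointing the tool at a broker instead, and on Kafka 3.x this is the only option since the --zookeeper flag was removed:
> bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 3 --partitions 1 --topic my-replicated-topic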
Okay, but now that we have a cluster, how can we know which broker is doing what? To see that, run the "describe topics" command:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Here is an explanation of the output. The first line gives a summary of all the partitions; each additional line gives information about one partition. Since we have only one partition for this topic there is only one line.
"leader" is the node responsible for all reads and writes for the given partition. Each node will be the leader for a randomly selected portion of the partitions.
"replicas" is the list of nodes that replicate the log for this partition regardless of whether they are the leader or even if they are currently alive.
"isr" is the set of "in-sync" replicas. This is the subset of the replicas list that is currently alive and caught-up to the leader.
Note that in my example node 1 is the leader for the only partition of the topic.
We can run the same command on the original topic we created to see where it is:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
So there is no surprise there—the original topic has no replicas and is on server 0, the only server in our cluster when we created it.
Let's publish a few messages to our new topic:
> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
...
my test message 1
my test message 2
^C
Now let's consume these messages:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
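The producer and consumer above both bootstrap through the original broker on localhost:9092, but any broker in the cluster works as the bootstrap server. For example (an extra check, not one of the original steps), the same messages can be read back through broker 2 on port 9094:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9094 --from-beginning --topic my-replicated-topic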
Now let's test out fault-tolerance. Broker 1 was acting as the leader so let's kill it:
> ps aux | grep server-1.properties
7564 ttys002 0:15.91 /System/Library/Frameworks/JavaVM.framework/Versions/1.8/Home/bin/java...
> kill -9 7564
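An equivalent shortcut, if pkill is available on the machine, matches against the full command line so that only the broker started with server-1.properties is hit:
> pkill -9 -f server-1.properties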
Leadership has switched to one of the followers and node 1 is no longer in the in-sync replica set:
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic PartitionCount:1 ReplicationFactor:3 Configs:
Topic: my-replicated-topic Partition: 0 Leader: 2 Replicas: 1,2,0 Isr: 2,0
But the messages are still available for consumption even though the leader that took the writes originally is down:
> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
...
my test message 1
my test message 2
^C
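As an optional follow-up (not part of the original quickstart steps), broker 1 can be brought back; once it has caught up with the current leader it should reappear in the Isr column, while node 2 typically remains leader at first (a later preferred-leader election may hand leadership back to node 1):
> bin/kafka-server-start.sh config/server-1.properties &
> bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic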