1. Introduction to Kafka

Kafka has been called a next-generation distributed messaging system. It is an open-source project of the non-profit Apache Software Foundation (ASF), the foundation behind open-source software such as HTTP Server, Hadoop, ActiveMQ, and Tomcat. Similar messaging systems include RabbitMQ, ActiveMQ, and ZeroMQ; Kafka's main advantages are that it is distributed by design and that, combined with ZooKeeper, it supports dynamic scale-out.

Background reading: http://www.infoq.com/cn/articles/apache-kafka

2. Environment Preparation

Configure /etc/hosts on all three servers so they can ping each other by hostname. CentOS is used here.

[root@kafka70 ~]# vim /etc/hosts
[root@kafka70 ~]# cat /etc/hosts
127.0.0.1 localhost
10.0.0.200 debian
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
[root@kafka70 ~]# ping kafka71
PING kafka71 (192.168.47.71) 56(84) bytes of data.
64 bytes from kafka71 (192.168.47.71): icmp_seq=1 ttl=64 time=0.019 ms

3. Download, Install, and Verify ZooKeeper

3.1 ZooKeeper download

http://zookeeper.apache.org/releases.html

3.2 Kafka download

http://kafka.apache.org/downloads.html

3.3 Install ZooKeeper

ZooKeeper cluster property: the cluster as a whole is available only while more than half of its nodes are working. For example, with a 2-node ensemble, the failure of either node makes the cluster unavailable, because the single remaining node is not more than half of 2. With a 3-node ensemble, losing one node leaves two, which is more than half of 3, so the cluster keeps running; losing a second node leaves only one, and the cluster becomes unavailable.
With 4 nodes, losing one is fine, but losing two leaves two, which is not more than half of 4, so the cluster is unavailable. A 3-node and a 4-node ensemble therefore both become unavailable after two failures, which is why ensembles are usually sized with an odd number of nodes.
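The majority rule described above comes down to a one-line calculation. A minimal sketch (illustrative only, not part of ZooKeeper):

```python
# Availability rule: a ZooKeeper ensemble of n nodes stays usable while
# a strict majority of nodes is healthy, so it tolerates (n - 1) // 2
# node failures.
def tolerable_failures(n: int) -> int:
    """Number of node failures an n-node ensemble survives."""
    return (n - 1) // 2

for n in (2, 3, 4, 5):
    print(f"{n}-node ensemble tolerates {tolerable_failures(n)} failure(s)")
```

A 3-node and a 4-node ensemble both tolerate exactly one failure, which is why odd sizes are the norm.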

3.3.1 Download the packages (run on all nodes)

The configuration steps are essentially the same on every node; only the ZooKeeper myid value differs at the end.
mkdir /opt/soft
cd /opt/soft
wget https://mirrors.yangxingzhen.com/jdk/jdk-11.0.1_linux-x64_bin.tar.gz
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz

3.3.2 Install and configure the Java environment

tar xf jdk-11.0.1_linux-x64_bin.tar.gz
mv jdk-11.0.1 /usr/java
cat >>/etc/profile<<'EOF'
export JAVA_HOME=/usr/java
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
EOF
source /etc/profile
java -version

3.3.3 Install ZooKeeper

[root@kafka70 soft]# tar zxvf zookeeper-3.4.9.tar.gz -C /opt/
[root@kafka70 soft]# ln -s /opt/zookeeper-3.4.9/ /opt/zookeeper
[root@kafka70 soft]# mkdir -p /data/zookeeper
[root@kafka70 soft]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
[root@kafka70 soft]# vim /opt/zookeeper/conf/zoo.cfg
[root@kafka70 soft]# grep "^[a-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.47.70:2888:3888
server.2=192.168.47.71:2888:3888
server.3=192.168.47.72:2888:3888
[root@kafka70 soft]# echo "1" > /data/zookeeper/myid
[root@kafka70 soft]# ls -lh /data/zookeeper/
total 4.0K
-rw-r--r-- 1 root root 2 Mar 12 14:17 myid
[root@kafka70 soft]# cat /data/zookeeper/myid
1
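Each node's myid must match the N of its own server.N line in zoo.cfg. As a sketch, the mapping can be derived automatically (myid_for is a hypothetical helper, not a ZooKeeper tool):

```python
# Derive the myid value for a node from the server.N lines in zoo.cfg:
# the myid file must contain the N whose address is the node's own IP.
def myid_for(zoo_cfg: str, my_ip: str) -> int:
    for line in zoo_cfg.splitlines():
        line = line.strip()
        if line.startswith("server."):
            key, value = line.split("=", 1)
            sid = int(key.split(".")[1])   # the N in server.N
            ip = value.split(":")[0]       # address before :2888:3888
            if ip == my_ip:
                return sid
    raise ValueError(f"{my_ip} is not listed in zoo.cfg")

cfg = """\
server.1=192.168.47.70:2888:3888
server.2=192.168.47.71:2888:3888
server.3=192.168.47.72:2888:3888
"""
print(myid_for(cfg, "192.168.47.71"))  # 2
```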

Node 2

[root@kafka71 soft]# echo "2" > /data/zookeeper/myid
[root@kafka71 soft]# cat /data/zookeeper/myid
2

Node 3

[root@kafka72 soft]# echo "3" > /data/zookeeper/myid
[root@kafka72 soft]# cat /data/zookeeper/myid
3

3.3.5 Start ZooKeeper on each node

Node 1
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Node 2
[root@kafka71 soft]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Node 3
[root@kafka72 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

3.3.6 Check the ZooKeeper status on each node

Node 1
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
Node 2
[root@kafka71 soft]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
Node 3
[root@kafka72 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

3.3.7 Basic ZooKeeper CLI commands

Connect to any node and create some data:
Create the data on node 1, then verify it from the other nodes.
[root@kafka70 ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.47.70:2181
Connecting to 192.168.47.70:2181
=================
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.47.70:2181(CONNECTED) 0] create /test "hello"
Created /test
[zk: 192.168.47.70:2181(CONNECTED) 1]
Verify the data on another node
[root@kafka71 ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.47.71:2181
Connecting to 192.168.47.71:2181
===========================
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.47.71:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Mon Mar 12 15:15:52 CST 2018
mZxid = 0x100000002
mtime = Mon Mar 12 15:15:52 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 192.168.47.71:2181(CONNECTED) 1]
Check on node 3
[root@kafka72 ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.47.72:2181
Connecting to 192.168.47.72:2181
===========================
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.47.72:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Mon Mar 12 15:15:52 CST 2018
mZxid = 0x100000002
mtime = Mon Mar 12 15:15:52 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 192.168.47.72:2181(CONNECTED) 1]

3.4 Install and Test Kafka

Node 1 configuration
[root@kafka70 ~]# cd /opt/soft/
[root@kafka70 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka70 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka70 soft]# mkdir /opt/kafka/logs
[root@kafka70 soft]# vim /opt/kafka/config/server.properties
21 broker.id=1
31 listeners=PLAINTEXT://192.168.47.70:9092
60 log.dirs=/opt/kafka/logs
103 log.retention.hours=24
123 zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181
Node 2 configuration
[root@kafka71 ~]# cd /opt/soft/
[root@kafka71 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka71 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka71 soft]# mkdir /opt/kafka/logs
[root@kafka71 soft]# vim /opt/kafka/config/server.properties
21 broker.id=2
31 listeners=PLAINTEXT://192.168.47.71:9092
60 log.dirs=/opt/kafka/logs
103 log.retention.hours=24
123 zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181
Node 3 configuration
[root@kafka72 ~]# cd /opt/soft/
[root@kafka72 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka72 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka72 soft]# mkdir /opt/kafka/logs
[root@kafka72 soft]# vim /opt/kafka/config/server.properties
21 broker.id=3
31 listeners=PLAINTEXT://192.168.47.72:9092
60 log.dirs=/opt/kafka/logs
103 log.retention.hours=24
123 zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181
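The three server.properties files differ only in broker.id and listeners. As a sketch, the per-node settings can be rendered with a small helper (broker_config is hypothetical, purely to show which values vary):

```python
# Render the per-node Kafka settings used above; only broker.id and
# listeners differ between brokers, the rest is shared.
ZK_CONNECT = ",".join(f"192.168.47.{h}:2181" for h in (70, 71, 72))

def broker_config(broker_id: int, ip: str) -> str:
    return "\n".join([
        f"broker.id={broker_id}",
        f"listeners=PLAINTEXT://{ip}:9092",
        "log.dirs=/opt/kafka/logs",
        "log.retention.hours=24",
        f"zookeeper.connect={ZK_CONNECT}",
    ])

print(broker_config(1, "192.168.47.70"))
```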

3.4.1 Start Kafka on each node

Node 1: start in the foreground first so that errors show up directly in the console
[root@kafka70 soft]# /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
===========================
[2018-03-14 11:04:05,397] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-03-14 11:04:05,397] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-03-14 11:04:05,414] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
When the last line shows KafkaServer id and started, startup succeeded; the broker can then be run in the background:
[root@kafka70 logs]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka70 logs]# tail -f /opt/kafka/logs/server.log
=========================
[2018-03-14 11:04:05,414] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
Node 2: this time start directly in the background and tail the log
[root@kafka71 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka71 kafka]# tail -f /opt/kafka/logs/server.log
====================================
[2018-03-14 11:04:13,679] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)
Node 3: likewise, start in the background and tail the log
[root@kafka72 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka72 kafka]# tail -f /opt/kafka/logs/server.log
=======================================
[2018-03-14 11:06:38,274] INFO [KafkaServer id=3] started (kafka.server.KafkaServer)

3.4.2 Verify the processes

Node 1
[root@kafka70 ~]# /opt/jdk/bin/jps
4531 Jps
4334 Kafka
1230 QuorumPeerMain
Node 2
[root@kafka71 kafka]# /opt/jdk/bin/jps
2513 Kafka
2664 Jps
1163 QuorumPeerMain
Node 3
[root@kafka72 kafka]# /opt/jdk/bin/jps
2835 Jps
2728 Kafka
1385 QuorumPeerMain

3.4.3 Create a test topic

Create a topic named kafkatest with 3 partitions and a replication factor of 3; this can be run on any one machine.

[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh  --create  --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic "kafkatest".

3.4.4 Describe the topic

This can be tested from any of the Kafka servers.

Node 1
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
Topic:kafkatest PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kafkatest Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafkatest Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafkatest Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Node 2
[root@kafka71 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
Topic:kafkatest PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kafkatest Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafkatest Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafkatest Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Node 3
[root@kafka72 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
Topic:kafkatest PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kafkatest Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafkatest Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafkatest Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Reading the output: kafkatest has three partitions, numbered 0, 1, and 2. Partition 0's leader is broker 2 (its broker.id), it has three replicas, and all three are in the Isr (in-sync replica) list, meaning each is eligible to be elected leader.
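The Replicas column follows a round-robin pattern across the three brokers. A simplified sketch of such an assignment (Kafka's real algorithm also randomizes the starting broker and staggers follower offsets, so this only illustrates the idea):

```python
# Simplified round-robin replica assignment over brokers [1, 2, 3].
# Kafka's real assignment also randomizes the starting broker and
# staggers follower offsets; this sketch only shows why each broker
# ends up leading one of the three partitions.
def assign_replicas(brokers, partitions, rf, start=0):
    n = len(brokers)
    return {
        p: [brokers[(start + p + r) % n] for r in range(rf)]
        for p in range(partitions)
    }

# With a start offset of 1 this reproduces the layout shown above.
for p, replicas in assign_replicas([1, 2, 3], 3, 3, start=1).items():
    print(f"Partition: {p} Leader: {replicas[0]} Replicas: {replicas}")
```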

3.4.5 Delete the topic

[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --delete --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181  --topic kafkatest
Topic kafkatest is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

3.4.6 Verify the topic was actually deleted

[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181  --topic kafkatest

3.4.7 List all topics

First create two topics:
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic "kafkatest".
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic kafkatest2
Created topic "kafkatest2".
Then list all the topics:
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181
kafkatest
kafkatest2

3.4.8 Send messages with the console producer

Create a topic named messagetest:
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic messagetest
Created topic "messagetest".
Send messages. Note: the port is Kafka's 9092, not ZooKeeper's 2181.
[root@kafka70 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.47.70:9092,192.168.47.71:9092,192.168.47.72:9092 --topic messagetest
>hello
>mymy
>Yo!
>

3.4.9 Consume the messages from the other Kafka servers

[root@kafka70 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka71 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka72 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
mymy
Yo!
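Note that the three consumers print the same messages in different orders. Kafka only guarantees ordering within a partition, and messagetest has three partitions. A small simulation of this (pure Python, no broker required; the message list mirrors the ones sent above):

```python
from itertools import permutations

# Kafka preserves ordering within a partition, not across partitions.
# Spread three messages round-robin over three partitions, as a
# keyless console producer may do.
messages = ["hello", "mymy", "Yo!"]
partitions = {p: [] for p in range(3)}
for i, msg in enumerate(messages):
    partitions[i % 3].append(msg)

# Each partition log holds a single message here, so any interleaving
# preserves per-partition order: all 6 global orders are valid reads.
valid_orders = {tuple(p) for p in permutations(messages)}
print(("mymy", "Yo!", "hello") in valid_orders)  # True
```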

3.5 Troubleshooting

3.5.1 Wrong server entries in zoo.cfg leave ZooKeeper in standalone mode

The server entries in zoo.cfg were wrong (all three lines used server.1), so at startup each node could only find itself:
[root@kafka70 soft]# grep "^[a-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.47.70:2888:3888
server.1=192.168.47.71:2888:3888
server.1=192.168.47.72:2888:3888
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
Fix: correct the server IDs on each node, then restart the ZooKeeper service. Note: this must be done on all nodes!
[root@kafka70 soft]# grep "^[a-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.47.70:2888:3888
server.2=192.168.47.71:2888:3888
server.3=192.168.47.72:2888:3888
[root@kafka70 soft]# /opt/zookeeper/bin/zkServer.sh restart
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kafka71 soft]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
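A quick sanity check for this class of mistake: scan zoo.cfg for duplicated server IDs before starting the ensemble (duplicate_server_ids is a hypothetical helper, not a ZooKeeper tool):

```python
from collections import Counter

# Detect duplicated server IDs in a zoo.cfg; the broken config above
# declared server.1 three times, so every node only saw itself.
def duplicate_server_ids(zoo_cfg: str):
    ids = []
    for line in zoo_cfg.splitlines():
        line = line.strip()
        if line.startswith("server."):
            ids.append(int(line.split("=", 1)[0].split(".")[1]))
    return sorted(sid for sid, count in Counter(ids).items() if count > 1)

bad_cfg = """\
server.1=192.168.47.70:2888:3888
server.1=192.168.47.71:2888:3888
server.1=192.168.47.72:2888:3888
"""
print(duplicate_server_ids(bad_cfg))  # [1]
```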

3.5.2 Sending messages fails

[root@kafka70 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list  192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic  messagetest
>hellp mymy
meme
[2018-03-14 11:47:31,269] ERROR Error when sending message to topic messagetest with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>hello
[2018-03-14 11:48:31,277] ERROR Error when sending message to topic messagetest with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>
Cause: the wrong port was used; it should be Kafka's 9092, not ZooKeeper's 2181.
Fix: use the correct port.
[root@kafka70 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.47.70:9092,192.168.47.71:9092,192.168.47.72:9092 --topic messagetest
>hello
>mymy
>Yo!
>

3.5.3 Consuming messages fails

[root@kafka71 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
[2018-03-14 12:02:01,648] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)
java.net.UnknownHostException: kafka71: kafka71: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:135)
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:159)
at kafka.consumer.Consumer$.create(ConsumerConnector.scala:112)
at kafka.consumer.OldConsumer.<init>(BaseConsumer.scala:130)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: java.net.UnknownHostException: kafka71: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
... 7 more
Cause: the hostname does not match the name resolved via /etc/hosts.
[root@kafka71 ~]# cat /etc/hostname
kafka71
[root@kafka71 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
Fix: make the hostname and its /etc/hosts entry consistent on every host, then consume the messages again.
Fix the hostname on all hosts:
[root@kafka70 ~]# hostname
kafka70
[root@kafka70 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
[root@kafka71 ~]# hostname
kafka71
[root@kafka71 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
[root@kafka72 ~]# hostname
kafka72
[root@kafka72 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
Consume the messages again
[root@kafka71 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka72 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
mymy
Yo!
