1. Introduction to Kafka

Kafka, often described as a next-generation distributed messaging system, is an open-source project of the Apache Software Foundation (ASF), the non-profit foundation that also hosts open-source software such as HTTP Server, Hadoop, ActiveMQ, and Tomcat. Comparable messaging systems include RabbitMQ, ActiveMQ, and ZeroMQ; Kafka's main advantages are that it is distributed by design and, combined with ZooKeeper, supports dynamic scale-out of the cluster.

Background reading: http://www.infoq.com/cn/articles/apache-kafka

2. Environment Preparation

Three servers are used, all running CentOS here. Configure /etc/hosts on each of them so that they can ping one another by hostname.

[root@kafka70 ~]# vim /etc/hosts
[root@kafka70 ~]# cat /etc/hosts
127.0.0.1 localhost
10.0.0.200 debian
# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
[root@kafka70 ~]# ping kafka71
PING kafka71 (192.168.47.71) 56(84) bytes of data.
64 bytes from kafka71 (192.168.47.71): icmp_seq=1 ttl=64 time=0.019 ms
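
To check all three names in one go from each host, a small loop can be used (a convenience sketch; it relies only on the hosts entries above):
for h in kafka70 kafka71 kafka72; do
  ping -c 1 $h > /dev/null && echo "$h OK" || echo "$h unreachable"
done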

3. Download, Install, and Verify ZooKeeper and Kafka

3.1 ZooKeeper download

http://zookeeper.apache.org/releases.html

3.2 Kafka download

http://kafka.apache.org/downloads.html

3.3 Install ZooKeeper

  ZooKeeper ensemble behavior: the ensemble as a whole is available only while more than half of its members are working. With a 2-node ensemble, losing either node leaves a single survivor, which is not more than half of two, so the ensemble becomes unavailable. With 3 nodes, losing one still leaves two, more than half of three, so the ensemble keeps running; losing a second node leaves only one and the ensemble goes down.
  With 4 nodes, losing one is fine, but losing two leaves only two, which is not more than half of four, so the ensemble fails. A 3-node and a 4-node ensemble therefore both survive at most one failure, which is why ensembles are normally built with an odd number of nodes.
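
Put differently, an ensemble of n nodes needs a quorum of n/2 + 1 (integer division) and therefore tolerates (n - 1)/2 failures. A one-line shell loop (nothing ZooKeeper-specific, just the arithmetic) shows why even-sized ensembles buy nothing:
for n in 2 3 4 5 6 7; do
  echo "$n nodes -> quorum $((n / 2 + 1)), tolerates $(( (n - 1) / 2 )) failure(s)"
done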

3.3.1 Download the packages (run on all nodes)

The configuration steps are basically identical on every node; only the ZooKeeper myid set at the end differs.
mkdir /opt/soft
cd /opt/soft
wget https://mirrors.yangxingzhen.com/jdk/jdk-11.0.1_linux-x64_bin.tar.gz
wget http://archive.apache.org/dist/zookeeper/zookeeper-3.4.9/zookeeper-3.4.9.tar.gz
wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz

3.3.2 Install and configure the Java environment

tar xf jdk-11.0.1_linux-x64_bin.tar.gz
mv jdk-11.0.1 /usr/java
cat >>/etc/profile<<'EOF'
export JAVA_HOME=/usr/java
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
export PATH=$JAVA_HOME/bin:$PATH
EOF
source /etc/profile
java -version
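
A quick sanity check that the profile changes took effect in the current shell (assuming the paths used above):
echo $JAVA_HOME      # should print /usr/java
which java           # should print /usr/java/bin/java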

3.3.3 Install ZooKeeper

[root@kafka70 soft]# tar zxvf zookeeper-3.4.9.tar.gz -C /opt/
[root@kafka70 soft]# ln -s /opt/zookeeper-3.4.9/ /opt/zookeeper
[root@kafka70 soft]# mkdir -p /data/zookeeper
[root@kafka70 soft]# cp /opt/zookeeper/conf/zoo_sample.cfg /opt/zookeeper/conf/zoo.cfg
[root@kafka70 soft]# vim /opt/zookeeper/conf/zoo.cfg
[root@kafka70 soft]# grep "^[a-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.47.70:2888:3888
server.2=192.168.47.71:2888:3888
server.3=192.168.47.72:2888:3888
[root@kafka70 soft]# echo "1" > /data/zookeeper/myid
[root@kafka70 soft]# ls -lh /data/zookeeper/
total 4.0K
-rw-r--r-- 1 root root 2 Mar 12 14:17 myid
[root@kafka70 soft]# cat /data/zookeeper/myid
1
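
For reference, what each of the settings above does (standard ZooKeeper semantics, values unchanged):
tickTime=2000                      # base time unit in ms for heartbeats and timeouts
initLimit=10                       # ticks a follower may take to connect and sync with the leader
syncLimit=5                        # ticks a follower may lag behind the leader before being dropped
dataDir=/data/zookeeper            # snapshot directory, also holds the myid file
clientPort=2181                    # port clients (including Kafka) connect to
server.1=192.168.47.70:2888:3888   # 2888 = follower-to-leader traffic, 3888 = leader election
server.2=192.168.47.71:2888:3888
server.3=192.168.47.72:2888:3888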

3.3.4 Set the myid on nodes 2 and 3

Node 2

[root@kafka71 soft]# echo "2" > /data/zookeeper/myid
[root@kafka71 soft]# cat /data/zookeeper/myid
2

Node 3

[root@kafka72 soft]# echo "3" > /data/zookeeper/myid
[root@kafka72 soft]# cat /data/zookeeper/myid
3
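
If passwordless SSH from node 1 to the other nodes happens to be set up (an assumption, it is not configured anywhere in this guide), the zoo.cfg copy and the per-node myid can also be pushed out in one small loop:
# Hypothetical helper run from kafka70; assumes root SSH keys and that
# ZooKeeper was already unpacked to /opt/zookeeper on every node
id=2
for h in kafka71 kafka72; do
    scp /opt/zookeeper/conf/zoo.cfg ${h}:/opt/zookeeper/conf/zoo.cfg
    ssh ${h} "mkdir -p /data/zookeeper && echo ${id} > /data/zookeeper/myid"
    id=$((id + 1))
done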

3.3.5 Start ZooKeeper on each node

Node 1
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Node 2
[root@kafka71 soft]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Node 3
[root@kafka72 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
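
Besides zkServer.sh status in the next step, each server can be probed through ZooKeeper's four-letter-word interface on the client port; ruok should be answered with imok (assuming nc/netcat is installed):
echo ruok | nc 192.168.47.70 2181      # prints "imok" if this server is serving requests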

3.3.6 Check the ZooKeeper status on each node

Node 1
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower
Node 2
[root@kafka71 soft]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: leader
Node 3
[root@kafka72 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

3.3.7 Basic ZooKeeper operations

Connect to any node and create some data:
We create a znode on node 1, then verify the data on the other nodes.
[root@kafka70 ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.47.70:2181
Connecting to 192.168.47.70:2181
=================
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.47.70:2181(CONNECTED) 0] create /test "hello"
Created /test
[zk: 192.168.47.70:2181(CONNECTED) 1]
Verify the data on the other nodes:
[root@kafka71 ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.47.71:2181
Connecting to 192.168.47.71:2181
===========================
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.47.71:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Mon Mar 12 15:15:52 CST 2018
mZxid = 0x100000002
mtime = Mon Mar 12 15:15:52 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 192.168.47.71:2181(CONNECTED) 1]
Check on node 3:
[root@kafka72 ~]# /opt/zookeeper/bin/zkCli.sh -server 192.168.47.72:2181
Connecting to 192.168.47.72:2181
===========================
WATCHER:: WatchedEvent state:SyncConnected type:None path:null
[zk: 192.168.47.72:2181(CONNECTED) 0] get /test
hello
cZxid = 0x100000002
ctime = Mon Mar 12 15:15:52 CST 2018
mZxid = 0x100000002
mtime = Mon Mar 12 15:15:52 CST 2018
pZxid = 0x100000002
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 5
numChildren = 0
[zk: 192.168.47.72:2181(CONNECTED) 1]
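
When the check is done, the test znode can be removed from any of the connected zkCli sessions; it has no children, so a plain delete is enough:
delete /test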

3.4 Install and Test Kafka

Node 1 configuration
[root@kafka70 ~]# cd /opt/soft/
[root@kafka70 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka70 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka70 soft]# mkdir /opt/kafka/logs
[root@kafka70 soft]# vim /opt/kafka/config/server.properties
21 broker.id=1
31 listeners=PLAINTEXT://192.168.47.70:9092
60 log.dirs=/opt/kafka/logs
103 log.retention.hours=24
123 zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181

Node 2 configuration
[root@kafka71 ~]# cd /opt/soft/
[root@kafka71 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka71 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka71 soft]# mkdir /opt/kafka/logs
[root@kafka71 soft]# vim /opt/kafka/config/server.properties
21 broker.id=2
31 listeners=PLAINTEXT://192.168.47.71:9092
60 log.dirs=/opt/kafka/logs
103 log.retention.hours=24
123 zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181

Node 3 configuration
[root@kafka72 ~]# cd /opt/soft/
[root@kafka72 soft]# tar zxf kafka_2.11-1.0.0.tgz -C /opt/
[root@kafka72 soft]# ln -s /opt/kafka_2.11-1.0.0/ /opt/kafka
[root@kafka72 soft]# mkdir /opt/kafka/logs
[root@kafka72 soft]# vim /opt/kafka/config/server.properties
21 broker.id=3
31 listeners=PLAINTEXT://192.168.47.72:9092
60 log.dirs=/opt/kafka/logs
103 log.retention.hours=24
123 zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181
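
The three server.properties files differ only in broker.id and the listener IP, so the manual edits above can also be applied with sed. This is only a sketch against the stock Kafka 1.0.0 server.properties (where listeners ships commented out); adjust if your file differs:
# Example for node 1; substitute ID=2 / IP=192.168.47.71 on node 2 and ID=3 / IP=192.168.47.72 on node 3
ID=1
IP=192.168.47.70
sed -r -i \
    -e "s|^broker.id=.*|broker.id=${ID}|" \
    -e "s|^#?listeners=.*|listeners=PLAINTEXT://${IP}:9092|" \
    -e "s|^log.dirs=.*|log.dirs=/opt/kafka/logs|" \
    -e "s|^log.retention.hours=.*|log.retention.hours=24|" \
    -e "s|^zookeeper.connect=.*|zookeeper.connect=192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181|" \
    /opt/kafka/config/server.properties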

3.4.1 Start Kafka on each node

Node 1. Start it in the foreground first so that any error messages are easy to see:
[root@kafka70 soft]# /opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server.properties
===========================
[2018-03-14 11:04:05,397] INFO Kafka version : 1.0.0 (org.apache.kafka.common.utils.AppInfoParser)
[2018-03-14 11:04:05,397] INFO Kafka commitId : aaa7af6d4a11b29d (org.apache.kafka.common.utils.AppInfoParser)
[2018-03-14 11:04:05,414] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)
When the last line shows "KafkaServer id=..." and "started", the broker came up successfully; it can then be started in the background instead:
[root@kafka70 logs]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka70 logs]# tail -f /opt/kafka/logs/server.log
=========================
[2018-03-14 11:04:05,414] INFO [KafkaServer id=1] started (kafka.server.KafkaServer)

Node 2. This time start it directly in the background and watch the log:
[root@kafka71 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka71 kafka]# tail -f /opt/kafka/logs/server.log
====================================
[2018-03-14 11:04:13,679] INFO [KafkaServer id=2] started (kafka.server.KafkaServer)

Node 3. Likewise, start it in the background and watch the log:
[root@kafka72 kafka]# /opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
[root@kafka72 kafka]# tail -f /opt/kafka/logs/server.log
=======================================
[2018-03-14 11:06:38,274] INFO [KafkaServer id=3] started (kafka.server.KafkaServer)

3.4.2 Verify the processes

On every node, jps should show both the Kafka broker (Kafka) and the ZooKeeper server (QuorumPeerMain).

Node 1
[root@kafka70 ~]# /usr/java/bin/jps
4531 Jps
4334 Kafka
1230 QuorumPeerMain
Node 2
[root@kafka71 kafka]# /usr/java/bin/jps
2513 Kafka
2664 Jps
1163 QuorumPeerMain
Node 3
[root@kafka72 kafka]# /usr/java/bin/jps
2835 Jps
2728 Kafka
1385 QuorumPeerMain

3.4.3 Create a test topic

Create a topic named kafkatest with 3 partitions and a replication factor of 3. This can be run on any of the machines.

[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh  --create  --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic "kafkatest".

3.4.4 Describe a topic

This can be tested against any of the Kafka servers.

Node 1
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
Topic:kafkatest PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kafkatest Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafkatest Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafkatest Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3

Node 2
[root@kafka71 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
Topic:kafkatest PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kafkatest Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafkatest Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafkatest Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3

Node 3
[root@kafka72 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
Topic:kafkatest PartitionCount:3 ReplicationFactor:3 Configs:
Topic: kafkatest Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
Topic: kafkatest Partition: 1 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: kafkatest Partition: 2 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3

Reading the output: kafkatest has three partitions, numbered 0, 1, and 2. For partition 0 the leader is broker 2 (the broker.id), the partition has three replicas, and all three are listed in Isr (in-sync replicas, i.e. replicas eligible to be elected leader).

3.4.5 Delete a topic

[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --delete --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181  --topic kafkatest
Topic kafkatest is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.

3.4.6 Verify that the topic was really deleted

[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --describe --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic kafkatest
The command produces no output, which confirms the topic is gone.

3.4.7 List all topics

First create two topics:
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic kafkatest
Created topic "kafkatest".
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic kafkatest2
Created topic "kafkatest2". 然后查看所有的topic列表
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --list --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181
kafkatest
kafkatest2

3.4.8 Send messages with the console producer

Create a topic named messagetest:
[root@kafka70 ~]# /opt/kafka/bin/kafka-topics.sh --create --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --partitions 3 --replication-factor 3 --topic messagetest
Created topic "messagetest". 发送消息:注意,端口是 kafka的9092,而不是zookeeper的2181
[root@kafka70 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.47.70:9092,192.168.47.71:9092,192.168.47.72:9092 --topic messagetest
>hello
>mymy
>Yo!
>

3.4.9 Consume the messages from the other Kafka servers

[root@kafka70 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka71 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka72 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
mymy
Yo!
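
The three runs return the messages in different orders because Kafka only guarantees ordering within a partition, and messagetest has three partitions. The deprecation warning above also points at the newer consumer; the equivalent call goes straight to the brokers on port 9092 instead of ZooKeeper (same topic, shown as a sketch):
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.47.70:9092,192.168.47.71:9092,192.168.47.72:9092 --topic messagetest --from-beginning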

3.5 Troubleshooting

3.5.1 Wrong server entries in zoo.cfg leave ZooKeeper in standalone mode

The server entries in zoo.cfg were written incorrectly (the same index was used for all three), so on startup the node only found itself:
[root@kafka70 soft]# grep "^[a-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.47.70:2888:3888
server.1=192.168.47.71:2888:3888
server.1=192.168.47.72:2888:3888
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kafka70 ~]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: standalone
Fix: correct the server indexes on every node, then restart the ZooKeeper service. Note that this must be done on all nodes!
[root@kafka70 soft]# grep "^[a-Z]" /opt/zookeeper/conf/zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper
clientPort=2181
server.1=192.168.47.70:2888:3888
server.2=192.168.47.71:2888:3888
server.3=192.168.47.72:2888:3888
[root@kafka70 soft]# /opt/zookeeper/bin/zkServer.sh restart
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Stopping zookeeper ... STOPPED
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@kafka71 soft]# /opt/zookeeper/bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/bin/../conf/zoo.cfg
Mode: follower

3.5.2 Sending messages fails

[root@kafka70 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list  192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic  messagetest
>hellp mymy
meme
[2018-03-14 11:47:31,269] ERROR Error when sending message to topic messagetest with key: null, value: 5 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>hello
[2018-03-14 11:48:31,277] ERROR Error when sending message to topic messagetest with key: null, value: 0 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
>
Cause: the wrong port was used. It should be Kafka's 9092, not ZooKeeper's 2181.
Fix: use the correct port.
[root@kafka70 ~]# /opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.47.70:9092,192.168.47.71:9092,192.168.47.72:9092 --topic messagetest
>hello
>mymy
>Yo!
>

3.5.3 Consuming messages fails

[root@kafka71 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
[2018-03-14 12:02:01,648] ERROR Unknown error when running consumer: (kafka.tools.ConsoleConsumer$)
java.net.UnknownHostException: kafka71: kafka71: Name or service not known
at java.net.InetAddress.getLocalHost(InetAddress.java:1505)
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:135)
at kafka.consumer.ZookeeperConsumerConnector.<init>(ZookeeperConsumerConnector.scala:159)
at kafka.consumer.Consumer$.create(ConsumerConnector.scala:112)
at kafka.consumer.OldConsumer.<init>(BaseConsumer.scala:130)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:72)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Caused by: java.net.UnknownHostException: kafka71: Name or service not known
at java.net.Inet6AddressImpl.lookupAllHostAddr(Native Method)
at java.net.InetAddress$2.lookupAllHostAddr(InetAddress.java:928)
at java.net.InetAddress.getAddressesFromNameService(InetAddress.java:1323)
at java.net.InetAddress.getLocalHost(InetAddress.java:1500)
... 7 more
Cause: the machine's hostname cannot be resolved, i.e. the hostname and the names in /etc/hosts do not match.
[root@kafka71 ~]# cat /etc/hostname
kafka71
[root@kafka71 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
Fix: make the hostname of every machine match its entry in /etc/hosts, then fetch the messages again.
After correcting the hostnames on all machines:
[root@kafka70 ~]# hostname
kafka70
[root@kafka70 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
[root@kafka71 ~]# hostname
kafka71
[root@kafka71 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
[root@kafka72 ~]# hostname
kafka72
[root@kafka72 ~]# tail -3 /etc/hosts
192.168.47.70 kafka70
192.168.47.71 kafka71
192.168.47.72 kafka72
Fetch the messages again:
[root@kafka71 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
mymy
Yo!
hello
[root@kafka72 ~]# /opt/kafka/bin/kafka-console-consumer.sh --zookeeper 192.168.47.70:2181,192.168.47.71:2181,192.168.47.72:2181 --topic messagetest --from-beginning
Using the ConsoleConsumer with old consumer is deprecated and will be removed in a future major release. Consider using the new consumer by passing [bootstrap-server] instead of [zookeeper].
hello
mymy
Yo!
