Configuring a Kafka Cluster on CentOS
Environment: three virtual machines, Host0, Host1, and Host2
Host0: 192.168.10.2
Host1: 192.168.10.3
Host2: 192.168.10.4
Configure ZooKeeper on all three virtual machines; for the detailed steps, see the separate post "CentOS中配置CDH版本的ZooKeeper" (configuring the CDH version of ZooKeeper on CentOS).
Download Kafka: http://kafka.apache.org/downloads.html
The Kafka version used here is kafka_2.10-0.8.2.0.
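For a scriptable download, the release tarball can also be fetched directly. The archive.apache.org path below is an assumption based on the usual Apache release-archive layout, so verify it before relying on it:
[root@Host0 ~]# wget https://archive.apache.org/dist/kafka/0.8.2.0/kafka_2.10-0.8.2.0.tgz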
Extract Kafka on each Kafka node and enter the Kafka directory:
[root@Host0 ~]# tar xzvf kafka_2.10-0.8.2.0.tgz
[root@Host0 ~]# cd kafka_2.10-0.8.2.0
On each Kafka node, edit the config/server.properties file:
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=192.168.10.2

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
advertised.host.name=192.168.10.2

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=Host0:2181,Host1:2181,Host2:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

# Allow topics to be deleted via the admin tool
delete.topic.enable=true
Notes:
broker.id=0: the broker's ID. Every Kafka node must use a different value, e.g. 0, 1, 2, and so on.
host.name=192.168.10.2: the broker's hostname. If it is set, the broker binds only to this address; if it is not set, the broker binds to all interfaces and publishes one address to ZooKeeper. Set it to the current node's IP address on each node.
advertised.host.name=192.168.10.2: the hostname the broker advertises to producers, consumers, and other brokers. Set it to the current node's IP address on each node.
log.dirs=/tmp/kafka-logs: the path where message data is stored; this is not where Kafka writes its own application logs. Keeping it under /tmp is not recommended, because /tmp is cleaned out periodically.
zookeeper.connect=Host0:2181,Host1:2181,Host2:2181: the ZooKeeper connection string; fill in the IPs and ports of the three ZooKeeper nodes installed in the previous section.
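To avoid editing three copies of server.properties by hand, here is a minimal sketch that patches the per-node values with sed. The script name set-broker.sh is hypothetical, and the install path assumes Kafka was unpacked under /root as shown earlier; adjust both to your layout.
#!/bin/bash
# set-broker.sh: patch server.properties with this node's values.
# Usage: ./set-broker.sh <broker-id> <node-ip>
ID=$1
IP=$2
# Assumed install path; change it if Kafka lives elsewhere.
CONF=/root/kafka_2.10-0.8.2.0/config/server.properties

sed -i "s/^broker\.id=.*/broker.id=${ID}/" "$CONF"
sed -i "s/^host\.name=.*/host.name=${IP}/" "$CONF"
sed -i "s/^advertised\.host\.name=.*/advertised.host.name=${IP}/" "$CONF"
Run it once per node: ./set-broker.sh 0 192.168.10.2 on Host0, ./set-broker.sh 1 192.168.10.3 on Host1, and ./set-broker.sh 2 192.168.10.4 on Host2.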
Start Kafka on each node:
[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-server-start.sh config/server.properties
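This runs the broker in the foreground and stops when the shell exits. A sketch for running it in the background instead (the log path /var/log/kafka.log is an arbitrary choice), followed by a quick check that all three brokers have registered in ZooKeeper; once they are up, the second command should end with something like [0, 1, 2]:
[root@Host0 kafka_2.10-0.8.2.0]# nohup bin/kafka-server-start.sh config/server.properties > /var/log/kafka.log 2>&1 &
[root@Host0 kafka_2.10-0.8.2.0]# echo "ls /brokers/ids" | bin/zookeeper-shell.sh Host0:2181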
Create a topic:
[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-topics.sh --create --zookeeper Host0:2181 --replication-factor 3 --partitions 1 --topic my-replicated-topic1
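To verify that the topic was created and see which broker currently leads each partition, kafka-topics.sh also supports --describe:
[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-topics.sh --describe --zookeeper Host0:2181 --topic my-replicated-topic1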
Simulate a producer
Open a terminal on any node:
[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-console-producer.sh --broker-list Host0:9092 --topic my-replicated-topic1
[2016-09-05 21:51:57,134] WARN Property topic is not valid (kafka.utils.VerifiableProperties)
hello kafka!
Simulate a consumer
Open a terminal on any node:
[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-console-consumer.sh --zookeeper Host2:2181 --topic my-replicated-topic1
hello kafka!
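The console consumer only prints messages produced after it starts; to replay everything already stored in the topic, add --from-beginning:
[root@Host0 kafka_2.10-0.8.2.0]# bin/kafka-console-consumer.sh --zookeeper Host2:2181 --from-beginning --topic my-replicated-topic1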
host.name and advertised.host.name hide a pitfall, described below. The following is quoted from another post.
The pitfall:
Debugging showed that connecting to the cluster succeeded, but the cluster metadata returned after connecting was wrong:
As you can see, in the Cluster information inside the metadata, the nodes' hostname is a string like iZ25wuzqk91Z rather than the actual IP addresses 10.0.0.100 and 10.0.0.101. iZ25wuzqk91Z is in fact the remote machine's hostname. This means that when advertised.host.name is not configured, Kafka does not, as the official documentation claims, fall back to advertising the configured host.name; it advertises the machine's hostname instead. The remote client has no hosts entry for that name, so naturally it cannot connect. The fix is to set both host.name and advertised.host.name to literal IP addresses.
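To check what a broker is actually advertising, you can read its registration znode directly (broker.id 0 is assumed here); the JSON it returns contains the host that clients will be told to connect to:
[root@Host0 kafka_2.10-0.8.2.0]# echo "get /brokers/ids/0" | bin/zookeeper-shell.sh Host0:2181
If the host field shows a machine hostname such as iZ25wuzqk91Z instead of the IP, the advertised.host.name fix above has not taken effect.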