For a simpler walkthrough, see:

Downloading, installing and configuring a 3-node kafka_2.11-0.8.2.2.tgz cluster (illustrated guide)

  In real-world work, however, you will inevitably have to deal with parameter and performance tuning. Below is a reference server.properties configuration for Kafka that I am sharing as a starting point.

[hadoop@master config]$ pwd
/home/hadoop/app/kafka_2.10-0.9.0.1/config
[hadoop@master config]$ ll
total 64
-rw-r--r-- 1 hadoop hadoop 906 Feb 12 2016 connect-console-sink.properties
-rw-r--r-- 1 hadoop hadoop 909 Feb 12 2016 connect-console-source.properties
-rw-r--r-- 1 hadoop hadoop 2110 Feb 12 2016 connect-distributed.properties
-rw-r--r-- 1 hadoop hadoop 922 Feb 12 2016 connect-file-sink.properties
-rw-r--r-- 1 hadoop hadoop 920 Feb 12 2016 connect-file-source.properties
-rw-r--r-- 1 hadoop hadoop 1074 Feb 12 2016 connect-log4j.properties
-rw-r--r-- 1 hadoop hadoop 2055 Feb 12 2016 connect-standalone.properties
-rw-r--r-- 1 hadoop hadoop 1199 Feb 12 2016 consumer.properties
-rw-r--r-- 1 hadoop hadoop 4369 Feb 12 2016 log4j.properties
-rw-r--r-- 1 hadoop hadoop 2228 Feb 12 2016 producer.properties
-rw-r--r-- 1 hadoop hadoop 5705 Jul 27 17:49 server.properties
-rw-r--r-- 1 hadoop hadoop 3325 Feb 12 2016 test-log4j.properties
-rw-r--r-- 1 hadoop hadoop 1032 Feb 12 2016 tools-log4j.properties
-rw-r--r-- 1 hadoop hadoop 1023 Feb 12 2016 zookeeper.properties
[hadoop@master config]$ vim server.properties
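  If you want to keep the shipped defaults around for reference, copying the stock file aside before editing is a simple safeguard; the .bak name below is just an example:

[hadoop@master config]$ cp server.properties server.properties.bak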

  On the master node

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
port=

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=master

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
#num.network.threads=
# The number of threads doing disk I/O
#num.io.threads=
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=

# Whether topics may be created automatically; if false, topics must be created with the command-line tools
auto.create.topics.enable=false

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-log/log/

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=

log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:,slave1:,slave2:

# Timeout in ms for connecting to zookeeper
offsets.commit.timeout.ms=
request.timeout.ms=
zookeeper.connection.timeout.ms=300000

export HBASE_MANAGES_ZK=false

############################# zhouls add #############################

num.replica.fetchers=
replica.fetch.max.bytes=
replica.fetch.wait.max.ms=
replica.high.watermark.checkpoint.interval.ms=
replica.socket.timeout.ms=
replica.socket.receive.buffer.bytes=
replica.lag.time.max.ms=
controller.socket.timeout.ms=
controller.message.queue.size=
message.max.bytes=
num.io.threads=
num.network.threads=
socket.request.max.bytes=
socket.receive.buffer.bytes=
socket.send.buffer.bytes=
queued.max.requests=
fetch.purgatory.purge.interval.requests=
producer.purgatory.purge.interval.requests=
group.max.session.timeout.ms=
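  Because auto.create.topics.enable is set to false above, topics have to be created explicitly before producers can use them. A sketch of doing that from the install directory, assuming ZooKeeper runs on its default port 2181 and using a made-up topic name:

[hadoop@master kafka_2.10-0.9.0.1]$ bin/kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --replication-factor 3 --partitions 3 --topic test-topic
[hadoop@master kafka_2.10-0.9.0.1]$ bin/kafka-topics.sh --describe --zookeeper master:2181 --topic test-topic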

  On slave1

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
port=

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
#num.network.threads=
# The number of threads doing disk I/O
#num.io.threads=
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=

# Whether topics may be created automatically; if false, topics must be created with the command-line tools
auto.create.topics.enable=false

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-log/log/

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=

log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:,slave1:,slave2:

# Timeout in ms for connecting to zookeeper
offsets.commit.timeout.ms=
request.timeout.ms=
zookeeper.connection.timeout.ms=300000

export HBASE_MANAGES_ZK=false

############################# zhouls add #############################

num.replica.fetchers=
replica.fetch.max.bytes=
replica.fetch.wait.max.ms=
replica.high.watermark.checkpoint.interval.ms=
replica.socket.timeout.ms=
replica.socket.receive.buffer.bytes=
replica.lag.time.max.ms=
controller.socket.timeout.ms=
controller.message.queue.size=
message.max.bytes=
num.io.threads=
num.network.threads=
socket.request.max.bytes=
socket.receive.buffer.bytes=
socket.send.buffer.bytes=
queued.max.requests=
fetch.purgatory.purge.interval.requests=
producer.purgatory.purge.interval.requests=
group.max.session.timeout.ms=
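  slave1's file is identical to master's apart from broker.id and the commented-out host.name, so instead of editing every node by hand you can push master's copy out and then adjust the per-node values on each slave. A sketch, assuming the same install path on every host:

[hadoop@master config]$ scp server.properties hadoop@slave1:/home/hadoop/app/kafka_2.10-0.9.0.1/config/
[hadoop@master config]$ scp server.properties hadoop@slave2:/home/hadoop/app/kafka_2.10-0.9.0.1/config/
# then, on each slave, set that node's own broker.id and comment out (or change) host.name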

  On slave2

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=

############################# Socket Server Settings #############################

listeners=PLAINTEXT://:9092

# The port the socket server listens on
port=

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
#num.network.threads=
# The number of threads doing disk I/O
#num.io.threads=
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=

# Whether topics may be created automatically; if false, topics must be created with the command-line tools
auto.create.topics.enable=false

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-log/log/

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=

log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:,slave1:,slave2:

# Timeout in ms for connecting to zookeeper
offsets.commit.timeout.ms=
request.timeout.ms=
zookeeper.connection.timeout.ms=

export HBASE_MANAGES_ZK=false

############################# zhouls add #############################

num.replica.fetchers=
replica.fetch.max.bytes=
replica.fetch.wait.max.ms=
replica.high.watermark.checkpoint.interval.ms=
replica.socket.timeout.ms=
replica.socket.receive.buffer.bytes=
replica.lag.time.max.ms=
controller.socket.timeout.ms=
controller.message.queue.size=
message.max.bytes=
num.io.threads=
num.network.threads=
socket.request.max.bytes=
socket.receive.buffer.bytes=
socket.send.buffer.bytes=
queued.max.requests=
fetch.purgatory.purge.interval.requests=
producer.purgatory.purge.interval.requests=
group.max.session.timeout.ms=
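  With all three files in place, start one broker per node; they find each other through the shared zookeeper.connect string. A sketch, assuming Kafka is started from its install directory on each host:

[hadoop@master kafka_2.10-0.9.0.1]$ bin/kafka-server-start.sh -daemon config/server.properties
[hadoop@slave1 kafka_2.10-0.9.0.1]$ bin/kafka-server-start.sh -daemon config/server.properties
[hadoop@slave2 kafka_2.10-0.9.0.1]$ bin/kafka-server-start.sh -daemon config/server.properties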

  Of course, you can also start numbering from broker.id=1 on master; either scheme is fine.

  master: broker.id=0    slave1: broker.id=1    slave2: broker.id=2

  master: broker.id=1    slave1: broker.id=2    slave2: broker.id=3
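  Whichever numbering scheme you choose, you can confirm that all three brokers registered themselves by listing their ids in ZooKeeper. A sketch, assuming ZooKeeper's default client port 2181; with the 0/1/2 scheme the output should look roughly like [0, 1, 2]:

[hadoop@master kafka_2.10-0.9.0.1]$ bin/zookeeper-shell.sh master:2181 ls /brokers/ids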
