For a simpler setup, see:

Download, installation and configuration of a 3-node kafka_2.11-0.8.2.2.tgz cluster (illustrated guide)

  In real-world work, however, you will inevitably have to deal with parameter tuning and optimization. Below is the reference example of the Kafka server.properties configuration file that I'd like to share, taken from a 3-node cluster (master, slave1, slave2).

[hadoop@master config]$ pwd
/home/hadoop/app/kafka_2.10-0.9.0.1/config
[hadoop@master config]$ ll
total 64
-rw-r--r-- 1 hadoop hadoop 906 Feb 12 2016 connect-console-sink.properties
-rw-r--r-- 1 hadoop hadoop 909 Feb 12 2016 connect-console-source.properties
-rw-r--r-- 1 hadoop hadoop 2110 Feb 12 2016 connect-distributed.properties
-rw-r--r-- 1 hadoop hadoop 922 Feb 12 2016 connect-file-sink.properties
-rw-r--r-- 1 hadoop hadoop 920 Feb 12 2016 connect-file-source.properties
-rw-r--r-- 1 hadoop hadoop 1074 Feb 12 2016 connect-log4j.properties
-rw-r--r-- 1 hadoop hadoop 2055 Feb 12 2016 connect-standalone.properties
-rw-r--r-- 1 hadoop hadoop 1199 Feb 12 2016 consumer.properties
-rw-r--r-- 1 hadoop hadoop 4369 Feb 12 2016 log4j.properties
-rw-r--r-- 1 hadoop hadoop 2228 Feb 12 2016 producer.properties
-rw-r--r-- 1 hadoop hadoop 5705 Jul 27 17:49 server.properties
-rw-r--r-- 1 hadoop hadoop 3325 Feb 12 2016 test-log4j.properties
-rw-r--r-- 1 hadoop hadoop 1032 Feb 12 2016 tools-log4j.properties
-rw-r--r-- 1 hadoop hadoop 1023 Feb 12 2016 zookeeper.properties
[hadoop@master config]$ vim server.properties

  On the master node

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=

############################# Socket Server Settings #############################
listeners=PLAINTEXT://:9092
# The port the socket server listens on
port=
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=master
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
# The number of threads handling network requests
#num.network.threads=
# The number of threads doing disk I/O
#num.io.threads=
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=
# Whether automatic topic creation is allowed; if false, topics must be created with the command-line tools
auto.create.topics.enable=false

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-log/log/
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=
log.cleaner.enable=false

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:,slave1:,slave2:
# Timeout in ms for connecting to zookeeper
offsets.commit.timeout.ms=
request.timeout.ms=
zookeeper.connection.timeout.ms=300000
export HBASE_MANAGES_ZK=false

############################# zhouls add #############################
num.replica.fetchers=
replica.fetch.max.bytes=
replica.fetch.wait.max.ms=
replica.high.watermark.checkpoint.interval.ms=
replica.socket.timeout.ms=
replica.socket.receive.buffer.bytes=
replica.lag.time.max.ms=
controller.socket.timeout.ms=
controller.message.queue.size=
message.max.bytes=
num.io.threads=
num.network.threads=
socket.request.max.bytes=
socket.receive.buffer.bytes=
socket.send.buffer.bytes=
queued.max.requests=
fetch.purgatory.purge.interval.requests=
producer.purgatory.purge.interval.requests=
group.max.session.timeout.ms=
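  The tuning values themselves are left blank in the listing above so that each cluster can fill in numbers matching its own hardware and workload. For orientation only, the sketch below shows the stock values shipped in the kafka_2.10-0.9.0.1 server.properties as I recall them from the distribution; they are the defaults, not this cluster's tuned values, so double-check them against your own copy of the file before relying on them.

# For reference only: stock defaults from the server.properties shipped with
# kafka_2.10-0.9.0.1 (not this cluster's tuned values)
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connection.timeout.ms=6000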

  On slave1

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=

############################# Socket Server Settings #############################
listeners=PLAINTEXT://:9092
# The port the socket server listens on
port=
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
# The number of threads handling network requests
#num.network.threads=
# The number of threads doing disk I/O
#num.io.threads=
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=
# Whether automatic topic creation is allowed; if false, topics must be created with the command-line tools
auto.create.topics.enable=false

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-log/log/
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=
log.cleaner.enable=false

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:,slave1:,slave2:
# Timeout in ms for connecting to zookeeper
offsets.commit.timeout.ms=
request.timeout.ms=
zookeeper.connection.timeout.ms=300000
export HBASE_MANAGES_ZK=false

############################# zhouls add #############################
num.replica.fetchers=
replica.fetch.max.bytes=
replica.fetch.wait.max.ms=
replica.high.watermark.checkpoint.interval.ms=
replica.socket.timeout.ms=
replica.socket.receive.buffer.bytes=
replica.lag.time.max.ms=
controller.socket.timeout.ms=
controller.message.queue.size=
message.max.bytes=
num.io.threads=
num.network.threads=
socket.request.max.bytes=
socket.receive.buffer.bytes=
socket.send.buffer.bytes=
queued.max.requests=
fetch.purgatory.purge.interval.requests=
producer.purgatory.purge.interval.requests=
group.max.session.timeout.ms=

  On slave2

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=

############################# Socket Server Settings #############################
listeners=PLAINTEXT://:9092
# The port the socket server listens on
port=
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost
# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>
# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>
# The number of threads handling network requests
#num.network.threads=
# The number of threads doing disk I/O
#num.io.threads=
# The send buffer (SO_SNDBUF) used by the socket server
#socket.send.buffer.bytes=
# The receive buffer (SO_RCVBUF) used by the socket server
#socket.receive.buffer.bytes=
# The maximum size of a request that the socket server will accept (protection against OOM)
#socket.request.max.bytes=
# Whether automatic topic creation is allowed; if false, topics must be created with the command-line tools
auto.create.topics.enable=false

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/data/kafka-log/log/
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion
log.retention.hours=
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
log.retention.bytes=
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=
log.cleaner.enable=false

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=master:,slave1:,slave2:
# Timeout in ms for connecting to zookeeper
offsets.commit.timeout.ms=
request.timeout.ms=
zookeeper.connection.timeout.ms=
export HBASE_MANAGES_ZK=false

############################# zhouls add #############################
num.replica.fetchers=
replica.fetch.max.bytes=
replica.fetch.wait.max.ms=
replica.high.watermark.checkpoint.interval.ms=
replica.socket.timeout.ms=
replica.socket.receive.buffer.bytes=
replica.lag.time.max.ms=
controller.socket.timeout.ms=
controller.message.queue.size=
message.max.bytes=
num.io.threads=
num.network.threads=
socket.request.max.bytes=
socket.receive.buffer.bytes=
socket.send.buffer.bytes=
queued.max.requests=
fetch.purgatory.purge.interval.requests=
producer.purgatory.purge.interval.requests=
group.max.session.timeout.ms=
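  Because auto.create.topics.enable=false on all three brokers, topics have to be created explicitly with the command-line tools. A minimal sketch follows; the ZooKeeper client port 2181 and the topic name "test" are assumptions for illustration, since the ports are not spelled out in the files above.

# Run from the Kafka install directory on any node; 2181 and the topic
# name "test" are illustrative assumptions, adjust to your environment.
bin/kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 \
  --replication-factor 3 --partitions 3 --topic test

# Check the partition assignment and in-sync replicas afterwards
bin/kafka-topics.sh --describe --zookeeper master:2181 --topic test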

  Of course, you can also start numbering from broker.id=1 on master; the starting value is up to you, as long as every broker gets a unique id. For example:

  broker.id=0 on master    broker.id=1 on slave1    broker.id=2 on slave2

  broker.id=1 on master    broker.id=2 on slave1    broker.id=3 on slave2
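  Whichever numbering you choose, the ids must differ across the three brokers, otherwise the later brokers will fail to register. A quick way to confirm, sketched below under the assumption that ZooKeeper listens on its default client port 2181, is to start each broker and then list the registered ids in ZooKeeper.

# On each of master, slave1 and slave2
cd /home/hadoop/app/kafka_2.10-0.9.0.1
bin/kafka-server-start.sh -daemon config/server.properties

# From any node: the children of /brokers/ids should show every configured
# broker.id, e.g. [0, 1, 2] (assumes ZooKeeper's default port 2181)
echo "ls /brokers/ids" | bin/zookeeper-shell.sh master:2181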
