1. Download the required package

http://kafka.apache.org/downloads.html

This article uses kafka_2.11-0.9.0.1 (the version visible in the error log later in this post); feel free to pick any other release.

2. The usual three steps: upload, extract, rename (a sketch follows below)
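A minimal sketch of step 2, assuming the 0.9.0.1 release and an install root of /usr/local/app (the download URL and paths are assumptions; substitute your own):

# download on the server (or fetch it locally and upload with scp)
wget https://archive.apache.org/dist/kafka/0.9.0.1/kafka_2.11-0.9.0.1.tgz
# extract and rename to something short
cd /usr/local/app
tar -zxvf kafka_2.11-0.9.0.1.tgz
mv kafka_2.11-0.9.0.1 kafka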

3. Edit the configuration file

The full config file (config/server.properties) is below; every setting that matters to change has a comment on it!

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# Globally unique id for this broker
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
# Port that producers and consumers connect to
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
#host.name=localhost

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Directory where the data logs are stored
log.dirs=/usr/local/app/kafka/los/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Default number of partitions per topic
num.partitions=2

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=v1:2181,v2:2181,v3:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

Then copy the Kafka directory to the other machines, change broker.id in each copy, and remember to create the log directory on every node (see the sketch below).
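A hedged sketch of the distribution step, reusing the hosts v1/v2/v3 from the zookeeper.connect line and the paths above (adjust for your layout):

# from v1, copy the whole install to the other brokers
scp -r /usr/local/app/kafka v2:/usr/local/app/
scp -r /usr/local/app/kafka v3:/usr/local/app/
# give each broker a unique id, e.g. on v2:
sed -i 's/^broker.id=0/broker.id=1/' /usr/local/app/kafka/config/server.properties
# create the directory named in log.dirs on every node
mkdir -p /usr/local/app/kafka/los/tmp/kafka-logs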

Start it up! This is where the pitfalls pop out, and you will be muttering curses the whole time!

Before starting, make sure all your ZooKeeper nodes are running.
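A quick check (assuming a standard ZooKeeper install with zkServer.sh on the PATH):

zkServer.sh status   # run on each zk node; expect "Mode: leader" or "Mode: follower"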

Startup command, running in the background:

nohup bin/kafka-server-start.sh config/server.properties &
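To confirm the broker actually came up, a couple of checks (a sketch; jps ships with the JDK, and server.log sits under the Kafka install's logs directory):

jps | grep -i kafka          # a "Kafka" process should be listed
tail -f logs/server.log      # look for "started (kafka.server.KafkaServer)"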

Common errors

Java HotSpot(TM) Server VM warning: INFO: os::commit_memory(0x67e00000, 1073741824, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 1073741824 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /opt/kafka_2.11-0.9.0.1/hs_err_pid2249.log

Solution:

Cause:
Kafka's startup script defaults to a -Xmx1G -Xms1G JVM heap. On a machine with less free memory than that, the JVM cannot reserve the heap and the broker dies on startup.
Open bin/kafka-server-start.sh and change
export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G"
to values that fit the server, e.g. export KAFKA_HEAP_OPTS="-Xmx256M -Xms128M"
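The same fix as a one-liner, if you prefer (run from the Kafka install directory, and pick heap sizes that fit your machine):

sed -i 's/-Xmx1G -Xms1G/-Xmx256M -Xms128M/' bin/kafka-server-start.sh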
