This article walks through building a distributed Kafka cluster with the latest release, kafka_2.11-0.10.1.0. Server list:

172.31.10.1
172.31.10.2
172.31.10.3

1. Download the Kafka package

Go to the Kafka website http://kafka.apache.org/, then:

  • Click the "Download" button on the left
  • Pick the appropriate release: 2.11 is the Scala version (Kafka is written in Scala) and 0.10.1.0 is the Kafka version
  • Choose a download mirror link in the window that opens
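If you prefer the command line, the archive can also be fetched directly; the URL below points at the Apache archive and is only an illustrative assumption — use whichever mirror the download page offers:

    # fetch the Kafka 0.10.1.0 build for Scala 2.11 (mirror URL assumed)
    wget https://archive.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz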

2. Download the ZooKeeper package

Kafka's overall architecture is shown in the figure below (figure omitted here):

A Kafka cluster normally relies on ZooKeeper as its coordination/naming service. A single-node setup can simply use the ZooKeeper bundled with the Kafka distribution, but in production a dedicated ZooKeeper ensemble is usually deployed so that the coordination service stays highly available. If machines are scarce, ZooKeeper can share servers with the Kafka brokers, since the coordination service does not demand many resources.

Go to the ZooKeeper download page http://www.apache.org/dyn/closer.cgi/zookeeper/ and follow the download links; this article uses the stable release zookeeper-3.4.8.
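As with Kafka, the archive can be fetched directly; again, the URL is an assumption for illustration — pick your own mirror from the page above:

    # fetch zookeeper 3.4.8 (mirror URL assumed)
    wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz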

3. Install the ZooKeeper cluster

Upload the zookeeper-3.4.8.tar.gz archive to server 172.31.10.1, then:

  • Extract it; target directory /opt/zookeeper/zookeeper-3.4.8

# extract under /opt/zookeeper, which creates /opt/zookeeper/zookeeper-3.4.8
    tar -zxvf zookeeper-3.4.8.tar.gz -C /opt/zookeeper
    
  • Configure: switch to the conf directory and set dataDir and the server.x entries

    cd /opt/zookeeper/zookeeper-3.4.8/conf
    mv zoo_sample.cfg zoo.cfg
    

    The updated zoo.cfg is as follows:

    # The number of milliseconds of each tick
    tickTime=2000
    # The number of ticks that the initial
    # synchronization phase can take
    initLimit=10
    # The number of ticks that can pass between
    # sending a request and getting an acknowledgement
    syncLimit=5
    # the directory where the snapshot is stored.
    # do not use /tmp for storage, /tmp here is just
    # example sakes.
    dataDir=/var/logs/data/zookeeper
    # the port at which the clients will connect
    clientPort=2181
    server.1=172.31.10.1:2888:3888
    server.2=172.31.10.2:2888:3888
    server.3=172.31.10.3:2888:3888
    # the maximum number of client connections.
    # increase this if you need to handle more clients
    #maxClientCnxns=60
    #
    # Be sure to read the maintenance section of the
    # administrator guide before turning on autopurge.
    #
    # http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
    #
    # The number of snapshots to retain in dataDir
    #autopurge.snapRetainCount=3
    # Purge task interval in hours
    # Set to "0" to disable auto purge feature
    #autopurge.purgeInterval=1
    

    Here dataDir is the ZooKeeper data directory, and each server.x line gives a ZooKeeper server's address followed by its quorum and leader-election ports (2888 and 3888).

  • Copy the installation to the other two servers, and under the dataDir directory on each server create a myid file whose content is the digit x from that server's server.x entry (a hedged scp sketch for the remote copy is given at the end of this section). This article uses the following:

    #run on 172.31.10.1
    mkdir -p /var/logs/data/zookeeper
    echo "1" > /var/logs/data/zookeeper/myid

    #run on 172.31.10.2
    mkdir -p /var/logs/data/zookeeper
    echo "2" > /var/logs/data/zookeeper/myid

    #run on 172.31.10.3
    mkdir -p /var/logs/data/zookeeper
    echo "3" > /var/logs/data/zookeeper/myid
  • Start the ZooKeeper cluster and verify it

    #start zookeeper on every server
    cd /opt/zookeeper/zookeeper-3.4.8/bin
    /opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start

    #check the role of the zookeeper node on each server (one leader, the rest followers)
    /opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh status
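For the remote-copy step above, a minimal sketch is shown below; it assumes root SSH access between the nodes and the same /opt/zookeeper layout on every server — adjust user and paths to your environment:

    # run on 172.31.10.1: push the configured installation to the other two nodes (assumed root SSH access)
    for ip in 172.31.10.2 172.31.10.3; do
        ssh root@$ip "mkdir -p /opt/zookeeper"
        scp -r /opt/zookeeper/zookeeper-3.4.8 root@$ip:/opt/zookeeper/
    done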

4. Install the Kafka cluster

  • Extract it to /opt/kafka/kafka_2.11-0.10.1.0

    # extract under /opt/kafka, which creates /opt/kafka/kafka_2.11-0.10.1.0
    tar -zxvf kafka_2.11-0.10.1.0.tgz -C /opt/kafka
    cd /opt/kafka/kafka_2.11-0.10.1.0
  • Edit config/server.properties; the main settings to change are:

    broker.id=1
    host.name=172.31.10.1
    log.dirs=/var/logs/data/kafka
    zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka

  Note that broker.id differs per server; it must be unique across the whole cluster.

  The updated server.properties is as follows:

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# The port the socket server listens on
port=9092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=172.31.10.1

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/var/logs/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
  • Sync the installation to the other servers and change broker.id (and host.name) on each broker; a hedged scp/sed sketch is given at the end of this section
  • Start Kafka and verify

    # start the broker in the background on each server (run from the Kafka root so config/server.properties resolves)
    cd /opt/kafka/kafka_2.11-0.10.1.0
    nohup bin/kafka-server-start.sh config/server.properties &
    

    Create a topic. If the topic is created successfully, the cluster is working; you can also run jps to check that the Kafka process is alive.

    /opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --create --zookeeper 172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka --replication-factor 3 --partitions 1 --topic test
    

    At this point the distributed Kafka cluster is installed; later articles will dig into other parts of Kafka.
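For the "sync to the other servers" step above, a minimal sketch is given here; the root SSH user and the sed edits are assumptions — editing config/server.properties by hand on each broker works just as well:

    # run on 172.31.10.1: copy the installation to the other brokers (assumed root SSH access)
    for ip in 172.31.10.2 172.31.10.3; do
        ssh root@$ip "mkdir -p /opt/kafka"
        scp -r /opt/kafka/kafka_2.11-0.10.1.0 root@$ip:/opt/kafka/
    done

    # then on 172.31.10.2, give the broker its own id and bind address
    sed -i 's/^broker.id=1/broker.id=2/; s/^host.name=172.31.10.1/host.name=172.31.10.2/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties

    # and on 172.31.10.3
    sed -i 's/^broker.id=1/broker.id=3/; s/^host.name=172.31.10.1/host.name=172.31.10.3/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties

Once every broker has been started with its own settings, the topic-creation command above (replication-factor 3) exercises all three brokers.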
