Setting Up a Distributed Kafka Cluster
This article walks through building a distributed Kafka cluster with kafka_2.11-0.10.1.0, the latest release at the time of writing. Server list:
172.31.10.1 172.31.10.2 172.31.10.3
1. Download the Kafka package
Go to the Kafka website at http://kafka.apache.org/:
- Click "Download" on the left
- Pick the matching build: 2.11 is the Scala version (Kafka is written in Scala) and 0.10.1.0 is the Kafka version
- Choose a download link from the page that opens
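If the servers have outbound network access, the tarball can also be fetched directly; a sketch using the Apache archive (mirror choice is up to you):
# Fetch the Kafka 0.10.1.0 tarball (archive mirror assumed; any Apache mirror works)
wget https://archive.apache.org/dist/kafka/0.10.1.0/kafka_2.11-0.10.1.0.tgz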
2. Download the ZooKeeper package
In the overall Kafka architecture, the brokers rely on ZooKeeper as their naming and coordination service. A single-node setup can use the ZooKeeper bundled inside the Kafka package, but production environments usually run a dedicated ZooKeeper ensemble so the naming service stays available. If machines are scarce, ZooKeeper can share servers with the Kafka brokers, since the naming service does not demand many resources.
Go to the ZooKeeper download page at http://www.apache.org/dyn/closer.cgi/zookeeper/ and follow the download links. This article uses the stable release zookeeper-3.4.8.
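As with Kafka, the tarball can be pulled straight from the Apache archive (mirror URL is an assumption; use whichever mirror the page offers):
# Fetch the ZooKeeper 3.4.8 tarball
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.4.8/zookeeper-3.4.8.tar.gz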
3. Install the ZooKeeper cluster
Upload zookeeper-3.4.8.tar.gz to server 172.31.10.1, then:
- Extract it into /opt/zookeeper/zookeeper-3.4.8
tar -zxvf zookeeper-3.4.8.tar.gz
- Configure: switch to the conf directory and set dataDir and the server.x entries
cd /opt/zookeeper/zookeeper-3.4.8/conf
mv zoo_sample.cfg zoo.cfg
The updated zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/var/logs/data/zookeeper
# the port at which the clients will connect
clientPort=2181
server.1=172.31.10.1:2888:3888
server.2=172.31.10.2:2888:3888
server.3=172.31.10.3:2888:3888
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
Here dataDir is the ZooKeeper data directory, and each server.x line gives a quorum member's address followed by its peer-communication port (2888) and leader-election port (3888).
- Copy the installation to the other two servers, and on each node create a myid file under dataDir whose content is the number from the matching server.x entry. For this article:
#on 172.31.10.1
mkdir -p /var/logs/data/zookeeper
echo "1" > /var/logs/data/zookeeper/myid
#on 172.31.10.2
mkdir -p /var/logs/data/zookeeper
echo "2" > /var/logs/data/zookeeper/myid
#on 172.31.10.3
mkdir -p /var/logs/data/zookeeper
echo "3" > /var/logs/data/zookeeper/myid
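To get the installation onto the other two nodes, a remote copy along these lines works (a sketch; assumes passwordless SSH and that /opt/zookeeper already exists on each target):
# Run on 172.31.10.1
scp -r /opt/zookeeper/zookeeper-3.4.8 172.31.10.2:/opt/zookeeper/
scp -r /opt/zookeeper/zookeeper-3.4.8 172.31.10.3:/opt/zookeeper/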
- Start and verify the ZooKeeper cluster
#start zookeeper on each server
cd /opt/zookeeper/zookeeper-3.4.8/bin
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh start
#check the role of the zookeeper node on each server
cd /opt/zookeeper/zookeeper-3.4.8/bin
/opt/zookeeper/zookeeper-3.4.8/bin/zkServer.sh status
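If the ensemble is healthy, one node reports itself as leader and the other two as followers; the status output looks roughly like this (exact wording varies across 3.4.x releases):
# Using config: /opt/zookeeper/zookeeper-3.4.8/bin/../conf/zoo.cfg
# Mode: follower    (one of the three nodes will instead report: Mode: leader)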
4. Install the Kafka cluster
- Extract into /opt/kafka/kafka_2.11-0.10.1.0
tar -zxvf kafka_2.11-0.10.1.0.tgz
cd /opt/kafka/kafka_2.11-0.10.1.0
- Edit config/server.properties; the key changes are:
broker.id=1
host.name=172.31.10.1
log.dirs=/var/logs/data/kafka
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka
Note that broker.id must differ on every server; it has to be unique across the whole cluster. The /kafka suffix on zookeeper.connect is an optional chroot path that keeps all Kafka znodes under one ZooKeeper subtree.
The updated server.properties:
############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=1

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=172.31.10.1

# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = security_protocol://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/var/logs/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# These are comma-separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
- Sync the installation to the other servers, changing broker.id (and host.name) on each; a sketch of this step follows
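One way to do the sync, assuming passwordless SSH and the same /opt/kafka layout on every node (the sed expressions are illustrative; double-check the resulting files by hand):
# Run on 172.31.10.1; adjust broker.id and host.name per node after copying
scp -r /opt/kafka/kafka_2.11-0.10.1.0 172.31.10.2:/opt/kafka/
ssh 172.31.10.2 "sed -i 's/^broker.id=1/broker.id=2/;s/^host.name=.*/host.name=172.31.10.2/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties"
scp -r /opt/kafka/kafka_2.11-0.10.1.0 172.31.10.3:/opt/kafka/
ssh 172.31.10.3 "sed -i 's/^broker.id=1/broker.id=3/;s/^host.name=.*/host.name=172.31.10.3/' /opt/kafka/kafka_2.11-0.10.1.0/config/server.properties"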
- Start and verify Kafka
cd /opt/kafka/kafka_2.11-0.10.1.0
nohup bin/kafka-server-start.sh config/server.properties &
Create a topic. If the topic is created successfully, the cluster installation is complete; you can also run jps to check whether the Kafka process is up.
/opt/kafka/kafka_2.11-0.10.1.0/bin/kafka-topics.sh --create --zookeeper 172.31.10.1:2181,172.31.10.2:2181,172.31.10.3:2181/kafka --replication-factor 3 --partitions 1 --topic test
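For an end-to-end check, the console clients bundled with 0.10.1.0 can push a message through the new topic (run the producer and consumer in separate terminals):
# jps should list a Kafka process on every broker
jps
# Terminal 1: type a line and press Enter to send it
bin/kafka-console-producer.sh --broker-list 172.31.10.1:9092 --topic test
# Terminal 2: the line should appear here
bin/kafka-console-consumer.sh --bootstrap-server 172.31.10.1:9092 --topic test --from-beginning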
That completes the distributed Kafka cluster installation; later articles will dig into other aspects of Kafka.