Kafka 0.10.2 cluster deployment on CentOS 6.5
Install ZooKeeper: http://www.cnblogs.com/xiaojf/p/6572351.html
Install Scala: http://www.cnblogs.com/xiaojf/p/6568432.html
[root@m1 jar]# tar zxvf kafka_2.11-0.10.2.0.tgz -C ../
[root@m1 jar]# cd ..
[root@m1 soft]# ll
jar
jdk
kafka_2.11-0.10.2.0
scala-2.11.
tmp
zookeeper-3.4.
[root@m1 soft]# mv kafka_2.11-0.10.2.0 kafka
[root@m1 soft]# cd kafka/config/
[root@m1 config]# ll
connect-console-sink.properties
connect-console-source.properties
connect-distributed.properties
connect-file-sink.properties
connect-file-source.properties
connect-log4j.properties
connect-standalone.properties
consumer.properties
log4j.properties
producer.properties
server.properties
tools-log4j.properties
zookeeper.properties
[root@m1 config]# vi server.properties
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0
# Switch to enable topic deletion or not, default value is false
#delete.topic.enable=true
############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
# FORMAT:
# listeners = listener_name://host_name:port
# EXAMPLE:
# listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092
# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
# NOTE: set this to the node's real IP address, otherwise external Java clients cannot reach port 9092.
advertised.listeners=PLAINTEXT://192.168.59.130:9092
# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
# The number of threads handling network requests
num.network.threads=3
# The number of threads doing disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600
############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/usr/local/soft/tmp/kafka/logs
# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
# 1. Durability: Unflushed data may be lost if you are not using replication.
# 2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
# 3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.
# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.
# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=m1:2181,s1:2181,s2:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
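Since advertised.listeners must be reachable from outside the cluster, it is worth checking port 9092 from an external machine before going further. A minimal sketch, assuming nc (netcat) is available:
# run from a machine outside the cluster; a successful connection means the
# advertised listener is reachable (use each node's real IP in turn)
nc -vz 192.168.59.130 9092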
Create the log directory /usr/local/soft/tmp/kafka/logs
[root@m1 config]# mkdir -p /usr/local/soft/tmp/kafka/logs
Distribute the installation to the other servers and change broker.id on each node (a sketch follows below):
s1: broker.id=1
s2: broker.id=2
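A minimal distribution sketch, assuming passwordless ssh from m1 to s1/s2 and the same /usr/local/soft layout on each node; the loop rewrites broker.id and pre-creates the log directory. Remember that advertised.listeners must also be changed to each node's own IP, per the note in server.properties:
# copy the kafka directory to s1/s2 and adjust per-node settings
for i in 1 2; do
  scp -qr /usr/local/soft/kafka root@s$i:/usr/local/soft/
  ssh root@s$i "sed -i 's/^broker.id=0/broker.id=$i/' /usr/local/soft/kafka/config/server.properties"
  ssh root@s$i "mkdir -p /usr/local/soft/tmp/kafka/logs"
done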
Set the environment variables
[root@s2 config]# vi /etc/profile
export KAFKA_HOME=/usr/local/soft/kafka/
export PATH=$PATH:$KAFKA_HOME/bin
source /etc/profile
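After sourcing the profile, the Kafka scripts should resolve on the PATH:
which kafka-server-start.sh    # expect /usr/local/soft/kafka/bin/kafka-server-start.sh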
Start the Kafka server. First confirm that every node has created the /usr/local/soft/tmp/kafka/logs directory.
[root@s2 config]# kafka-server-start.sh /usr/local/soft/kafka/config/server.properties &
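The broker can also be started in the background with the -daemon flag, and its registration verified in ZooKeeper (zkCli.sh is assumed to be on the PATH from the ZooKeeper install):
kafka-server-start.sh -daemon /usr/local/soft/kafka/config/server.properties
jps | grep -i kafka                        # the Kafka broker process should appear
zkCli.sh -server m1:2181 ls /brokers/ids   # expect [0, 1, 2] once all three brokers are up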
Stop Kafka
[root@s2 config]# kafka-server-stop.sh
If the command above has no effect, kill -9 the Kafka process directly (see the sketch below).
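A sketch for finding the broker PID when the stop script fails (jps ships with the JDK):
jps -l | grep kafka.Kafka                            # prints "<pid> kafka.Kafka"
kill -9 $(jps -l | grep kafka.Kafka | awk '{print $1}')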
Create a topic
[root@s2 config]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
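The new topic can be verified with --list and --describe, using the same ZooKeeper connection string:
kafka-topics.sh --list --zookeeper localhost:2181
kafka-topics.sh --describe --zookeeper localhost:2181 --topic test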
Create a message producer
[root@s2 config]# kafka-console-producer.sh --broker-list localhost:9092 --topic test
xiaojf
Create a message consumer
[root@s1 ~]# kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
[2017-03-21 07:15:32,611] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-42 (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,649] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-42 in 38 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,649] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-4 (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,681] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-4 in 32 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,681] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-48 (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,718] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-48 in 37 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,718] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-10 (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,757] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-10 in 39 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,757] INFO [Group Metadata Manager on Broker 1]: Loading offsets and group metadata from __consumer_offsets-16 (kafka.coordinator.GroupMetadataManager)
[2017-03-21 07:15:32,805] INFO [Group Metadata Manager on Broker 1]: Finished loading offsets from __consumer_offsets-16 in 48 milliseconds. (kafka.coordinator.GroupMetadataManager)
xiaojf
Installation is now complete. Next, test an example that uses multiple brokers.
Create a topic; the replication factor cannot exceed the number of available Kafka nodes in the cluster.
[root@s2 config]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor ... --partitions 2 --topic my-replicated-topic
Error while executing topic command : replication factor larger than available brokers
ERROR org.apache.kafka.common.errors.InvalidReplicationFactorException: replication factor larger than available brokers
(kafka.admin.TopicCommand$)
[root@s2 config]# kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic my-replicated-topic
Created topic "my-replicated-topic".
Check the topic status
[root@s2 config]# kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic
Topic:my-replicated-topic    PartitionCount:2    ReplicationFactor:2    Configs:
    Topic: my-replicated-topic    Partition: 0    Leader: ...    Replicas: ...,...    Isr: ...,...
    Topic: my-replicated-topic    Partition: 1    Leader: ...    Replicas: ...,...    Isr: ...,...
partitions is 2, so there are two partition rows.
Replicas lists the Kafka nodes that hold a copy of the partition; the numbers are broker.id values.
Leader is the node acting as leader for the partition.
Isr is the set of in-sync replicas; if the current leader dies, a new leader is elected from this list in order (see the fault-tolerance sketch below).
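A quick fault-tolerance check: stop the broker that is currently listed as Leader, then describe the topic again (which node to stop depends on the Leader value printed above):
# run on the node that is currently the leader for a partition
kafka-server-stop.sh
# from any remaining node: the Leader should have moved and the Isr list shrunk
kafka-topics.sh --describe --zookeeper localhost:2181 --topic my-replicated-topic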
Create a message producer
[root@s2 config]# kafka-console-producer.sh --broker-list localhost:9092 --topic my-replicated-topic
xiaojf multi topic
Create a message consumer
[root@s2 ~]# kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic
xiaojf multi topic
Done.