
Environment Preparation

  1. ZooKeeper cluster environment
    Kafka is a distributed message queue that relies on ZooKeeper as its registry, so a standalone or clustered ZooKeeper environment must already be available (a quick health check is sketched after this list).

  2. Three servers:

172.16.18.198 k8s-n1
172.16.18.199 k8s-n2
172.16.18.200 k8s-n3

  3. Download the Kafka package

Download from http://kafka.apache.org/downloads. At the time of writing the latest Kafka release is 2.2.0; the package used here is kafka_2.11-2.2.0.tgz.
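
Before going further, it is worth confirming that ZooKeeper is actually up on all three nodes. A minimal check, assuming a standard ZooKeeper install with zkServer.sh on the PATH and the three hostnames above resolvable (e.g. via /etc/hosts); note that on newer ZooKeeper versions the four-letter-word probe may need to be whitelisted:

# Run on each node; expect one "leader" and two "follower" reports
zkServer.sh status

# Or probe remotely (assumes nc is installed); expect "imok"
echo ruok | nc k8s-n1 2181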

Installing the Kafka Cluster

1. Upload the tarball to all three servers and extract it to /opt/

tar -zxvf kafka_2.11-2.2.0.tgz -C /opt/
cd /opt && ln -s kafka_2.11-2.2.0 kafka
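
If the tarball is only on one machine, it can be pushed out and unpacked remotely instead of uploading it three times. A sketch, assuming passwordless SSH from k8s-n1 to the other two nodes (paths follow the step above):

# Run from the directory holding the tarball on k8s-n1
for host in k8s-n2 k8s-n3; do
  scp kafka_2.11-2.2.0.tgz ${host}:/tmp/
  ssh ${host} "tar -zxvf /tmp/kafka_2.11-2.2.0.tgz -C /opt/ && ln -sf /opt/kafka_2.11-2.2.0 /opt/kafka"
done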

2. Edit config/server.properties

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://k8s-n1:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://k8s-n1:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/var/applog/kafka/

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=k8s-n1:2181,k8s-n2:2181,k8s-n3:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

delete.topic.enable=true
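
One thing to watch: log.dirs points at /var/applog/kafka/, and the broker will fail to start if that directory cannot be created or written by the user running Kafka. Creating it up front on every node avoids the surprise:

# Run on all three nodes (add a chown if Kafka runs as a dedicated user)
mkdir -p /var/applog/kafka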

Copy the file to k8s-n2 and k8s-n3; only broker.id and the listener hostnames differ per node:

[root@k8s-n2 config]# cat server.properties
broker.id=1
listeners=PLAINTEXT://k8s-n2:9092
advertised.listeners=PLAINTEXT://k8s-n2:9092

[root@k8s-n3 config]# cat server.properties
broker.id=2
listeners=PLAINTEXT://k8s-n3:9092
advertised.listeners=PLAINTEXT://k8s-n3:9092
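
Rather than hand-editing each copy, the two remaining files can be generated from the k8s-n1 version, since only broker.id and the listener hostname change. A minimal sketch, again assuming passwordless SSH (node names and ids are the ones used throughout this post):

# Run from /opt/kafka/config on k8s-n1
for i in 2 3; do
  sed -e "s/broker.id=0/broker.id=$((i-1))/" \
      -e "s/k8s-n1:9092/k8s-n${i}:9092/g" server.properties \
    | ssh k8s-n${i} "cat > /opt/kafka/config/server.properties"
done

Matching on the :9092 suffix keeps zookeeper.connect, which also mentions k8s-n1 but on port 2181, untouched.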
3. Add environment variables in /etc/profile on all three nodes:

export KAFKA_HOME=/opt/kafka_2.11-2.2.0
export PATH=$PATH:$KAFKA_HOME/bin

Run source /etc/profile to reload it.

4. Start Kafka (run on each of the three nodes):

kafka-server-start.sh /opt/kafka/config/server.properties &
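
To confirm that all three brokers started and registered, ZooKeeper can be queried directly with the zookeeper-shell.sh script that ships with Kafka:

zookeeper-shell.sh k8s-n1:2181 ls /brokers/ids
# the last line of output should be: [0, 1, 2]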

Testing the ZooKeeper + Kafka Cluster

1. Create a topic:

kafka-topics.sh --create --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181 --replication-factor 3 --partitions 3 --topic test

2. Describe the topic:

kafka-topics.sh --describe --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181 --topic test

3. List topics:

kafka-topics.sh --list --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181
test

4. Start a console producer and type a message:

kafka-console-producer.sh --broker-list k8s-n1:9092 --topic test
hello

5. Start a console consumer and verify the message arrives:

kafka-console-consumer.sh --bootstrap-server k8s-n1:9092 --topic test --from-beginning
hello
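
As a final sanity check on replication, stop one broker and describe the topic again: with replication-factor 3 the topic stays fully usable, and the dead broker's id drops out of the Isr column. A sketch, assuming the cluster is running as set up above:

# On k8s-n3: stop the local broker
kafka-server-stop.sh

# On k8s-n1: broker 2 should disappear from the Isr lists, and any
# partitions it led should elect a new leader
kafka-topics.sh --describe --zookeeper k8s-n1:2181 --topic test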

At this point, the Kafka cluster setup is complete.
