Notes on Setting Up a Kafka Cluster on Linux
Environment preparation
ZooKeeper cluster environment
Kafka is a distributed message queue that relies on ZooKeeper for coordination, so a standalone or clustered ZooKeeper environment must already be in place. Three servers are used (make sure the hostnames resolve; see the sketch below the list):
172.16.18.198 k8s-n1
172.16.18.199 k8s-n2
172.16.18.200 k8s-n3
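The brokers are addressed by hostname throughout, so every node must be able to resolve k8s-n1, k8s-n2 and k8s-n3. A minimal sketch, assuming the names are not already served by DNS, is to map them in /etc/hosts on all three machines:
cat >> /etc/hosts <<EOF
172.16.18.198 k8s-n1
172.16.18.199 k8s-n2
172.16.18.200 k8s-n3
EOF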
- Download the Kafka package
Download from http://kafka.apache.org/downloads. At the time of writing the latest release is 2.2.0; I used kafka_2.11-2.2.0.tgz.
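If you would rather download directly on the servers, a sketch using the Apache archive (URL assumed; pick a closer mirror if you prefer) is:
wget https://archive.apache.org/dist/kafka/2.2.0/kafka_2.11-2.2.0.tgz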
Installing the Kafka cluster
1. Upload the archive to all three servers and extract it into /opt/:
tar -zxvf kafka_2.11-2.2.0.tgz -C /opt/
ln -s /opt/kafka_2.11-2.2.0 /opt/kafka
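If the archive was uploaded to only one node, a sketch for pushing it to the other two before repeating the extraction there (assumes root SSH access between the nodes):
for host in k8s-n2 k8s-n3; do
  scp kafka_2.11-2.2.0.tgz ${host}:/root/
done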
2. Edit config/server.properties on k8s-n1 (apart from broker.id, listeners, advertised.listeners, log.dirs, zookeeper.connect and delete.topic.enable, the settings below are left at the stock 2.2.0 defaults):
############################# Server Basics #############################
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################
# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://k8s-n1:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://k8s-n1:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3
# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8
# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400
# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400
# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################
# A comma separated list of directories under which to store log files
log.dirs=/var/applog/kafka/

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################
# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
log.flush.interval.messages=10000
# The maximum amount of time a message can sit in a log before we force a flush
log.flush.interval.ms=1000

############################# Log Retention Policy #############################
# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168
# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824
# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824
# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################
# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=k8s-n1:2181,k8s-n2:2181,k8s-n3:2181
# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################
# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0
delete.topic.enable=true
Copy the same configuration to k8s-n2 and k8s-n3; only broker.id, listeners and advertised.listeners differ per node:
[root@k8s-n2 config]# cat server.properties
broker.id=1
listeners=PLAINTEXT://k8s-n2:9092
advertised.listeners=PLAINTEXT://k8s-n2:9092
[root@k8s-n3 config]# cat server.properties
broker.id=2
listeners=PLAINTEXT://k8s-n3:9092
advertised.listeners=PLAINTEXT://k8s-n3:9092
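A sketch of distributing and patching the file from k8s-n1 instead of editing each copy by hand (paths and root SSH access as assumed above):
scp /opt/kafka/config/server.properties k8s-n2:/opt/kafka/config/
ssh k8s-n2 "sed -i 's/^broker.id=0/broker.id=1/; s/k8s-n1:9092/k8s-n2:9092/g' /opt/kafka/config/server.properties"
scp /opt/kafka/config/server.properties k8s-n3:/opt/kafka/config/
ssh k8s-n3 "sed -i 's/^broker.id=0/broker.id=2/; s/k8s-n1:9092/k8s-n3:9092/g' /opt/kafka/config/server.properties"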
- Add environment variables: append the following to /etc/profile on all three nodes
export KAFKA_HOME=/opt/kafka_2.11-2.2.0
export PATH=$PATH:$KAFKA_HOME/bin
Run source /etc/profile to reload it.
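Before starting the brokers it is worth checking that the scripts are now on the PATH and that a Java runtime is available (Kafka 2.2 requires Java 8 or newer); a quick check:
which kafka-server-start.sh
java -version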
- Start Kafka on each of the three nodes (the relative path assumes you run the command from the Kafka home directory):
kafka-server-start.sh config/server.properties &
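To confirm that all three brokers registered themselves in ZooKeeper, one sketch is to list the broker ids with the zookeeper-shell script that ships with Kafka:
zookeeper-shell.sh k8s-n1:2181 ls /brokers/ids
# with the three broker.id values above, this should print [0, 1, 2]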
Testing the ZooKeeper + Kafka cluster
1. Create a topic:
kafka-topics.sh --create --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181 --replication-factor 3 --partitions 3 --topic test
2. Describe the topic:
kafka-topics.sh --describe --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181 --topic test
3. List topics:
kafka-topics.sh --list --zookeeper k8s-n1:2181,k8s-n2:2181,k8s-n3:2181
test
4. Create a producer:
kafka-console-producer.sh --broker-list k8s-n1:9092 --topic test
hello
5. Create a consumer:
kafka-console-consumer.sh --bootstrap-server k8s-n1:9092 --topic test --from-beginning
hello
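As an extra check, the console consumer registers a consumer group that can be listed from any broker (the group name is auto-generated, so yours will differ):
kafka-consumer-groups.sh --bootstrap-server k8s-n1:9092 --list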
At this point, the Kafka cluster setup is complete.
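To stop a broker cleanly later on, Kafka also ships a stop script; run it on each node:
kafka-server-stop.sh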