Quick Installation of a Kafka Cluster
Preface

This post provides a single shell script that installs Java, ZooKeeper, and Kafka on three CentOS servers and wires them into a cluster, followed by basic operations, a quick test, and a step-by-step walkthrough of the script.
Installation Steps

#!/bin/bash

# Modify the link if you want to download another version
KAFKA_DOWNLOAD_URL="https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz"

# Please use your own server IPs
SERVERS=("192.168.1.1" "192.168.1.2" "192.168.1.3")

ID=0
MACHINE_IP=$(hostname -i)
echo "Machine IP: ${MACHINE_IP}"

LENGTH=${#SERVERS[@]}
for (( i=0; i<${LENGTH}; i++ ));
do
    if [ "${SERVERS[$i]}" = "${MACHINE_IP}" ]; then
        ID=$((i+1))
    fi
done

echo "ID: ${ID}"
if [ "${ID}" -eq "0" ]; then
    echo "Machine IP does not match any entry in the server list"
    exit 1
fi

ZOOKEEPER_CONNECT=$(printf ",%s:2181" "${SERVERS[@]}")
ZOOKEEPER_CONNECT=${ZOOKEEPER_CONNECT:1}
echo "Zookeeper Connect: ${ZOOKEEPER_CONNECT}"

echo "---------- Update yum ----------"
yum update -y
yum install -y wget

echo "---------- Install java ----------"
yum -y install java-1.8.0-openjdk
java -version

echo "---------- Create kafka user & group ----------"
groupadd -r kafka
useradd -g kafka -r kafka -s /bin/false

echo "---------- Download kafka ----------"
cd /opt
wget ${KAFKA_DOWNLOAD_URL} -O kafka.tgz
mkdir -p kafka
tar -xzf kafka.tgz -C kafka --strip-components=1
chown -R kafka:kafka /opt/kafka

echo "---------- Install and start zookeeper ----------"
mkdir -p /data/zookeeper
chown -R kafka:kafka /data/zookeeper
echo "${ID}" > /data/zookeeper/myid

# zookeeper config
# https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_configuration
cat <<EOF > /opt/kafka/config/zookeeper-cluster.properties
# the directory where the snapshot is stored
dataDir=/data/zookeeper

# the port at which the clients will connect
clientPort=2181

# setting the number of client connections to unlimited
maxClientCnxns=0

# zookeeper heartbeat interval in milliseconds
tickTime=2000

# ticks allowed for initial synchronization
initLimit=10

# ticks that may pass between a request and an acknowledgement
syncLimit=5

# server IPs and internal ports of the zookeeper ensemble
EOF

for (( i=0; i<${LENGTH}; i++ ));
do
    INDEX=$((i+1))
    echo "server.${INDEX}=${SERVERS[$i]}:2888:3888" >> /opt/kafka/config/zookeeper-cluster.properties
done

# zookeeper.service
cat <<EOF > /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=Apache Zookeeper server (Kafka)
Documentation=http://zookeeper.apache.org
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper-cluster.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start zookeeper && systemctl enable zookeeper

echo "---------- Install and start kafka ----------"
mkdir -p /data/kafka
chown -R kafka:kafka /data/kafka

# kafka config
# https://kafka.apache.org/documentation/#configuration
cat <<EOF > /opt/kafka/config/server-cluster.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=${ID}

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://${MACHINE_IP}:9092

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=${ZOOKEEPER_CONNECT}/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=60000
EOF

# kafka.service
cat <<EOF > /usr/lib/systemd/system/kafka.service
[Unit]
Description=Apache Kafka server (broker)
Documentation=http://kafka.apache.org/documentation.html
Requires=network.target remote-fs.target
After=network.target remote-fs.target zookeeper.service

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server-cluster.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kafka && systemctl enable kafka
setup.sh
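The script must run as root on every node in SERVERS. A minimal roll-out sketch, assuming password-less root SSH from an admin machine (the hosts and paths here are illustrative, not part of the script itself):

# Copy setup.sh to each node and execute it as root
for host in 192.168.1.1 192.168.1.2 192.168.1.3; do
    scp setup.sh root@${host}:/root/setup.sh
    ssh root@${host} "bash /root/setup.sh"
done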
Basic Operations
# Start zookeeper
systemctl start zookeeper

# Stop zookeeper
systemctl stop zookeeper

# Restart zookeeper
systemctl restart zookeeper

# View zookeeper status and recent logs
systemctl status zookeeper -l

# Start kafka
systemctl start kafka

# Stop kafka
systemctl stop kafka

# Restart kafka
systemctl restart kafka

# View kafka status and recent logs
systemctl status kafka -l
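systemctl status -l only prints the most recent journal entries. Since the units above are Type=simple and log to stdout, their output lands in the systemd journal, where journalctl can show the full history:

# Follow kafka logs in real time (Ctrl-C to stop)
journalctl -u kafka -f

# Show all zookeeper logs since the current boot
journalctl -u zookeeper -b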
Quick Test
# Enter the kafka bin directory
cd /opt/kafka/bin/

# Create a topic
./kafka-topics.sh --create --topic test --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092

# Describe the topic
./kafka-topics.sh --topic test --describe --bootstrap-server localhost:9092

# Start a producer, then type some messages
./kafka-console-producer.sh --topic test --bootstrap-server localhost:9092

# Start a consumer to read the messages
./kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092

# Delete the topic
./kafka-topics.sh --topic test --delete --bootstrap-server localhost:9092
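The commands above use --replication-factor 1, so they would succeed even against a single broker. With all three brokers up, a replicated topic makes a better smoke test; --describe should show partition leaders and in-sync replicas spread across broker ids 1 to 3:

# Create a topic replicated across all three brokers and inspect it
./kafka-topics.sh --create --topic test-replicated --partitions 3 --replication-factor 3 --bootstrap-server localhost:9092
./kafka-topics.sh --topic test-replicated --describe --bootstrap-server localhost:9092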
Script Walkthrough
1. The following lines set the Kafka version to download and the list of server IPs; adjust both to your environment.

# Modify the link if you want to download another version
KAFKA_DOWNLOAD_URL="https://dlcdn.apache.org/kafka/3.1.0/kafka_2.13-3.1.0.tgz"

# Please use your own server IPs
SERVERS=("192.168.1.1" "192.168.1.2" "192.168.1.3")
2. The following lines derive the zookeeper id and kafka broker id, and build the zookeeper connection string used in the kafka configuration. The local IP is matched against the server list: if it equals the first server IP the ID is 1, the second gives 2, the third gives 3, and so on. If the local IP is not in the list, the script aborts.

ID=0
MACHINE_IP=$(hostname -i)
echo "Machine IP: ${MACHINE_IP}"

LENGTH=${#SERVERS[@]}
for (( i=0; i<${LENGTH}; i++ ));
do
    if [ "${SERVERS[$i]}" = "${MACHINE_IP}" ]; then
        ID=$((i+1))
    fi
done

echo "ID: ${ID}"
if [ "${ID}" -eq "0" ]; then
    echo "Machine IP does not match any entry in the server list"
    exit 1
fi

ZOOKEEPER_CONNECT=$(printf ",%s:2181" "${SERVERS[@]}")
ZOOKEEPER_CONNECT=${ZOOKEEPER_CONNECT:1}
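The printf call is a compact array join: the format string is applied once per element, which produces a string with a leading comma, and the ${ZOOKEEPER_CONNECT:1} substring expansion drops that first character. For the example server list:

SERVERS=("192.168.1.1" "192.168.1.2" "192.168.1.3")
JOINED=$(printf ",%s:2181" "${SERVERS[@]}")
echo "${JOINED}"    # ,192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181
echo "${JOINED:1}"  # 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181

Note that hostname -i can return more than one address on a multi-homed host; the string comparison then never matches and the script exits, so each server must resolve its hostname to exactly the address listed in SERVERS.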
3. Update the yum repositories and install the wget download tool.
yum update -y
yum install -y wget
4. Install Java 8.
yum -y install java-1.8.0-openjdk
java -version
5. Create the kafka user and group.
groupadd -r kafka
useradd -g kafka -r kafka -s /bin/false
6. Download and extract the Kafka binaries.
cd /opt
wget ${KAFKA_DOWNLOAD_URL} -O kafka.tgz
mkdir -p kafka
tar -xzf kafka.tgz -C kafka --strip-components=1
chown -R kafka:kafka /opt/kafka
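Apache publishes a SHA-512 checksum for each release artifact, so verifying the download is a cheap safeguard. A sketch, assuming the .sha512 file sits next to the archive on the mirror (the Apache checksum file is produced by gpg --print-md, so compare the digests by eye rather than with sha512sum -c):

cd /opt
wget ${KAFKA_DOWNLOAD_URL}.sha512 -O kafka.tgz.sha512
sha512sum kafka.tgz    # compare this digest with the one below
cat kafka.tgz.sha512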
7. Create the zookeeper data directory and write the zookeeper id.
mkdir -p /data/zookeeper
chown -R kafka:kafka /data/zookeeper
echo "${ID}" > /data/zookeeper/myid
8. Generate the zookeeper configuration file. For details, see: https://zookeeper.apache.org/doc/r3.1.2/zookeeperAdmin.html#sc_configuration
cat <<EOF > /opt/kafka/config/zookeeper-cluster.properties
# the directory where the snapshot is stored
dataDir=/data/zookeeper

# the port at which the clients will connect
clientPort=2181

# setting the number of client connections to unlimited
maxClientCnxns=0

# zookeeper heartbeat interval in milliseconds
tickTime=2000

# ticks allowed for initial synchronization
initLimit=10

# ticks that may pass between a request and an acknowledgement
syncLimit=5

# server IPs and internal ports of the zookeeper ensemble
EOF

for (( i=0; i<${LENGTH}; i++ ));
do
    INDEX=$((i+1))
    echo "server.${INDEX}=${SERVERS[$i]}:2888:3888" >> /opt/kafka/config/zookeeper-cluster.properties
done
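For the example SERVERS list the loop appends the lines below, where 2888 is the port followers use to connect to the leader and 3888 is the leader-election port:

server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888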
9. Create the zookeeper systemd unit file, then start zookeeper and enable it at boot.
cat <<EOF > /usr/lib/systemd/system/zookeeper.service
[Unit]
Description=Apache Zookeeper server (Kafka)
Documentation=http://zookeeper.apache.org
Requires=network.target remote-fs.target
After=network.target remote-fs.target

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/zookeeper-server-start.sh /opt/kafka/config/zookeeper-cluster.properties
ExecStop=/opt/kafka/bin/zookeeper-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start zookeeper && systemctl enable zookeeper
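Once zookeeper is running on all three nodes, each node should report itself as leader or follower. One way to check is the srvr four-letter command (whitelisted by default in the ZooKeeper version bundled with recent Kafka releases; nc may need to be installed first):

echo srvr | nc localhost 2181 | grep Mode   # "Mode: leader" or "Mode: follower"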
10. Create the kafka data directory.
mkdir -p /data/kafka
chown -R kafka:kafka /data/kafka
11. Generate the kafka configuration file. For details, see: https://kafka.apache.org/documentation/#configuration
cat <<EOF > /opt/kafka/config/server-cluster.properties
# The id of the broker. This must be set to a unique integer for each broker.
broker.id=${ID}

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
advertised.listeners=PLAINTEXT://${MACHINE_IP}:9092

# A comma separated list of directories under which to store log files
log.dirs=/data/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state".
# For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated list of host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=${ZOOKEEPER_CONNECT}/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=60000
EOF
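Because zookeeper.connect appends a /kafka chroot, all Kafka znodes live under /kafka. Once the brokers are up, their registrations can be inspected with the zookeeper-shell tool shipped in the Kafka distribution:

# Should print [1, 2, 3] once all three brokers have started
/opt/kafka/bin/zookeeper-shell.sh localhost:2181 ls /kafka/brokers/ids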
12. Create the kafka systemd unit file, then start kafka and enable it at boot.
cat <<EOF > /usr/lib/systemd/system/kafka.service
[Unit]
Description=Apache Kafka server (broker)
Documentation=http://kafka.apache.org/documentation.html
Requires=network.target remote-fs.target
After=network.target remote-fs.target zookeeper.service

[Service]
Type=simple
User=kafka
Group=kafka
ExecStart=/opt/kafka/bin/kafka-server-start.sh /opt/kafka/config/server-cluster.properties
ExecStop=/opt/kafka/bin/kafka-server-stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl start kafka && systemctl enable kafka
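As a quick check that the freshly started broker accepts client connections, kafka-broker-api-versions.sh (shipped with Kafka) connects to the given bootstrap server and dumps the API versions each broker in the cluster supports:

/opt/kafka/bin/kafka-broker-api-versions.sh --bootstrap-server localhost:9092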
Summary

Following the steps above, you can get a Kafka cluster up and running quickly. If you run into any problems, feel free to leave a comment on this article.