For the ZooKeeper setup, see https://www.cnblogs.com/wintersoft/p/11128484.html

mkdir /opt/kafka -p
vim /opt/kafka/Dockerfile

FROM wurstmeister/kafka:2.12-2.3.0
EXPOSE 9092

sudo mkdir -p /var/log/kafka;sudo chmod -R 777 /var/log/kafka

vim /opt/kafka/docker-compose.yml

version: '2'
services:
  kafka:
    image: v-kafka
    container_name: kafka
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafkaserver
      KAFKA_ADVERTISED_PORT: 9092
      KAFKA_ZOOKEEPER_CONNECT: "zookeeperserver:2181"
    volumes:
      - /var/log/kafka/:/kafka
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      - "kafkaserver:192.168.0.101"
      - "zookeeperserver:192.168.0.101"

Build and start
cd /opt/kafka/
docker-compose build
docker-compose up -d --force-recreate
docker-compose down
docker-compose restart
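
To confirm the container is up before going further, a quick check (assuming the container name kafka from the compose file):

docker-compose ps
docker ps --filter name=kafka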

Check that the port is listening
netstat -anltp|grep 9092

View the logs
docker logs --tail="500" kafka
docker logs -f kafka

Enter the container
docker exec -it kafka /bin/bash

Pseudo-cluster (three brokers on a single host)

sudo mkdir -p /var/log/kafka/node1;sudo chmod -R 777 /var/log/kafka/node1
sudo mkdir -p /var/log/kafka/node2;sudo chmod -R 777 /var/log/kafka/node2
sudo mkdir -p /var/log/kafka/node3;sudo chmod -R 777 /var/log/kafka/node3

vim /opt/kafka/docker-compose.yml

version: '2'

services:
  kafka1:
    image: v-kafka1
    container_name: kafka1
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - 9011:9092
    environment:
      KAFKA_PORT: 9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafkaserver1:9011
      KAFKA_ADVERTISED_HOST_NAME: kafkaserver1
      KAFKA_ADVERTISED_PORT: 9011
      KAFKA_BROKER_ID: 1
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeperserver1:2181,zookeeperserver2:2182,zookeeperserver3:2183
      KAFKA_DELETE_TOPIC_ENABLE: "true"
    volumes:
      - /var/log/kafka/node1:/kafka
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      - "kafkaserver1:192.168.0.101"
      - "kafkaserver2:192.168.0.101"
      - "kafkaserver3:192.168.0.101"
      - "zookeeperserver1:192.168.0.101"
      - "zookeeperserver2:192.168.0.101"
      - "zookeeperserver3:192.168.0.101"
  kafka2:
    image: v-kafka2
    container_name: kafka2
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - 9012:9092
    environment:
      KAFKA_PORT: 9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafkaserver2:9012
      KAFKA_ADVERTISED_HOST_NAME: kafkaserver2
      KAFKA_ADVERTISED_PORT: 9012
      KAFKA_BROKER_ID: 2
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeperserver1:2181,zookeeperserver2:2182,zookeeperserver3:2183
      KAFKA_DELETE_TOPIC_ENABLE: "true"
    volumes:
      - /var/log/kafka/node2:/kafka
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      - "kafkaserver1:192.168.0.101"
      - "kafkaserver2:192.168.0.101"
      - "kafkaserver3:192.168.0.101"
      - "zookeeperserver1:192.168.0.101"
      - "zookeeperserver2:192.168.0.101"
      - "zookeeperserver3:192.168.0.101"
  kafka3:
    image: v-kafka3
    container_name: kafka3
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    ports:
      - 9013:9092
    environment:
      KAFKA_PORT: 9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafkaserver3:9013
      KAFKA_ADVERTISED_HOST_NAME: kafkaserver3
      KAFKA_ADVERTISED_PORT: 9013
      KAFKA_BROKER_ID: 3
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 2
      KAFKA_ZOOKEEPER_CONNECT: zookeeperserver1:2181,zookeeperserver2:2182,zookeeperserver3:2183
      KAFKA_DELETE_TOPIC_ENABLE: "true"
    volumes:
      - /var/log/kafka/node3:/kafka
      - /var/run/docker.sock:/var/run/docker.sock
    extra_hosts:
      - "kafkaserver1:192.168.0.101"
      - "kafkaserver2:192.168.0.101"
      - "kafkaserver3:192.168.0.101"
      - "zookeeperserver1:192.168.0.101"
      - "zookeeperserver2:192.168.0.101"
      - "zookeeperserver3:192.168.0.101"

Configuration key rule: add the KAFKA_ prefix, write the whole key in uppercase, and replace "." with "_". See the sketch after the examples below.

For example:
Increase the Kafka heap size: KAFKA_HEAP_OPTS=-Xmx4G -Xms4G
When setting KAFKA_LOG_DIRS=/kafka/logs, also mount the directory: volumes: - "./kafka3/logs:/kafka/logs"
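
A minimal sketch of how these look in a service's environment section; KAFKA_LOG_RETENTION_HOURS is only an illustrative mapping of log.retention.hours under the rule above, not part of the original configuration:

    environment:
      KAFKA_HEAP_OPTS: "-Xmx4G -Xms4G"        # broker JVM heap size
      KAFKA_LOG_DIRS: /kafka/logs             # maps to log.dirs
      KAFKA_LOG_RETENTION_HOURS: 168          # illustrative: maps to log.retention.hours
    volumes:
      - "./kafka3/logs:/kafka/logs"           # persist the log directory on the host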

For kafka-manager, APPLICATION_SECRET: "xxx" can be set in its environment section.
The value of KAFKA_LISTENERS should be the internal (private network) address.

If delete.topic.enable=true is not configured, deleting a topic is only a soft delete (the topic is merely marked for deletion).

If a topic has only been soft-deleted, the Java client will log:

WARN Error while fetching metadata with correlation id 0 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)

Another error you may see: org.apache.kafka.clients.NetworkClient : [Producer clientId=producer-3] 1 partitions have leader brokers without a matching listener, including [log-0]

The latter is usually caused by a wrong ZooKeeper ip:port in the Kafka configuration; after correcting it, the ZooKeeper data must be cleared before the cluster behaves normally again.
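
A hedged sketch of clearing the stale registration data in ZooKeeper, assuming a container named zookeeper and ZooKeeper 3.5+ (which provides deleteall); these paths and names are assumptions, and the deletes are destructive, so inspect first:

docker exec -it zookeeper zkCli.sh

Then, inside the zkCli shell:
ls /brokers/ids            (inspect the registered brokers first)
deleteall /brokers         (remove stale broker metadata)
deleteall /config/topics   (remove stale topic configuration)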

Copy the configuration out of the container
docker cp kafka1:/opt/kafka/config/ /opt/kafka/kafka1_config_bak/

kafka-manager only displays a cluster after its configuration has been added manually in the web UI.

Testing Kafka

Enter the container
docker exec -it kafka1 /bin/bash

Create a topic
/opt/kafka/bin/kafka-topics.sh --create --bootstrap-server 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013 --topic myTestTopic --partitions 3 --replication-factor 3
Note: the replication factor cannot exceed the number of brokers.

List the current topics
/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013

Run a console producer against the topic myTestTopic that was just created
/opt/kafka/bin/kafka-console-producer.sh --broker-list 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013 --topic myTestTopic
Type any characters, then press Ctrl+C to exit.

Show the details of a specific topic
/opt/kafka/bin/kafka-topics.sh --describe --bootstrap-server 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013 --topic myTestTopic

Consume messages
/opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013 --topic myTestTopic --from-beginning
Press Ctrl+C to exit.

Delete the topic
/opt/kafka/bin/kafka-topics.sh --delete --bootstrap-server 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013 --topic myTestTopic
If the topic cannot be deleted, set KAFKA_DELETE_TOPIC_ENABLE: "true" in the container environment when starting it with Docker.
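
To confirm the topic is really gone rather than only marked for deletion, list the topics again:

/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server 192.168.0.101:9011,192.168.0.101:9012,192.168.0.101:9013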

