1. Configuration file (download and extraction skipped)

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
# Globally unique broker ID; must not be duplicated across brokers
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured. Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
# Number of threads handling network requests
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
# Number of threads handling disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
# Socket send buffer size
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
# Socket receive buffer size
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
# Maximum request size the server will accept
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
# Path where Kafka log (message data) files are stored
log.dirs=/home/hadoop/logs/kafka

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
# Default number of partitions per topic
num.partitions=3

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
# Number of threads used to recover and clean up data under the data directories
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings #############################

# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended to ensure availability, such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
# Maximum time a segment file is retained; older segments are deleted
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
# ZooKeeper connection string
zookeeper.connect=hadoop2:2181,hadoop3:2181,hadoop4:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

server.properties
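
Before the first start it is worth making sure the log.dirs path from the config exists and is writable on every broker. Kafka can usually create the directory itself, but creating it up front avoids permission surprises; a minimal preparation step, assuming the path from the config above:

mkdir -p /home/hadoop/logs/kafka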

2. Starting the cluster

  Distribute the installation package to every node and change broker.id in each node's config file; the value must be unique per broker (see the sketch after these steps).

  Start the ZooKeeper cluster (hadoop2, hadoop3, hadoop4).

  Start Kafka on each node in turn:

bin/kafka-server-start.sh config/server.properties
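
A rough sketch of the distribution and startup steps as shell commands, assuming the installation lives at /home/hadoop/kafka and the brokers run on hadoop2, hadoop3 and hadoop4 (hostnames, paths and the id scheme are illustrative; adapt them to your cluster). The -daemon flag runs the broker in the background:

for i in 2 3 4; do
  # copy the installation to each node
  scp -r /home/hadoop/kafka hadoop$i:/home/hadoop/
  # give every broker a unique id
  ssh hadoop$i "sed -i 's/^broker.id=.*/broker.id=$i/' /home/hadoop/kafka/config/server.properties"
  # start the broker in daemon mode
  ssh hadoop$i "/home/hadoop/kafka/bin/kafka-server-start.sh -daemon /home/hadoop/kafka/config/server.properties"
done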

3. Common commands

  1 List all topics on the current server

bin/kafka-topics.sh --list --zookeeper hadoop2:2181

  2 Create a topic

bin/kafka-topics.sh --create --zookeeper hadoop2:2181 --replication-factor 1 --partitions 3 --topic first

   --replication-factor 1: number of replicas

   --partitions 3: number of partitions

   --topic first: topic name

  3 Delete a topic

bin/kafka-topics.sh --delete --zookeeper hadoop2:2181 --topic first

  This requires delete.topic.enable=true in server.properties; otherwise the topic is only marked for deletion (alternatively, restart the brokers for the deletion to take effect).
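
For reference, the corresponding line in server.properties on every broker:

delete.topic.enable=true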

  4 Send messages from the shell

bin/kafka-console-producer.sh --broker-list kafka1:9092 --topic first
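
  The producer then shows a > prompt; each line typed is sent to the topic as one message, and Ctrl+C exits.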

  5 Consume messages from the shell

bin/kafka-console-consumer.sh --zookeeper hadoop2:2181 --from-beginning --topic first
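
  The --zookeeper option belongs to the old consumer; on newer Kafka releases (0.10+) the console consumer connects to the brokers directly, e.g. (broker address as in step 4):

bin/kafka-console-consumer.sh --bootstrap-server kafka1:9092 --from-beginning --topic first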

  6 Check consumer offsets

bin/kafka-run-class.sh kafka.tools.ConsumerOffsetChecker --zookeeper hadoop2:2181 --group testGroup
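
  kafka.tools.ConsumerOffsetChecker is deprecated in newer Kafka releases; its replacement is kafka-consumer-groups.sh, e.g. (again assuming a broker at kafka1:9092):

bin/kafka-consumer-groups.sh --bootstrap-server kafka1:9092 --describe --group testGroup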

  7 Describe a topic

bin/kafka-topics.sh --topic first --describe --zookeeper hadoop2:2181
