Download Kafka from http://kafka.apache.org/downloads

The directory structure under the Kafka installation looks like this:
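For reference, a typical Kafka distribution contains roughly the following (exact contents vary by version):

bin/         # startup and management scripts (kafka-server-start.sh, kafka-topics.sh, ...)
config/      # configuration files (server.properties, zookeeper.properties, ...)
libs/        # jar dependencies
site-docs/   # bundled documentation
LICENSE
NOTICE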

Go into the bin directory. Before starting Kafka, we need to start ZooKeeper first.

./zookeeper-server-start.sh ../config/zookeeper.properties    # start ZooKeeper

The output shows it bound to port 2181; let's verify.
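One quick way to check, assuming netstat is available on the machine:

netstat -tlnp | grep 2181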

ZooKeeper started successfully.

Next, start Kafka:

./kafka-server-start.sh ../config/server.properties    # similar to how we started ZooKeeper

Kafka started successfully.

Creating a topic

As mentioned earlier, Kafka is based on the publish-subscribe model: a producer must specify which topic each message is sent to, and a consumer generally does not consume every message either; it has to specify which topic it subscribes to.
To create a topic named satori, you can write the following (the remaining parameters will be explained later):

./kafka-topics.sh --zookeeper localhost:2181 --create --topic satori --partitions 3 --replication-factor 1

To inspect the satori topic, simply change --create to --describe:
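Spelled out, based on the create command above:

./kafka-topics.sh --zookeeper localhost:2181 --describe --topic satori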

Starting a consumer

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic satori --from-beginning

As an aside, in older Kafka versions you could also run ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic satori, but this option was removed after 0.9.0.

Starting a producer

./kafka-console-producer.sh --broker-list localhost:9092 --topic satori

Now let's look at the consumer again.
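As a rough illustration of what to expect (the message contents here are made up), typing lines at the producer prompt:

>hello satori
>this is the first message

should make the same lines appear in the consumer terminal almost immediately:

hello satori
this is the first message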

-------------------------------------------------------------------

Everything above used the ZooKeeper bundled with Kafka. Next, we'll demonstrate how to start Kafka with a standalone ZooKeeper.

First install ZooKeeper; it can be downloaded from http://archive.cloudera.com/cdh5/cdh/5/

Then add it to the environment variables and activate them.
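A minimal sketch, assuming the tarball is extracted under /usr/local (both the tarball name and the paths are hypothetical); append the exports to ~/.bashrc and reload:

tar -zxvf zookeeper-3.4.5-cdh5.7.0.tar.gz -C /usr/local/
export ZOOKEEPER_HOME=/usr/local/zookeeper-3.4.5-cdh5.7.0
export PATH=$ZOOKEEPER_HOME/bin:$PATH
source ~/.bashrc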

We also need to modify a configuration file; go into the conf directory.
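For a standard ZooKeeper distribution, that file is conf/zoo.cfg, usually created by copying the shipped sample; the dataDir value below is only an example, the point being to move it away from /tmp:

cd $ZOOKEEPER_HOME/conf
cp zoo_sample.cfg zoo.cfg
# edit zoo.cfg, e.g.:
# dataDir=/usr/local/zookeeper/data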

To start ZooKeeper, go into the bin directory.

zkServer.sh is the startup script.

Running it bare just prints a usage hint asking for a subcommand; we pick start.
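So:

zkServer.sh start

zkServer.sh status and zkServer.sh stop work the same way.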

We can also connect with zkCli.sh to verify:
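For example:

zkCli.sh -server localhost:2181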

The connection succeeds; this part is fairly simple.

Now let's come back to Kafka's configuration file.

Let's look at server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

  

Let's go through it piece by piece.

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

As mentioned before, a Kafka server is essentially a broker, and there can be more than one broker. To tell them apart, every broker has an id, and the id must be unique.

No need to change this one; just start from zero, Re:Zero style.

#listeners=PLAINTEXT://:9092

The port the broker listens on.
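To bind an explicit address, uncomment it and fill in a host; localhost here is just an example:

listeners=PLAINTEXT://localhost:9092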

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

log.dirs is where Kafka stores its log files. We shouldn't keep this path either: /tmp may be wiped on reboot and everything would be lost. Let's specify our own; we'll just create a directory under the Kafka installation.
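For example, assuming Kafka is installed at /usr/local/kafka (a hypothetical path):

mkdir -p /usr/local/kafka/kafka-logs

and then in server.properties:

log.dirs=/usr/local/kafka/kafka-logs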

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

The number of partitions; we'll cover partitions later, so leave it at 1 for now.

zookeeper.connect=localhost:2181

The ZooKeeper address.

That's enough of the configuration file; let's start Kafka again. One thing I forgot to mention: we also need to add Kafka to the environment variables.
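A minimal sketch, again assuming the hypothetical install path from above; append to ~/.bashrc and reload:

export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH
source ~/.bashrc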

Then run kafka-server-start.sh $KAFKA_HOME/config/server.properties

Next, create a topic:

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello_satori

We specify a ZooKeeper address; the replication factor is 1, since we only have one node, and the partition count is also 1.

Created successfully.

List the topics:

kafka-topics.sh --list --zookeeper localhost:2181

It shows only one topic: our hello_satori.

Sending messages

kafka-console-producer.sh --broker-list localhost:9092 --topic hello_satori

We send messages into the "basket", so we need --broker-list to say which brokers to send to, and --topic to say which topic the messages go into; 9092 is the listening port we saw in the configuration file.

Receiving messages

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello_satori --from-beginning


Note the --from-beginning flag here: with it, previously produced data will be consumed as well.

Started successfully; the consumer now waits for the producer to produce data.
