Download Kafka from http://kafka.apache.org/downloads

The directory structure of the extracted Kafka package looks like this:
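The exact contents vary by version, but an extracted Kafka distribution typically looks something like this:

bin/         # shell scripts for brokers, ZooKeeper, console producers/consumers, topic management
config/      # configuration files such as server.properties and zookeeper.properties
libs/        # the jars Kafka depends on
LICENSE  NOTICE  site-docs/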

Go into the bin directory. Before starting Kafka, ZooKeeper has to be started first.

./zookeeper-server-start.sh ../config/zookeeper.properties   # start ZooKeeper

The output shows that port 2181 has been bound; let's check it.
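For example, assuming netstat or lsof is available on the machine, either of these will show the listener:

netstat -tnlp | grep 2181    # look for a LISTEN entry on port 2181
lsof -i :2181                # shows the java process that holds the port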

ZooKeeper started successfully.

Next, start Kafka.

./kafka-server-start.sh ../config/server.properties   # similar to starting ZooKeeper

Kafka started successfully.

Create a topic

As mentioned earlier, Kafka is based on the publish/subscribe model: a producer must specify which topic a message is sent to, and a consumer generally does not consume every message either; it needs to specify which topic it subscribes to.
To create a topic named satori, the command looks like this (the remaining parameters will be introduced later):

./kafka-topics.sh --zookeeper localhost:2181 --create --topic satori --partitions 3 --replication-factor 1

To inspect the satori topic, simply change --create to --describe.
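For example (same options as the create command, only --create replaced by --describe):

./kafka-topics.sh --zookeeper localhost:2181 --describe --topic satori
# the output shows PartitionCount, ReplicationFactor, and one line per partition
# with its leader, replicas and ISR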

Start a consumer

./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic satori --from-beginning

Incidentally, in older Kafka versions you could also run ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic satori, but the --zookeeper option was deprecated in 0.9.0 and removed in later releases.

Start a producer

./kafka-console-producer.sh --broker-list localhost:9092 --topic satori

Let's look at the consumer again.
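A minimal sketch of the interaction, with the two commands above running in separate terminals: whatever is typed at the producer prompt shows up in the consumer.

# producer terminal
> hello
> satori

# consumer terminal
hello
satori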

-------------------------------------------------------------------

Everything above used the ZooKeeper bundled with Kafka; next we demonstrate how to start Kafka with a standalone ZooKeeper.

First install ZooKeeper: http://archive.cloudera.com/cdh5/cdh/5/

Then add it to the environment variables and source the profile to make them take effect.
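A minimal sketch, assuming ZooKeeper was unpacked to /usr/local/zookeeper (substitute your own path):

# e.g. in ~/.bash_profile or ~/.bashrc
export ZOOKEEPER_HOME=/usr/local/zookeeper
export PATH=$ZOOKEEPER_HOME/bin:$PATH

source ~/.bash_profile    # activate the changes in the current shell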

We also need to modify a configuration file; go into the conf directory.
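For a stock ZooKeeper distribution this usually means copying the sample config and pointing dataDir somewhere outside /tmp; the paths below are only placeholders:

cd $ZOOKEEPER_HOME/conf
cp zoo_sample.cfg zoo.cfg
# then edit zoo.cfg, e.g.:
# dataDir=/usr/local/zookeeper/data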

To start ZooKeeper, go into the bin directory.

zkServer.sh is the startup script.

Running it prints a usage message telling us to supply an argument, so we pick start.
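For example:

zkServer.sh start     # starts the ZooKeeper server in the background
zkServer.sh status    # shows whether it is running (standalone mode on a single node)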

We can also connect with zkCli.sh to check.
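For instance (with no -server argument zkCli.sh defaults to localhost:2181):

zkCli.sh -server localhost:2181
# once connected, try: ls /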

Connected successfully; this part is fairly simple.

Now let's go back to Kafka's configuration file.

Let's look at server.properties:

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
#    http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The address the socket server listens on. It will get the value returned from
# java.net.InetAddress.getCanonicalHostName() if not configured.
#   FORMAT:
#     listeners = listener_name://host_name:port
#   EXAMPLE:
#     listeners = PLAINTEXT://your.host.name:9092
#listeners=PLAINTEXT://:9092

# Hostname and port the broker will advertise to producers and consumers. If not set,
# it uses the value for "listeners" if configured.  Otherwise, it will use the value
# returned from java.net.InetAddress.getCanonicalHostName().
#advertised.listeners=PLAINTEXT://your.host.name:9092

# Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
#listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL

# The number of threads that the server uses for receiving requests from the network and sending responses to the network
num.network.threads=3

# The number of threads that the server uses for processing requests, which may include disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Internal Topic Settings  #############################
# The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
# For anything other than development testing, a value greater than 1 is recommended for to ensure availability such as 3.
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion due to age
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log unless the remaining
# segments drop below log.retention.bytes. Functions independently of log.retention.hours.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=localhost:2181

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000

############################# Group Coordinator Settings #############################

# The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
# The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
# The default value for this is 3 seconds.
# We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
# However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
group.initial.rebalance.delay.ms=0

  

Let's go through it step by step.

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

As mentioned before, a Kafka server is essentially a broker, and there can be multiple brokers; to tell them apart, every broker has an id, and that id must be unique.

No need to change this one; starting life from zero is just fine.

#listeners=PLAINTEXT://:9092

The port the broker listens on.

# A comma separated list of directories under which to store log files
log.dirs=/tmp/kafka-logs

log.dirs is where Kafka stores its log (message data) files. We shouldn't keep the default path either, because data under /tmp may be wiped on reboot; let's specify our own path, say a directory created under the Kafka installation.
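A sketch of the change, with /usr/local/kafka standing in for wherever Kafka is actually installed:

mkdir -p /usr/local/kafka/kafka-logs    # a directory under the Kafka installation
# then in config/server.properties:
# log.dirs=/usr/local/kafka/kafka-logs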

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

The default number of partitions per topic; partitions will be covered later, so leave it at 1 for now.

zookeeper.connect=localhost:2181

The ZooKeeper connection address.

That's it for the configuration file for now; let's start Kafka again. I forgot to mention that Kafka also needs to be added to the environment variables first.
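A minimal sketch, again assuming Kafka lives in /usr/local/kafka:

# e.g. in ~/.bash_profile
export KAFKA_HOME=/usr/local/kafka
export PATH=$KAFKA_HOME/bin:$PATH

source ~/.bash_profile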

Then run kafka-server-start.sh $KAFKA_HOME/config/server.properties

Next, create a topic.

kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic hello_satori

We specify a ZooKeeper address; the replication factor is 1 since we only have a single node, and the partition count is also 1.

Created successfully.

List the topics

kafka-topics.sh --list --zookeeper localhost:2181

It shows only one topic, our hello_satori.

Send messages

kafka-console-producer.sh --broker-list localhost:9092 --topic hello_satori

Messages are sent into the "basket", i.e. the broker, so we have to specify --broker-list; --topic specifies which topic the messages go to, and 9092 is the listening port configured in the config file.

Receive messages

kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic hello_satori --from-beginning


Note the --from-beginning flag here: if it is included, messages produced earlier will also be consumed.

Started successfully; the consumer now waits for the producer to produce data.
