051 Kafka Installation
When revisiting this later, I found this article to be quite good: https://www.cnblogs.com/z-sm/p/5691760.html
I: Prerequisites
1. Installation requirements
Java, Scala
ZooKeeper
Kafka
2. Version used
The version used here is 0.8.2.1.
------------------
II: Pseudo-distributed installation
1. Extract the archive
kafka_2.10-0.8.2.1
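A minimal sketch of the extraction, assuming the tarball was downloaded to /opt/softwares (an assumed path; adjust to wherever you saved it):

tar -zxvf /opt/softwares/kafka_2.10-0.8.2.1.tgz -C /opt/modules/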
2. Copy server.properties
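Since pseudo-distributed mode runs four brokers on one machine, make one copy of the sample configuration per broker; the server0–server3 file names below are an assumed naming convention:

cd /opt/modules/kafka_2.10-0.8.2.1/config
cp server.properties server0.properties
cp server.properties server1.properties
cp server.properties server2.properties
cp server.properties server3.properties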
3. Modify the four configuration files in turn
According to the official documentation, three of these configuration items are required: broker.id, log.dirs, and zookeeper.connect.
The main items to configure are:
broker.id=0 : unique integer identifying the broker
port=9092 : port the broker listens on
host.name=linux-hadoop01.ibeifeng.com : hostname the broker binds to
log.dirs=/opt/modules/kafka_2.10-0.8.2.1/data/0 : directory where Kafka stores its data
zookeeper.connect=linux-hadoop01.ibeifeng.com:2181/kafka : ZooKeeper connection string used for metadata management
Four files need to be modified, one per broker; only the first is shown here.
# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements. See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License. You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# see kafka.server.KafkaConfig for additional details and defaults

############################# Server Basics #############################

# The id of the broker. This must be set to a unique integer for each broker.
broker.id=0

############################# Socket Server Settings #############################

# The port the socket server listens on
port=9092

# Hostname the broker will bind to. If not set, the server will bind to all interfaces
host.name=linux-hadoop01.ibeifeng.com

# Hostname the broker will advertise to producers and consumers. If not set, it uses the
# value for "host.name" if configured. Otherwise, it will use the value returned from
# java.net.InetAddress.getCanonicalHostName().
#advertised.host.name=<hostname routable by clients>

# The port to publish to ZooKeeper for clients to use. If this is not set,
# it will publish the same port that the broker binds to.
#advertised.port=<port accessible by clients>

# The number of threads handling network requests
num.network.threads=3

# The number of threads doing disk I/O
num.io.threads=8

# The send buffer (SO_SNDBUF) used by the socket server
socket.send.buffer.bytes=102400

# The receive buffer (SO_RCVBUF) used by the socket server
socket.receive.buffer.bytes=102400

# The maximum size of a request that the socket server will accept (protection against OOM)
socket.request.max.bytes=104857600

############################# Log Basics #############################

# A comma separated list of directories under which to store log files
log.dirs=/opt/modules/kafka_2.10-0.8.2.1/data/0

# The default number of log partitions per topic. More partitions allow greater
# parallelism for consumption, but this will also result in more files across
# the brokers.
num.partitions=1

# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1

############################# Log Flush Policy #############################

# Messages are immediately written to the filesystem but by default we only fsync() to sync
# the OS cache lazily. The following configurations control the flush of data to disk.
# There are a few important trade-offs here:
#    1. Durability: Unflushed data may be lost if you are not using replication.
#    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
#    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
# The settings below allow one to configure the flush policy to flush data after a period of time or
# every N messages (or both). This can be done globally and overridden on a per-topic basis.

# The number of messages to accept before forcing a flush of data to disk
#log.flush.interval.messages=10000

# The maximum amount of time a message can sit in a log before we force a flush
#log.flush.interval.ms=1000

############################# Log Retention Policy #############################

# The following configurations control the disposal of log segments. The policy can
# be set to delete segments after a period of time, or after a given size has accumulated.
# A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
# from the end of the log.

# The minimum age of a log file to be eligible for deletion
log.retention.hours=168

# A size-based retention policy for logs. Segments are pruned from the log as long as the remaining
# segments don't drop below log.retention.bytes.
#log.retention.bytes=1073741824

# The maximum size of a log segment file. When this size is reached a new log segment will be created.
log.segment.bytes=1073741824

# The interval at which log segments are checked to see if they can be deleted according
# to the retention policies
log.retention.check.interval.ms=300000

# By default the log cleaner is disabled and the log retention policy will default to just delete segments after their retention expires.
# If log.cleaner.enable=true is set the cleaner will be enabled and individual logs can then be marked for log compaction.
log.cleaner.enable=false

############################# Zookeeper #############################

# Zookeeper connection string (see zookeeper docs for details).
# This is a comma separated host:port pairs, each corresponding to a zk
# server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
# You can also append an optional chroot string to the urls to specify the
# root directory for all kafka znodes.
zookeeper.connect=linux-hadoop01.ibeifeng.com:2181/kafka

# Timeout in ms for connecting to zookeeper
zookeeper.connection.timeout.ms=6000
4. Start ZooKeeper
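A minimal sketch, assuming a standalone ZooKeeper 3.4.5 installed at /opt/modules/zookeeper-3.4.5 (an assumed path; adjust to your installation):

cd /opt/modules/zookeeper-3.4.5
bin/zkServer.sh start
bin/zkServer.sh status    # should report Mode: standalone for a single-node ZK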
5. Enter zkCli
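Open the ZooKeeper command-line client against the server configured above:

bin/zkCli.sh -server linux-hadoop01.ibeifeng.com:2181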
6. Start Kafka
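A sketch of starting the brokers, one per configuration file; the server0–server3 names follow the assumed convention from step 2 (with a single broker, just pass config/server.properties):

cd /opt/modules/kafka_2.10-0.8.2.1
bin/kafka-server-start.sh config/server0.properties &
bin/kafka-server-start.sh config/server1.properties &
bin/kafka-server-start.sh config/server2.properties &
bin/kafka-server-start.sh config/server3.properties &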
Verify the running JVM processes with jps:
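Illustrative output (PIDs will differ; QuorumPeerMain is the ZooKeeper server, and each Kafka line is one broker process):

$ jps
2481 QuorumPeerMain
2756 Kafka
2890 Jps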
7. Now look at zkCli again
List the registered broker ids (see the commands below):
Look at the value stored for id 0:
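Because zookeeper.connect uses the /kafka chroot configured above, the brokers register under /kafka/brokers/ids; from inside the zkCli shell:

ls /kafka/brokers/ids
get /kafka/brokers/ids/0

The first command lists one ephemeral node per running broker, and the second prints a small JSON record containing the host and port that broker 0 advertised.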
8. Shutdown commands
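Stop the brokers first, then ZooKeeper:

cd /opt/modules/kafka_2.10-0.8.2.1
bin/kafka-server-stop.sh      # signals every Kafka broker process on this machine
cd /opt/modules/zookeeper-3.4.5
bin/zkServer.sh stop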