Integrating Flume with Kafka
 
Flume: a highly available, highly reliable, distributed system for collecting, aggregating, and transporting massive amounts of log data.
Kafka: a distributed streaming platform.

Goal: use Flume to collect business logs and ship them to Kafka.
 
I. Install and Deploy Kafka

Download

Kafka 1.0.0 was the latest stable release at the time of writing. Downloads can be verified against the checksums and KEYS published on the Apache Kafka site.

Kafka is built for multiple versions of Scala. This only matters if you are using Scala and want a build matching the Scala version you use; otherwise any build works (2.11 is recommended).
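For example, the 2.11 build can be fetched from the Apache archive (the URL below assumes the standard archive layout; a closer mirror may be faster):

# wget https://archive.apache.org/dist/kafka/1.0.0/kafka_2.11-1.0.0.tgz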
 
1. Extract: tar zxvf kafka_2.11-1.0.0.tgz
 
2. Move it into place: mv kafka_2.11-1.0.0 /usr/local/kafka2.11 (the remaining commands are run from this directory)
 
3. Start ZooKeeper (Kafka depends on it for broker and topic metadata).
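For a single-node test, the ZooKeeper instance bundled with Kafka is sufficient (a production deployment would run a separate ensemble):

# nohup bin/zookeeper-server-start.sh config/zookeeper.properties &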
 
4. Start Kafka:
#nohup bin/kafka-server-start.sh config/server.properties &
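To confirm the broker came up, check that the Kafka JVM is running (the startup log captured in nohup.out should also end with a "started" message):

# jps | grep Kafka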
 

5. Create a topic:

#bin/kafka-topics.sh --create --zookeeper localhost:2181 --partitions 1 --replication-factor 1 --topic test
Created topic "test".
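The partition and replica assignment of the new topic can be inspected with:

# bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test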

6. List topics:

# bin/kafka-topics.sh --list --zookeeper localhost:2181
__consumer_offsets
test

7. Test producing messages:

#bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Type: my test
 
8. Test consuming messages:
#bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
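The line typed into the producer in step 7 should be echoed back:

my test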
 
 
II. Install and Deploy Flume
 
1. Download Flume:
 

Download

Apache Flume is distributed under the Apache License, version 2.0. Download the binary tarball, apache-flume-1.8.0-bin.tar.gz, from a mirror, and verify the integrity of the file against the PGP signature (.asc) or checksums (.md5, .sha1) hosted on the main distribution server.
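As with Kafka, the release can also be fetched from the Apache archive:

# wget https://archive.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz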
 
 
2. Extract: tar zxvf apache-flume-1.8.0-bin.tar.gz
 
3. Move it into place: mv apache-flume-1.8.0-bin /usr/local/flume1.8
 

4. Prerequisites:

Install Java, then set the Java and Flume environment variables by adding the following to `/etc/profile`:

export JAVA_HOME=/usr/java/jdk1.8.0_65
export FLUME_HOME=/usr/local/flume1.8
export PATH=$PATH:$JAVA_HOME/bin:$FLUME_HOME/bin

Run source /etc/profile to apply the changes.
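If the variables took effect, the Flume launcher is now on the PATH and reports its version:

# flume-ng version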

5. Create the log file to be collected: /tmp/logs/kafka.log
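Create the directory and file up front so that tail -F and the test script in step 8 have something to work with:

# mkdir -p /tmp/logs
# touch /tmp/logs/kafka.log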
6. Configure Flume

Copy the configuration templates; the agent configuration goes into conf/kafka.properties, the file the startup command in step 7 loads:

# cp conf/flume-conf.properties.template conf/kafka.properties
# cp conf/flume-env.sh.template conf/flume-env.sh

Edit conf/kafka.properties as follows:
agent.sources = s1
agent.channels = c1
agent.sinks = k1

agent.sources.s1.type=exec
# where the logs are collected from
agent.sources.s1.command=tail -F /tmp/logs/kafka.log
agent.sources.s1.channels=c1

agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100

agent.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address
agent.sinks.k1.brokerList=localhost:9092
# Kafka topic
agent.sinks.k1.topic=test
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder

agent.sinks.k1.channel=c1
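brokerList, topic, and serializer.class are legacy (pre-Flume-1.7) sink property names; Flume 1.8 still accepts brokerList and topic with a deprecation warning, while serializer.class is a Kafka 0.8-era producer setting that the newer client ignores. The current equivalents, if preferred, are:

agent.sinks.k1.kafka.bootstrap.servers=localhost:9092
agent.sinks.k1.kafka.topic=test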

Verification

7. Start the Flume agent:

# bin/flume-ng agent --conf ./conf/ -f conf/kafka.properties -Dflume.root.logger=DEBUG,console -n agent
Run logs are written to the logs directory; alternatively, the -Dflume.root.logger=INFO,console option (or DEBUG,console, as above) starts the agent in the foreground and prints its logs to the console, which is the easiest way to diagnose problems when the service misbehaves.
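Once the agent looks healthy, it can be backgrounded in the same style as the Kafka services:

# nohup bin/flume-ng agent --conf ./conf/ -f conf/kafka.properties -n agent &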

8. Create a test log generator, log_producer_test.sh:

#!/bin/bash
# append 1001 numbered test lines to the file Flume is tailing
for ((i=0; i<=1000; i++)); do
    echo "kafka_flume_test-$i" >> /tmp/logs/kafka.log
done
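Make it executable before running it:

# chmod +x log_producer_test.sh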
 
9. Generate the logs:
./log_producer_test.sh
 
Watch the Kafka consumer to confirm the log lines are flowing through.
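With the console consumer from section I, step 8 still running, the generated lines should stream through:

kafka_flume_test-0
kafka_flume_test-1
...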
