Flume Notes -- Examples (Using Configuration Files)
Reference for the examples: http://www.aboutyun.com/thread-8917-1-1.html
Custom sink implementation and property injection: http://www.coderli.com/flume-ng-sink-properties/
Custom interceptors: http://blog.csdn.net/xiao_jun_0820/article/details/38333171
Custom Kafka sink: www.itnose.net/detail/6187977.html
1. Sending a specified file with Avro
(1) Create avro.conf under the conf directory and add the following configuration:
vim /usr/local/hadoop/apache-flume-1.6.0-bin/conf/avro.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
(2) Start Flume agent a1
cd into bin and run:
./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console
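Before sending anything, you can sanity-check that the Avro source is actually listening on its configured port (assuming netstat is available on the box):
netstat -lnt | grep 4141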
(3) Create a log file to send and write some text into it
Create a file named log.00 under /usr/local/hadoop/apache-flume-1.6.0-bin and write "hahahahah" into it.
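For example:
echo "hahahahah" > /usr/local/hadoop/apache-flume-1.6.0-bin/log.00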
(4) Send the file with avro-client
Open a second terminal, cd into bin, and run:
./flume-ng avro-client -c . -H localhost -p 4141 -F /usr/local/hadoop/apache-flume-1.6.0-bin/log.00
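When no -F file is given, avro-client reads from standard input, so you can also pipe data straight to the source; for example:
echo "hello flume" | ./flume-ng avro-client -c . -H localhost -p 4141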
In the first console you should see log output like the following (timestamps and the client's ephemeral port are omitted here), indicating the file was delivered:
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client port> => /127.0.0.1:4141] OPEN
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client port> => /127.0.0.1:4141] BOUND: /127.0.0.1:4141
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client port> => /127.0.0.1:4141] CONNECTED: /127.0.0.1:<client port>
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client port> :> /127.0.0.1:4141] DISCONNECTED
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client port> :> /127.0.0.1:4141] UNBOUND
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client port> :> /127.0.0.1:4141] CLOSED
INFO ipc.NettyServer: Connection to /127.0.0.1:<client port> disconnected.
INFO sink.LoggerSink: Event: { headers:{} body: 68 61 68 61 68 61 68 61 68 hahahahah }
2. Using exec (monitoring a single log file)
The exec source runs a given command and consumes its output as events. If you use tail, the file needs enough content before you will see any output.
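Note that tail -F only emits the last 10 lines of an existing file before following it; if you want the whole file replayed and then followed, one option with GNU tail is:
tail -n +1 -F /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail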
(1) Create the agent configuration file: add a new file exec_tail.conf under /conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail
# Note: the line above points at the log file to be monitored

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
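If the tail process dies (for example, after log rotation), the exec source does not restart it by default. A sketch of the reliability-related properties from the Flume 1.6 user guide; the values shown are illustrative, not the defaults:

# Restart the command if it exits, waiting 10 s between attempts
a1.sources.r1.restart = true
a1.sources.r1.restartThrottle = 10000
# Also log whatever the command writes to stderr
a1.sources.r1.logStdErr = true
# Lines per channel transaction (the default is 20)
a1.sources.r1.batchSize = 20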
(2) Start Flume agent a1
cd into bin and run:
./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console
(3) Create the log file to be monitored and write into it
Create log_exec_tail under /usr/local/hadoop/apache-flume-1.6.0-bin and append enough lines to it, e.g.:
> for i in {1..100}; do echo "test line $i" >> /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail; done
In the first console you should see events like the following (timestamps omitted):
// earlier lines omitted
INFO sink.LoggerSink: Event: { headers:{} body: 74 65 73 74 20 6C 69 6E 65 20 ... test line ... }
// ... one LoggerSink event per line appended to the file
3. Using spooldir (monitoring an entire directory)
The spooldir source watches the configured directory for new files and reads events out of them. Two caveats:
1) Files copied into the spool directory must not be opened and edited afterwards.
2) The spool directory must not contain subdirectories.
(1) Create spool.conf under the conf directory and add the following configuration:
vim /usr/local/hadoop/apache-flume-1.6.0-bin/conf/spool.conf
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
# Directory to monitor (note: once a file lands here it must not be modified)
a1.sources.r1.spoolDir = /usr/local/hadoop/apache-flume-1.6.0-bin/logs
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
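A few more spooldir properties are worth knowing; this is a sketch based on the Flume 1.6 user guide, and the values shown are the documented defaults:

# Suffix appended to fully-ingested files
a1.sources.r1.fileSuffix = .COMPLETED
# never = keep and rename with fileSuffix; immediate = delete after ingestion
a1.sources.r1.deletePolicy = never
# Regex of file names to skip entirely
a1.sources.r1.ignorePattern = ^$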
(2) Start Flume agent a1
cd into bin and run:
> ./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/spool.conf -n a1 -Dflume.root.logger=INFO,console
(3) Drop log files into the monitored directory
Generate 10 files:
> for i in {1..10}; do echo "test line $i" >> /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text$i.log; done
Check the console; you should see log output like the following (timestamps omitted). The same three lines repeat for each file, processed in lexicographic order (spool_text1.log, spool_text10.log, spool_text2.log, ..., spool_text9.log):
INFO sink.LoggerSink: Event: { headers:{file=/usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log} body: 74 65 73 74 20 6C 69 6E 65 20 31 test line 1 }
INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log to /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log.COMPLETED
// ... and so on for spool_text10.log through spool_text9.log
Note: once a file has been fully ingested, the suffix .COMPLETED is appended to its name.
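You can confirm this by listing the directory afterwards; every file should have been renamed, roughly like this:
ls /usr/local/hadoop/apache-flume-1.6.0-bin/logs/
# spool_text1.log.COMPLETED  spool_text10.log.COMPLETED  ...  spool_text9.log.COMPLETED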