Example reference: http://www.aboutyun.com/thread-8917-1-1.html

Custom sink implementation and property injection: http://www.coderli.com/flume-ng-sink-properties/

Custom interceptors: http://blog.csdn.net/xiao_jun_0820/article/details/38333171

Custom Kafka sink: www.itnose.net/detail/6187977.html

1. Sending a specified file with Avro

(1) Create an avro.conf file under the conf directory and add the following configuration:

vim /usr/local/hadoop/apache-flume-1.6.0-bin/conf/avro.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 4141

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(2) Start Flume agent a1

Enter bin and run the following command. Here -f points at the config file, -n names the agent (it must match the a1 prefix used in the config), and -Dflume.root.logger=INFO,console prints the agent's log to the console:

 ./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/avro.conf -n a1 -Dflume.root.logger=INFO,console

(3) Create the log file to send and write some text

Create a file named log.00 under /usr/local/hadoop/apache-flume-1.6.0-bin and write "hahahahah" into it:
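 echo "hahahahah" > /usr/local/hadoop/apache-flume-1.6.0-bin/log.00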

(4) Send the file with avro-client

Open a second console, enter bin, and run the following (the port must match the 4141 configured for the source):

 ./flume-ng avro-client -c . -H localhost -p 4141 -F /usr/local/hadoop/apache-flume-1.6.0-bin/log.00

Console 1 should now show log output like the following (timestamps and the ephemeral client port are omitted here), indicating the file was delivered successfully:

INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client-port> => /127.0.0.1:4141] OPEN
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client-port> => /127.0.0.1:4141] BOUND: /127.0.0.1:4141
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client-port> => /127.0.0.1:4141] CONNECTED: /127.0.0.1:<client-port>
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client-port> :> /127.0.0.1:4141] DISCONNECTED
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client-port> :> /127.0.0.1:4141] UNBOUND
INFO ipc.NettyServer: [id: 0xa681f3fa, /127.0.0.1:<client-port> :> /127.0.0.1:4141] CLOSED
INFO ipc.NettyServer: Connection to /127.0.0.1:<client-port> disconnected.
INFO sink.LoggerSink: Event: { headers:{} body: 68 61 68 61 68 61 68 61 68 hahahahah }
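As a side note, the Flume user guide documents that avro-client reads from standard input when -F is omitted, which is handy for quick one-off tests:

 echo "hello flume" | ./flume-ng avro-client -c . -H localhost -p 4141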

2. Using exec (monitoring a single log file)

The exec source runs a given command and uses its output as the event stream. If the command is tail, the file must already be large enough before you will see any output.
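Note that tail -F (unlike tail -f) keeps following the file by name across rotation or recreation, which is why it is the usual choice for an exec source. You can try the command standalone before wiring it into the agent:

 tail -F /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail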

(1) Create the agent configuration file: under /conf create a new exec_tail.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = exec
# The command below names the log file to monitor
a1.sources.r1.command = tail -F /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(2) Start Flume agent a1

Enter bin and run:

./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console

(3) Create the log file to monitor and write some text

Create a log_exec_tail file under /usr/local/hadoop/apache-flume-1.6.0-bin and generate a sufficient number of log lines in it (here 100, as an example):

>  for i in {1..100}; do echo "test line $i" >> /usr/local/hadoop/apache-flume-1.6.0-bin/log_exec_tail; done

Console 1 should show log output like the following (timestamps omitted; <N> stands for the line counter):

... (earlier output omitted)

INFO sink.LoggerSink: Event: { headers:{} body: 74 65 73 74 20 6C 69 6E 65 20 ... test line <N> }
INFO sink.LoggerSink: Event: { headers:{} body: 74 65 73 74 20 6C 69 6E 65 20 ... test line <N> }
INFO sink.LoggerSink: Event: { headers:{} body: 74 65 73 74 20 6C 69 6E 65 20 ... test line <N> }
... (one such event per appended line)

3. Using spooldir (monitoring an entire directory)

The spooling-directory source watches the configured directory for new files and reads events out of them. Two caveats:
  1) Files copied into the spool directory must not be opened and edited afterwards (a safe delivery pattern is sketched below).
  2) The spool directory must not contain subdirectories.
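A minimal sketch of the safe pattern for caveat 1: write the file somewhere outside the spool directory first, then mv it in, so the source never sees a half-written file (mv within the same filesystem is atomic):

 echo "some log line" > /tmp/spool_text.tmp
 mv /tmp/spool_text.tmp /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text.log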

(1) Create a spool.conf file under the conf directory and add the following configuration:

vim /usr/local/hadoop/apache-flume-1.6.0-bin/conf/spool.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
# The directory to monitor (note: once a file lands here it must not be modified)
a1.sources.r1.spoolDir = /usr/local/hadoop/apache-flume-1.6.0-bin/logs
a1.sources.r1.fileHeader = true

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

(2) Start Flume agent a1

Enter bin and run:

> ./flume-ng agent -c . -f /usr/local/hadoop/apache-flume-1.6.0-bin/conf/spool.conf -n a1 -Dflume.root.logger=INFO,console

(3) Drop log files into the monitored directory

Generate 10 files:

> for i in {1..10}; do echo "test line $i" >> /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text$i.log; done

Check the console; you should see log output like the following (timestamps omitted). Note the files are consumed in lexicographic order (1, 10, 2, ..., 9):

INFO sink.LoggerSink: Event: { headers:{file=/usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log} body: 74 65 73 74 20 6C 69 6E 65 20 31 test line 1 }
INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log to /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text1.log.COMPLETED
INFO sink.LoggerSink: Event: { headers:{file=/usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text10.log} body: 74 65 73 74 20 6C 69 6E 65 20 31 30 test line 10 }
INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text10.log to /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text10.log.COMPLETED
INFO sink.LoggerSink: Event: { headers:{file=/usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text2.log} body: 74 65 73 74 20 6C 69 6E 65 20 32 test line 2 }
INFO avro.ReliableSpoolingFileEventReader: Last read took us just up to a file boundary. Rolling to the next file, if there is one.
INFO avro.ReliableSpoolingFileEventReader: Preparing to move file /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text2.log to /usr/local/hadoop/apache-flume-1.6.0-bin/logs/spool_text2.log.COMPLETED
... (the same Event / roll / move triple repeats for spool_text3.log through spool_text9.log)

Note: once a file has been fully consumed, the suffix ".COMPLETED" is appended to its name, which you can verify by listing the directory:
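> ls /usr/local/hadoop/apache-flume-1.6.0-bin/logs
spool_text1.log.COMPLETED  spool_text10.log.COMPLETED  spool_text2.log.COMPLETED  ...  spool_text9.log.COMPLETED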
