This article introduces several common log sources for Flume: spooling-directory monitoring, incremental file-content monitoring, TCP, and HTTP.

Spool type

  Monitors a specified directory for data changes; when a new file appears, its contents are read and uploaded.

  This case is covered at the end of the earlier post 教你一步搭建Flume分布式日志系统 (a step-by-step guide to building a distributed Flume logging system); a minimal configuration sketch is also given below for reference.
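  A minimal spooldir sketch, assuming an agent named a1 and an illustrative spool directory path (both are assumptions, not taken from the walkthroughs below):

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# spooldir reads whole files dropped into the directory; a file must not be
# written to after it lands there, and processed files are renamed with a
# .COMPLETED suffix
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /usr/local/flume170/spool
a1.sources.r1.fileHeader = true
a1.sources.r1.channels = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1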

Exec

  The exec source runs a given command and takes its output as the data source. If you want to use the tail command, you must make the file large enough for output to actually appear.

Create the agent configuration file

# vi /usr/local/flume170/conf/exec_tail.conf

a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.channels = c1 c2
a1.sources.r1.command = tail -F /usr/local/flume170/log_exec_tail

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = file
a1.channels.c2.checkpointDir = /usr/local/flume170/checkpoint
a1.channels.c2.dataDirs = /usr/local/flume170/data

# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1

a1.sinks.k2.type = FILE_ROLL
a1.sinks.k2.channel = c2
a1.sinks.k2.sink.directory = /usr/local/flume170/files
a1.sinks.k2.sink.rollInterval = 0

Start Flume agent a1

  # /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/exec_tail.conf -n a1 -Dflume.root.logger=INFO,console
  Generate plenty of content in the file
  # for i in {1..100};do echo "exec tail$i" >> /usr/local/flume170/log_exec_tail;echo $i;sleep 0.1;done
  On the H32 console, you can see output like the following:
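Note that the exec source offers no delivery guarantees: if the agent dies, data produced while it is down is simply lost. Flume 1.7 (the version used here) added the Taildir source, which records read offsets in a position file and resumes where it left off after a restart. A sketch of just the source definition, reusing the test file above (the position-file path is an illustrative assumption):

# Describe/configure the source
a1.sources.r1.type = TAILDIR
# read offsets are persisted here, so tailing resumes after an agent restart
a1.sources.r1.positionFile = /usr/local/flume170/taildir_position.json
a1.sources.r1.filegroups = f1
a1.sources.r1.filegroups.f1 = /usr/local/flume170/log_exec_tail
a1.sources.r1.channels = c1 c2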

HTTP

JSONHandler type

A source that accepts data over HTTP POST or GET; JSON and BLOB representations are supported.

Create the agent configuration file

# vi /usr/local/flume170/conf/post_json.conf

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Describe/configure the source
a1.sources.r1.type = org.apache.flume.source.http.HTTPSource
a1.sources.r1.port = 5142
a1.sources.r1.channels = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
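The default JSONHandler expects the POST body to be a JSON array of events, each with an optional headers map and a body string, so several events can be sent in one request. For example (the header names and body text here are made up for illustration):

# curl -X POST -d '[{"headers":{"a":"a1"},"body":"event one"},{"headers":{"b":"b1"},"body":"event two"}]' http://localhost:5142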

Start Flume agent a1

# /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/post_json.conf -n a1 -Dflume.root.logger=INFO,console
Generate a JSON-formatted POST request
# curl -X POST -d '[{ "headers" :{"a" : "a1","b" : "b1"},"body" : "idoall.org_body"}]' http://localhost:5142
On the H32 console, you can see the following:

TCP

The syslogtcp source listens on a TCP port for syslog data.

Create the agent configuration file

# vi /usr/local/flume170/conf/syslog_tcp.conf

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.host = H32
a1.sources.r1.channels = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1

Start Flume agent a1

# /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/syslog_tcp.conf -n a1 -Dflume.root.logger=INFO,console
Generate a test syslog message
# echo "hello idoall.org syslog" | nc localhost 5140
On the H32 console, you can see the following:
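Because the echoed string carries no syslog priority or timestamp, the source tags the event with flume.syslog.status=Invalid (visible in the logger output quoted later in this article). A message in RFC 3164 format parses cleanly; a sketch, using priority 13 (facility user, severity notice) and an illustrative tag:

# echo "<13>$(date '+%b %e %H:%M:%S') H32 test: hello idoall.org syslog" | nc localhost 5140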

Flume Sink Processors and the Avro type

  Avro can send a given file to Flume; the Avro source uses the Avro RPC mechanism.

  Failover keeps sending to a single sink; only when that sink becomes unavailable are events automatically sent to the next sink in the group. Note that a channel's transactionCapacity must not be smaller than the sink's batchSize.
  On H32, create the Flume_Sink_Processors configuration file
  # vi /usr/local/flume170/conf/Flume_Sink_Processors.conf

a1.sources = r1
a1.channels = c1 c2
a1.sinks = k1 k2

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1 c2
a1.sources.r1.selector.type = replicating

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = H32
a1.sinks.k1.port = 5141

a1.sinks.k2.type = avro
a1.sinks.k2.channel = c2
a1.sinks.k2.hostname = H33
a1.sinks.k2.port = 5141

# The key to configuring failover: define a sink group
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
# Processor type is failover
a1.sinkgroups.g1.processor.type = failover
# Priority: larger numbers mean higher priority; each sink must have a distinct priority
a1.sinkgroups.g1.processor.priority.k1 = 5
a1.sinkgroups.g1.processor.priority.k2 = 10
# Set to 10 seconds here; tune faster or slower to suit your situation
a1.sinkgroups.g1.processor.maxpenalty = 10000

  On H32, create the Flume_Sink_Processors_avro configuration file

  # vi /usr/local/flume170/conf/Flume_Sink_Processors_avro.conf

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5141

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1

  Copy the two configuration files over to H33

  /usr/local/flume170# scp -r /usr/local/flume170/conf/Flume_Sink_Processors.conf   H33:/usr/local/flume170/conf/Flume_Sink_Processors.conf
  /usr/local/flume170# scp -r /usr/local/flume170/conf/Flume_Sink_Processors_avro.conf   H33:/usr/local/flume170/conf/Flume_Sink_Processors_avro.conf
  Open four windows and start both Flume agents on H32 and H33 at the same time
  # /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/Flume_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console
  # /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/Flume_Sink_Processors.conf -n a1 -Dflume.root.logger=INFO,console
  Then, on either H32 or H33, generate a test log message
  # echo "idoall.org test1 failover" | nc H32 5140

  Because H33 has the higher priority, the following appears in H33's sink window while H32 shows nothing:

  Now stop the sink on H33 (Ctrl+C) and send test data again
  # echo "idoall.org test2 failover" | nc localhost 5140
  In H32's sink window you can see that both test messages sent so far have been read (the replicating selector had also buffered the first message in channel c1 while H33's sink was active):

  Now restart the sink in H33's window:
  # /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/Flume_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console
  Send two more batches of test data:
  # echo "idoall.org test3 failover" | nc localhost 5140 && echo "idoall.org test4 failover" | nc localhost 5140
  In H33's sink window we can see the following; because of the priorities, log messages land on H33 once again:

Load balancing Sink Processor

  Where load_balance differs from failover is that it offers two selection strategies, round-robin and random. With either strategy, if the chosen sink is unavailable, the processor automatically tries the next available sink.
  On H32, create the Load_balancing_Sink_Processors configuration file
  # vi /usr/local/flume170/conf/Load_balancing_Sink_Processors.conf

a1.sources = r1
a1.channels = c1
a1.sinks = k1 k2

# Describe/configure the source
a1.sources.r1.type = syslogtcp
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = H32
a1.sinks.k1.port = 5141

a1.sinks.k2.type = avro
a1.sinks.k2.channel = c1
a1.sinks.k2.hostname = H33
a1.sinks.k2.port = 5141

# The key to configuring load balancing: define a sink group
a1.sinkgroups = g1
a1.sinkgroups.g1.sinks = k1 k2
# Processor type is load_balance
a1.sinkgroups.g1.processor.type = load_balance
a1.sinkgroups.g1.processor.backoff = true
a1.sinkgroups.g1.processor.selector = round_robin
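To try the random strategy mentioned above instead of round-robin, only the selector line changes:

# with backoff = true, a failed sink is blacklisted for an exponentially growing timeout
a1.sinkgroups.g1.processor.selector = random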

  On H32, create the Load_balancing_Sink_Processors_avro configuration file

  # vi /usr/local/flume170/conf/Load_balancing_Sink_Processors_avro.conf

a1.sources = r1
a1.channels = c1
a1.sinks = k1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 5141

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1

  Copy the two configuration files over to H33

/usr/local/flume170# scp -r /usr/local/flume170/conf/Load_balancing_Sink_Processors.conf H33:/usr/local/flume170/conf/Load_balancing_Sink_Processors.conf
/usr/local/flume170# scp -r /usr/local/flume170/conf/Load_balancing_Sink_Processors_avro.conf H33:/usr/local/flume170/conf/Load_balancing_Sink_Processors_avro.conf

Open four windows and start both Flume agents on H32 and H33 at the same time
# /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/Load_balancing_Sink_Processors_avro.conf -n a1 -Dflume.root.logger=INFO,console
# /usr/local/flume170/bin/flume-ng agent -c . -f /usr/local/flume170/conf/Load_balancing_Sink_Processors.conf -n a1 -Dflume.root.logger=INFO,console

Then, on either H32 or H33, generate test log messages. Enter them one line at a time; if you send them too quickly, they tend to all land on one machine
# echo "idoall.org test1" | nc H32 5140
# echo "idoall.org test2" | nc H32 5140
# echo "idoall.org test3" | nc H32 5140
# echo "idoall.org test4" | nc H32 5140

In H32's sink window, you can see the following:
14/08/10 15:35:29 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 32 idoall.org test2 }
14/08/10 15:35:33 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 34 idoall.org test4 }

In H33's sink window, you can see the following:
14/08/10 15:35:27 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 31 idoall.org test1 }
14/08/10 15:35:29 INFO sink.LoggerSink: Event: { headers:{Severity=0, flume.syslog.status=Invalid, Facility=0} body: 69 64 6F 61 6C 6C 2E 6F 72 67 20 74 65 73 74 33 idoall.org test3 }
This shows that round-robin mode is doing its job.

  All of the above assumes that H32 and H33 can reach each other and that Flume is configured correctly, and these are all very simple example scenarios. One point worth noting: although Flume is described as log collection, a "log" can be understood broadly as any stream of information, not just logs in the conventional sense.
