An Introduction to Flume and Basic Operations
What is Flume
Flume is a highly available, highly reliable, distributed system from Cloudera for collecting, aggregating, and moving large volumes of log data. It lets you plug custom data senders into a logging pipeline to collect data, and it can apply simple processing to that data before writing it out to a variety of (customizable) receivers.
What Flume does
- Supports custom data senders in a logging system, used to collect data
- Applies simple processing to the data and writes it to a variety of (customizable) receivers
Flume's components
- Agent: the core component
- source: produces or collects the data
- channel: a transient staging store that buffers events between the source and the sink (durable channel types, such as the file channel, can also persist them)
- sink: forwards the data on to its destination
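These three components are wired together in a Java-properties-style configuration file. A minimal skeleton of the naming convention (the component names here are placeholders) looks like this; note that a source can feed several channels (plural "channels"), while a sink drains exactly one (singular "channel"):
agent.sources = mySource
agent.channels = myChannel
agent.sinks = mySink
agent.sources.mySource.channels = myChannel
agent.sinks.mySink.channel = myChannel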
Flume data-flow diagrams
- Data flow model (diagram not included)
- Multi-agent model (diagram not included)
- Consolidation model (diagram not included)
- Hybrid model (diagram not included)
Installing Flume
Download and unpack the release
wget https://archive.apache.org/dist/flume/1.8.0/apache-flume-1.8.0-bin.tar.gz
tar -zxvf apache-flume-1.8.0-bin.tar.gz
Configure the environment variables
vim ~/.bashrc
# add the following two lines to the file
export FLUME_HOME=/usr/local/src/apache-flume-1.8.0-bin
export PATH=$PATH:$FLUME_HOME/bin
# then reload it
source ~/.bashrc
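To verify that flume-ng is now on the PATH, print the version banner:
flume-ng version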
Basic Flume operations
- netcat mode
Go into the conf directory and create a flume_netcat.conf file (this is the name that appears in the startup log below) with the following content:
# name the agent's components
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = loggerSink
# netcat source: listens on a TCP port and turns every received line into an event
agent.sources.netcatSource.type = netcat
agent.sources.netcatSource.bind = localhost
agent.sources.netcatSource.port = 11111
agent.sources.netcatSource.channels = memoryChannel
# logger sink: prints events at INFO level (useful for testing)
agent.sinks.loggerSink.type = logger
agent.sinks.loggerSink.channel = memoryChannel
# memory channel: an in-memory buffer; small capacities are fine for a demo
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 10
Start an instance:
(py27) [root@master conf]# pwd
/usr/local/src/apache-flume-1.8.0-bin/conf
(py27) [root@master conf]# flume-ng agent --conf conf --conf-file ./flume_netcat.conf --name agent -Dflume.root.logger=INFO,console
Startup succeeds:
18/10/24 11:26:35 INFO node.PollingPropertiesFileConfigurationProvider: Configuration provider starting
18/10/24 11:26:35 INFO node.PollingPropertiesFileConfigurationProvider: Reloading configuration file:./flume_netcat.conf
18/10/24 11:26:35 INFO conf.FlumeConfiguration: Processing:loggerSink
18/10/24 11:26:35 INFO conf.FlumeConfiguration: Processing:loggerSink
18/10/24 11:26:35 INFO conf.FlumeConfiguration: Added sinks: loggerSink Agent: agent
18/10/24 11:26:35 INFO conf.FlumeConfiguration: Post-validation flume configuration contains configuration for agents: [agent]
18/10/24 11:26:35 INFO node.AbstractConfigurationProvider: Creating channels
18/10/24 11:26:35 INFO channel.DefaultChannelFactory: Creating instance of channel memoryChannel type memory
18/10/24 11:26:35 INFO node.AbstractConfigurationProvider: Created channel memoryChannel
18/10/24 11:26:35 INFO source.DefaultSourceFactory: Creating instance of source netcatSource, type netcat
18/10/24 11:26:35 INFO sink.DefaultSinkFactory: Creating instance of sink: loggerSink, type: logger
18/10/24 11:26:35 INFO node.AbstractConfigurationProvider: Channel memoryChannel connected to [netcatSource, loggerSink]
18/10/24 11:26:35 INFO node.Application: Starting new configuration:{ sourceRunners:{netcatSource=EventDrivenSourceRunner: { source:org.apache.flume.source.NetcatSource{name:netcatSource,state:IDLE} }} sinkRunners:{loggerSink=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@262b92ac counterGroup:{ name:null counters:{} } }} channels:{memoryChannel=org.apache.flume.channel.MemoryChannel{name: memoryChannel}} }
18/10/24 11:26:35 INFO node.Application: Starting Channel memoryChannel
18/10/24 11:26:35 INFO node.Application: Waiting for channel: memoryChannel to start. Sleeping for 500 ms
18/10/24 11:26:36 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: CHANNEL, name: memoryChannel: Successfully registered new MBean.
18/10/24 11:26:36 INFO instrumentation.MonitoredCounterGroup: Component type: CHANNEL, name: memoryChannel started
18/10/24 11:26:36 INFO node.Application: Starting Sink loggerSink
18/10/24 11:26:36 INFO node.Application: Starting Source netcatSource
18/10/24 11:26:36 INFO source.NetcatSource: Source starting
18/10/24 11:26:36 INFO source.NetcatSource: Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/172.16.155.120:11111]
Then open a new terminal and send some data:
(py27) [root@master apache-flume-1.8.0-bin]# telnet localhost 11111
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
1
OK
Check the received data:
18/10/24 11:30:15 INFO sink.LoggerSink: Event: { headers:{} body: 31 0D 1. }
Note: if the telnet tool is missing, install it first: yum install telnet
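As a side note (not part of the original walkthrough): if you prefer nc over telnet, it can send the same test data; on CentOS 7 it ships in the nmap-ncat package:
yum install nmap-ncat
nc localhost 11111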
- Exec mode
Create a configuration file flume_exec.conf (the name used in the startup command below):
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = loggerSink
# exec source: runs the given command and reads its stdout line by line
agent.sources.netcatSource.type = exec
agent.sources.netcatSource.command = tail -f /home/master/FlumeTest/test_data/exec.log
agent.sources.netcatSource.channels = memoryChannel
agent.sinks.loggerSink.type = logger
agent.sinks.loggerSink.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 10
Start the instance:
(py27) [root@master conf]# flume-ng agent --conf conf --conf-file ./flume_exec.conf --name agent -Dflume.root.logger=INFO,console
After a successful start, create the exec.log file referenced in the configuration:
(py27) [root@master test_data]# ls
exec.log
(py27) [root@master test_data]# pwd
/home/master/FlumeTest/test_data
(py27) [root@master test_data]#
Then use echo to simulate log generation:
(py27) [root@master test_data]# echo 'Hello World!!!' >> exec.log
Check the received log:
18/10/25 09:19:52 INFO sink.LoggerSink: Event: { headers:{} body: 48 65 6C 6C 6F 20 57 6F 72 6C 64 21 21 21 Hello World!!! }
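One caveat worth adding (my addition, not in the original): tail -f stops following a file once it is rotated or recreated, whereas tail -F reopens it by name, which is usually safer for log collection:
agent.sources.netcatSource.command = tail -F /home/master/FlumeTest/test_data/exec.log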
How to save the logs to HDFS
Modify the configuration file (saved here as flume_exec_hdfs.conf, matching the startup command below):
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = loggerSink
# same exec source as before
agent.sources.netcatSource.type = exec
agent.sources.netcatSource.command = tail -f /home/master/FlumeTest/test_data/exec.log
agent.sources.netcatSource.channels = memoryChannel
# hdfs sink: writes events into time-bucketed directories on HDFS
agent.sinks.loggerSink.type = hdfs
agent.sinks.loggerSink.hdfs.path = /flume/%y-%m-%d/%H%M/
agent.sinks.loggerSink.hdfs.filePrefix = exec_hdfs_
# round the bucket timestamp down to 1-minute boundaries
agent.sinks.loggerSink.hdfs.round = true
agent.sinks.loggerSink.hdfs.roundValue = 1
agent.sinks.loggerSink.hdfs.roundUnit = minute
# roll to a new file every 3 seconds, 20 bytes, or 5 events, whichever comes first (demo-sized values)
agent.sinks.loggerSink.hdfs.rollInterval = 3
agent.sinks.loggerSink.hdfs.rollSize = 20
agent.sinks.loggerSink.hdfs.rollCount = 5
# use the agent's local time for the %y/%m/%d escapes instead of a timestamp header
agent.sinks.loggerSink.hdfs.useLocalTimeStamp = true
# write plain text rather than a SequenceFile
agent.sinks.loggerSink.hdfs.fileType = DataStream
agent.sinks.loggerSink.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.transactionCapacity = 10
Then start the instance:
(py27) [root@master conf]# flume-ng agent --conf conf --conf-file ./flume_exec_hdfs.conf --name agent -Dflume.root.logger=INFO,console
You can then see it write the logs from exec.log to HDFS:
18/10/25 09:54:26 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/10/25 09:54:26 INFO hdfs.BucketWriter: Creating /flume/18-10-25/0954//exec_hdfs_.1540475666623.tmp
18/10/25 09:54:32 INFO hdfs.BucketWriter: Closing /flume/18-10-25/0954//exec_hdfs_.1540475666623.tmp
18/10/25 09:54:32 INFO hdfs.BucketWriter: Renaming /flume/18-10-25/0954/exec_hdfs_.1540475666623.tmp to /flume/18-10-25/0954/exec_hdfs_.1540475666623
18/10/25 09:54:32 INFO hdfs.HDFSEventSink: Writer callback called.
Looking in HDFS, we can see the log's contents:
(py27) [root@master sbin]# hadoop fs -ls /flume/18-10-25/0954
Found 1 items
-rw-r--r-- 3 root supergroup 15 2018-10-25 09:54 /flume/18-10-25/0954/exec_hdfs_.1540475666623
(py27) [root@master sbin]# hadoop fs -text /flume/18-10-25/0954/exec_hdfs_.1540475666623
Hello World!!!
Now write some new log entries and check again:
// write new log entries
(py27) [root@master test_data]# echo 'test001' >> exec.log
(py27) [root@master test_data]# echo 'test002' >> exec.log
// list the HDFS directory
(py27) [root@master sbin]# hadoop fs -ls /flume/18-10-25
Found 2 items
drwxr-xr-x - root supergroup 0 2018-10-25 09:54 /flume/18-10-25/0954
drwxr-xr-x - root supergroup 0 2018-10-25 09:56 /flume/18-10-25/0956
(py27) [root@master sbin]# hadoop fs -ls /flume/18-10-25/0956
Found 1 items
-rw-r--r-- 3 root supergroup 16 2018-10-25 09:56 /flume/18-10-25/0956/exec_hdfs_.1540475766338
(py27) [root@master sbin]# hadoop fs -text /flume/18-10-25/0956/exec_hdfs_.1540475766338
test001
test002
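If you want to rerun the demo from a clean slate (my addition, not in the original), remove the output directory first:
hadoop fs -rm -r /flume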
- Failover example
This needs three machines: master, slave1, and slave2. Configure and start an agent on each: the agent on master sends the logs, while slave1 and slave2 receive them.
master configuration:
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = loggerSink1 loggerSink2
agent.sinkgroups = group
agent.sources.netcatSource.type = exec
agent.sources.netcatSource.command = tail -f /home/master/FlumeTest/test_data/exec.log
agent.sources.netcatSource.channels = memoryChannel
agent.sinks.loggerSink1.type = avro
agent.sinks.loggerSink1.hostname = slave1
agent.sinks.loggerSink1.port = 52020
agent.sinks.loggerSink1.channel = memoryChannel
agent.sinks.loggerSink2.type = avro
agent.sinks.loggerSink2.hostname = slave2
agent.sinks.loggerSink2.port = 52020
agent.sinks.loggerSink2.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 10000
agent.channels.memoryChannel.transactionCapacity = 1000
# failover sink processor: the sink with the higher priority receives all
# traffic; if it goes down, events fail over to the lower-priority sink
agent.sinkgroups.group.sinks = loggerSink1 loggerSink2
agent.sinkgroups.group.processor.type = failover
agent.sinkgroups.group.processor.priority.loggerSink1 = 10
agent.sinkgroups.group.processor.priority.loggerSink2 = 1
agent.sinkgroups.group.processor.maxpenalty = 10000
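As an aside (my addition, not part of the original walkthrough), changing the processor type turns the same sink group into a load balancer instead of a failover pair; a minimal sketch:
agent.sinkgroups.group.processor.type = load_balance
agent.sinkgroups.group.processor.selector = round_robin
agent.sinkgroups.group.processor.backoff = true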
slave1 configuration:
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = loggerSink
agent.sources.netcatSource.type = avro
agent.sources.netcatSource.bind = slave1
agent.sources.netcatSource.port = 52020
agent.sources.netcatSource.channels = memoryChannel
agent.sinks.loggerSink.type = logger
agent.sinks.loggerSink.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 10000
agent.channels.memoryChannel.transactionCapacity = 1000
slave2 configuration:
agent.sources = netcatSource
agent.channels = memoryChannel
agent.sinks = loggerSink
agent.sources.netcatSource.type = avro
agent.sources.netcatSource.bind = slave2
agent.sources.netcatSource.port = 52020
agent.sources.netcatSource.channels = memoryChannel
agent.sinks.loggerSink.type = logger
agent.sinks.loggerSink.channel = memoryChannel
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 10000
agent.channels.memoryChannel.transactionCapacity = 1000
Start the agents on master, slave1, and slave2 (a sketch of the startup commands follows), then write a log entry on master and watch which slave receives it.
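Assuming the three configurations above are saved as flume_failover_master.conf, flume_failover_slave1.conf, and flume_failover_slave2.conf (hypothetical file names; the original does not give them), the agents are started the same way as before. Start the receivers before the sender:
// on slave1
flume-ng agent --conf conf --conf-file ./flume_failover_slave1.conf --name agent -Dflume.root.logger=INFO,console
// on slave2
flume-ng agent --conf conf --conf-file ./flume_failover_slave2.conf --name agent -Dflume.root.logger=INFO,console
// on master
flume-ng agent --conf conf --conf-file ./flume_failover_master.conf --name agent -Dflume.root.logger=INFO,console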
//master
(py27) [root@master test_data]# echo 'hello' >> exec.log
//slave1
18/10/25 10:53:53 INFO sink.LoggerSink: Event: { headers:{} body: 68 65 6C 6C 6F hello }
//slave2
18/10/25 10:43:00 INFO ipc.NettyServer: [id: 0x8da012e3, /172.16.155.120:39726 => /172.16.155.122:52020] CONNECTED: /172.16.155.120:39726
slave1 received the data. Now shut down slave1's agent and send another log entry:
//master
(py27) [root@master test_data]# echo '11111' >> exec.log
//slave2
18/10/25 10:43:00 INFO ipc.NettyServer: [id: 0x8da012e3, /172.16.155.120:39726 => /172.16.155.122:52020] CONNECTED: /172.16.155.120:39726
18/10/25 10:56:53 INFO sink.LoggerSink: Event: { headers:{} body: 31 31 31 31 31 11111 }
Then restart slave1's agent; because loggerSink1 carries the higher priority, traffic fails back to it:
//master
(py27) [root@master test_data]# echo '22222' >> exec.log
//slave1
18/10/25 10:58:21 INFO sink.LoggerSink: Event: { headers:{} body: 32 32 32 32 32 22222 }
//slave2
18/10/25 10:43:00 INFO ipc.NettyServer: [id: 0x8da012e3, /172.16.155.120:39726 => /172.16.155.122:52020] CONNECTED: /172.16.155.120:39726
18/10/25 10:56:53 INFO sink.LoggerSink: Event: { headers:{} body: 31 31 31 31 31 11111 }