Flume - Kafka Log Platform Integration
1. Introduction to Flume
Flume, originally provided by Cloudera, is a highly available, highly reliable, distributed system for collecting, aggregating, and transporting large volumes of log data. Flume supports pluggable, customizable data senders in a logging system for collecting data, and it can also apply simple processing to the data and write it to a variety of (customizable) data receivers.
agent
An agent is a Java process that runs on a log collection node, which is simply a server node. An agent contains three core components wired together as source -> channel -> sink, an architecture analogous to producer, warehouse, and consumer.
source
The source component collects data. It can handle log data of many types and formats, including avro, thrift, exec, jms, spooling directory, netcat, sequence generator, syslog, http, legacy, and custom sources.
channel
After the source collects data, it is staged temporarily in the channel. The channel is the agent's temporary store: it provides simple buffering for the collected data and can be backed by memory, jdbc, file, and so on.
sink
The sink component delivers data to its destination. Supported destinations include hdfs, logger, avro, thrift, ipc, file, null, HBase, Solr, and custom sinks.
event
An event wraps the data being transported and is Flume's basic unit of data transfer; for a text file it is usually one line of the file. The event is also the basic unit of a transaction. An event flows from source to channel to sink; it is essentially a byte array and can carry headers. An event represents the smallest complete unit of data, traveling from an external data source toward an external destination.
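To make the source -> channel -> sink wiring concrete, here is a minimal agent definition. This is only a sketch and not part of the setup described later; the agent name a1, the netcat source, and the logger sink are illustrative placeholders:
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# netcat source: listen on a TCP port and turn each received line into an event
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1
# memory channel: buffer events between source and sink
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
# logger sink: write events to the Flume log so they can be inspected
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
With this agent running, sending a line to port 44444 (for example with nc localhost 44444) would show up as an event in the agent's log.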
2. Kafka Channel and Kafka Sink
2.1 Kafka Channel
The Kafka channel can be used in several scenarios:
- With a Flume source and sink: it provides a reliable and highly available channel for events.
- With a Flume source and interceptor, but no sink: it lets Flume events be written to a Kafka topic for use by other applications.
- With a Flume sink, but no source: it is a low-latency, fault-tolerant way to send events from Kafka to other sinks such as HDFS, HBase, or Solr.
- Kafka Channel configuration
Required properties are shown in bold.
Property Name | Default | Description |
---|---|---|
type | – | The component type name, needs to be org.apache.flume.channel.kafka.KafkaChannel |
kafka.bootstrap.servers | – | List of brokers in the Kafka cluster used by the channel. This can be a partial list of brokers, but we recommend at least two for HA. The format is a comma-separated list of hostname:port. |
kafka.topic | flume-channel | Kafka topic which the channel will use |
kafka.consumer.group.id | flume | Consumer group ID the channel uses to register with Kafka. Multiple channels must use the same topic and group to ensure that when one agent fails another can get the data. Note that having non-channel consumers with the same ID can lead to data loss. |
parseAsFlumeEvent | true | Expecting Avro datums with FlumeEvent schema in the channel. This should be true if Flume source is writing to the channel and false if other producers are writing into the topic that the channel is using. Flume source messages to Kafka can be parsed outside of Flume by using org.apache.flume.source.avro.AvroFlumeEvent provided by the flume-ng-sdk artifact |
migrateZookeeperOffsets | true | When no Kafka stored offset is found, look up the offsets in Zookeeper and commit them to Kafka. This should be true to support seamless Kafka client migration from older versions of Flume. Once migrated this can be set to false, though that should generally not be required. If no Zookeeper offset is found the kafka.consumer.auto.offset.reset configuration defines how offsets are handled. |
pollTimeout | 500 | The amount of time (in milliseconds) to wait in the “poll()” call of the consumer. https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long) |
defaultPartitionId | – | Specifies a Kafka partition ID (integer) for all events in this channel to be sent to, unless overridden by partitionIdHeader. By default, if this property is not set, events will be distributed by the Kafka Producer’s partitioner, including by key if specified (or by a partitioner specified by kafka.partitioner.class). |
partitionIdHeader | – | When set, the producer will take the value of the field named using the value of this property from the event header and send the message to the specified partition of the topic. If the value represents an invalid partition the event will not be accepted into the channel. If the header value is present then this setting overrides defaultPartitionId. |
kafka.consumer.auto.offset.reset | latest | What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted). earliest: automatically reset the offset to the earliest offset; latest: automatically reset the offset to the latest offset; none: throw an exception to the consumer if no previous offset is found for the consumer’s group; anything else: throw an exception to the consumer. |
kafka.producer.security.protocol | PLAINTEXT | Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka using some level of security. See below for additional info on secure setup. |
kafka.consumer.security.protocol | PLAINTEXT | Same as kafka.producer.security.protocol but for reading/consuming from Kafka. |
more producer/consumer security props |  | If using SASL_PLAINTEXT, SASL_SSL or SSL, refer to Kafka security for additional properties that need to be set on the producer/consumer. |
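As an illustration of the table above, a Kafka channel might be configured as follows. This is only a fragment of an agent definition; the agent name, broker addresses, and topic name are placeholders, and the remaining properties keep their defaults:
a1.channels = c1
a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = kafka-1:9092,kafka-2:9092
a1.channels.c1.kafka.topic = flume-channel
a1.channels.c1.kafka.consumer.group.id = flume
# set parseAsFlumeEvent to false if non-Flume producers write plain messages into this topic
a1.channels.c1.parseAsFlumeEvent = true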
2.2 Kafka Sink
Flume can publish data to a Kafka topic. Kafka 0.9.x is currently supported.
- Kafka Sink configuration
Required properties are shown in bold.
Property Name | Default | Description |
---|---|---|
type | – | Must be set to org.apache.flume.sink.kafka.KafkaSink |
kafka.bootstrap.servers | – | List of brokers Kafka-Sink will connect to, to get the list of topic partitions. This can be a partial list of brokers, but we recommend at least two for HA. The format is a comma-separated list of hostname:port. |
kafka.topic | default-flume-topic | The topic in Kafka to which the messages will be published. If this parameter is configured, messages will be published to this topic. If the event header contains a “topic” field, the event will be published to that topic overriding the topic configured here. Arbitrary header substitution is supported, eg. %{header} is replaced with value of event header named “header”. (If using the substitution, it is recommended to set “auto.create.topics.enable” property of Kafka broker to true.) |
flumeBatchSize | 100 | How many messages to process in one batch. Larger batches improve throughput while adding latency. |
kafka.producer.acks | 1 | How many replicas must acknowledge a message before it is considered successfully written. Accepted values are 0 (never wait for acknowledgement), 1 (wait for the leader only), and -1 (wait for all replicas). Set this to -1 to avoid data loss in some cases of leader failure. |
useFlumeEventFormat | false | By default events are put as bytes onto the Kafka topic directly from the event body. Set to true to store events as the Flume Avro binary format. Used in conjunction with the same property on the KafkaSource or with the parseAsFlumeEvent property on the Kafka Channel this will preserve any Flume headers for the producing side. |
defaultPartitionId | – | Specifies a Kafka partition ID (integer) for all events sent by this sink, unless overridden by partitionIdHeader. By default, if this property is not set, events will be distributed by the Kafka Producer’s partitioner, including by key if specified (or by a partitioner specified by kafka.partitioner.class). |
partitionIdHeader | – | When set, the sink will take the value of the field named using the value of this property from the event header and send the message to the specified partition of the topic. If the value represents an invalid partition, an EventDeliveryException will be thrown. If the header value is present then this setting overrides defaultPartitionId. |
allowTopicOverride | true | When set, the sink will allow a message to be produced into a topic specified by the topicHeader property (if provided). |
topicHeader | topic | When set in conjunction with allowTopicOverride will produce a message into the value of the header named using the value of this property. Care should be taken when using in conjunction with the Kafka Source topicHeader property to avoid creating a loopback. |
kafka.producer.security.protocol | PLAINTEXT | Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka using some level of security. See below for additional info on secure setup. |
more producer security props |  | If using SASL_PLAINTEXT, SASL_SSL or SSL, refer to Kafka security for additional properties that need to be set on the producer. |
Other Kafka Producer Properties | – | These properties are used to configure the Kafka Producer. Any producer property supported by Kafka can be used. The only requirement is to prepend the property name with the prefix kafka.producer. For example: kafka.producer.linger.ms |
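As the last row of the table indicates, native Kafka producer settings can be passed through by prepending the kafka.producer. prefix. The following is only a sketch; the broker address, topic name, and the chosen values are illustrative:
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.channel = c1
a1.sinks.k1.kafka.bootstrap.servers = kafka-1:9092
a1.sinks.k1.kafka.topic = default-flume-topic
a1.sinks.k1.flumeBatchSize = 100
# wait for all replicas to acknowledge, reducing the risk of data loss on leader failure
a1.sinks.k1.kafka.producer.acks = -1
# native Kafka producer properties, passed through via the kafka.producer. prefix
a1.sinks.k1.kafka.producer.linger.ms = 5
a1.sinks.k1.kafka.producer.compression.type = snappy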
3. Flume - Kafka configuration example
Switch to the flume/conf directory and edit the configuration file:
agent.sources = s1
agent.channels = c1
agent.sinks = k1
# Source Config: ingest completed log files dropped into the spooling directory
agent.sources.s1.type = spooldir
agent.sources.s1.channels = c1
agent.sources.s1.spoolDir = /home/usr/tomcat-test/logs
agent.sources.s1.includePattern = ^.*\.log$
# Sink Config
## publish the events to Kafka
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.channel = c1
agent.sinks.k1.kafka.topic = test_tomcat_logs
agent.sinks.k1.kafka.bootstrap.servers = 192.168.100.105:9092
# Channel Config: in-memory buffer between source and sink
agent.channels.c1.type = memory
agent.channels.c1.keep-alive = 10
agent.channels.c1.capacity = 65535
From this configuration file we can see that:
- Flume reads log files from the directory /home/usr/tomcat-test/logs.
- Flume connects to Kafka at 192.168.100.105:9092; take care not to misconfigure this address.
- Flume publishes the collected content to the Kafka topic test_tomcat_logs, so after starting ZooKeeper and Kafka we need to open a terminal that consumes the topic test_tomcat_logs (a console-consumer sketch follows) in order to see Flume and Kafka working together.
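One way to watch the topic is Kafka's console consumer, run from the Kafka installation directory. This is a sketch only: newer Kafka distributions accept --bootstrap-server, while 0.9-era distributions used a --zookeeper option instead.
$ bin/kafka-console-consumer.sh --bootstrap-server 192.168.100.105:9092 --topic test_tomcat_logs --from-beginning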
4. Running
To run Flume, switch to the flume directory and execute the following command:
$ bin/flume-ng agent --conf conf --conf-file conf/example.conf --name agent -Dflume.root.logger=INFO,console
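Once the agent is running, copying a log file into the spooling directory should cause it to be ingested, renamed with a .COMPLETED suffix, and published to the topic, where the console consumer above will print its lines. The source path and file name here are purely illustrative:
$ cp /home/usr/tomcat-test/catalina.out /home/usr/tomcat-test/logs/catalina-2017-01-01.log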
References:
[1] Flume Doc: http://flume.apache.org/FlumeUserGuide.html#kafka-channel