1. Introduction to Flume

Flume, originally developed at Cloudera, is a highly available, highly reliable, distributed system for collecting, aggregating, and transporting massive volumes of log data. Flume supports plugging custom data senders into a logging pipeline to collect data; it can also perform simple processing on the data and write it to a variety of (customizable) data receivers.

  • agent

    An agent is a Java process that runs on a log-collection node, i.e., a server node.

    An agent contains three core components: source -> channel -> sink, analogous to a producer, a warehouse, and a consumer. A minimal configuration skeleton is sketched after this list.

  • source

    The source component is dedicated to collecting data and can handle log data of many types and formats, including avro, thrift, exec, jms, spooling directory, netcat, sequence generator, syslog, http, legacy, and custom sources.

  • channel

    After the source collects data, it is staged temporarily in the channel; that is, the channel is the agent's transient data store, providing simple buffering of the collected data. Channels can be backed by memory, jdbc, file, and so on.

  • sink

    The sink component delivers data to its destination, which can be hdfs, logger, avro, thrift, ipc, file, null, HBase, Solr, or a custom sink.

  • event

    An event wraps the data being transported and is Flume's basic unit of data transfer; for a text file this is typically one line of a record. The event is also the basic unit of a transaction. An event flows from source to channel to sink; it is itself a byte array and can carry headers. An event represents the smallest complete unit of data, traveling from an external data source toward an external destination.
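To make the source -> channel -> sink pipeline concrete, here is a minimal agent configuration sketch. The agent name a1, the component names r1/c1/k1, and the netcat/memory/logger component types are illustrative choices only, not part of the setup described later in this post:

# A minimal single-agent pipeline: netcat source -> memory channel -> logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

# netcat source: each line received on the port becomes one event
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# memory channel: buffers events between source and sink
a1.channels.c1.type = memory

# logger sink: prints events to the Flume log, handy for testing
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1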

2. Kafka Channel && Kafka Sink

2.1 Kafka Channel

The Kafka channel can be used in several scenarios:

  1. With a Flume source and sink:

    it provides a highly reliable and highly available channel for events;
  2. With a Flume source and interceptor, but no sink:

    it allows writing Flume events into a Kafka topic for use by other applications;
  3. With a Flume sink, but no source:

    it is a low-latency, fault-tolerant way to send events from Kafka to Flume sinks such as HDFS, HBase, or Solr.
  • Kafka Channel configuration

The type and kafka.bootstrap.servers properties are required (shown in bold in the original table); all others are optional.

| Property Name | Default | Description |
| --- | --- | --- |
| type | – | The component type name; must be org.apache.flume.channel.kafka.KafkaChannel |
| kafka.bootstrap.servers | – | List of brokers in the Kafka cluster used by the channel. This can be a partial list of brokers, but we recommend at least two for HA. The format is a comma-separated list of hostname:port |
| kafka.topic | flume-channel | Kafka topic which the channel will use |
| kafka.consumer.group.id | flume | Consumer group ID the channel uses to register with Kafka. Multiple channels must use the same topic and group to ensure that when one agent fails another can get the data. Note that having non-channel consumers with the same ID can lead to data loss. |
| parseAsFlumeEvent | true | Expect Avro datums with the FlumeEvent schema in the channel. This should be true if a Flume source is writing to the channel and false if other producers are writing into the topic that the channel is using. Flume source messages to Kafka can be parsed outside of Flume by using org.apache.flume.source.avro.AvroFlumeEvent, provided by the flume-ng-sdk artifact |
| migrateZookeeperOffsets | true | When no Kafka-stored offset is found, look up the offsets in ZooKeeper and commit them to Kafka. This should be true to support seamless Kafka client migration from older versions of Flume. Once migrated, this can be set to false, though that should generally not be required. If no ZooKeeper offset is found, the kafka.consumer.auto.offset.reset configuration defines how offsets are handled. |
| pollTimeout | 500 | The amount of time (in milliseconds) to wait in the consumer's poll() call: https://kafka.apache.org/090/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html#poll(long) |
| defaultPartitionId | – | Specifies a Kafka partition ID (integer) for all events in this channel to be sent to, unless overridden by partitionIdHeader. By default, if this property is not set, events are distributed by the Kafka producer's partitioner, including by key if specified (or by a partitioner specified by kafka.partitioner.class). |
| partitionIdHeader | – | When set, the producer takes the value of the field named by this property from the event headers and sends the message to the specified partition of the topic. If the value represents an invalid partition, the event is not accepted into the channel. If the header value is present, this setting overrides defaultPartitionId. |
| kafka.consumer.auto.offset.reset | latest | What to do when there is no initial offset in Kafka or the current offset no longer exists on the server (e.g. because the data has been deleted): earliest (automatically reset to the earliest offset); latest (automatically reset to the latest offset); none (throw an exception to the consumer if no previous offset is found for the consumer's group); anything else (throw an exception to the consumer). |
| kafka.producer.security.protocol | PLAINTEXT | Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka with some level of security. See the Kafka security documentation for additional info on secure setup. |
| kafka.consumer.security.protocol | PLAINTEXT | Same as kafka.producer.security.protocol, but for reading/consuming from Kafka. |
| more producer/consumer security props | – | If using SASL_PLAINTEXT, SASL_SSL or SSL, refer to the Kafka security documentation for additional properties that need to be set on the producer/consumer. |
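As a sketch of how these properties fit together, a Kafka channel definition following the user-guide pattern might look like this (the agent/channel names, broker addresses, and topic are placeholders):

a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
a1.channels.c1.kafka.bootstrap.servers = kafka-1:9092,kafka-2:9092,kafka-3:9092
a1.channels.c1.kafka.topic = channel1
a1.channels.c1.kafka.consumer.group.id = flume-consumer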

2.2 Kafka Sink

Flume can publish data to a Kafka topic through this sink. Kafka 0.9.x releases are currently supported.

  • KafkaSink configuration

The type and kafka.bootstrap.servers properties are required (shown in bold in the original table); all others are optional.

| Property Name | Default | Description |
| --- | --- | --- |
| type | – | Must be set to org.apache.flume.sink.kafka.KafkaSink |
| kafka.bootstrap.servers | – | List of brokers the Kafka sink connects to in order to get the list of topic partitions. This can be a partial list of brokers, but we recommend at least two for HA. The format is a comma-separated list of hostname:port |
| kafka.topic | default-flume-topic | The topic in Kafka to which the messages will be published. If this parameter is configured, messages are published to this topic. If the event header contains a "topic" field, the event is published to that topic, overriding the topic configured here. Arbitrary header substitution is supported, e.g. %{header} is replaced with the value of the event header named "header". (If using substitution, it is recommended to set the "auto.create.topics.enable" property of the Kafka broker to true.) |
| flumeBatchSize | 100 | How many messages to process in one batch. Larger batches improve throughput while adding latency. |
| kafka.producer.acks | 1 | How many replicas must acknowledge a message before it is considered successfully written. Accepted values are 0 (never wait for acknowledgement), 1 (wait for the leader only), -1 (wait for all replicas). Set this to -1 to avoid data loss in some cases of leader failure. |
| useFlumeEventFormat | false | By default, events are put onto the Kafka topic directly from the event body as bytes. Set to true to store events in the Flume Avro binary format. Used in conjunction with the same property on the KafkaSource, or with the parseAsFlumeEvent property on the Kafka Channel, this preserves any Flume headers on the producing side. |
| defaultPartitionId | – | Specifies a Kafka partition ID (integer) for all events in this channel to be sent to, unless overridden by partitionIdHeader. By default, if this property is not set, events are distributed by the Kafka producer's partitioner, including by key if specified (or by a partitioner specified by kafka.partitioner.class). |
| partitionIdHeader | – | When set, the sink takes the value of the field named by this property from the event header and sends the message to the specified partition of the topic. If the value represents an invalid partition, an EventDeliveryException is thrown. If the header value is present, this setting overrides defaultPartitionId. |
| allowTopicOverride | true | When set, the sink allows a message to be produced into a topic specified by the topicHeader property (if provided). |
| topicHeader | topic | When set in conjunction with allowTopicOverride, the message is produced into the topic named by the value of the header with this name. Care should be taken when using this together with the Kafka Source topicHeader property to avoid creating a loopback. |
| kafka.producer.security.protocol | PLAINTEXT | Set to SASL_PLAINTEXT, SASL_SSL or SSL if writing to Kafka with some level of security. See the Kafka security documentation for additional info on secure setup. |
| more producer security props | – | If using SASL_PLAINTEXT, SASL_SSL or SSL, refer to the Kafka security documentation for additional properties that need to be set on the producer. |
| Other Kafka Producer Properties | – | These properties are used to configure the Kafka producer. Any producer property supported by Kafka can be used. The only requirement is to prepend the property name with the prefix kafka.producer., for example: kafka.producer.linger.ms |
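Putting the required properties together with a few optional ones, a sink definition might look like the following sketch (the agent/sink names, topic, and broker address are placeholders); note how arbitrary Kafka producer settings are passed through with the kafka.producer. prefix:

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.channel = c1
a1.sinks.k1.kafka.topic = mytopic
a1.sinks.k1.kafka.bootstrap.servers = localhost:9092
a1.sinks.k1.flumeBatchSize = 20
a1.sinks.k1.kafka.producer.acks = 1
# any Kafka producer property can be passed with the kafka.producer. prefix:
a1.sinks.k1.kafka.producer.linger.ms = 1
a1.sinks.k1.kafka.producer.compression.type = snappy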

3. Flume-Kafka Configuration Example

Switch to the flume/conf directory and edit the configuration file, example.conf:

agent.sources = s1
agent.channels = c1
agent.sinks = k1

# Source config: watch the Tomcat log directory for new files
agent.sources.s1.type = spooldir
agent.sources.s1.channels = c1
agent.sources.s1.spoolDir = /home/usr/tomcat-test/logs
agent.sources.s1.includePattern = .*\.log

# Sink config: publish events to Kafka
agent.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.channel = c1
agent.sinks.k1.kafka.topic = test_tomcat_logs
agent.sinks.k1.kafka.bootstrap.servers = 192.168.100.105:9092

# Channel config: in-memory buffer between source and sink
agent.channels.c1.type = memory
agent.channels.c1.keep-alive = 10
agent.channels.c1.capacity = 65535

Clearly, from this configuration file we can see that:

  1. Flume reads the log files under the directory /home/usr/tomcat-test/logs;

  2. Flume connects to Kafka at 192.168.100.105:9092; take care not to misconfigure this address;

  3. Flume publishes the collected content to the Kafka topic test_tomcat_logs, so after starting ZooKeeper and Kafka we need to open a terminal and consume the topic test_tomcat_logs. That way we can watch Flume and Kafka working together; a consumer sketch follows below.
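For example, a console consumer can be started from the Kafka installation directory roughly as follows (assuming the broker address from the configuration above; on older 0.9.x releases the tool may require --zookeeper <zk-host:2181> instead of --bootstrap-server):

$ bin/kafka-console-consumer.sh \
    --bootstrap-server 192.168.100.105:9092 \
    --topic test_tomcat_logs \
    --from-beginning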

4. Running

To run Flume, simply change into the flume directory and execute the following command:

$ bin/flume-ng agent --conf conf --conf-file conf/example.conf --name agent -Dflume.root.logger=INFO,console
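With the agent running and the console consumer attached, a quick smoke test is to drop a file into the spool directory (paths as in the example configuration above); the spooling-directory source renames each fully ingested file with a .COMPLETED suffix, and its lines should show up in the consumer:

$ echo "hello flume-kafka" > /home/usr/tomcat-test/logs/test.log
$ ls /home/usr/tomcat-test/logs    # test.log becomes test.log.COMPLETED once ingested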


References:

[1] Apache Flume User Guide: http://flume.apache.org/FlumeUserGuide.html#kafka-channel
