Flume and Kafka integration example (Kafka as the Flume sink, writing events to a Kafka topic)

Preparation: create the spooling directory and make it world-writable (the Flume agent must be able to read, and later rename, the files placed in this directory):

$ sudo mkdir -p /flume/web_spooldir
$ sudo chmod a+w -R /flume
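
The steps below assume a Kafka broker on localhost:9092 and ZooKeeper on localhost:2181, matching the configuration that follows. A quick sanity check that ZooKeeper is reachable (assuming the kafka-topics wrapper script is on the PATH, as in CDH-style installations):

$ kafka-topics --list --zookeeper localhost:2181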

Edit the Flume configuration file:

$ cat /home/tester/flafka/spooldir_kafka.conf

# Name the components on this agent
agent1.sources = weblogsrc
agent1.sinks = kafka-sink
agent1.channels = memchannel

# Configure the source
agent1.sources.weblogsrc.type = spooldir
agent1.sources.weblogsrc.spoolDir = /flume/web_spooldir
agent1.sources.weblogsrc.channels = memchannel

# Configure the sink
agent1.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.topic = weblogs
agent1.sinks.kafka-sink.brokerList = localhost:9092
agent1.sinks.kafka-sink.batchSize = 20
agent1.sinks.kafka-sink.channel = memchannel

# Use a channel which buffers events in memory
agent1.channels.memchannel.type = memory
agent1.channels.memchannel.capacity = 100000
agent1.channels.memchannel.transactionCapacity = 1000
$
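
If automatic topic creation is disabled on the broker, the weblogs topic must exist before the sink can write to it. A minimal sketch using the same ZooKeeper-based tooling as the consumer command later in this example (the partition and replication counts are illustrative, not taken from the original setup):

$ kafka-topics --create --zookeeper localhost:2181 \
> --replication-factor 1 --partitions 1 --topic weblogs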

Run flume-ng (from the directory containing spooldir_kafka.conf, here /home/tester/flafka, since --conf-file is given as a relative path):

$ flume-ng agent --conf /etc/flume-ng/conf \
> --conf-file spooldir_kafka.conf \
> --name agent1 -Dflume.root.logger=INFO,console

The output is similar to the following:

Info: Sourcing environment configuration script /etc/flume-ng/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12.jar from classpath
Info: Including Hive libraries found via () for Hive access
+ exec /usr/java/default/bin/java -Xmx500m -Dflume.root.logger=INFO,console -cp '/etc/flume-ng/conf:/usr/lib/flume- 
ng/lib/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar

...

-Djava.library.path=:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64 
org.apache.flume.node.Application --conf-file spooldir_kafka.conf --name agent1
2017-10-23 01:15:11,209 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start 
(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2017-10-23 01:15:11,223 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider 
$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:spooldir_kafka.conf
2017-10-23 01:15:11,256 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty 
(FlumeConfiguration.java:1017)] Processing:kafka-sink

...

2017-10-23 01:15:11,933 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start 
(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: weblogsrc started
2017-10-23 01:15:13,003 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2017-10-23 01:15:13,271 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property 
key.serializer.class is overridden to kafka.serializer.StringEncoder
2017-10-23 01:15:13,271 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property 
metadata.broker.list is overridden to localhost:9092
2017-10-23 01:15:13,277 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property 
request.required.acks is overridden to 1
2017-10-23 01:15:13,277 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property serializer.class 
is overridden to kafka.serializer.DefaultEncoder
2017-10-23 01:15:13,718 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register 
(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: kafka-sink: Successfully registered new MBean.
2017-10-23 01:15:13,719 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start 
(MonitoredCounterGroup.java:96)] Component type: SINK, name: kafka-sink started
...

2017-10-23 01:15:13,720 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents 
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:15:13,720 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile 
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-01-13.log to 
/flume/web_spooldir/2014-01-13.log.COMPLETED

...

2017-10-23 01:16:11,441 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents 
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:16:11,451 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile 
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-01-24.log to 
/flume/web_spooldir/2014-01-24.log.COMPLETED
2017-10-23 01:16:11,818 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents 
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:16:11,819 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile 
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-02-15.log to 
/flume/web_spooldir/2014-02-15.log.COMPLETED

Run the Kafka console consumer:

$ kafka-console-consumer --zookeeper localhost:2181 --topic weblogs
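
By default the console consumer prints only messages produced after it starts. To replay events already sitting in the topic (for example, those from the files ingested at agent startup), the standard --from-beginning flag can be added:

$ kafka-console-consumer --zookeeper localhost:2181 --topic weblogs --from-beginning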

In another terminal window, feed web logs into the /flume/web_spooldir directory. The files are staged under /tmp and then moved into place, so that the spooling directory source never sees a partially written file (spooled files must be complete and immutable once they appear):

$ cp -rf /home/tester/weblogs /tmp/tmp_weblogs
$ mv /tmp/tmp_weblogs/* /flume/web_spooldir
$ rm -rf /tmp/tmp_weblogs
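
As the agent log shows, the spooling directory source renames each fully ingested file with a .COMPLETED suffix, so ingestion progress can also be checked directly from the shell:

$ ls /flume/web_spooldir | grep COMPLETED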

The flume-ng window shows the new log files being shipped to the Kafka topic weblogs:

2017-10-23 01:36:28,436 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents 
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:36:28,449 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile 
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2013-09-22.log to 
/flume/web_spooldir/2013-09-22.log.COMPLETED
2017-10-23 01:36:28,971 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents 
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
...

2017-10-23 01:37:39,011 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile 
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-02-19.log to 
/flume/web_spooldir/2014-02-19.log.COMPLETED
2017-10-23 01:37:39,386 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents 
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:37:39,386 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile 
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-03-09.log to 
/flume/web_spooldir/2014-03-09.log.COMPLETED

The consumer window prints the contents of all the web log files (it consumes the weblogs topic, so it receives every web log event):

...

213.125.211.10 - 66543 [09/Mar/2014:00:00:14 +0100] "GET /KBDOC-00131.html HTTP/1.0" 200 9807 "http://www.tester.com" "tester test 001"
213.125.211.10 - 66543 [09/Mar/2014:00:00:14 +0100] "GET /theme.css HTTP/1.0" 200 6448 "http://www.tester.com" "tester test 002"
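
To double-check how many events actually landed in the topic, one option (assuming the kafka-run-class wrapper is available, as on CDH) is Kafka's GetOffsetShell tool, which prints the latest offset of each partition:

$ kafka-run-class kafka.tools.GetOffsetShell --broker-list localhost:9092 --topic weblogs --time -1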

