[Flume][Kafka] Flume and Kafka integration example (Kafka as the Flume sink, writing to a Kafka topic)
Preparation:
$ sudo mkdir -p /flume/web_spooldir
$ sudo chmod a+w -R /flume
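If the broker is not configured to auto-create topics, the weblogs topic must exist before the agent starts. A minimal sketch, assuming a single-node Kafka with ZooKeeper on localhost (these flags match pre-2.2 Kafka, where topic management went through ZooKeeper):
$ kafka-topics --create --zookeeper localhost:2181 \
> --replication-factor 1 --partitions 1 --topic weblogs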
Edit the Flume configuration file:
$ cat /home/tester/flafka/spooldir_kafka.conf
# Name the components on this agent
agent1.sources = weblogsrc
agent1.sinks = kafka-sink
agent1.channels = memchannel
# Configure the source
agent1.sources.weblogsrc.type = spooldir
agent1.sources.weblogsrc.spoolDir = /flume/web_spooldir
agent1.sources.weblogsrc.channels = memchannel
# Configure the sink
agent1.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.topic = weblogs
agent1.sinks.kafka-sink.brokerList = localhost:9092
agent1.sinks.kafka-sink.batchSize = 20
agent1.sinks.kafka-sink.channel = memchannel
# Use a channel which buffers events in memory
agent1.channels.memchannel.type = memory
agent1.channels.memchannel.capacity = 100000
agent1.channels.memchannel.transactionCapacity = 1000
$
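A note on the sink properties: topic, brokerList and batchSize are the property names used by the Kafka sink in Flume 1.6 and earlier (as shipped with CDH 5, which is what the log output below suggests). On Flume 1.7+ the sink was rebuilt on the new Kafka client and the property names changed; a hedged sketch of the equivalent configuration, to be checked against your Flume version's documentation:
# Flume 1.7+ equivalent (property names changed)
agent1.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafka-sink.kafka.topic = weblogs
agent1.sinks.kafka-sink.kafka.bootstrap.servers = localhost:9092
agent1.sinks.kafka-sink.flumeBatchSize = 20
agent1.sinks.kafka-sink.channel = memchannel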
Run flume-ng:
$ flume-ng agent --conf /etc/flume-ng/conf \
> --conf-file spooldir_kafka.conf \
> --name agent1 -Dflume.root.logger=INFO,console
The output is similar to the following:
Info: Sourcing environment configuration script /etc/flume-ng/conf/flume-env.sh
Info: Including Hadoop libraries found via (/usr/bin/hadoop) for HDFS access
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Including HBASE libraries found via (/usr/bin/hbase) for HBASE access
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hbase/bin/../lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/hadoop/lib/slf4j-log4j12.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar from classpath
Info: Excluding /usr/lib/zookeeper/lib/slf4j-log4j12.jar from classpath
Info: Including Hive libraries found via () for Hive access
+ exec /usr/java/default/bin/java -Xmx500m -Dflume.root.logger=INFO,console -cp '/etc/flume-ng/conf:/usr/lib/flume-
ng/lib/*:/etc/hadoop/conf:/usr/lib/hadoop/lib/activation-1.1.jar:/usr/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar
...
-Djava.library.path=:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hbase/bin/../lib/native/Linux-amd64-64
org.apache.flume.node.Application --conf-file spooldir_kafka.conf --name agent1
2017-10-23 01:15:11,209 (lifecycleSupervisor-1-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider.start
(PollingPropertiesFileConfigurationProvider.java:61)] Configuration provider starting
2017-10-23 01:15:11,223 (conf-file-poller-0) [INFO - org.apache.flume.node.PollingPropertiesFileConfigurationProvider
$FileWatcherRunnable.run(PollingPropertiesFileConfigurationProvider.java:133)] Reloading configuration file:spooldir_kafka.conf
2017-10-23 01:15:11,256 (conf-file-poller-0) [INFO - org.apache.flume.conf.FlumeConfiguration$AgentConfiguration.addProperty
(FlumeConfiguration.java:1017)] Processing:kafka-sink
...
2017-10-23 01:15:11,933 (lifecycleSupervisor-1-3) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start
(MonitoredCounterGroup.java:96)] Component type: SOURCE, name: weblogsrc started
2017-10-23 01:15:13,003 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Verifying properties
2017-10-23 01:15:13,271 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property
key.serializer.class is overridden to kafka.serializer.StringEncoder
2017-10-23 01:15:13,271 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property
metadata.broker.list is overridden to localhost:9092
2017-10-23 01:15:13,277 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property
request.required.acks is overridden to 1
2017-10-23 01:15:13,277 (lifecycleSupervisor-1-1) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Property serializer.class
is overridden to kafka.serializer.DefaultEncoder
2017-10-23 01:15:13,718 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.register
(MonitoredCounterGroup.java:120)] Monitored counter group for type: SINK, name: kafka-sink: Successfully registered new MBean.
2017-10-23 01:15:13,719 (lifecycleSupervisor-1-1) [INFO - org.apache.flume.instrumentation.MonitoredCounterGroup.start
(MonitoredCounterGroup.java:96)] Component type: SINK, name: kafka-sink started
...
2017-10-23 01:15:13,720 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:15:13,720 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-01-13.log to
/flume/web_spooldir/2014-01-13.log.COMPLETED
..
2017-10-23 01:16:11,441 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:16:11,451 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-01-24.log to
/flume/web_spooldir/2014-01-24.log.COMPLETED
2017-10-23 01:16:11,818 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:16:11,819 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-02-15.log to
/flume/web_spooldir/2014-02-15.log.COMPLETED
Run the Kafka console consumer:
$ kafka-console-consumer --zookeeper localhost:2181 --topic weblogs
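This uses the old ZooKeeper-based consumer, which matches CDH 5-era Kafka. On newer Kafka releases the console consumer connects to the broker directly (the --zookeeper option was removed in Kafka 2.0), so the rough equivalent would be:
$ kafka-console-consumer --bootstrap-server localhost:9092 --topic weblogs --from-beginning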
In another terminal window, feed web logs into the /flume/web_spooldir directory (the files are first copied to a temporary directory and then moved in, so the spooling directory source never sees a partially written file):
cp -rf /home/tester/weblogs /tmp/tmp_weblogs
mv /tmp/tmp_weblogs/* /flume/web_spooldir
rm -rf /tmp/tmp_weblogs
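As the flume-ng log shows, the spooling directory source renames each fully ingested file with a .COMPLETED suffix, so ingestion progress can be checked from the shell, for example:
$ ls /flume/web_spooldir | grep -c COMPLETED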
The flume-ng window shows the log files being shipped to the Kafka topic weblogs:
2017-10-23 01:36:28,436 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:36:28,449 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2013-09-22.log to
/flume/web_spooldir/2013-09-22.log.COMPLETED
2017-10-23 01:36:28,971 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
...
2017-10-23 01:37:39,011 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-02-19.log to
/flume/web_spooldir/2014-02-19.log.COMPLETED
2017-10-23 01:37:39,386 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents
(ReliableSpoolingFileEventReader.java:258)] Last read took us just up to a file boundary. Rolling to the next file, if there is one.
2017-10-23 01:37:39,386 (pool-4-thread-1) [INFO - org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile
(ReliableSpoolingFileEventReader.java:348)] Preparing to move file /flume/web_spooldir/2014-03-09.log to
/flume/web_spooldir/2014-03-09.log.COMPLETED
The consumer window prints the contents of all the web log files (it consumes topic weblogs and receives every web log event):
...
213.125.211.10 - 66543 [09/Mar/2014:00:00:14 +0100] "GET /KBDOC-00131.html HTTP/1.0" 200 9807 "http://www.tester.com" "tester
test 001"
213.125.211.10 - 66543 [09/Mar/2014:00:00:14 +0100] "GET /theme.css HTTP/1.0" 200 6448 "http://www.tester.com" "tester test 002"