Using Flume's HTTP Source
Flume ships with many sources, such as exec, kafka, and so on. Among them is a very simple one, the HTTP Source: when Flume starts with it, it brings up an embedded web server listening on the configured IP and port. A common use case: in environments where the Flume SDK and its dependencies cannot be deployed, an application can send data over HTTP instead of Flume's RPC, and the HTTP Source receives that data into Flume.
1. HTTP Source parameters:
| Property | Default | Description |
| --- | --- | --- |
| type | (none) | The component type; must be `http` (org.apache.flume.source.http.HTTPSource) |
| bind | (none) | Hostname or IP address to bind to |
| port | (none) | Port number to listen on |
| enableSSL | false | Whether to serve over SSL |
| keystore | (none) | Path to the keystore file to use |
| keystorePassword | (none) | Password to open the keystore |
| handler | org.apache.flume.source.http.JSONHandler | Handler class used by the HTTP Source |
| handler.* | (none) | Arbitrary parameters passed through to the handler class |
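As a sketch of how the SSL-related properties fit together, a TLS-enabled source could be configured as follows (the bind address, port, keystore path, and password here are placeholder values of my own, not from this article):

```properties
a1.sources.r1.type = http
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 50443
a1.sources.r1.enableSSL = true
a1.sources.r1.keystore = /path/to/keystore.jks
a1.sources.r1.keystorePassword = changeit
```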
1) handler:
Flume uses a pluggable "handler" to turn HTTP requests into Flume events. If none is specified, the default is JSONHandler, which accepts events in the JSON format shown below. Users may also supply a custom handler, which must implement the HTTPSourceHandler interface.
JSON payload format:

```json
[
  {
    "headers": {"": "", "": ""},
    "body": "the first event"
  },
  {
    "headers": {"": "", "": ""},
    "body": "the second event"
  }
]
```
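As a sketch of what a client that cannot use the Flume SDK might do, the following Python snippet (my own illustration, not from the Flume documentation) builds the JSON payload the default JSONHandler accepts:

```python
import json

# Each element is one Flume event: a map of string headers plus a string body.
events = [
    {"headers": {"h1": "v1", "h2": "v2"}, "body": "the first event"},
    {"headers": {"h1": "v1", "h2": "v2"}, "body": "the second event"},
]

# JSONHandler expects the request body to be a JSON array of such events.
payload = json.dumps(events)
print(payload)

# To actually send it (assuming an agent like the one configured later
# in this article is listening on localhost:50000):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:50000",
#     data=payload.encode("utf-8"),
#     headers={"Content-Type": "application/json"},
# )
# urllib.request.urlopen(req)
```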
2. A quick look at Flume's logger sink:
It logs events at INFO level and is typically used for debugging. This article uses this sink type. Configurable properties:
- type: `logger`
- maxBytesToLog (default 16): maximum number of bytes of the event body to log
Note: a log4j configuration file must exist in the directory passed via --conf; alternatively, log4j settings can be given at startup with -Dflume.root.logger=INFO,console.
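For reference, a logger sink that dumps more than the 16-byte default could be configured like this (a sketch; the `a1`/`k1` names follow the naming used later in this article):

```properties
a1.sinks.k1.type = logger
a1.sinks.k1.maxBytesToLog = 256
```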
3. A simple HTTP Source example:
1) Download and unpack Flume:
```shell
cd /usr/local/
wget http://mirror.bit.edu.cn/apache/flume/1.7.0/apache-flume-1.7.0-bin.tar.gz
tar -xvzf apache-flume-1.7.0-bin.tar.gz
```
Configure Flume environment variables:
```shell
vim /etc/profile
export PS1="[\u@`/sbin/ifconfig eth0|grep 'inet '|awk -F'[: ]+' '{print $4}'` \W]"'$ '
export FLUME_HOME=/usr/local/apache-flume-1.7.0-bin
export PATH=$PATH:$FLUME_HOME/bin

# Later steps refer to /usr/local/flume; a symlink keeps the paths consistent:
ln -s /usr/local/apache-flume-1.7.0-bin /usr/local/flume
```
2) Install the JDK and configure its environment variables.
3) Configure Flume:
```shell
cd /usr/local/flume/conf
vim flume-env.sh
```
Set JAVA_HOME here, and place the following log4j.properties in the same conf directory:
```properties
### set log levels ###
log4j.rootLogger = INFO, stdout, D, E

### console ###
log4j.appender.stdout = org.apache.log4j.ConsoleAppender
log4j.appender.stdout.Target = System.out
log4j.appender.stdout.layout = org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern = [%d{MM-dd HH:mm:ss}] [%p] [%c:%L] %m%n

### daily rolling log file ###
log4j.appender.D = org.apache.log4j.DailyRollingFileAppender
log4j.appender.D.File = /data/logs/flume/flume.log
log4j.appender.D.Append = true
log4j.appender.D.Threshold = INFO
log4j.appender.D.layout = org.apache.log4j.PatternLayout
log4j.appender.D.layout.ConversionPattern = [%d{MM-dd HH:mm:ss}] [%p] [%c:%L] %m%n

### errors go to a separate file ###
log4j.appender.E = org.apache.log4j.DailyRollingFileAppender
log4j.appender.E.File = /data/logs/flume/flume_error.log
log4j.appender.E.Append = true
log4j.appender.E.Threshold = ERROR
log4j.appender.E.layout = org.apache.log4j.PatternLayout
log4j.appender.E.layout.ConversionPattern = [%d{MM-dd HH:mm:ss}] [%p] [%c:%L] %m%n

### sink-specific logger ###
log4j.logger.com.iqiyi.ttbrain.log.flume.sink.MysqlSink = INFO, F, EE
log4j.additivity.com.iqiyi.ttbrain.log.flume.sink.MysqlSink = false
log4j.appender.F = org.apache.log4j.DailyRollingFileAppender
log4j.appender.F.File = /data/logs/flume/flume_sink.log
log4j.appender.F.Append = true
log4j.appender.F.Threshold = INFO
log4j.appender.F.layout = org.apache.log4j.PatternLayout
log4j.appender.F.layout.ConversionPattern = [%d{MM-dd HH:mm:ss}] [%p] [%c:%L] %m%n
log4j.appender.EE = org.apache.log4j.DailyRollingFileAppender
log4j.appender.EE.File = /data/logs/flume/flume_sink_error.log
log4j.appender.EE.Append = true
log4j.appender.EE.Threshold = ERROR
log4j.appender.EE.layout = org.apache.log4j.PatternLayout
log4j.appender.EE.layout.ConversionPattern = [%d{MM-dd HH:mm:ss}] [%p] [%c:%L] %m%n
```
4) Configure the HTTP Source:
```shell
cd /usr/local/flume/conf
vim http_test.conf
```

```properties
a1.sources=r1
a1.sinks=k1
a1.channels=c1

a1.sources.r1.type=http
a1.sources.r1.bind=localhost
a1.sources.r1.port=50000
a1.sources.r1.channels=c1

a1.sinks.k1.type=logger
a1.sinks.k1.channel=c1

a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
```
5) Start Flume:
```shell
flume-ng agent -c /usr/local/flume/conf/ -f /usr/local/flume/conf/http_test.conf -n a1
```
6) Test:
Open another shell window and run:
```shell
curl -X POST -d '[{"headers":{"h1":"v1","h2":"v2"},"body":"hello body"}]' http://localhost:50000
```
The following should then appear in /data/logs/flume/flume.log:
```
[09-29 10:31:12] [INFO] [org.apache.flume.sink.LoggerSink:94] Event: { headers:{h1=v1, h2=v2} body: 68 65 6C 6C 6F 20 62 6F 64 79 hello body }
```
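The logger sink prints the event body as a hex dump followed by a printable preview. A quick Python check (my own illustration) confirms that the hex shown above is simply the UTF-8 bytes of "hello body":

```python
# Reproduce the logger sink's hex dump of the event body.
body = "hello body".encode("utf-8")
hex_dump = " ".join(f"{b:02X}" for b in body)
print(hex_dump)  # → 68 65 6C 6C 6F 20 62 6F 64 79
```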
4. A custom handler:
Suppose requests arrive as XML, with the expected format as follows:
```xml
<events>
  <event>
    <headers><header1>value1</header1></headers>
    <body>test</body>
  </event>
  <event>
    <headers><header1>value1</header1></headers>
    <body>test2</body>
  </event>
</events>
```
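The parsing logic can be prototyped outside Flume before porting it to a handler. This Python sketch (my own illustration, mirroring what the Java handler below does) extracts headers and bodies from the expected format:

```python
import xml.etree.ElementTree as ET

xml_payload = """<events>
  <event>
    <headers><header1>value1</header1></headers>
    <body>test</body>
  </event>
  <event>
    <headers><header1>value1</header1></headers>
    <body>test2</body>
  </event>
</events>"""

root = ET.fromstring(xml_payload)
assert root.tag == "events"  # reject anything not wrapped in <events>

events = []
for event in root.findall("event"):
    headers = {}
    # Combine all <headers> sections; each child element becomes one header.
    for headers_node in event.findall("headers"):
        for header in headers_node:
            headers[header.tag] = header.text
    body = event.find("body").text
    events.append((headers, body))

print(events)
```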
1) pom.xml:
```xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.pq</groupId>
  <artifactId>flume-demo</artifactId>
  <packaging>jar</packaging>
  <version>1.0</version>
  <name>flume-demo Maven jar</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>4.8.2</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.7</version>
      <scope>compile</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.flume</groupId>
      <artifactId>flume-ng-core</artifactId>
      <version>1.7.0</version>
      <scope>compile</scope>
    </dependency>
  </dependencies>
  <build>
    <finalName>flume-demo</finalName>
  </build>
</project>
```
2) The custom handler:
```java
package org.pq.flumeDemo.sources;

import com.google.common.base.Preconditions;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;
import org.apache.flume.source.http.HTTPBadRequestException;
import org.apache.flume.source.http.HTTPSourceHandler;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.Node;
import org.w3c.dom.NodeList;
import org.xml.sax.SAXException;

import javax.servlet.http.HttpServletRequest;
import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class HTTPSourceXMLHandler implements HTTPSourceHandler {
    private final String ROOT = "events";
    private final String EVENT_TAG = "event";
    private final String HEADERS_TAG = "headers";
    private final String BODY_TAG = "body";
    private final String CONF_INSERT_TIMESTAMP = "insertTimestamp";
    private final String TIMESTAMP_HEADER = "timestamp";

    private final DocumentBuilderFactory documentBuilderFactory
            = DocumentBuilderFactory.newInstance();
    // Document builders are not thread-safe,
    // so make sure we have one for each thread.
    private final ThreadLocal<DocumentBuilder> docBuilder
            = new ThreadLocal<DocumentBuilder>();
    private boolean insertTimestamp;
    private static final Logger LOG = LoggerFactory.getLogger(HTTPSourceXMLHandler.class);

    public List<Event> getEvents(HttpServletRequest httpServletRequest)
            throws HTTPBadRequestException, Exception {
        if (docBuilder.get() == null) {
            docBuilder.set(documentBuilderFactory.newDocumentBuilder());
        }
        Document doc;
        final List<Event> events;
        try {
            doc = docBuilder.get().parse(httpServletRequest.getInputStream());
            Element root = doc.getDocumentElement();
            root.normalize();
            // Verify that the root element is "events".
            Preconditions.checkState(ROOT.equalsIgnoreCase(root.getTagName()));

            NodeList nodes = root.getElementsByTagName(EVENT_TAG);
            LOG.info("get nodes={}", nodes);
            int eventCount = nodes.getLength();
            events = new ArrayList<Event>(eventCount);
            // Default to UTF-8 when the request does not declare a charset;
            // getBytes(null) below would otherwise throw.
            String charset = httpServletRequest.getCharacterEncoding();
            if (charset == null) {
                charset = "UTF-8";
            }
            for (int i = 0; i < eventCount; i++) {
                Element event = (Element) nodes.item(i);
                // Get all headers. If there are multiple header sections, combine them.
                NodeList headerNodes = event.getElementsByTagName(HEADERS_TAG);
                Map<String, String> eventHeaders = new HashMap<String, String>();
                for (int j = 0; j < headerNodes.getLength(); j++) {
                    Node headerNode = headerNodes.item(j);
                    NodeList headers = headerNode.getChildNodes();
                    for (int k = 0; k < headers.getLength(); k++) {
                        Node header = headers.item(k);
                        // Read only element nodes.
                        if (header.getNodeType() != Node.ELEMENT_NODE) {
                            continue;
                        }
                        // Make sure a header is inserted only once,
                        // else the event is malformed.
                        Preconditions.checkState(
                                !eventHeaders.containsKey(header.getNodeName()),
                                "Header expected only once " + header.getNodeName());
                        eventHeaders.put(header.getNodeName(), header.getTextContent());
                    }
                }
                Node body = event.getElementsByTagName(BODY_TAG).item(0);
                if (insertTimestamp) {
                    eventHeaders.put(TIMESTAMP_HEADER,
                            String.valueOf(System.currentTimeMillis()));
                }
                events.add(EventBuilder.withBody(
                        body.getTextContent().getBytes(charset), eventHeaders));
            }
        } catch (SAXException ex) {
            throw new HTTPBadRequestException(
                    "Request could not be parsed into valid XML", ex);
        } catch (Exception ex) {
            throw new HTTPBadRequestException(
                    "Request is not in expected format. "
                            + "Please refer documentation for expected format.", ex);
        }
        return events;
    }

    public void configure(Context context) {
        insertTimestamp = context.getBoolean(CONF_INSERT_TIMESTAMP, false);
    }
}
```
Package it as a jar with dependencies and place it in Flume's lib directory.
3) Flume configuration file:
```properties
a1.sources=r1
a1.sinks=k1
a1.channels=c1

a1.sources.r1.type=http
a1.sources.r1.bind=localhost
a1.sources.r1.port=50000
a1.sources.r1.channels=c1
a1.sources.r1.handler=org.pq.flumeDemo.sources.HTTPSourceXMLHandler
a1.sources.r1.handler.insertTimestamp=true

a1.sinks.k1.type=logger
a1.sinks.k1.channel=c1

a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100
```
4) Start:
```shell
bin/flume-ng agent -c conf -f conf/http_test.conf -n a1 -Dflume.root.logger=INFO,console
```