a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2

# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444

# Describe the sinks
a1.sinks.k1.type = file_roll
a1.sinks.k1.sink.directory = /home/chenyun/data/flume/file_sinke1
a1.sinks.k2.type = file_roll
a1.sinks.k2.sink.directory = /home/chenyun/data/flume/file_sinke2

# Use channels which buffer events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100

# Bind the source and sinks to the channels
a1.sources.r1.channels = c1 c2
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = regex_extractor
a1.sources.r1.interceptors.i1.regex = (\\w+):(\\w+):(\\w+)
a1.sources.r1.interceptors.i1.serializers = s1 s2 s3
a1.sources.r1.interceptors.i1.serializers.s1.name = one
a1.sources.r1.interceptors.i1.serializers.s2.name = two
a1.sources.r1.interceptors.i1.serializers.s3.name = three
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = one
a1.sources.r1.selector.mapping.hadoop = c1
a1.sources.r1.selector.default = c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
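
With this configuration, the regex_extractor interceptor splits each netcat line of the form value1:value2:value3 into the headers one, two and three, and the multiplexing selector then routes on the one header: events whose first field is hadoop go to channel c1 (and end up under file_sinke1), while everything else falls through to the default channel c2 (file_sinke2). Below is a minimal sketch of one way to start the agent and exercise the routing; the config file name conf/multiplexing.conf and the pre-created sink directories are assumptions for the example, not part of the original config.

# Create the file_roll sink directories up front, to be safe.
mkdir -p /home/chenyun/data/flume/file_sinke1 /home/chenyun/data/flume/file_sinke2

# Start the agent (assumes the configuration above is saved as conf/multiplexing.conf).
bin/flume-ng agent --conf conf --conf-file conf/multiplexing.conf \
  --name a1 -Dflume.root.logger=INFO,console

# In another terminal, send test events to the netcat source, e.g.:
#   "hadoop:hdfs:yarn"  -> header one=hadoop, mapped to c1, written under file_sinke1
#   "spark:flume:kafka" -> header one=spark, no mapping, default channel c2 (file_sinke2)
nc localhost 44444

A line that does not match the colon-separated pattern gets no one header at all, and the multiplexing selector likewise sends it to the default channel c2.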
