Skipping the process and going straight to the result! A channel wired to an HdfsSink was, by accident, configured as follows:
...
agent.channels.common-channel.transactionCapacity=10
...
agent.sinks.hdfs-sink.hdfs.batchSize=20

  A quick test showed Flume throwing the exception below, which on its own is fair enough...

[2015-12-17 11:42:09:694 ERROR][org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:467)]process failed
org.apache.flume.ChannelException: Take list for MemoryTransaction, capacity 10 full, consider committing more frequently, increasing capacity, or increasing thread count
        at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doTake(MemoryChannel.java:96)
        at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
        at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:382)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
[2015-12-17 11:42:09:696 ERROR][org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:160)]Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: org.apache.flume.ChannelException: Take list for MemoryTransaction, capacity 10 full, consider committing more frequently, increasing capacity, or increasing thread count
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:471)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.flume.ChannelException: Take list for MemoryTransaction, capacity 10 full, consider committing more frequently, increasing capacity, or increasing thread count
        at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doTake(MemoryChannel.java:96)
        at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:113)
        at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:95)
        at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:382)
        ... 3 more
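
  For context, the stack trace points at MemoryChannel$MemoryTransaction.doTake(): each transaction buffers the events it takes in a take list sized by transactionCapacity, so with transactionCapacity=10 the 11th uncommitted take has nowhere to go. A trimmed paraphrase of that check (not the verbatim Flume source; exact field types and names may differ by version):

  // Trimmed paraphrase of MemoryChannel's inner transaction class.
  private class MemoryTransaction extends BasicTransactionSemantics {
    private final LinkedBlockingDeque<Event> takeList;   // bounded by transactionCapacity

    @Override
    protected Event doTake() throws InterruptedException {
      if (takeList.remainingCapacity() == 0) {           // 10 events already taken -> no room for an 11th
        throw new ChannelException("Take list for MemoryTransaction, capacity "
            + takeList.size() + " full, consider committing more frequently, "
            + "increasing capacity, or increasing thread count");
      }
      ......
      // otherwise the event is moved from the channel queue into takeList,
      // so that doRollback() can push it back later
    }
  }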

  But... but then I found that after stopping the sender, channelSize never went down and the data in HDFS kept growing!!! And it never stops: the data is replayed forever!!!
  The relevant HdfsSink code looks like this:

  public Status process() throws EventDeliveryException {
    Channel channel = getChannel();
    Transaction transaction = channel.getTransaction();
    List<BucketWriter> writers = Lists.newArrayList();
    transaction.begin();
    try {
      int txnEventCount = 0;
      for (txnEventCount = 0; txnEventCount < batchSize; txnEventCount++) {
        Event event = channel.take();   // blows up once the transaction's take list is full
        if (event == null) {
          break;
        }
        ......
        bucketWriter.append(event);     // the event goes out to HDFS here, before any commit
      }
      ......
      transaction.commit();
      if (txnEventCount < 1) {
        return Status.BACKOFF;
      } else {
        sinkCounter.addToEventDrainSuccessCount(txnEventCount);
        return Status.READY;
      }
    } catch (IOException eIO) {
      transaction.rollback();
      LOG.warn("HDFS IO error", eIO);
      return Status.BACKOFF;
    } catch (Throwable th) {
      transaction.rollback();           // rolls back the takes, not the appends already sent to HDFS
      LOG.error("process failed", th);
      if (th instanceof Error) {
        throw (Error) th;
      } else {
        throw new EventDeliveryException(th);
      }
    } finally {
      transaction.close();
    }
  }

  See it? The exception is thrown by channel.take() while fetching the 11th event, and the transaction is then rolled back. But the 10 events taken before that have all already gone through bucketWriter.append(event), i.e. they have already been written to HDFS.
That is exactly the behavior observed above: the transaction keeps getting rolled back, yet the events that were taken get written to HDFS anyway. What kind of transaction is that...
  Sure, the configuration is unreasonable, but Flume runs a big pile of parameter-validation code at startup and none of it catches this; it should at least report an error or a WARN!
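
  To make the endless replay concrete, here is a minimal, self-contained sketch (my own illustration, not Flume code) of the same pattern: the take buffer is bounded at 10, the sink tries to drain 20 per batch and appends each event to an external "HDFS" list inside the loop, and rollback only puts the taken events back into the queue. The appends are never undone, so every retry re-writes the same 10 events while the channel never drains:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class ReplayDemo {
    static final int TRANSACTION_CAPACITY = 10;  // like transactionCapacity=10
    static final int BATCH_SIZE = 20;            // like hdfs.batchSize=20

    static Deque<String> channelQueue = new ArrayDeque<>();  // stands in for the memory channel
    static List<String> takeList = new ArrayList<>();        // per-transaction take list
    static List<String> hdfs = new ArrayList<>();            // stands in for HDFS

    // Analogue of MemoryTransaction.doTake(): fails once the take list is full.
    static String take() {
        if (takeList.size() >= TRANSACTION_CAPACITY) {
            throw new RuntimeException("Take list full, capacity " + TRANSACTION_CAPACITY);
        }
        String e = channelQueue.poll();
        if (e != null) {
            takeList.add(e);
        }
        return e;
    }

    // Analogue of doRollback(): taken events go back to the channel; HDFS writes stay.
    static void rollback() {
        while (!takeList.isEmpty()) {
            channelQueue.addFirst(takeList.remove(takeList.size() - 1));
        }
    }

    // Analogue of HDFSEventSink.process(): append happens inside the loop, before commit.
    static void processOneBatch() {
        try {
            for (int i = 0; i < BATCH_SIZE; i++) {
                String e = take();      // throws on the 11th take
                if (e == null) {
                    break;
                }
                hdfs.add(e);            // already "written out" before the commit
            }
            takeList.clear();           // commit: the taken events are gone for good
        } catch (RuntimeException ex) {
            rollback();                 // undoes the takes, not the writes
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 30; i++) {
            channelQueue.add("event-" + i);
        }
        for (int round = 1; round <= 3; round++) {
            processOneBatch();
            System.out.println("round " + round
                    + ": channelSize=" + channelQueue.size()
                    + ", hdfsWrites=" + hdfs.size());
        }
        // Prints channelSize=30 every round while hdfsWrites grows by 10 each time,
        // which is exactly the "channel never drains, HDFS keeps growing" symptom.
    }
}

  The fix is simply to keep batchSize no larger than transactionCapacity (and transactionCapacity no larger than the channel's capacity), for example (100 here is an arbitrary value, anything >= batchSize removes the mismatch):

agent.channels.common-channel.transactionCapacity=100
...
agent.sinks.hdfs-sink.hdfs.batchSize=20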
