Flume reports the following error while extracting MySQL data into Kafka:

[SinkRunner-PollingRunner-DefaultSinkProcessor] ERROR org.apache.flume.sink.kafka.KafkaSink - Failed to publish events
org.apache.flume.ChannelException: Take list for MemoryTransaction, capacity full, consider committing more frequently, increasing capacity, or increasing thread count
at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doTake(MemoryChannel.java:)
at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:)
at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:)
at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:)
at java.lang.Thread.run(Thread.java:)
[SinkRunner-PollingRunner-DefaultSinkProcessor] ERROR org.apache.flume.SinkRunner - Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: Failed to publish events
at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:)
at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:)
at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:)
at java.lang.Thread.run(Thread.java:)
Caused by: org.apache.flume.ChannelException: Take list for MemoryTransaction, capacity full, consider committing more frequently, increasing capacity, or increasing thread count
at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doTake(MemoryChannel.java:)
at org.apache.flume.channel.BasicTransactionSemantics.take(BasicTransactionSemantics.java:)
at org.apache.flume.channel.BasicChannelSemantics.take(BasicChannelSemantics.java:)
at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:)
... more
The transaction's Take list in the MemoryChannel is full (the stack trace fails in doTake, so it is the Take list rather than the Put queue that hit its limit).
Solution: increase the value of the channel's capacity.

a1.channels.ch-.capacity = 
Note: do not append a comment to the end of a line in a Flume configuration file; the trailing comment is read as part of the value and the setting will not take effect.
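For reference, a minimal sketch of the relevant channel settings, assuming the agent is named a1, the channel ch-1, and the Kafka sink k1 (these names and the numeric values are illustrative, not taken from the original setup). The Take list that overflowed is sized by the channel's transactionCapacity, so it usually helps to raise that value together with capacity and to keep the sink's batch size no larger than transactionCapacity:

a1.channels.ch-1.type = memory
# maximum number of events the channel can hold (illustrative value)
a1.channels.ch-1.capacity = 100000
# maximum events per transaction; the Take list is bounded by this value (illustrative value)
a1.channels.ch-1.transactionCapacity = 1000
# Kafka sink batch size (flumeBatchSize in Flume 1.7; older releases use batchSize); keep it <= transactionCapacity
a1.sinks.k1.flumeBatchSize = 1000

As noted above, keep comments like these on their own lines; a comment appended after a value breaks that setting.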
