When Flume started up, the exceptions below appeared. The process kept running and Kafka still received data, but the flow would occasionally break off.

2016-08-25 15:32:54,561 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Fetching metadata from broker id:2,host:10.208.129.5,port:9092 with correlation id 192126244 for 1 topic(s) Set(TRAFFICxxx_LOG)
2016-08-25 15:32:54,562 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - kafka.utils.Logging$class.error(Logging.scala:103)] Producer connection to 10.208.129.5:9092 unsuccessful
java.net.ConnectException: Connection refused
        at sun.nio.ch.Net.connect0(Native Method)
        at sun.nio.ch.Net.connect(Net.java:454)
        at sun.nio.ch.Net.connect(Net.java:446)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
        at kafka.utils.Utils$.swallow(Utils.scala:167)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:46)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:129)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
2016-08-25 15:32:54,563 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:89)] Fetching topic metadata with correlation id 192126244 for topics [Set(TRAFFICxxx_LOG)] from broker [id:2,host:10.208.129.5,port:9092] failed
java.net.ConnectException: Connection refused
        at sun.nio.ch.Net.connect0(Native Method)
        at sun.nio.ch.Net.connect(Net.java:454)
        at sun.nio.ch.Net.connect(Net.java:446)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
        at kafka.utils.Utils$.swallow(Utils.scala:167)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:46)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:129)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
2016-08-25 15:32:54,564 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Fetching metadata from broker id:1,host:10.208.129.4,port:9092 with correlation id 192126244 for 1 topic(s) Set(TRAFFICxxx_LOG)
2016-08-25 15:32:54,565 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - kafka.utils.Logging$class.error(Logging.scala:103)] Producer connection to 10.208.129.4:9092 unsuccessful
java.net.ConnectException: Connection refused
        at sun.nio.ch.Net.connect0(Native Method)
        at sun.nio.ch.Net.connect(Net.java:454)
        at sun.nio.ch.Net.connect(Net.java:446)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
        at kafka.utils.Utils$.swallow(Utils.scala:167)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:46)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:129)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
2016-08-25 15:32:54,566 (SinkRunner-PollingRunner-DefaultSinkProcessor) [WARN - kafka.utils.Logging$class.warn(Logging.scala:89)] Fetching topic metadata with correlation id 192126244 for topics [Set(TRAFFICxxx_LOG)] from broker [id:1,host:10.208.129.4,port:9092] failed
java.net.ConnectException: Connection refused
        at sun.nio.ch.Net.connect0(Native Method)
        at sun.nio.ch.Net.connect(Net.java:454)
        at sun.nio.ch.Net.connect(Net.java:446)
        at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:648)
        at kafka.network.BlockingChannel.connect(BlockingChannel.scala:57)
        at kafka.producer.SyncProducer.connect(SyncProducer.scala:141)
        at kafka.producer.SyncProducer.getOrMakeConnection(SyncProducer.scala:156)
        at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:68)
        at kafka.producer.SyncProducer.send(SyncProducer.scala:112)
        at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:53)
        at kafka.producer.BrokerPartitionInfo.updateInfo(BrokerPartitionInfo.scala:82)
        at kafka.producer.async.DefaultEventHandler$$anonfun$handle$1.apply$mcV$sp(DefaultEventHandler.scala:67)
        at kafka.utils.Utils$.swallow(Utils.scala:167)
        at kafka.utils.Logging$class.swallowError(Logging.scala:106)
        at kafka.utils.Utils$.swallowError(Utils.scala:46)
        at kafka.producer.async.DefaultEventHandler.handle(DefaultEventHandler.scala:67)
        at kafka.producer.Producer.send(Producer.scala:76)
        at kafka.javaapi.producer.Producer.send(Producer.scala:42)
        at org.apache.flume.sink.kafka.KafkaSink.process(KafkaSink.java:129)
        at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:68)
        at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:147)
        at java.lang.Thread.run(Thread.java:745)
2016-08-25 15:32:54,566 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Fetching metadata from broker id:0,host:10.208.129.3,port:9092 with correlation id 192126244 for 1 topic(s) Set(TRAFFICxxx_LOG)
2016-08-25 15:32:54,567 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Connected to 10.208.129.3:9092 for producing
2016-08-25 15:32:54,569 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Disconnecting from 10.208.129.3:9092
2016-08-25 15:32:54,570 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Disconnecting from kafka2:9092
2016-08-25 15:32:54,572 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - kafka.utils.Logging$class.info(Logging.scala:68)] Connected to kafka5:9092 for producing

At first I suspected that the .4 and .5 machines were unreachable or that the port was not open. Checking the ports showed that Kafka was simply not running on .4 and .5 at all (a sketch of those checks is below).
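
As a rough sketch of that check (the exact tools depend on what is installed on the hosts), one can probe the broker port from the Flume host and look for a Kafka process on the broker machines:

# from the Flume host: "connection refused" means nothing is listening on 9092
nc -zv 10.208.129.4 9092
nc -zv 10.208.129.5 9092

# on the broker hosts themselves: is anything listening on 9092, and is a Kafka JVM running?
netstat -tlnp | grep 9092
jps -l | grep -i kafka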

To verify this, I also looked up Kafka's registration nodes in ZooKeeper. A ZooKeeper instance runs at 10.208.129.3:2182, but no Kafka broker is registered there; it only hosts four Storm topologies:
[zk: 10.208.129.3:2182(CONNECTED) 0] ls /
[consumers, storm, brokers, zookeeper]
[zk: 10.208.129.3:2182(CONNECTED) 1] ls /storm
[workerbeats, storms, supervisors, errors, assignments]
[zk: 10.208.129.3:2182(CONNECTED) 2] ls /storm/storms
[rule-hot-config-stat-5-1471598199, traffic-info-stat-8-1471599337, ruleStatTopology-9-1471599960, rule-isp-stat-4-1471598183]
[zk: 10.208.129.3:2182(CONNECTED) 3] ls /consumers   
[]
[zk: 10.208.129.3:2182(CONNECTED) 4] ls /brokers  
[topics]
[zk: 10.208.129.3:2182(CONNECTED) 5] ls /brokers/topics
[]
[zk: 10.208.129.3:2182(CONNECTED) 6] ls /zookeeper     
[quota]
[zk: 10.208.129.3:2182(CONNECTED) 7] ls /zookeeper/quota
[]

Another ZooKeeper instance runs at 10.208.129.4:2181, and Kafka is registered there (under the /kafka chroot). The running broker ids are 144, 139 and 141, corresponding to 10.208.129.3, 10.208.129.6 and 10.208.129.11 respectively:
[zk: 10.208.129.4:2181(CONNECTED) 0] ls /
[traffic-infoxxx-stat-offset, rule-hot-config-offset, rule-info-stat-offset, traffic-info-stat-offset, zookeeper, rule-isp-stat-offset, consumers, kafka, hive_zookeeper_namespace_hive, rule-config-stat-offset, brokers]
[zk: 10.208.129.4:2181(CONNECTED) 1] ls /kafka
[consumers, config, controller, brokers, admin, controller_epoch]
[zk: 10.208.129.4:2181(CONNECTED) 2] ls /kafka/brokers
[topics, ids]
[zk: 10.208.129.4:2181(CONNECTED) 3] ls /kafka/brokers/ids
[144, 139, 141]
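
To confirm that id-to-host mapping, each broker's registration znode can be read directly; this is only a sketch, but on a 0.8-era Kafka the znode data is a small JSON document containing the broker's host and port (repeat for ids 139 and 141):

[zk: 10.208.129.4:2181(CONNECTED) 4] get /kafka/brokers/ids/144

The host and port fields in that JSON are what map broker id 144 to 10.208.129.3:9092, and likewise 139 and 141 to 10.208.129.6 and 10.208.129.11.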

Since the Kafka cluster was set up by another team and is not easy to touch, the only option was to change the Flume configuration file, as follows:

agent1.sinks.kafkaSink.type = org.apache.flume.sink.kafka.KafkaSink
agent1.sinks.kafkaSink.topic = TRAFFICxxx_LOG
#agent1.sinks.kafkaSink.brokerList = 10.208.129.3:9092,10.208.129.4:9092,10.208.129.5:9092
#agent1.sinks.kafkaSink.metadata.broker.list = 10.208.129.3:9092,10.208.129.4:9092,10.208.129.5:9092
agent1.sinks.kafkaSink.brokerList = 10.208.129.3:9092,10.208.129.6:9092,10.208.129.11:9092
agent1.sinks.kafkaSink.metadata.broker.list = 10.208.129.3:9092,10.208.129.6:9092,10.208.129.11:9092
agent1.sinks.kafkaSink.producer.type=sync
agent1.sinks.kafkaSink.serializer.class=kafka.serializer.DefaultEncoder
agent1.sinks.kafkaSink.channel = memoryChannel

After pointing brokerList and metadata.broker.list at the IPs of the hosts where Kafka is actually running, everything runs normally.
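
As a final sanity check (assuming the Kafka 0.8 command-line tools are available under the Kafka installation's bin/ directory), the corrected broker list can be exercised directly with the console producer, and the topic read back via the /kafka chroot seen in ZooKeeper above:

bin/kafka-console-producer.sh --broker-list 10.208.129.3:9092,10.208.129.6:9092,10.208.129.11:9092 --topic TRAFFICxxx_LOG
bin/kafka-console-consumer.sh --zookeeper 10.208.129.4:2181/kafka --topic TRAFFICxxx_LOG --from-beginning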
