A pitfall with host vs. IP in the logstash kafka output
Original configuration:
output {
  kafka {
    acks => "0"
    enable_metric => false
    codec => "json"
    topic_id => "topic_test"
    bootstrap_servers => "kafka:9092"
    batch_size => 2
  }
  stdout {
    codec => "json"
  }
}
The exception:
ERROR logstash.pipeline - Error registering plugin {:plugin=>"#<LogStash::OutputDelegator:0x7f6968f0 @namespaced_metric=#<LogStash::Instrument::NamespacedMetric:0x6a481298 @metric=#<LogStash::Instrument::Metric:0x373adc76 @collector=#<LogStash::Instrument::Collector:0x5c413763 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x68dbfaf3 @store=#<Concurrent::Map:0x00000000061f98 entries=2 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x422de9a2>, @fast_lookup=#<Concurrent::Map:0x00000000061f9c entries=53 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs, :\"ea6e3d1fb3cb9d03054be3c38cb045c1fb3aae21-4\"]>, @metric=#<LogStash::Instrument::NamespacedMetric:0x763a85c3 @metric=#<LogStash::Instrument::Metric:0x373adc76 @collector=#<LogStash::Instrument::Collector:0x5c413763 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x68dbfaf3 @store=#<Concurrent::Map:0x00000000061f98 entries=2 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x422de9a2>, @fast_lookup=#<Concurrent::Map:0x00000000061f9c entries=53 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs]>, @logger=#<LogStash::Logging::Logger:0x6ccc74ee @logger=#<Java::OrgApacheLoggingLog4jCore::Logger:0x143e31e1>>, @strategy=#<LogStash::OutputDelegatorStrategies::Shared:0x4741e2d6 @output=<LogStash::Outputs::Kafka acks=>\"0\", codec=><LogStash::Codecs::JSON id=>\"json_f412b478-1559-45fd-9d73-7b500ed05f4a\", enable_metric=>true, charset=>\"UTF-8\">, topic_id=>\"qingbo_news\", bootstrap_servers=>\"kafka_l:9092\", batch_size=>2, id=>\"ea6e3d1fb3cb9d03054be3c38cb045c1fb3aae21-4\", enable_metric=>true, workers=>1, block_on_buffer_full=>true, buffer_memory=>33554432, compression_type=>\"none\", key_serializer=>\"org.apache.kafka.common.serialization.StringSerializer\", linger_ms=>0, max_request_size=>1048576, metadata_fetch_timeout_ms=>60000, metadata_max_age_ms=>300000, receive_buffer_bytes=>32768, reconnect_backoff_ms=>10, 
retries=>0, retry_backoff_ms=>100, send_buffer_bytes=>131072, ssl=>false, security_protocol=>\"PLAINTEXT\", sasl_mechanism=>\"GSSAPI\", timeout_ms=>30000, value_serializer=>\"org.apache.kafka.common.serialization.StringSerializer\">>, @id=\"ea6e3d1fb3cb9d03054be3c38cb045c1fb3aae21-4\", @metric_events=#<LogStash::Instrument::NamespacedMetric:0x9221af6 @metric=#<LogStash::Instrument::Metric:0x373adc76 @collector=#<LogStash::Instrument::Collector:0x5c413763 @agent=nil, @metric_store=#<LogStash::Instrument::MetricStore:0x68dbfaf3 @store=#<Concurrent::Map:0x00000000061f98 entries=2 default_proc=nil>, @structured_lookup_mutex=#<Mutex:0x422de9a2>, @fast_lookup=#<Concurrent::Map:0x00000000061f9c entries=53 default_proc=nil>>>>, @namespace_name=[:stats, :pipelines, :main, :plugins, :outputs, :\"ea6e3d1fb3cb9d03054be3c38cb045c1fb3aae21-4\", :events]>, @output_class=LogStash::Outputs::Kafka>", :error=>"Failed to construct kafka producer"}
06:49:57.474 [[main]-pipeline-manager] ERROR logstash.agent - Pipeline aborted due to error {:exception=>org.apache.kafka.common.KafkaException: Failed to construct kafka producer, :backtrace=>["org.apache.kafka.clients.producer.KafkaProducer.<init>(org/apache/kafka/clients/producer/KafkaProducer.java:335)", "org.apache.kafka.clients.producer.KafkaProducer.<init>(org/apache/kafka/clients/producer/KafkaProducer.java:188)", "java.lang.reflect.Constructor.newInstance(java/lang/reflect/Constructor.java:423)", "RUBY.create_producer(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.7/lib/logstash/outputs/kafka.rb:242)", "RUBY.register(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-kafka-5.1.7/lib/logstash/outputs/kafka.rb:178)", "RUBY.register(/usr/share/logstash/logstash-core/lib/logstash/output_delegator_strategies/shared.rb:9)", "RUBY.register(/usr/share/logstash/logstash-core/lib/logstash/output_delegator.rb:41)", "RUBY.register_plugin(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:281)", "RUBY.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)", "RUBY.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:292)", "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301)", "RUBY.run(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:226)", "RUBY.start_pipeline(/usr/share/logstash/logstash-core/lib/logstash/agent.rb:398)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}
06:49:57.611 [Api Webserver] INFO logstash.agent - Successfully started Logstash API endpoint {:port=>9600}
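The backtrace shows the failure happening inside the KafkaProducer constructor itself (create_producer in kafka.rb), before a single event is sent: the client validates its bootstrap list up front and aborts construction if it can't make sense of it. A rough sketch of that fail-fast idea in Python (this is a simplified illustration, not the actual client's parsing code):

```python
def parse_bootstrap_servers(servers: str) -> list[tuple[str, int]]:
    """Fail fast on a malformed bootstrap list, the way the Kafka client
    rejects bad configuration at construction time (simplified)."""
    parsed = []
    for entry in servers.split(","):
        host, sep, port = entry.strip().rpartition(":")
        if not sep or not host or not port.isdigit():
            raise ValueError(f"malformed bootstrap entry: {entry!r}")
        parsed.append((host, int(port)))
    return parsed

print(parse_bootstrap_servers("kafka:9092"))          # [('kafka', 9092)]
print(parse_bootstrap_servers("172.32.255.81:9092"))  # [('172.32.255.81', 9092)]
```

The point is simply that a bootstrap problem surfaces as a constructor error ("Failed to construct kafka producer"), not as a later send failure.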
I couldn't find any relevant information about this.
https://discuss.elastic.co/t/failed-to-construct-kafka-producer/76195
The closest post is this one, but it never got a reply, and the thread has since expired so replies are disabled.
kafka is defined in /etc/hosts, and the network is reachable:
bash-4.3# ping kafka
PING kafka (172.32.255.81): 56 data bytes
64 bytes from 172.32.255.81: seq=0 ttl=64 time=0.301 ms
64 bytes from 172.32.255.81: seq=1 ttl=64 time=0.093 ms
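ping proves the name resolves through the system's lookup path, but it is still worth checking exactly what the resolver returns for the host:port pair the producer will use. A minimal Python sketch (the function name and the kafka:9092 entry are just this post's example):

```python
import socket

def resolve_bootstrap(server: str):
    """Resolve the host part of a host:port bootstrap entry and
    return the addresses it maps to, or a failure message."""
    host, _, port = server.rpartition(":")
    try:
        infos = socket.getaddrinfo(host, int(port), type=socket.SOCK_STREAM)
        return sorted({info[4][0] for info in infos})
    except socket.gaierror as exc:
        return f"resolution failed: {exc}"

print(resolve_bootstrap("127.0.0.1:9092"))  # literal IPs always resolve: ['127.0.0.1']
```

If this resolves the hostname fine but the JVM-based client still fails, that points at a difference between the two resolution paths rather than at the network itself.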
I hit the opposite problem with Hadoop once: it was configured with IPs, failed with bizarre errors, and only worked after switching to hostnames. So when one form fails, try the other. After replacing the hostname with the IP, it went through:
output {
  kafka {
    acks => "0"
    enable_metric => false
    codec => "json"
    topic_id => "topic_test"
    bootstrap_servers => "172.32.255.81:9092"
    batch_size => 2
  }
  stdout {
    codec => "json"
  }
}