On Logstash 6.6.0-6.6.2, setting jdbc_default_timezone in the jdbc input plugin causes the following error:

{ 2012 rufus-scheduler intercepted an error:
2012 job:
2012 Rufus::Scheduler::CronJob "* * * * *" {}
2012 error:
2012 2012
2012 NoMethodError
2012 undefined method `utc_total_offset_rational' for #<TZInfo::TransitionsTimezonePeriod:0x78b60bbe>
Did you mean? utc_total_offset
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/extensions/named_timezones.rb:81:in `convert_output_datetime_other'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/timezones.rb:54:in `convert_output_timestamp'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/database/misc.rb:219:in `from_application_timestamp'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1058:in `format_timestamp'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1226:in `literal_datetime'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1231:in `literal_datetime_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:105:in `literal_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:602:in `placeholder_literal_string_sql_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/sql.rb:112:in `to_s_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1236:in `literal_expression_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:89:in `literal_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1571:in `static_sql'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:236:in `select_sql'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:147:in `sql'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1585:in `subselect_sql_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1211:in `literal_dataset_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:109:in `literal_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:271:in `aliased_expression_sql_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/sql.rb:112:in `to_s_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1236:in `literal_expression_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:89:in `literal_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1104:in `identifier_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1114:in `block in identifier_list_append'
2012 org/jruby/RubyArray.java:1734:in `each'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1112:in `identifier_list_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1542:in `source_list_append'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:1395:in `select_from_sql'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/sql.rb:244:in `select_sql'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/actions.rb:729:in `single_value!'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/dataset/actions.rb:110:in `count'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/sequel-5.17.0/lib/sequel/extensions/pagination.rb:58:in `each_page'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:251:in `perform_query'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/plugin_mixins/jdbc/jdbc.rb:229:in `execute_statement'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:277:in `execute_query'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/logstash-input-jdbc-4.3.13/lib/logstash/inputs/jdbc.rb:258:in `block in run'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:234:in `do_call'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:258:in `do_trigger'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:300:in `block in start_work_thread'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:299:in `block in start_work_thread'
2012 org/jruby/RubyKernel.java:1292:in `loop'
2012 /data/logstash-6.6.2/vendor/bundle/jruby/2.3.0/gems/rufus-scheduler-3.0.9/lib/rufus/scheduler/jobs.rb:289:in `block in start_work_thread'
2012 tz:
2012 ENV['TZ']:
2012 Time.now: 2019-03-22 11:23:04 +0800
2012 scheduler:
2012 object_id: 2010
2012 opts:
2012 {:max_work_threads=>1}
2012 frequency: 0.3
2012 scheduler_lock: #<Rufus::Scheduler::NullLock:0x17bfc482>
2012 trigger_lock: #<Rufus::Scheduler::NullLock:0x6b986244>
2012 uptime: 63.011622 (1m3s13)
2012 down?: false
2012 threads: 2
2012 thread: #<Thread:0x5aaa15f6>
2012 thread_key: rufus_scheduler_2010
2012 work_threads: 1
2012 active: 1
2012 vacant: 0
2012 max_work_threads: 1
2012 mutexes: {}
2012 jobs: 1
2012 at_jobs: 0
2012 in_jobs: 0
2012 every_jobs: 0
2012 interval_jobs: 0
2012 cron_jobs: 1
2012 running_jobs: 1
2012 work_queue: 0

This bug was introduced by the upgrade of the bundled tzinfo gem from 1.2.5 to 2.0.0:

$LOGSTASH_HOME/vendor/bundle/jruby/2.3.0/gems/tzinfo-2.0.0
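The NoMethodError comes from an API change: tzinfo 1.x timezone periods exposed utc_total_offset_rational (the UTC offset as a fraction of a day, which Sequel's named_timezones extension calls when formatting timestamps), while tzinfo 2.0 kept only utc_total_offset (an Integer number of seconds). As a rough illustration (not the actual tzinfo source), the removed method was equivalent to:

```ruby
# tzinfo 1.x: TZInfo::TimezonePeriod#utc_total_offset_rational returned the
# offset as a fraction of a day (Rational), the form DateTime arithmetic expects.
# tzinfo 2.0 removed it, leaving only #utc_total_offset (seconds as an Integer),
# so Sequel 5.17.0's named_timezones extension blows up with NoMethodError.
def utc_total_offset_rational(utc_total_offset_seconds)
  Rational(utc_total_offset_seconds, 86_400) # 86,400 seconds per day
end

utc_total_offset_rational(8 * 3600)   # UTC+8 (e.g. Asia/Shanghai) => (1/3)
utc_total_offset_rational(-5 * 3600)  # UTC-5 => (-5/24)
```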

A fix is expected in the next release; until then, the only workaround is to downgrade to Logstash 6.5.4.

Reference:
https://github.com/logstash-plugins/logstash-input-jdbc/issues/325
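For reference, a minimal jdbc input configuration that reproduces the error looks like the sketch below (the driver, connection string, credentials, and SQL statement are placeholders; only jdbc_default_timezone and the cron schedule match the trace above):

```conf
input {
  jdbc {
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/mydb"
    jdbc_user => "user"
    jdbc_password => "password"
    schedule => "* * * * *"
    statement => "SELECT * FROM my_table WHERE updated_at > :sql_last_value"
    # On 6.6.0-6.6.2 this option triggers the NoMethodError above;
    # removing it (or downgrading to 6.5.4) avoids the bug.
    jdbc_default_timezone => "Asia/Shanghai"
  }
}
```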
