Have you ever wondered if Logstash was sending data to your outputs? There's a brand new way to check if Logstash has a "pulse." Introducing the heartbeat input plugin! It’s bundled with Logstash 1.5 so you can start using it immediately!

Why?

Logstash currently has a single pipeline. All events generated by inputs travel through the filter block, and then out of Logstash through the output block.

Even if you have multiple outputs and are separating events using conditionals, all events pass through this single pipeline. If any one of your outputs backs up, the entire pipeline stops flowing. The heartbeat plugin takes advantage of this to help you know when the flow of events slows, or stops altogether.

How?

The heartbeat plugin sends a message at a definable interval. Here are the options available for the message configuration parameter:

  • Any string value: The message field will contain the specified string value. If unset, the message field will contain the string value "ok".
  • epoch: Rather than a message field, this will result in a clock field, which will contain the current epoch timestamp (UTC). If you are unfamiliar with this, it is the number of seconds elapsed since Jan 1, 1970.
  • sequence: Rather than a message field, this will result in a clock field, which will contain a number. The sequence starts at zero when Logstash starts and increments each time the specified interval elapses. Note that this means that if you restart Logstash, the counter resets to zero.

Examples

Be sure to assign a type to your heartbeat events. This will make it possible to conditionally act on these events later on.

"ok" Message

Perhaps you only want to know that Logstash is still sending messages. Your monitoring system can interpret an "ok" received within a time window as an indicator that everything is working. Your monitoring system would be responsible for tracking the time between "ok" messages.

I can send the default "ok" message every 10 seconds like this:

input {
  heartbeat {
    interval => 10
    type     => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:24.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:34.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03Read More

Epoch timestamp

Perhaps your monitoring system uses Unix timestamps to track event timing (Zabbix, for example). If so, you can use the epoch timestamp in the clock field to calculate the difference between "now" and when Logstash generated the heartbeat event. This difference is your lag. It is especially useful if you inject the heartbeat before events go into a broker or buffering system like Redis, RabbitMQ, or Kafka: if the buffer begins to fill up, the time difference becomes immediately apparent. You could use this to track the elapsed time, from event creation to indexing, for your entire Logstash pipeline. (A small filter sketch after the sample events below shows one way to compute this lag inside Logstash itself.)

This example will send the epoch timestamp in the clock field:

input {
  heartbeat {
    message  => "epoch"
    interval => 10
    type     => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"clock":1426698365,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:05.360Z","type":"heartbeat"}
{"clock":1426698375,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:15.364Z","type":"heartbeat"}
{"clock":1426698385,"host":"example.com","@version":"1","@timestamp":"2015Read More

Sequence of numbers

This option makes it easy to check whether new events are still flowing, because the clock field continuously increases. As the sketch after the sample events shows, a downstream Logstash can also use it to notice restarts.

input {
  heartbeat {
    message  => "sequence"
    interval => 10
    type     => "heartbeat"
  }
  # ... other input blocks go here
}

The events would look like this:

{"clock":1,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:13.024Z","type":"heartbeat"}
{"clock":2,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:23.027Z","type":"heartbeat"}
{"clock":3,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17Read More

Output

Now let's add a conditional to send this to our monitoring system, and not to our other outputs:

output {
  if [type] == "heartbeat" {
    # Define the output block for your monitoring system here
  } else {
    # ... other output blocks go here
  }
}
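
To make that concrete, here is a minimal sketch of how the filled-in conditional might look, assuming a monitoring system that accepts HTTP POSTs and an Elasticsearch output for everything else. The URL below is a placeholder, and output option names (for example, host vs. hosts on the elasticsearch output) vary between Logstash versions:

output {
  if [type] == "heartbeat" {
    # Hypothetical endpoint: POST each heartbeat event as JSON to the monitoring system.
    http {
      url         => "http://monitoring.example.com/heartbeat"
      http_method => "post"
      format      => "json"
    }
  } else {
    # Everything else goes to your normal outputs, e.g. Elasticsearch.
    elasticsearch {
      host => "localhost"
    }
  }
}

The same pattern works with whatever output plugin your monitoring system speaks, such as nagios, statsd, or file.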

Of course, if you do want your heartbeat messages to be indexed alongside your log data, you are free to do so.

Conclusion

The new heartbeat plugin provides a simple but effective way to monitor the availability of your Logstash instances right now. We have big plans for the future, though. Take a look at our roadmap!

In the future we plan to have a full API, complete with visibility into the pipeline, plugin performance, queue status, event throughput, and much more. We are super excited to bring these improvements to you!

Happy Logstashing!

Appendix: a complete example

The configuration below both tags and types the heartbeat events, then uses a conditional to write them to a dedicated log file:

input {
  heartbeat {
    tags    => ["heartbeat"]
    type    => "heartbeat"
    message => "epoch"
    # the original snippet left the interval blank; 60 seconds is the plugin default
    interval => 60
  }
}

output {
  if "heartbeat" in [tags] {
    file {
      path => "/var/log/cloudchef/logstash/logstash-heartbeat.log"
    }
  }
}
