How to check Logstash's pulse
Have you ever wondered if Logstash was sending data to your outputs? There's a brand new way to check if Logstash has a "pulse." Introducing the heartbeat input plugin! It’s bundled with Logstash 1.5 so you can start using it immediately!
Why?
Logstash currently has a single pipeline. All events generated by inputs travel through the filter block, and then out of Logstash through the output block.
Even if you have multiple outputs and are separating events using conditionals, all events pass through this single pipeline. If any one of your outputs backs up, the entire pipeline stops flowing. The heartbeat plugin takes advantage of this to help you know when the flow of events slows, or stops altogether.
How?
The heartbeat plugin sends a message at a definable interval. Here are the options available for the message configuration parameter:
- Any string value: The message field will contain the specified string value. If unset, the message field will contain the string "ok" (see the sketch after this list).
- epoch: Rather than a message field, this will result in a clock field which will contain the current epoch timestamp (UTC). If you are unfamiliar with this, it means the number of seconds elapsed since Jan 1, 1970.
- sequence: Rather than a message field, this will result in a clock field which will contain a number. At start time, the sequence starts at zero and will increment each time your specified interval time has elapsed. Note that this means that if you restart Logstash, the counter resets to zero again.
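For example, a minimal sketch that sends a custom string instead of the default "ok" might look like this; the message value and the 30-second interval here are arbitrary choices:
input {
  heartbeat {
    message => "logstash-alive"   # any custom string; defaults to "ok" when unset
    interval => 30                # seconds between heartbeat events
    type => "heartbeat"
  }
}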
Examples
Be sure to assign a type to your heartbeat events. This will make it possible to conditionally act on these events later on.
"ok" Message
Perhaps you only want to know that Logstash is still sending messages. Your monitoring system can interpret an "ok" received within a time window as an indicator that everything is working. Your monitoring system would be responsible for tracking the time between "ok" messages.
I can send the default "ok" message every 10 seconds like this:
input {
  heartbeat {
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}
The events would look like this:
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:24.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03-18T17:05:34.696Z","type":"heartbeat"}
{"message":"ok","host":"example.com","@version":"1","@timestamp":"2015-03Read More
Epoch timestamp
Perhaps your monitoring system uses Unix timestamps to track event timing (Zabbix, for example). If so, you can use the epoch timestamp in the clock field to calculate the difference between "now" and when Logstash generated the heartbeat event, which gives you the lag. This is especially useful if you inject the heartbeat before events go into a broker or buffering system such as Redis, RabbitMQ, or Kafka: if the buffer begins to fill up, the growing time difference becomes apparent immediately. You can use this to track the elapsed time, from event creation to indexing, across your entire Logstash pipeline (see the filter sketch after the sample events below).
This example will send the epoch timestamp in the clock field:
input {
  heartbeat {
    message => "epoch"
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}
The events would look like this:
{"clock":1426698365,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:05.360Z","type":"heartbeat"}
{"clock":1426698375,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:06:15.364Z","type":"heartbeat"}
{"clock":1426698385,"host":"example.com","@version":"1","@timestamp":"2015Read More
Sequence of numbers
With this option it is easy to see at a glance whether new events are arriving, because the clock value increases continuously.
input {
  heartbeat {
    message => "sequence"
    interval => 10
    type => "heartbeat"
  }
  # ... other input blocks go here
}
The events would look like this:
{"clock":1,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:13.024Z","type":"heartbeat"}
{"clock":2,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17:08:23.027Z","type":"heartbeat"}
{"clock":3,"host":"example.com","@version":"1","@timestamp":"2015-03-18T17Read More
Output
Now let's add a conditional to send this to our monitoring system, and not to our other outputs:
output {
  if [type] == "heartbeat" {
    # Define the output block for your monitoring system here
  } else {
    # ... other output blocks go here
  }
}
Of course, if you do want your heartbeat messages to be indexed alongside your log data, you are free to do so.
Conclusion
The new heartbeat plugin provides a simple but effective way to monitor the availability of your Logstash instances right now. We have big plans for the future, though. Take a look at our roadmap!
In the future we plan to have a full API, complete with visibility into the pipeline, plugin performance, queue status, event throughput and so much more. We are super excited to bring these improvements to you!
Happy Logstashing!
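To wrap up, here is a combined configuration that ties the pieces together: it tags heartbeat events, uses the epoch message, and writes the heartbeats to their own log file instead of the main output. The interval value is an assumption (10 seconds, matching the earlier examples), and the file path is just one possible location: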
input {
  heartbeat {
    tags => ["heartbeat"]
    type => "heartbeat"
    message => "epoch"
    interval => 10   # assumed value, matching the earlier examples
  }
}

output {
  if "heartbeat" in [tags] {
    file {
      path => "/var/log/cloudchef/logstash/logstash-hearbeat.log"
    }
  }
}