Logstash high availability here means not losing data (on the premise that the server is only briefly unavailable and can be recovered, e.g. by restarting the server or the process). There are two aspects:

  • Process restart (server restart)
  • Event processing failure

The corresponding solutions in Logstash are:

  • Persistent Queues
  • Dead Letter Queues

Neither is enabled by default.

In addition, automatic process restart can be handled by Docker, Marathon, or systemd, as sketched below.
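With systemd, for example, a minimal unit file can ask the init system to restart Logstash whenever the process dies. This is only a sketch: the install and settings paths are assumptions based on a default package install and should be adjusted to the actual environment.

[Unit]
Description=Logstash
After=network.target

[Service]
User=logstash
Group=logstash
# Path assumes a default package install; adjust to your installation
ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash
# Restart the process automatically whenever it exits abnormally
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target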

As data flows through the event processing pipeline, Logstash may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or Logstash might terminate abnormally.
To guard against data loss and ensure that events flow through the pipeline without interruption, Logstash provides the following data resiliency features.

  • Persistent Queues protect against data loss by storing events in an internal queue on disk.
  • Dead Letter Queues provide on-disk storage for events that Logstash is unable to process. You can easily reprocess events in the dead letter queue by using the dead_letter_queue input plugin.

These resiliency features are disabled by default.

1 Persistent Queues

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable. If Logstash experiences a temporary machine failure, the contents of the in-memory queue will be lost. Temporary machine failures are scenarios where Logstash or its host machine are terminated abnormally but are capable of being restarted.
In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature which will store the message queue on disk. Persistent queues provide durability of data within Logstash.

By default, Logstash buffers events in in-memory queues; once the process restarts, everything in those queues is lost.

Benefits

  • Absorbs bursts of events without needing an external buffering mechanism like Redis or Apache Kafka.
  • Provides an at-least-once delivery guarantee against message loss during a normal shutdown as well as when Logstash is terminated abnormally.

How it works

The queue sits between the input and filter stages in the same process:

input → queue → filter + output

When an input has events ready to process, it writes them to the queue. When the write to the queue is successful, the input can send an acknowledgement to its data source.
When processing events from the queue, Logstash acknowledges events as completed, within the queue, only after filters and outputs have completed. The queue keeps a record of events that have been processed by the pipeline. An event is recorded as processed (in this document, called "acknowledged" or "ACKed") if, and only if, the event has been processed completely by the Logstash pipeline.

Configuration

queue.type: persisted
path.queue: "path/to/data/persistent_queue"

Other settings

queue.page_capacity
queue.drain
queue.max_events
queue.max_bytes
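
A sketch of how these settings might be combined in logstash.yml; the values shown are illustrative (roughly the documented defaults), not tuning recommendations, and should be checked against the documentation for your Logstash version:

queue.type: persisted
path.queue: "path/to/data/persistent_queue"   # directory where queue pages are stored
queue.page_capacity: 64mb                     # size of each on-disk page file
queue.max_events: 0                           # 0 means no limit on the number of queued events
queue.max_bytes: 1024mb                       # total on-disk queue size before inputs are back-pressured
queue.drain: false                            # if true, drain the queue completely before shutting down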

Further details

First, the queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written. There is only one head page. When the head page is of a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only. Second, the queue records details about itself (pages, acknowledgements, etc) in a separate file called a checkpoint file.
When recording a checkpoint, Logstash will:

  • Call fsync on the head page.
  • Atomically write to disk the current state of the queue.

The process of checkpointing is atomic, which means any update to the file is saved if successful.

If Logstash is terminated, or if there is a hardware-level failure, any data that is buffered in the persistent queue, but not yet checkpointed, is lost.
You can force Logstash to checkpoint more frequently by setting queue.checkpoint.writes. This setting specifies the maximum number of events that may be written to disk before forcing a checkpoint. The default is 1024. To ensure maximum durability and avoid losing data in the persistent queue, you can set queue.checkpoint.writes: 1 to force a checkpoint after each event is written. Keep in mind that disk writes have a resource cost. Setting this value to 1 can severely impact performance.

Even with the persistent queue enabled, data can still be lost; the deciding factor is the checkpoint (flush) interval. By default a checkpoint is written every 1024 events; setting it to 1 checkpoints after every event, which avoids losing messages but has a significant performance impact:

queue.checkpoint.writes: 1

2 Dead Letter Queues

By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event. In order to protect against data loss in this situation, you can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
Each event written to the dead letter queue includes the original event, along with metadata that describes the reason the event could not be processed, information about the plugin that wrote the event, and the timestamp for when the event entered the dead letter queue.
To process events in the dead letter queue, you simply create a Logstash pipeline configuration that uses the dead_letter_queue input plugin to read from the queue.

When Logstash encounters data it cannot process (a mapping error, etc.), it either hangs or drops the failed event. To avoid data loss in this situation, Logstash can be configured to write failed events to a dead letter queue instead of dropping them.

Limitations

The dead letter queue feature is currently supported for the elasticsearch output only. Additionally, the dead letter queue is only used where the response code is either 400 or 404, both of which indicate an event that cannot be retried. Support for additional outputs will be available in future releases of the Logstash plugins. Before configuring Logstash to use this feature, refer to the output plugin documentation to verify that the plugin supports the dead letter queue feature.

Currently the dead letter queue is supported only for the elasticsearch output; support for other outputs will be added in future releases.

Configuration

dead_letter_queue.enable: true
path.dead_letter_queue: "path/to/data/dead_letter_queue"
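
To reprocess events from the dead letter queue, a separate pipeline reads it back with the dead_letter_queue input plugin. A minimal sketch, assuming the same queue path as above and a pipeline named "main" (adjust both, and the Elasticsearch host, to the actual setup):

input {
  dead_letter_queue {
    path => "path/to/data/dead_letter_queue"   # same directory as path.dead_letter_queue
    commit_offsets => true                     # remember the read position so events are not re-read
    pipeline_id => "main"                      # read entries written by the "main" pipeline
  }
}
filter {
  # fix or drop the fields that caused the original failure here
}
output {
  elasticsearch {
    hosts => ["localhost:9200"]                # host is an assumption; point at the real cluster
  }
}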

References:
https://www.elastic.co/guide/en/logstash/current/resiliency.html
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html
https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html
