Logstash high availability here means not losing data (on the premise that a server that becomes unavailable for a short time can be recovered, e.g., by restarting the server or the process). Concretely, there are two failure scenarios:

  • Process restart (server restart)
  • Event processing failure

The corresponding solutions in Logstash are:

  • Persistent Queues
  • Dead Letter Queues

Neither is enabled by default.

In addition, automatic process restart can be implemented with Docker, Marathon, or systemd.
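
For example, a minimal systemd unit sketch that restarts the Logstash process automatically whenever it exits (the install path, settings path, and user are assumptions for illustration; adjust them to the actual environment):

[Unit]
Description=logstash
After=network.target

[Service]
# path below is an assumption; point it at the actual Logstash install
ExecStart=/usr/share/logstash/bin/logstash --path.settings /etc/logstash
# restart the process whenever it exits, waiting 5 seconds between attempts
Restart=always
RestartSec=5
User=logstash

[Install]
WantedBy=multi-user.target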

As data flows through the event processing pipeline, Logstash may encounter situations that prevent it from delivering events to the configured output. For example, the data might contain unexpected data types, or Logstash might terminate abnormally.
To guard against data loss and ensure that events flow through the pipeline without interruption, Logstash provides the following data resiliency features.

  • Persistent Queues protect against data loss by storing events in an internal queue on disk.
  • Dead Letter Queues provide on-disk storage for events that Logstash is unable to process. You can easily reprocess events in the dead letter queue by using the dead_letter_queue input plugin.

These resiliency features are disabled by default.

1 Persistent Queues

By default, Logstash uses in-memory bounded queues between pipeline stages (inputs → pipeline workers) to buffer events. The size of these in-memory queues is fixed and not configurable. If Logstash experiences a temporary machine failure, the contents of the in-memory queue will be lost. Temporary machine failures are scenarios where Logstash or its host machine are terminated abnormally but are capable of being restarted.
In order to protect against data loss during abnormal termination, Logstash has a persistent queue feature which will store the message queue on disk. Persistent queues provide durability of data within Logstash.

By default Logstash buffers events in an in-memory queue; if the process restarts, all data in the in-memory queue is lost.

Benefits

  • Absorbs bursts of events without needing an external buffering mechanism like Redis or Apache Kafka.
  • Provides an at-least-once delivery guarantee against message loss during a normal shutdown as well as when Logstash is terminated abnormally.

How it works

The queue sits between the input and filter stages in the same process:

input → queue → filter + output

When an input has events ready to process, it writes them to the queue. When the write to the queue is successful, the input can send an acknowledgement to its data source.
When processing events from the queue, Logstash acknowledges events as completed, within the queue, only after filters and outputs have completed. The queue keeps a record of events that have been processed by the pipeline. An event is recorded as processed (in this document, called "acknowledged" or "ACKed") if, and only if, the event has been processed completely by the Logstash pipeline.

Configuration

queue.type: persisted                          # default is "memory"; "persisted" enables the persistent queue
path.queue: "path/to/data/persistent_queue"    # defaults to path.data/queue

Other settings

queue.page_capacity   # size of each on-disk page file (default: 64mb)
queue.drain           # drain the queue completely before shutting down (default: false)
queue.max_events      # maximum number of unread events in the queue (default: 0, unlimited)
queue.max_bytes       # total on-disk capacity of the queue (default: 1024mb)
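
Put together, a minimal logstash.yml sketch for the persistent queue; the values shown are the documented defaults (except queue.type) and should be tuned to the workload:

queue.type: persisted
path.queue: "path/to/data/persistent_queue"
queue.page_capacity: 64mb
queue.max_events: 0
queue.max_bytes: 1024mb
queue.drain: false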

Going deeper

First, the queue itself is a set of pages. There are two kinds of pages: head pages and tail pages. The head page is where new events are written. There is only one head page. When the head page is of a certain size (see queue.page_capacity), it becomes a tail page, and a new head page is created. Tail pages are immutable, and the head page is append-only. Second, the queue records details about itself (pages, acknowledgements, etc) in a separate file called a checkpoint file.
When recording a checkpoint, Logstash will:

  • Call fsync on the head page.
  • Atomically write to disk the current state of the queue.

The process of checkpointing is atomic, which means any update to the file is saved if successful.

If Logstash is terminated, or if there is a hardware-level failure, any data that is buffered in the persistent queue, but not yet checkpointed, is lost.
You can force Logstash to checkpoint more frequently by setting queue.checkpoint.writes. This setting specifies the maximum number of events that may be written to disk before forcing a checkpoint. The default is 1024. To ensure maximum durability and avoid losing data in the persistent queue, you can set queue.checkpoint.writes: 1 to force a checkpoint after each event is written. Keep in mind that disk writes have a resource cost. Setting this value to 1 can severely impact performance.

Even with the persistent queue enabled, data can still be lost; the deciding factor is the checkpoint (flush) interval. By default a checkpoint is forced every 1024 events; setting the value to 1 forces a checkpoint after every event, which prevents message loss but hurts performance significantly:

queue.checkpoint.writes: 1
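
A related setting, queue.checkpoint.acks, bounds how many acknowledged events may accumulate before a checkpoint is forced; like queue.checkpoint.writes, it defaults to 1024:

# force a checkpoint after this many acknowledged events (default 1024)
queue.checkpoint.acks: 1024
# force a checkpoint after this many written events (default 1024)
queue.checkpoint.writes: 1024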

2 Dead Letter Queues

By default, when Logstash encounters an event that it cannot process because the data contains a mapping error or some other issue, the Logstash pipeline either hangs or drops the unsuccessful event. In order to protect against data loss in this situation, you can configure Logstash to write unsuccessful events to a dead letter queue instead of dropping them.
Each event written to the dead letter queue includes the original event, along with metadata that describes the reason the event could not be processed, information about the plugin that wrote the event, and the timestamp for when the event entered the dead letter queue.
To process events in the dead letter queue, you simply create a Logstash pipeline configuration that uses the dead_letter_queue input plugin to read from the queue.

When Logstash encounters data it cannot process (a mapping error, etc.), the pipeline either hangs or drops the failed event. To avoid losing data in this situation, you can configure Logstash to write failed events to a dead letter queue instead of dropping them; those events can later be read back with the dead_letter_queue input plugin, as sketched below.
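
A minimal sketch of a separate pipeline that drains the dead letter queue via the dead_letter_queue input plugin (the path and pipeline_id here are illustrative and must match the Logstash configuration that produced the queue):

input {
  dead_letter_queue {
    # must match path.dead_letter_queue in logstash.yml
    path => "path/to/data/dead_letter_queue"
    # id of the pipeline whose dead letter queue should be read (default "main")
    pipeline_id => "main"
    # record the read position so events are not reprocessed after a restart
    commit_offsets => true
  }
}
output {
  # print failed events together with their dead-letter metadata for inspection
  stdout { codec => rubydebug { metadata => true } }
}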

Limitations

The dead letter queue feature is currently supported for the elasticsearch output only. Additionally, the dead letter queue is only used where the response code is either 400 or 404, both of which indicate an event that cannot be retried. Support for additional outputs will be available in future releases of the Logstash plugins. Before configuring Logstash to use this feature, refer to the output plugin documentation to verify that the plugin supports the dead letter queue feature.

Currently the dead letter queue supports only the elasticsearch output; other outputs will be supported in future releases.

Configuration

dead_letter_queue.enable: true                             # disabled by default
path.dead_letter_queue: "path/to/data/dead_letter_queue"   # defaults to path.data/dead_letter_queue
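
The on-disk size of the dead letter queue can be capped with dead_letter_queue.max_bytes (default 1024mb); entries that would push the queue past this limit are dropped:

dead_letter_queue.max_bytes: 1024mb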

References:
https://www.elastic.co/guide/en/logstash/current/resiliency.html
https://www.elastic.co/guide/en/logstash/current/persistent-queues.html
https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html
