4.6 Message Delivery Semantics

Now that we understand a little about how producers and consumers work, let's discuss the semantic guarantees Kafka provides between producer and consumer. Clearly there are multiple possible message delivery guarantees that could be provided:

  • At most once—Messages may be lost but are never redelivered.
  • At least once—Messages are never lost but may be redelivered.
  • Exactly once—This is what people actually want; each message is delivered once and only once.

It's worth noting that this breaks down into two problems: the durability guarantees for publishing a message and the guarantees when consuming a message.

Many systems claim to provide "exactly once" delivery semantics, but it is important to read the fine print: most of these claims are misleading (i.e. they don't translate to the case where consumers or producers can fail, cases where there are multiple consumer processes, or cases where data written to disk can be lost).

Kafka's semantics are straightforward. When publishing a message we have a notion of the message being "committed" to the log. Once a published message is committed it will not be lost as long as one broker that replicates the partition to which the message was written remains "alive". The definition of a committed message and an alive partition, as well as a description of the types of failures we attempt to handle, is given in more detail in the next section. For now let's assume a perfect, lossless broker and try to understand the guarantees to the producer and consumer. If a producer attempts to publish a message and experiences a network error, it cannot be sure whether this error happened before or after the message was committed. This is similar to the semantics of inserting into a database table with an autogenerated key.

Prior to 0.11.0.0, if a producer failed to receive a response indicating that a message was committed, it had little choice but to resend the message. This provides at-least-once delivery semantics since the message may be written to the log again during resending if the original request had in fact succeeded. Since 0.11.0.0, the Kafka producer also supports an idempotent delivery option which guarantees that resending will not result in duplicate entries in the log. To achieve this, the broker assigns each producer an ID and deduplicates messages using a sequence number that is sent by the producer along with every message. Also beginning with 0.11.0.0, the producer supports the ability to send messages to multiple topic partitions using transaction-like semantics: i.e. either all messages are successfully written or none of them are. The main use case for this is exactly-once processing between Kafka topics (described below).
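
As a rough sketch of how these two options are enabled on the Java producer (the broker address, topic names, and transactional.id below are illustrative, not taken from this document):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");          // illustrative broker address
props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

// Idempotent delivery: the broker deduplicates retried sends using the
// producer ID and per-partition sequence numbers described above.
props.put("enable.idempotence", "true");

// Transactional sends across multiple topic partitions. The transactional.id
// must be stable across producer restarts.
props.put("transactional.id", "example-transactional-id"); // illustrative id

KafkaProducer<String, String> producer = new KafkaProducer<>(props);
producer.initTransactions();
try {
    producer.beginTransaction();
    producer.send(new ProducerRecord<>("topic-a", "key", "value-1"));
    producer.send(new ProducerRecord<>("topic-b", "key", "value-2"));
    producer.commitTransaction();   // both writes become visible, atomically
} catch (Exception e) {
    producer.abortTransaction();    // neither write becomes visible to read_committed consumers
}
```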

Not all use cases require such strong guarantees. For uses that are latency-sensitive we allow the producer to specify the durability level it desires. If the producer specifies that it wants to wait on the message being committed this can take on the order of 10 ms. However the producer can also specify that it wants to perform the send completely asynchronously, or that it wants to wait only until the leader (but not necessarily the followers) has the message.
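
In the Java producer these durability levels correspond to the acks setting (continuing the props object from the sketch above; the values shown are the standard ones and the choice below is just an example):

```java
// acks=all : wait until the message is committed (leader and all in-sync replicas have it)
// acks=1   : wait only until the leader has written the message
// acks=0   : completely asynchronous; do not wait for any acknowledgement
props.put("acks", "all");
```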

Now let's describe the semantics from the point of view of the consumer. All replicas have the exact same log with the same offsets. The consumer controls its position in this log. If the consumer never crashed it could just store this position in memory, but if the consumer fails and we want this topic partition to be taken over by another process, the new process will need to choose an appropriate position from which to start processing. Let's say the consumer reads some messages -- it has several options for processing the messages and updating its position.

  1. It can read the messages, then save its position in the log, and finally process the messages. If the consumer process crashes after saving its position but before saving the output of its processing, the process that takes over will start at the saved position even though a few messages prior to that position were never processed. This corresponds to "at-most-once" semantics: in the case of a consumer failure, messages may go unprocessed.
  2. It can read the messages, process the messages, and finally save its position. If the consumer process crashes after processing messages but before saving its position, the first few messages the new process receives will already have been processed. This corresponds to "at-least-once" semantics in the case of consumer failure. In many cases messages have a primary key, so the updates are idempotent (receiving the same message twice just overwrites a record with another copy of itself). Both orderings are sketched in the consumer example after this list.
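
A minimal sketch of the two orderings with the Java consumer (the group id, topic name, and the process() call are placeholders for application-specific code):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "example-group");       // illustrative group id
props.put("enable.auto.commit", "false");     // commit positions manually
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("example-topic"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));

    // Option 1 (at-most-once): save the position first, then process.
    // A crash after the commit but before processing means those messages are never processed.
    consumer.commitSync();
    for (ConsumerRecord<String, String> record : records) {
        process(record);   // hypothetical application-specific processing
    }

    // Option 2 (at-least-once): process first, then save the position.
    // A crash after processing but before the commit means those messages are processed again.
    // for (ConsumerRecord<String, String> record : records) {
    //     process(record);
    // }
    // consumer.commitSync();
}
```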

So what about exactly-once semantics (i.e. the thing you actually want)? When consuming from a Kafka topic and producing to another topic (as in a Kafka Streams application), we can leverage the transactional producer capabilities in 0.11.0.0 that were mentioned above. The consumer's position is stored as a message in a topic, so we can write the offset to Kafka in the same transaction as the processed data written to the output topics. If the transaction is aborted, the consumer's position reverts to its old value, and whether the produced data on the output topics is visible to other consumers depends on their "isolation level." In the default "read_uncommitted" isolation level, all messages are visible to consumers even if they were part of an aborted transaction, but in "read_committed" the consumer will only return messages from transactions which were committed (and any messages which were not part of a transaction).
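
A sketch of that read-process-write loop, building on the producer and consumer configured in the earlier sketches (topic names, group id, and the transform() call are illustrative):

```java
import java.time.Duration;
import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.TopicPartition;

// Consumer side: only read committed transactional data and never auto-commit;
// the position will be committed through the producer's transaction instead.
// consumerProps.put("isolation.level", "read_committed");
// consumerProps.put("enable.auto.commit", "false");
// Producer side: enable.idempotence and transactional.id as in the earlier sketch.

producer.initTransactions();
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    if (records.isEmpty()) continue;

    producer.beginTransaction();
    try {
        Map<TopicPartition, OffsetAndMetadata> offsets = new HashMap<>();
        for (ConsumerRecord<String, String> record : records) {
            producer.send(new ProducerRecord<>("output-topic", record.key(),
                                               transform(record.value())));   // hypothetical transform
            offsets.put(new TopicPartition(record.topic(), record.partition()),
                        new OffsetAndMetadata(record.offset() + 1));
        }
        // The consumer's position is written in the same transaction as the output records.
        producer.sendOffsetsToTransaction(offsets, "example-group");
        producer.commitTransaction();   // offsets and output become visible together
    } catch (Exception e) {
        producer.abortTransaction();    // offsets and output are both discarded
    }
}
```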

When writing to an external system, the limitation is the need to coordinate the consumer's position with what is actually stored as output. The classic way of achieving this would be to introduce a two-phase commit between the storage of the consumer position and the storage of the consumer's output. But this can be handled more simply and generally by letting the consumer store its offset in the same place as its output. This is better because many of the output systems a consumer might want to write to will not support a two-phase commit. As an example of this, consider a Kafka Connect connector which populates data in HDFS along with the offsets of the data it reads, so that it is guaranteed that either the data and offsets are both updated or neither is. We follow similar patterns for many other data systems which require these stronger semantics and for which the messages do not have a primary key to allow for deduplication.
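
As a sketch of that pattern against a relational database rather than HDFS (dbConnection is a pre-opened java.sql.Connection, and the helper methods and table layout are hypothetical, not a real connector; error handling is omitted):

```java
import java.time.Duration;
import java.util.Collections;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.common.TopicPartition;

TopicPartition partition = new TopicPartition("example-topic", 0);
consumer.assign(Collections.singletonList(partition));

// On startup, resume from the offset that was stored alongside the output,
// rather than from an offset committed to Kafka.
long storedOffset = readStoredOffset(dbConnection, partition);   // hypothetical helper
consumer.seek(partition, storedOffset);

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
    for (ConsumerRecord<String, String> record : records) {
        dbConnection.setAutoCommit(false);
        writeOutputRow(dbConnection, record);                               // hypothetical helper
        writeStoredOffset(dbConnection, partition, record.offset() + 1);    // hypothetical helper
        dbConnection.commit();   // output and offset are stored atomically, or not at all
    }
}
```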

So effectively Kafka supports exactly-once delivery in Kafka Streams, and the transactional producer/consumer can be used generally to provide exactly-once delivery when transferring and processing data between Kafka topics. Exactly-once delivery for other destination systems generally requires cooperation with such systems, but Kafka provides the offset which makes implementing this feasible (see also Kafka Connect). Otherwise, Kafka guarantees at-least-once delivery by default, and allows the user to implement at-most-once delivery by disabling retries on the producer and committing offsets in the consumer prior to processing a batch of messages.

In summary, Kafka guarantees at-least-once delivery by default, and at-most-once can be achieved by disabling producer retries and committing consumer offsets before processing (see also the article "kafka consumer防止数据丢失"). Exactly-once requires cooperation with the external storage system, but fortunately the offsets Kafka exposes make this pattern very direct and easy to implement.
