Without further ado, straight to the useful stuff!

  Everything here comes from the official documentation:

http://kafka.apache.org/documentation/

Topics and Logs

Let's first dive into the core abstraction Kafka provides for a stream of records—the topic.

Let's take a deeper look at the topic in Kafka.

  A topic is a category or feed name to which records are published. Topics in Kafka are always multi-subscriber; that is, a topic can have zero, one, or many consumers that subscribe to the data written to it.

  For each topic, the Kafka cluster maintains a partitioned log that looks like this:

A topic is the category or feed name to which records are published. Topics in Kafka always support multiple subscribers; a topic can have zero, one, or many consumers subscribing to the data written to it. For each topic, the Kafka cluster maintains a partitioned log, as shown in the figure below:

[Figure: anatomy of a topic. The topic's log is split into partitions, and new records are appended to the end of each partition.]
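To make the partitioned-log idea concrete, here is a minimal sketch (not from the official documentation) of creating a topic with several partitions using the Java AdminClient. The broker address localhost:9092 and the topic name "page-views" are assumptions for illustration only.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public class CreateTopicExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed local development broker.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // One topic, three partitions, replication factor 1 (single-broker dev setup).
            NewTopic topic = new NewTopic("page-views", 3, (short) 1);
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```

The later sketches in this post assume a three-partition topic like the one created here.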

  Each partition is an ordered, immutable sequence of records that is continually appended to—a structured commit log. The records in the partitions are each assigned a sequential id number called the offset that uniquely identifies each record within the partition.

Each partition is an ordered, immutable sequence of messages that is continually appended to. Every message in a partition is assigned a sequential number called the offset, which uniquely identifies that message within its partition.
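As a hedged illustration of the offset numbering described above, the following sketch sends a few records with the Java producer and prints the partition and offset the broker assigned to each acknowledged record. The "page-views" topic and the local broker address are again assumptions.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;
import org.apache.kafka.common.serialization.StringSerializer;

public class OffsetDemoProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 5; i++) {
                ProducerRecord<String, String> record =
                        new ProducerRecord<>("page-views", "user-" + i, "clicked");
                // Block for the broker acknowledgement so the metadata is available.
                RecordMetadata meta = producer.send(record).get();
                // Offsets increase sequentially within each partition.
                System.out.printf("partition=%d offset=%d%n", meta.partition(), meta.offset());
            }
        }
    }
}
```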

  The Kafka cluster retains all published records—whether or not they have been consumed—using a configurable retention period. For example, if the retention policy is set to two days, then for the two days after a record is published, it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size so storing data for a long time is not a problem.
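For the two-day example just quoted, the retention window works out to 2 * 24 * 60 * 60 * 1000 = 172,800,000 ms. Below is a minimal sketch of setting that value as the topic-level retention.ms config through the AdminClient; the topic name is again a made-up example.

```java
import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

public class RetentionConfigExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "page-views");
            // 172800000 ms = 2 days of retention for this topic.
            AlterConfigOp setRetention = new AlterConfigOp(
                    new ConfigEntry("retention.ms", "172800000"), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> configs =
                    Map.of(topic, Collections.singletonList(setRetention));
            admin.incrementalAlterConfigs(configs).all().get();
        }
    }
}
```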

 The Kafka cluster retains all published messages, whether or not they have been consumed, until they expire. In fact, the only metadata retained on a per-consumer basis is the offset, that is, the consumer's position in the log.
 This offset is controlled by the consumer: normally the offset advances linearly as the consumer reads messages, but because the consumer is in control, it can also reset the offset to an older value and re-read past messages.
 This design makes things very flexible for consumers; what one consumer does has no effect on how other consumers process the same log.
 A word about partitions. Kafka's partitioned design serves several purposes. First, it lets a topic handle more messages than a single server could hold on its own; a topic with many partitions can take in an essentially unlimited amount of data. Second, partitions act as the unit of parallelism, which is discussed a bit later.

[Figure: consumers reading from the same log at different, independently controlled offsets.]

  In fact, the only metadata retained on a per-consumer basis is the offset or position of that consumer in the log. This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads records, but, in fact, since the position is controlled by the consumer it can consume records in any order it likes. For example a consumer can reset to an older offset to reprocess data from the past or skip ahead to the most recent record and start consuming from "now".
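The following sketch shows what "the offset is controlled by the consumer" looks like in the Java client: the consumer assigns itself one partition, rewinds to offset 0 to reprocess old data, or could instead jump to the end to start from "now". The group id "replay-demo" and the topic name are illustrative assumptions.

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("page-views", 0);
            consumer.assign(Collections.singletonList(tp));

            consumer.seek(tp, 0L); // rewind: reprocess the partition from the beginning
            // consumer.seekToEnd(Collections.singletonList(tp)); // or skip ahead to "now"

            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
            }
        }
    }
}
```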

  This combination of features means that Kafka consumers are very cheap—they can come and go without much impact on the cluster or on other consumers. For example, you can use our command line tools to "tail" the contents of any topic without changing what is consumed by any existing consumers.

  The partitions in the log serve several purposes. First, they allow the log to scale beyond a size that will fit on a single server. Each individual partition must fit on the servers that host it, but a topic may have many partitions so it can handle an arbitrary amount of data. Second they act as the unit of parallelism—more on that in a bit.
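As a hedged sketch of partitions as the unit of parallelism: consumers that subscribe with the same group id are each assigned a disjoint subset of the topic's partitions. Running the class below more than once with the assumed group id "analytics" shows the three partitions created earlier being split among the running instances.

```java
import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ParallelConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "analytics");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("page-views"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Each member of the group is handed a disjoint subset of partitions.
                    System.out.println("Assigned partitions: " + partitions);
                }
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    System.out.println("Revoked partitions: " + partitions);
                }
            });
            // Polling triggers the group rebalance that hands out partitions.
            for (int i = 0; i < 10; i++) {
                consumer.poll(Duration.ofSeconds(1));
            }
        }
    }
}
```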
