listeners

Listener List - Comma-separated list of URIs we will listen on and their protocols.
Specify hostname as 0.0.0.0 to bind to all interfaces.
Leave hostname empty to bind to default interface.
Examples of legal listener lists: PLAINTEXT://myhost:9092,TRACE://:9091 and PLAINTEXT://0.0.0.0:9092,TRACE://localhost:9093
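For example, to bind two protocols on all interfaces, server.properties could contain something like this (ports and protocol choices are illustrative, not from the original post):

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093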

 

advertised.listeners

Listeners to publish to ZooKeeper for clients to use, if different than the listeners above.
In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for `listeners` will be used.
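The classic IaaS case is a broker behind NAT: it binds a private address but must advertise a hostname that clients can resolve from outside. A hedged illustration (all addresses here are made up):

listeners=PLAINTEXT://10.0.0.5:9092
advertised.listeners=PLAINTEXT://ec2-54-12-34-56.compute-1.amazonaws.com:9092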

 

listeners

This is the address Kafka actually binds to.

/**
 * An NIO socket server. The threading model is
 *   1 Acceptor thread that handles new connections
 *   Acceptor has N Processor threads that each have their own selector and read requests from sockets
 *   M Handler threads that handle requests and produce responses back to the processor threads for writing.
 */
class SocketServer(val config: KafkaConfig, val metrics: Metrics, val time: Time) extends Logging with KafkaMetricsGroup {

  private val endpoints = config.listeners

  /**
   * Start the socket server
   */
  def startup() {
    endpoints.values.foreach { endpoint =>
      val protocol = endpoint.protocolType
      val processorEndIndex = processorBeginIndex + numProcessorThreads

      for (i <- processorBeginIndex until processorEndIndex)
        processors(i) = newProcessor(i, connectionQuotas, protocol)

      val acceptor = new Acceptor(endpoint, sendBufferSize, recvBufferSize, brokerId,
        processors.slice(processorBeginIndex, processorEndIndex), connectionQuotas)
      acceptors.put(endpoint, acceptor)
      Utils.newThread("kafka-socket-acceptor-%s-%d".format(protocol.toString, endpoint.port), acceptor, false).start()
      acceptor.awaitStartup()

      processorBeginIndex = processorEndIndex
    }
  }

In SocketServer you can see that what the server actually accepts connections on is indeed the configured listeners,

and an Acceptor and its Processors are created for each endpoint.
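Each listener URI is parsed into an EndPoint of (protocol, host, port). As a rough illustration, using the EndPoint.createEndPoint helper that appears in the Broker code later in this post:

// illustration only: how one listener URI maps to an EndPoint
val ep = EndPoint.createEndPoint("PLAINTEXT://0.0.0.0:9092")
// ep.protocolType == SecurityProtocol.PLAINTEXT
// ep.host == "0.0.0.0", ep.port == 9092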

 

advertised.listeners

These are the listeners exposed to the outside; if they are not set, listeners is used.

KafkaServer.startup

/* tell everyone we are alive */
val listeners = config.advertisedListeners.map { case (protocol, endpoint) =>
  if (endpoint.port == 0)
    (protocol, EndPoint(endpoint.host, socketServer.boundPort(protocol), endpoint.protocolType))
  else
    (protocol, endpoint)
}
kafkaHealthcheck = new KafkaHealthcheck(config.brokerId, listeners, zkUtils, config.rack,
  config.interBrokerProtocolVersion)
kafkaHealthcheck.startup()

Here advertisedListeners is read out and passed to KafkaHealthcheck.
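Note the endpoint.port == 0 branch above: with port 0 the OS assigns an ephemeral port at bind time, so socketServer.boundPort(protocol) substitutes the real port before registration. A hypothetical listener that triggers this branch:

listeners=PLAINTEXT://0.0.0.0:0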

/**
 * This class registers the broker in zookeeper to allow
 * other brokers and consumers to detect failures. It uses an ephemeral znode with the path:
 *   /brokers/ids/[0...N] --> advertisedHost:advertisedPort
 *
 * Right now our definition of health is fairly naive. If we register in zk we are healthy, otherwise
 * we are dead.
 */
class KafkaHealthcheck(brokerId: Int,
                       advertisedEndpoints: Map[SecurityProtocol, EndPoint],
                       zkUtils: ZkUtils,
                       rack: Option[String],
                       interBrokerProtocolVersion: ApiVersion) extends Logging {

As you can see from the comment,

KafkaHealthcheck registers the broker's information in an ephemeral znode in ZooKeeper; when that znode disappears, everyone knows the broker is dead.

So what gets registered in ZooKeeper is always the advertisedListeners.

/**
 * Register this broker as "alive" in zookeeper
 */
def register() {
  val jmxPort = System.getProperty("com.sun.management.jmxremote.port", "-1").toInt
  val updatedEndpoints = advertisedEndpoints.mapValues(endpoint =>
    if (endpoint.host == null || endpoint.host.trim.isEmpty)
      // if no host was configured, fall back to InetAddress.getLocalHost.getCanonicalHostName
      EndPoint(InetAddress.getLocalHost.getCanonicalHostName, endpoint.port, endpoint.protocolType)
    else
      endpoint
  )

  // the default host and port are here for compatibility with older clients
  // only PLAINTEXT is supported as default
  // if the broker doesn't listen on PLAINTEXT protocol, an empty endpoint will be registered and older clients will be unable to connect
  val plaintextEndpoint = updatedEndpoints.getOrElse(SecurityProtocol.PLAINTEXT, new EndPoint(null, -1, null)) // a plaintext entry is generated for old-version compatibility
  zkUtils.registerBrokerInZk(brokerId, plaintextEndpoint.host, plaintextEndpoint.port, updatedEndpoints, jmxPort, rack, // newer versions only read updatedEndpoints
    interBrokerProtocolVersion)
}

 

 

The question is: which listeners do brokers use for replication between themselves?

ReplicaManager.makeFollowers

creates the FetchThreads:

val partitionsToMakeFollowerWithLeaderAndOffset = partitionsToMakeFollower.map(partition =>
  new TopicAndPartition(partition) -> BrokerAndInitialOffset(
    metadataCache.getAliveBrokers.find(_.id == partition.leaderReplicaIdOpt.get).get
      .getBrokerEndPoint(config.interBrokerSecurityProtocol),
    partition.getReplica().get.logEndOffset.messageOffset)).toMap
replicaFetcherManager.addFetcherForPartitions(partitionsToMakeFollowerWithLeaderAndOffset)

This is what happens when a broker becomes a follower and creates FetchThreads for replication:

the broker information is still taken from metadataCache;

the matching broker is looked up in metadataCache, and getBrokerEndPoint(config.interBrokerSecurityProtocol) is called to select the endpoint for the inter-broker security protocol.
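A sketch of roughly what Broker.getBrokerEndPoint does (paraphrased, not verbatim source): it looks the protocol up in the broker's endpoint map and fails if the broker does not listen on that protocol.

// sketch, assuming the Broker holds endPoints: Map[SecurityProtocol, EndPoint]
def getBrokerEndPoint(protocolType: SecurityProtocol): BrokerEndPoint = {
  val endpoint = endPoints.getOrElse(protocolType,
    throw new BrokerEndPointNotAvailableException(
      s"End point with security protocol $protocolType not found for broker $id"))
  new BrokerEndPoint(id, endpoint.host, endpoint.port)
}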

security.inter.broker.protocol - Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
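So one way to keep broker-to-broker traffic on a different address from client traffic (before KIP-103, see below) is to give each its own protocol. An illustrative configuration, with made-up hostnames:

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://broker1.internal:9092,SSL://broker1.example.com:9093
security.inter.broker.protocol=PLAINTEXT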

 

As for the broker information handed to clients:

KafkaApis.handleTopicMetadataRequest

val brokers = metadataCache.getAliveBrokers

val responseBody = new MetadataResponse(
  brokers.map(_.getNode(request.securityProtocol)).asJava,
  metadataCache.getControllerId.getOrElse(MetadataResponse.NO_CONTROLLER_ID),
  completeTopicMetadata.asJava,
  requestVersion
)

Which address is returned depends on the security protocol of the request, request.securityProtocol.

public enum SecurityProtocol {
    /** Un-authenticated, non-encrypted channel */
    PLAINTEXT(0, "PLAINTEXT", false),
    /** SSL channel */
    SSL(1, "SSL", false),
    /** SASL authenticated, non-encrypted channel */
    SASL_PLAINTEXT(2, "SASL_PLAINTEXT", false),
    /** SASL authenticated, SSL channel */
    SASL_SSL(3, "SASL_SSL", false),
    /** Currently identical to PLAINTEXT and used for testing only. We may implement extra instrumentation when testing channel code. */
    TRACE(Short.MAX_VALUE, "TRACE", true);

As you can see, each protocol can advertise a different address.

 

Broker

/**
 * Create a broker object from id and JSON string.
 *
 * @param id
 * @param brokerInfoString
 *
 * Version 1 JSON schema for a broker is:
 * {
 *   "version":1,
 *   "host":"localhost",
 *   "port":9092,
 *   "jmx_port":9999,
 *   "timestamp":"2233345666"
 * }
 *
 * Version 2 JSON schema for a broker is:
 * {
 *   "version":2,
 *   "host":"localhost",
 *   "port":9092,
 *   "jmx_port":9999,
 *   "timestamp":"2233345666",
 *   "endpoints":["PLAINTEXT://host1:9092", "SSL://host1:9093"]
 * }
 *
 * Version 3 (current) JSON schema for a broker is:
 * {
 *   "version":3,
 *   "host":"localhost",
 *   "port":9092,
 *   "jmx_port":9999,
 *   "timestamp":"2233345666",
 *   "endpoints":["PLAINTEXT://host1:9092", "SSL://host1:9093"],
 *   "rack":"dc1"
 * }
 */
def createBroker(id: Int, brokerInfoString: String): Broker = {
  if (brokerInfoString == null)
    throw new BrokerNotAvailableException(s"Broker id $id does not exist")
  try {
    Json.parseFull(brokerInfoString) match {
      case Some(m) =>
        val brokerInfo = m.asInstanceOf[Map[String, Any]]
        val version = brokerInfo("version").asInstanceOf[Int]
        val endpoints =
          if (version < 1)
            throw new KafkaException(s"Unsupported version of broker registration: $brokerInfoString")
          else if (version == 1) {
            val host = brokerInfo("host").asInstanceOf[String]
            val port = brokerInfo("port").asInstanceOf[Int]
            Map(SecurityProtocol.PLAINTEXT -> new EndPoint(host, port, SecurityProtocol.PLAINTEXT))
          }
          else {
            val listeners = brokerInfo("endpoints").asInstanceOf[List[String]]
            listeners.map { listener =>
              val ep = EndPoint.createEndPoint(listener)
              (ep.protocolType, ep)
            }.toMap
          }
        val rack = brokerInfo.get("rack").filter(_ != null).map(_.asInstanceOf[String])
        new Broker(id, endpoints, rack)
      case None =>
        throw new BrokerNotAvailableException(s"Broker id $id does not exist")
    }
  } catch {
    case t: Throwable =>
      throw new KafkaException(s"Failed to parse the broker info from zookeeper: $brokerInfoString", t)
  }
}

As you can see, old versions used a single host and port,

while new versions use endpoints, under which listeners for each protocol can be defined.
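Putting register() and createBroker together, a version-3 registration for a broker advertising PLAINTEXT and SSL might look roughly like this (all values invented for illustration; host/port duplicate the PLAINTEXT endpoint for old clients):

{"version":3,"host":"broker1.internal","port":9092,"jmx_port":9999,"timestamp":"1489043658000","endpoints":["PLAINTEXT://broker1.internal:9092","SSL://broker1.example.com:9093"],"rack":"dc1"}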

 

zkUtils

/**
 * This API takes in a broker id, queries zookeeper for the broker metadata and returns the metadata for that broker
 * or throws an exception if the broker dies before the query to zookeeper finishes
 *
 * @param brokerId The broker id
 * @return An optional Broker object encapsulating the broker metadata
 */
def getBrokerInfo(brokerId: Int): Option[Broker] = {
  readDataMaybeNull(BrokerIdsPath + "/" + brokerId)._1 match {
    case Some(brokerInfo) => Some(Broker.createBroker(brokerId, brokerInfo))
    case None => None
  }
}

zkUtils simply reads the corresponding content from ZooKeeper and calls createBroker.
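A hypothetical usage, assuming a connected ZkUtils instance:

// look up broker 0's registration and print its PLAINTEXT endpoint, if any
val broker: Option[Broker] = zkUtils.getBrokerInfo(0)
broker.foreach(b => println(b.getBrokerEndPoint(SecurityProtocol.PLAINTEXT)))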

 

Conclusion:

listeners is what the server actually binds to;

advertisedListeners is what is exposed to users; if it is not set, listeners is used directly.

 

Kafka currently does not separate internal from external traffic: once advertised.listeners is set, all traffic uses that configuration, which is clearly unreasonable.

https://cwiki.apache.org/confluence/display/KAFKA/KIP-103%3A+Separation+of+Internal+and+External+traffic

will solve this problem.
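For reference, KIP-103 introduces named listeners mapped onto security protocols, so internal and external traffic can advertise different addresses even under the same protocol. A sketch of the configuration style it proposes (listener names and hosts are illustrative):

listener.security.protocol.map=INTERNAL:PLAINTEXT,EXTERNAL:SSL
listeners=INTERNAL://10.0.0.5:9092,EXTERNAL://0.0.0.0:9093
advertised.listeners=INTERNAL://10.0.0.5:9092,EXTERNAL://broker1.example.com:9093
inter.broker.listener.name=INTERNAL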
