kafka - advertised.listeners and listeners
listeners:
Listener List - Comma-separated list of URIs we will listen on and their protocols.
Specify hostname as 0.0.0.0 to bind to all interfaces.
Leave hostname empty to bind to the default interface.
Examples of legal listener lists: PLAINTEXT://myhost:9092,TRACE://:9091 and PLAINTEXT://0.0.0.0:9092,TRACE://localhost:9093
advertised.listeners:
Listeners to publish to ZooKeeper for clients to use, if different than the listeners above.
In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for `listeners` will be used.
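A minimal server.properties sketch of that IaaS case (hostnames and ports are placeholders, not from a real deployment): bind on all interfaces, but publish a routable name for clients.

# bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# what clients are told to connect to
advertised.listeners=PLAINTEXT://kafka1.public.example.com:9092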
listeners
This is the address Kafka actually binds to, as SocketServer shows:
/**
 * An NIO socket server. The threading model is
 *   1 Acceptor thread that handles new connections
 *   Acceptor has N Processor threads that each have their own selector and read requests from sockets
 *   M Handler threads that handle requests and produce responses back to the processor threads for writing.
 */
class SocketServer(val config: KafkaConfig, val metrics: Metrics, val time: Time) extends Logging with KafkaMetricsGroup {

  private val endpoints = config.listeners

  /**
   * Start the socket server
   */
  def startup() {
    endpoints.values.foreach { endpoint =>
      val protocol = endpoint.protocolType
      val processorEndIndex = processorBeginIndex + numProcessorThreads

      for (i <- processorBeginIndex until processorEndIndex)
        processors(i) = newProcessor(i, connectionQuotas, protocol)

      val acceptor = new Acceptor(endpoint, sendBufferSize, recvBufferSize, brokerId,
        processors.slice(processorBeginIndex, processorEndIndex), connectionQuotas)
      acceptors.put(endpoint, acceptor)
      Utils.newThread("kafka-socket-acceptor-%s-%d".format(protocol.toString, endpoint.port), acceptor, false).start()
      acceptor.awaitStartup()

      processorBeginIndex = processorEndIndex
    }
  }
In SocketServer you can see that the server indeed accepts connections on config.listeners, and that an Acceptor plus its Processor threads are created for every endpoint.
advertised.listeners
These are the listeners exposed to the outside world; if they are not set, `listeners` is used instead.
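To make that fallback concrete, here is a minimal, self-contained Scala sketch. It is a deliberate simplification, not Kafka's actual EndPoint/KafkaConfig code; ListenerFallbackSketch and its helpers are invented for illustration.

object ListenerFallbackSketch {
  case class EndPoint(protocolType: String, host: String, port: Int)

  // parse "PROTOCOL://host:port,..." into a protocol -> endpoint map
  def parse(listeners: String): Map[String, EndPoint] =
    listeners.split(",").map(_.trim).map { uri =>
      val Array(protocol, hostPort) = uri.split("://", 2)
      val sep = hostPort.lastIndexOf(':')
      require(sep >= 0, s"no port in listener '$uri'")
      protocol -> EndPoint(protocol, hostPort.substring(0, sep), hostPort.substring(sep + 1).toInt)
    }.toMap

  // advertised.listeners falls back to listeners when it is not set
  def effectiveAdvertised(advertised: Option[String], listeners: String): Map[String, EndPoint] =
    parse(advertised.getOrElse(listeners))

  def main(args: Array[String]): Unit = {
    val bound = "PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093"
    println(effectiveAdvertised(None, bound))                                       // falls back to the bound listeners
    println(effectiveAdvertised(Some("PLAINTEXT://kafka1.example.com:9092"), bound)) // publishes the advertised name
  }
}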
KafkaServer.startup
/* tell everyone we are alive */
val listeners = config.advertisedListeners.map {case(protocol, endpoint) =>
if (endpoint.port == 0)
(protocol, EndPoint(endpoint.host, socketServer.boundPort(protocol), endpoint.protocolType))
else
(protocol, endpoint)
}
kafkaHealthcheck = new KafkaHealthcheck(config.brokerId, listeners, zkUtils, config.rack,
config.interBrokerProtocolVersion)
kafkaHealthcheck.startup()
Here the advertised listeners are read from the config and passed to KafkaHealthcheck. The endpoint.port == 0 branch handles listeners configured with port 0: in that case the port actually bound by the SocketServer is substituted in.
/**
* This class registers the broker in zookeeper to allow
* other brokers and consumers to detect failures. It uses an ephemeral znode with the path:
* /brokers/ids/[0...N] --> advertisedHost:advertisedPort
*
* Right now our definition of health is fairly naive. If we register in zk we are healthy, otherwise
* we are dead.
*/
class KafkaHealthcheck(brokerId: Int,
advertisedEndpoints: Map[SecurityProtocol, EndPoint],
zkUtils: ZkUtils,
rack: Option[String],
interBrokerProtocolVersion: ApiVersion) extends Logging {
As the comment explains, KafkaHealthcheck registers the broker in an ephemeral znode in ZooKeeper; other brokers and consumers detect the failure when the znode disappears. So what gets registered in ZooKeeper is always the advertised listeners:
/**
 * Register this broker as "alive" in zookeeper
 */
def register() {
  val jmxPort = System.getProperty("com.sun.management.jmxremote.port", "-1").toInt
  val updatedEndpoints = advertisedEndpoints.mapValues(endpoint =>
    if (endpoint.host == null || endpoint.host.trim.isEmpty)
      // if no host was configured, fall back to InetAddress.getLocalHost.getCanonicalHostName
      EndPoint(InetAddress.getLocalHost.getCanonicalHostName, endpoint.port, endpoint.protocolType)
    else
      endpoint
  )
  // the default host and port are here for compatibility with older clients
  // only PLAINTEXT is supported as default
  // if the broker doesn't listen on PLAINTEXT protocol, an empty endpoint will be registered and older clients will be unable to connect
  val plaintextEndpoint = updatedEndpoints.getOrElse(SecurityProtocol.PLAINTEXT, new EndPoint(null, -1, null))
  // newer clients read only updatedEndpoints; the bare host/port pair exists for old ones
  zkUtils.registerBrokerInZk(brokerId, plaintextEndpoint.host, plaintextEndpoint.port, updatedEndpoints, jmxPort, rack,
    interBrokerProtocolVersion)
}
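With the placeholder config from the earlier sketch, the ephemeral znode /brokers/ids/0 would then hold something like the following. This payload is hand-written to match the version-3 registration schema quoted further down, not captured from a live cluster; a "rack" field appears as well when broker.rack is configured.

{
  "version": 3,
  "host": "kafka1.public.example.com",
  "port": 9092,
  "jmx_port": -1,
  "timestamp": "1480000000000",
  "endpoints": ["PLAINTEXT://kafka1.public.example.com:9092"]
}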
The remaining question is which listener brokers use to synchronize with each other.
ReplicaManager.makeFollowers
is where the fetcher threads are created:
val partitionsToMakeFollowerWithLeaderAndOffset = partitionsToMakeFollower.map(partition =>
  new TopicAndPartition(partition) -> BrokerAndInitialOffset(
    metadataCache.getAliveBrokers.find(_.id == partition.leaderReplicaIdOpt.get).get
      .getBrokerEndPoint(config.interBrokerSecurityProtocol),
    partition.getReplica().get.logEndOffset.messageOffset)).toMap

replicaFetcherManager.addFetcherForPartitions(partitionsToMakeFollowerWithLeaderAndOffset)
This is the code path taken when a broker becomes a follower and creates fetch threads for replication. Note that the leader's address still comes from metadataCache: the leader broker is looked up there, and getBrokerEndPoint(config.interBrokerSecurityProtocol) selects its endpoint for the configured inter-broker security protocol.
security.inter.broker.protocol: Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL.
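As a hedged illustration (placeholder values again): to replicate between brokers over SSL while still serving clients over PLAINTEXT, each broker must expose and advertise a listener for the chosen inter-broker protocol, since the fetcher resolves the leader's endpoint by that protocol.

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
security.inter.broker.protocol=SSL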
As for the broker info that clients receive:
KafkaApis.handleTopicMetadataRequest
val brokers = metadataCache.getAliveBrokers
val responseBody = new MetadataResponse(
  brokers.map(_.getNode(request.securityProtocol)).asJava,
  metadataCache.getControllerId.getOrElse(MetadataResponse.NO_CONTROLLER_ID),
  completeTopicMetadata.asJava,
  requestVersion
)
Which address comes back depends on the security protocol the request arrived on, request.securityProtocol:
public enum SecurityProtocol {
/** Un-authenticated, non-encrypted channel */
PLAINTEXT(0, "PLAINTEXT", false),
/** SSL channel */
SSL(1, "SSL", false),
/** SASL authenticated, non-encrypted channel */
SASL_PLAINTEXT(2, "SASL_PLAINTEXT", false),
/** SASL authenticated, SSL channel */
SASL_SSL(3, "SASL_SSL", false),
/** Currently identical to PLAINTEXT and used for testing only. We may implement extra instrumentation when testing channel code. */
TRACE(Short.MAX_VALUE, "TRACE", true);
As you can see, different protocols can advertise different addresses.
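For example (hypothetical hostnames once more), a broker can advertise an internal address for PLAINTEXT and a public address for SSL side by side:

listeners=PLAINTEXT://0.0.0.0:9092,SSL://0.0.0.0:9093
advertised.listeners=PLAINTEXT://kafka1.internal.example.com:9092,SSL://kafka1.public.example.com:9093

A client connecting over PLAINTEXT is handed the internal name in its metadata responses, while an SSL client is handed the public one.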
Broker
/**
* Create a broker object from id and JSON string.
*
* @param id
* @param brokerInfoString
*
* Version 1 JSON schema for a broker is:
* {
* "version":1,
* "host":"localhost",
* "port":9092
* "jmx_port":9999,
* "timestamp":"2233345666"
* }
*
* Version 2 JSON schema for a broker is:
* {
* "version":2,
* "host":"localhost",
* "port":9092
* "jmx_port":9999,
* "timestamp":"2233345666",
* "endpoints":["PLAINTEXT://host1:9092", "SSL://host1:9093"]
* }
*
* Version 3 (current) JSON schema for a broker is:
* {
* "version":3,
* "host":"localhost",
* "port":9092
* "jmx_port":9999,
* "timestamp":"2233345666",
* "endpoints":["PLAINTEXT://host1:9092", "SSL://host1:9093"],
* "rack":"dc1"
* }
*/
def createBroker(id: Int, brokerInfoString: String): Broker = {
if (brokerInfoString == null)
throw new BrokerNotAvailableException(s"Broker id $id does not exist")
try {
Json.parseFull(brokerInfoString) match {
case Some(m) =>
val brokerInfo = m.asInstanceOf[Map[String, Any]]
val version = brokerInfo("version").asInstanceOf[Int]
val endpoints =
if (version < 1)
throw new KafkaException(s"Unsupported version of broker registration: $brokerInfoString")
else if (version == 1) {
val host = brokerInfo("host").asInstanceOf[String]
val port = brokerInfo("port").asInstanceOf[Int]
Map(SecurityProtocol.PLAINTEXT -> new EndPoint(host, port, SecurityProtocol.PLAINTEXT))
}
else {
val listeners = brokerInfo("endpoints").asInstanceOf[List[String]]
listeners.map { listener =>
val ep = EndPoint.createEndPoint(listener)
(ep.protocolType, ep)
}.toMap
}
val rack = brokerInfo.get("rack").filter(_ != null).map(_.asInstanceOf[String])
new Broker(id, endpoints, rack)
case None =>
throw new BrokerNotAvailableException(s"Broker id $id does not exist")
}
} catch {
case t: Throwable =>
throw new KafkaException(s"Failed to parse the broker info from zookeeper: $brokerInfoString", t)
}
}
}
As shown above, old registration versions carry a single host and port, while newer versions carry endpoints, which can define a listener for each security protocol.
ZkUtils
/**
* This API takes in a broker id, queries zookeeper for the broker metadata and returns the metadata for that broker
* or throws an exception if the broker dies before the query to zookeeper finishes
*
* @param brokerId The broker id
* @return An optional Broker object encapsulating the broker metadata
*/
def getBrokerInfo(brokerId: Int): Option[Broker] = {
readDataMaybeNull(BrokerIdsPath + "/" + brokerId)._1 match {
case Some(brokerInfo) => Some(Broker.createBroker(brokerId, brokerInfo))
case None => None
}
}
ZkUtils simply reads the registration payload from ZooKeeper and calls createBroker on it.
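Putting the pieces together, resolving a broker's endpoint from ZooKeeper looks roughly like this (a hedged sketch composed only of the calls quoted above):

// look up broker 0 and pick the endpoint for a given protocol,
// just as the replica fetcher does with config.interBrokerSecurityProtocol
zkUtils.getBrokerInfo(0).foreach { broker =>
  val endpoint = broker.getBrokerEndPoint(SecurityProtocol.PLAINTEXT)
  println(s"broker 0 advertises ${endpoint.host}:${endpoint.port} for PLAINTEXT")
}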
Conclusion:
listeners is what the server actually binds to.
advertised.listeners is what is exposed to clients; if it is not set, listeners is used directly.
At this point Kafka does not distinguish internal from external traffic: once advertised.listeners is set, all traffic uses those addresses, which is clearly unreasonable. A later release should solve this problem.