15. Spark Streaming Source Code Analysis: A Thorough Look at No Receivers (the Direct Approach)
Earlier articles in this series walked through the source code of Receiver-based Spark Streaming applications. More and more Spark Streaming applications, however, are now written in the No Receivers (Direct Approach) style, which has two main advantages:
1. Greater freedom of control over consumption
2. Consistent semantics (it is much easier to achieve exactly-once processing)
The canonical example is DirectKafkaWordCount from the Spark examples:
import kafka.serializer.StringDecoder

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

object DirectKafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println(s"""
        |Usage: DirectKafkaWordCount <brokers> <topics>
        |  <brokers> is a list of one or more Kafka brokers
        |  <topics> is a list of one or more kafka topics to consume from
        |
        """.stripMargin)
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(brokers, topics) = args

    // Create context with 2 second batch interval
    val sparkConf = new SparkConf().setAppName("DirectKafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))

    // Create direct kafka stream with brokers and topics
    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)

    // Get the lines, split them into words, count the words and print
    val lines = messages.map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
    wordCounts.print()

    // Start the computation
    ssc.start()
    ssc.awaitTermination()
  }
}
The heart of the Direct Approach is KafkaRDD. Its declaration already states the design: a batch-oriented RDD whose input is fixed in advance by offset ranges.

/**
 * A batch-oriented interface for consuming from Kafka.
 * Starting and ending offsets are specified in advance,
 * so that you can control exactly-once semantics.
 * @param kafkaParams Kafka <a href="http://kafka.apache.org/documentation.html#configuration">
 * configuration parameters</a>. Requires "metadata.broker.list" or "bootstrap.servers" to be set
 * with Kafka broker(s) specified in host1:port1,host2:port2 form.
 * @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
 * @param messageHandler function for translating each message into the desired type
 */
private[kafka]
class KafkaRDD[
  K: ClassTag,
  V: ClassTag,
  U <: Decoder[_]: ClassTag,
  T <: Decoder[_]: ClassTag,
  R: ClassTag] private[spark] (
    sc: SparkContext,
    kafkaParams: Map[String, String],
    val offsetRanges: Array[OffsetRange], // the offset ranges covered by this RDD
    leaders: Map[TopicAndPartition, (String, Int)],
    messageHandler: MessageAndMetadata[K, V] => R
  ) extends RDD[R](sc, Nil) with Logging with HasOffsetRanges
KafkaRDD mixes in HasOffsetRanges, and each OffsetRange pins down exactly which slice of one topic-partition belongs to the RDD:

trait HasOffsetRanges {
  def offsetRanges: Array[OffsetRange]
}

final class OffsetRange private(
    val topic: String,
    val partition: Int,
    val fromOffset: Long,
    val untilOffset: Long) extends Serializable
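Because the offsets are fixed up front, an application can read them back from each batch RDD and store them atomically together with its output, which is what makes exactly-once processing achievable. A minimal sketch, assuming messages is the direct stream created in the word-count example above (where and how you persist the offsets is up to you):

messages.foreachRDD { rdd =>
  // Every RDD produced by the direct stream is a KafkaRDD, so this cast is safe.
  val offsetRanges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges

  // Process the batch, then persist the offsets together with the results.
  offsetRanges.foreach { o =>
    println(s"${o.topic} ${o.partition} ${o.fromOffset} ${o.untilOffset}")
  }
}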
Inside KafkaRDD, getPartitions turns each OffsetRange into a KafkaRDDPartition, so every RDD partition corresponds to exactly one Kafka topic-partition and knows which leader broker to read from:

override def getPartitions: Array[Partition] = {
  offsetRanges.zipWithIndex.map { case (o, i) =>
    val (host, port) = leaders(TopicAndPartition(o.topic, o.partition))
    new KafkaRDDPartition(i, o.topic, o.partition, o.fromOffset, o.untilOffset, host, port)
  }.toArray
}
private[kafka]
class KafkaRDDPartition(
    val index: Int,
    val topic: String,
    val partition: Int,
    val fromOffset: Long,
    val untilOffset: Long,
    val host: String,
    val port: Int) extends Partition {
  /** Number of messages this partition refers to */
  def count(): Long = untilOffset - fromOffset
}
KafkaRDDPartition describes precisely where its data lives; the data of each KafkaRDDPartition is then read by KafkaRDD's compute method:
override def compute(thePart: Partition, context: TaskContext): Iterator[R] = {
  val part = thePart.asInstanceOf[KafkaRDDPartition]
  assert(part.fromOffset <= part.untilOffset, errBeginAfterEnd(part))
  if (part.fromOffset == part.untilOffset) {
    log.info(s"Beginning offset ${part.fromOffset} is the same as ending offset " +
      s"skipping ${part.topic} ${part.partition}")
    Iterator.empty
  } else {
    new KafkaRDDIterator(part, context)
  }
}
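A consequence of this one-to-one mapping is that the parallelism of each batch equals the number of Kafka topic-partitions being consumed. A quick way to observe this from the driver, again assuming the messages stream from the example above:

messages.foreachRDD { rdd =>
  // With the Direct Approach there is one RDD partition per consumed Kafka topic-partition.
  println(s"Kafka partitions consumed in this batch: ${rdd.partitions.length}")
}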
A non-empty partition is read by KafkaRDDIterator, which connects to that partition's leader broker and pulls messages from fromOffset towards untilOffset (the fetch loop is omitted here):

private class KafkaRDDIterator(
    part: KafkaRDDPartition,
    context: TaskContext) extends NextIterator[R] {

  context.addTaskCompletionListener{ context => closeIfNeeded() }

  log.info(s"Computing topic ${part.topic}, partition ${part.partition} " +
    s"offsets ${part.fromOffset} -> ${part.untilOffset}")

  val kc = new KafkaCluster(kafkaParams)
  val keyDecoder = classTag[U].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[K]]
  val valueDecoder = classTag[T].runtimeClass.getConstructor(classOf[VerifiableProperties])
    .newInstance(kc.config.props)
    .asInstanceOf[Decoder[V]]
  val consumer = connectLeader
  var requestOffset = part.fromOffset
  var iter: Iterator[MessageAndOffset] = null

  // ...
}
Back on the application side, everything starts from KafkaUtils.createDirectStream:

val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
  ssc, kafkaParams, topicsSet)
def createDirectStream[
  K: ClassTag,
  V: ClassTag,
  KD <: Decoder[K]: ClassTag,
  VD <: Decoder[V]: ClassTag] (
    ssc: StreamingContext,
    kafkaParams: Map[String, String],
    topics: Set[String]): InputDStream[(K, V)] = {
  val messageHandler = (mmd: MessageAndMetadata[K, V]) => (mmd.key, mmd.message)
  // Create a KafkaCluster object
  val kc = new KafkaCluster(kafkaParams)
  // Use the KafkaCluster metadata to determine the starting offsets
  val fromOffsets = getFromOffsets(kc, kafkaParams, topics)
  new DirectKafkaInputDStream[K, V, KD, VD, (K, V)](
    ssc, kafkaParams, fromOffsets, messageHandler)
}
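If you do not want getFromOffsets to pick the starting point for you, for example when resuming from offsets you saved yourself in the foreachRDD pattern shown earlier, there is an overload of createDirectStream that takes an explicit fromOffsets map and a messageHandler. A sketch under that assumption; the topic name, partition numbers and offset values below are made up for illustration:

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder

// Offsets recovered from your own store (database, ZooKeeper, ...).
val fromOffsets = Map(
  TopicAndPartition("topic1", 0) -> 1000L,
  TopicAndPartition("topic1", 1) -> 2000L)

val resumed = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](
  ssc, kafkaParams, fromOffsets,
  (mmd: MessageAndMetadata[String, String]) => (mmd.key, mmd.message))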
From then on, on every batch interval DirectKafkaInputDStream.compute builds a new KafkaRDD covering the range from currentOffsets up to the clamped latest leader offsets:

override def compute(validTime: Time): Option[KafkaRDD[K, V, U, T, R]] = {
  // Compute the ending offsets for this batch
  val untilOffsets = clamp(latestLeaderOffsets(maxRetries))
  // Build a KafkaRDD from those offsets
  val rdd = KafkaRDD[K, V, U, T, R](
    context.sparkContext, kafkaParams, currentOffsets, untilOffsets, messageHandler)

  // Report the record number and metadata of this batch interval to InputInfoTracker.
  val offsetRanges = currentOffsets.map { case (tp, fo) =>
    val uo = untilOffsets(tp)
    OffsetRange(tp.topic, tp.partition, fo, uo.offset)
  }
  val description = offsetRanges.filter { offsetRange =>
    // Don't display empty ranges.
    offsetRange.fromOffset != offsetRange.untilOffset
  }.map { offsetRange =>
    s"topic: ${offsetRange.topic}\tpartition: ${offsetRange.partition}\t" +
      s"offsets: ${offsetRange.fromOffset} to ${offsetRange.untilOffset}"
  }.mkString("\n")
  // Copy offsetRanges to immutable.List to prevent from being modified by the user
  val metadata = Map(
    "offsets" -> offsetRanges.toList,
    StreamInputInfo.METADATA_KEY_DESCRIPTION -> description)
  val inputInfo = StreamInputInfo(id, rdd.count, metadata)
  ssc.scheduler.inputInfoTracker.reportInfo(validTime, inputInfo)

  currentOffsets = untilOffsets.map(kv => kv._1 -> kv._2.offset)
  Some(rdd)
}
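The clamp call above is where rate limiting happens: it caps how far untilOffsets may run ahead of currentOffsets within a single batch, on a per-partition basis, controlled by the spark.streaming.kafka.maxRatePerPartition setting. A minimal sketch, assuming 10000 records per partition per batch is an appropriate limit for your workload:

val sparkConf = new SparkConf()
  .setAppName("DirectKafkaWordCount")
  // Upper bound on records read from each Kafka partition per batch; clamp() enforces it.
  .set("spark.streaming.kafka.maxRatePerPartition", "10000")

That is the whole No Receivers path in a nutshell: the driver decides the offset ranges, each partition is read directly from its Kafka leader, and no long-running Receiver or write-ahead log is involved.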