DStreamGraph is a bit like a simplified DAG scheduler: it generates the jobs for each batch interval (grouped into a JobSet) and handles serializing the graph along its dependencies for checkpointing. Its two most important members are inputStreams and outputStreams. Spark Streaming wires the live input to the stream processing in a way analogous to a shuffle: the "map side" is the input streams, subclasses of DStream (InputDStream), which store the received data as blocks in Spark memory; the "reduce side" is the output streams, another family of DStreams that read those blocks back out of Spark memory, materialize them as Spark core RDDs for each batch, and run the actual processing on them.
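To see how the graph gets populated, here is a minimal word-count sketch using the standard Spark Streaming API (the socket host/port and the computation are placeholder assumptions): creating the socket stream registers an InputDStream with the graph via addInputStream, while the terminal print() registers an output DStream via addOutputStream; the intermediate flatMap/map/reduceByKey DStreams are reachable only through the dependencies of that output stream.

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DStreamGraphDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("DStreamGraphDemo")
    // Seconds(5) ends up as DStreamGraph.batchDuration
    val ssc = new StreamingContext(conf, Seconds(5))

    // A ReceiverInputDStream: registered in the graph's inputStreams
    val lines = ssc.socketTextStream("localhost", 9999)

    // Intermediate DStreams are linked only through dependencies
    val counts = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)

    // print() is an output operation: it registers an output stream in the
    // graph's outputStreams, which generateJobs iterates over each batch
    counts.print()

    ssc.start()
    ssc.awaitTermination()
  }
}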





final private[streaming] class DStreamGraph extends Serializable with Logging {

  private val inputStreams = new ArrayBuffer[InputDStream[_]]()
  private val outputStreams = new ArrayBuffer[DStream[_]]()

  var rememberDuration: Duration = null
  var checkpointInProgress = false

  var zeroTime: Time = null
  var startTime: Time = null
  var batchDuration: Duration = null

  def addInputStream(inputStream: InputDStream[_]) {
    this.synchronized {
      inputStream.setGraph(this)
      inputStreams += inputStream
    }
  }

  def addOutputStream(outputStream: DStream[_]) {
    this.synchronized {
      outputStream.setGraph(this)
      outputStreams += outputStream
    }
  }

  def getInputStreams() = this.synchronized { inputStreams.toArray }

  def getOutputStreams() = this.synchronized { outputStreams.toArray }

  def getReceiverInputStreams() = this.synchronized {
    inputStreams.filter(_.isInstanceOf[ReceiverInputDStream[_]])
      .map(_.asInstanceOf[ReceiverInputDStream[_]])
      .toArray
  }


  def generateJobs(time: Time): Seq[Job] = {
    logDebug("Generating jobs for time " + time)
    val jobs = this.synchronized {
      outputStreams.flatMap(outputStream => outputStream.generateJob(time))
    }
    logDebug("Generated " + jobs.length + " jobs for time " + time)
    jobs
  }
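Each output stream decides what "the job for this batch" means. For reference, DStream.generateJob in the 1.x codebase this post follows is roughly the snippet below (paraphrased, so treat it as a sketch rather than the exact source): it asks getOrCompute(time) for the batch's RDD and, if one exists, wraps a runJob call over that RDD into a Job object. Output streams such as ForEachDStream override this to run the user's foreach function on the RDD instead of the no-op iterator function.

  // Roughly what a DStream's generateJob does (paraphrased sketch)
  private[streaming] def generateJob(time: Time): Option[Job] = {
    getOrCompute(time) match {
      case Some(rdd) =>
        val jobFunc = () => {
          val emptyFunc = { (iterator: Iterator[T]) => {} }
          context.sparkContext.runJob(rdd, emptyFunc)
        }
        Some(new Job(time, jobFunc))
      case None => None
    }
  }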


  @throws(classOf[IOException])
  private def writeObject(oos: ObjectOutputStream): Unit = Utils.tryOrIOException {
    logDebug("DStreamGraph.writeObject used")
    this.synchronized {
      checkpointInProgress = true
      logDebug("Enabled checkpoint mode")
      oos.defaultWriteObject()
      checkpointInProgress = false
      logDebug("Disabled checkpoint mode")
    }
  }

  @throws(classOf[IOException])
  private def readObject(ois: ObjectInputStream): Unit = Utils.tryOrIOException {
    logDebug("DStreamGraph.readObject used")
    this.synchronized {
      checkpointInProgress = true
      ois.defaultReadObject()
      checkpointInProgress = false
    }
  }

JobScheduler is responsible for scheduling and running the jobs: it owns a JobGenerator that produces the jobs for each batch and a thread pool that executes them.
/**
 * This class schedules jobs to be run on Spark. It uses the JobGenerator to generate
 * the jobs and runs them using a thread pool.
 */
private[streaming]
class JobScheduler(val ssc: StreamingContext) extends Logging {

  private val jobSets = new ConcurrentHashMap[Time, JobSet]
  private val numConcurrentJobs = ssc.conf.getInt("spark.streaming.concurrentJobs", 1)
  private val jobExecutor = Executors.newFixedThreadPool(numConcurrentJobs)
  private val jobGenerator = new JobGenerator(this)
  val clock = jobGenerator.clock
  val listenerBus = new StreamingListenerBus()

  // These two are created only when scheduler starts.
  // eventActor not being null means the scheduler has been started and not stopped
  var receiverTracker: ReceiverTracker = null
  private var eventActor: ActorRef = null

  def start(): Unit = synchronized {
    if (eventActor != null) return // scheduler has already been started

    logDebug("Starting JobScheduler")
    eventActor = ssc.env.actorSystem.actorOf(Props(new Actor {
      def receive = {
        case event: JobSchedulerEvent => processEvent(event)
      }
    }), "JobScheduler")

    listenerBus.start()
    receiverTracker = new ReceiverTracker(ssc)
    receiverTracker.start()
    jobGenerator.start()
    logInfo("Started JobScheduler")
  }

  def submitJobSet(jobSet: JobSet) {
    if (jobSet.jobs.isEmpty) {
      logInfo("No jobs added for time " + jobSet.time)
    } else {
      jobSets.put(jobSet.time, jobSet)
      jobSet.jobs.foreach(job => jobExecutor.execute(new JobHandler(job)))
      logInfo("Added jobs for time " + jobSet.time)
    }
  }

  private class JobHandler(job: Job) extends Runnable {
    def run() {
      eventActor ! JobStarted(job)
      job.run()
      eventActor ! JobCompleted(job)
    }
  }
After a job finishes, handleJobCompletion updates its JobSet and, once the whole batch is done, notifies the JobGenerator and the listener bus:
  private def handleJobCompletion(job: Job) {
    job.result match {
      case Success(_) =>
        val jobSet = jobSets.get(job.time)
        jobSet.handleJobCompletion(job)
        logInfo("Finished job " + job.id + " from job set of time " + jobSet.time)
        if (jobSet.hasCompleted) {
          jobSets.remove(jobSet.time)
          jobGenerator.onBatchCompletion(jobSet.time)
          logInfo("Total delay: %.3f s for time %s (execution: %.3f s)".format(
            jobSet.totalDelay / 1000.0, jobSet.time.toString,
            jobSet.processingDelay / 1000.0
          ))
          listenerBus.post(StreamingListenerBatchCompleted(jobSet.toBatchInfo))
        }
      case Failure(e) =>
        reportError("Error running job " + job, e)
    }
  }
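Putting the two classes together, the per-batch flow is roughly: a timer inside JobGenerator fires once per batchDuration, the generator asks the DStreamGraph for that batch's jobs, bundles them into a JobSet and calls submitJobSet; each Job then runs on the thread pool via JobHandler, and its completion event drives handleJobCompletion above. The snippet below is only a simplified illustration of JobGenerator's per-batch step (internals not shown earlier are assumptions), not the exact source:

  // Simplified sketch of JobGenerator's per-batch step (not the exact source)
  private def generateJobs(time: Time) {
    // 1. Ask every registered output stream for its Job for this batch
    val jobs = ssc.graph.generateJobs(time)
    // 2. Hand the whole batch to the JobScheduler
    jobScheduler.submitJobSet(new JobSet(time, jobs))
    // 3. submitJobSet -> jobExecutor.execute(new JobHandler(job)) -> job.run()
    //    -> JobCompleted event -> handleJobCompletion(job) shown above
  }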








