ExecutorBackend

A very simple interface:

package org.apache.spark.executor

/**
 * A pluggable interface used by the Executor to send updates to the cluster scheduler.
 */
private[spark] trait ExecutorBackend {
  def statusUpdate(taskId: Long, state: TaskState, data: ByteBuffer)
}
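StandaloneExecutorBackend below is one implementation of this trait; the Mesos-specific backend implements the same contract. As a minimal sketch of how the contract is used, here is a toy backend that only logs updates (LoggingExecutorBackend and the String stand-in for TaskState are hypothetical, for illustration only):

import java.nio.ByteBuffer

object BackendSketch {
  // Stand-in for Spark's internal TaskState enumeration
  // (LAUNCHING, RUNNING, FINISHED, FAILED, KILLED, LOST)
  type TaskState = String

  trait ExecutorBackend {
    def statusUpdate(taskId: Long, state: TaskState, data: ByteBuffer)
  }

  // Hypothetical backend that logs updates instead of forwarding them to a scheduler
  class LoggingExecutorBackend extends ExecutorBackend {
    override def statusUpdate(taskId: Long, state: TaskState, data: ByteBuffer) {
      println("task " + taskId + " -> " + state + " (" + data.limit + " byte payload)")
    }
  }
}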

 

StandaloneExecutorBackend

Maintains the executor, and is responsible for registering the executor with the driver and for the communication between executor and driver.

private[spark] class StandaloneExecutorBackend(
    driverUrl: String,
    executorId: String,
    hostPort: String,
    cores: Int)
  extends Actor
  with ExecutorBackend
  with Logging {

  var executor: Executor = null
  var driver: ActorRef = null

  override def preStart() {
    logInfo("Connecting to driver: " + driverUrl)
    driver = context.actorFor(driverUrl)                   // Create an actor ref for the driver, so we can talk to it
    driver ! RegisterExecutor(executorId, hostPort, cores) // Register this executor with the driver
  }

  override def receive = {
    case RegisteredExecutor(sparkProperties) =>
      logInfo("Successfully registered with driver")
      // Make this host instead of hostPort ?
      executor = new Executor(executorId, Utils.parseHostPort(hostPort)._1, sparkProperties) // Registration succeeded, so create the Executor

    case RegisterExecutorFailed(message) =>
      logError("Slave registration failed: " + message)
      System.exit(1)

    case LaunchTask(taskDesc) =>
      logInfo("Got assigned task " + taskDesc.taskId)
      if (executor == null) {
        logError("Received launchTask but executor was null")
        System.exit(1)
      } else {
        executor.launchTask(this, taskDesc.taskId, taskDesc.serializedTask) // Call executor.launchTask to start the task
      }

    case Terminated(_) | RemoteClientDisconnected(_, _) | RemoteClientShutdown(_, _) =>
      logError("Driver terminated or disconnected! Shutting down.")
      System.exit(1)
  }

  override def statusUpdate(taskId: Long, state: TaskState, data: ByteBuffer) {
    driver ! StatusUpdate(executorId, taskId, state, data) // When a task's state changes, report it to the driver actor
  }
}
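Putting the pieces together, the protocol between backend and driver is small: preStart sends RegisterExecutor; the driver answers with RegisteredExecutor or RegisterExecutorFailed; once registered, the driver pushes LaunchTask messages and the backend streams StatusUpdate back. A minimal, Akka-free sketch of these messages (field types simplified; not Spark's actual definitions):

object StandaloneProtocolSketch {
  // Backend -> driver
  sealed trait ToDriver
  case class RegisterExecutor(executorId: String, hostPort: String, cores: Int) extends ToDriver
  case class StatusUpdate(executorId: String, taskId: Long, state: String, data: Array[Byte]) extends ToDriver

  // Driver -> backend
  sealed trait ToExecutor
  case class RegisteredExecutor(sparkProperties: Seq[(String, String)]) extends ToExecutor
  case class RegisterExecutorFailed(message: String) extends ToExecutor
  case class LaunchTask(taskId: Long, serializedTask: Array[Byte]) extends ToExecutor
}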

Executor

The Executor maintains a thread pool, so it can run multiple tasks at once, depending on the number of cores.

So launchTask just hands a TaskRunner to the thread pool to run on one of its threads (see the pool sketch after the code below).

private[spark] class Executor(
    executorId: String,
    slaveHostname: String,
    properties: Seq[(String, String)])
  extends Logging {

  // Initialize Spark environment (using system properties read above)
  val env = SparkEnv.createFromSystemProperties(executorId, slaveHostname, 0, false, false)
  SparkEnv.set(env)

  // Start worker thread pool
  val threadPool = new ThreadPoolExecutor(
    1, 128, 600, TimeUnit.SECONDS, new SynchronousQueue[Runnable])

  def launchTask(context: ExecutorBackend, taskId: Long, serializedTask: ByteBuffer) {
    threadPool.execute(new TaskRunner(context, taskId, serializedTask))
  }
}
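Note what the pool parameters mean: a SynchronousQueue never buffers tasks, so each execute() either hands the TaskRunner to an idle worker thread or spawns a new one, up to the 128-thread maximum (beyond that, submission is rejected); idle threads are reclaimed after 600 seconds. A small self-contained sketch of this behavior (PoolDemo is illustrative only):

import java.util.concurrent.{SynchronousQueue, ThreadPoolExecutor, TimeUnit}

object PoolDemo extends App {
  val pool = new ThreadPoolExecutor(
    1, 128, 600, TimeUnit.SECONDS, new SynchronousQueue[Runnable])

  // Tasks submitted while earlier ones are still running each get
  // their own thread, because the SynchronousQueue holds nothing
  for (i <- 1 to 4) {
    pool.execute(new Runnable {
      override def run() {
        println("task " + i + " on " + Thread.currentThread.getName)
        Thread.sleep(200)
      }
    })
  }
  pool.shutdown()
}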

 

TaskRunner

class TaskRunner(context: ExecutorBackend, taskId: Long, serializedTask: ByteBuffer)
  extends Runnable {

  override def run() {
    try {
      SparkEnv.set(env)
      Accumulators.clear()
      val (taskFiles, taskJars, taskBytes) = Task.deserializeWithDependencies(serializedTask) // Deserialize the task's dependencies
      updateDependencies(taskFiles, taskJars)
      val task = ser.deserialize[Task[Any]](taskBytes, Thread.currentThread.getContextClassLoader) // Deserialize the task itself
      attemptedTask = Some(task)
      logInfo("Its epoch is " + task.epoch)
      env.mapOutputTracker.updateEpoch(task.epoch)
      taskStart = System.currentTimeMillis()
      val value = task.run(taskId.toInt) // Call task.run to execute the real work
      val taskFinish = System.currentTimeMillis()
      val accumUpdates = Accumulators.values
      val result = new TaskResult(value, accumUpdates, task.metrics.getOrElse(null)) // Build the TaskResult
      val serializedResult = ser.serialize(result) // Serialize the TaskResult
      logInfo("Serialized size of result for " + taskId + " is " + serializedResult.limit)
      context.statusUpdate(taskId, TaskState.FINISHED, serializedResult) // Report completion, plus the TaskResult, to the driver via statusUpdate
      logInfo("Finished task ID " + taskId)
    } catch { // Handle the failure cases; these too are reported to the driver via statusUpdate
      case ffe: FetchFailedException => {
        val reason = ffe.toTaskEndReason
        context.statusUpdate(taskId, TaskState.FAILED, ser.serialize(reason))
      }

      case t: Throwable => {
        val serviceTime = (System.currentTimeMillis() - taskStart).toInt
        val metrics = attemptedTask.flatMap(t => t.metrics)
        for (m <- metrics) {
          m.executorRunTime = serviceTime
          m.jvmGCTime = getTotalGCTime - startGCTime
        }
        val reason = ExceptionFailure(t.getClass.getName, t.toString, t.getStackTrace, metrics)
        context.statusUpdate(taskId, TaskState.FAILED, ser.serialize(reason))
        // TODO: Should we exit the whole executor here? On the one hand, the failed task may
        // have left some weird state around depending on when the exception was thrown, but on
        // the other hand, maybe we could detect that when future tasks fail and exit then.
        logError("Exception in task ID " + taskId, t)
        //System.exit(1)
      }
    }
  }
}
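The jvmGCTime bookkeeping above depends on a getTotalGCTime helper and a startGCTime snapshot taken when the task starts. A plausible implementation via the JVM's management beans (an assumption sketched from the standard API, not copied from the Spark source):

import java.lang.management.ManagementFactory
import scala.collection.JavaConversions._

object GCTimeSketch {
  // Total accumulated GC time in milliseconds, summed over all collectors
  def getTotalGCTime: Long =
    ManagementFactory.getGarbageCollectorMXBeans.map(_.getCollectionTime).sum
}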
