The DAGScheduler ultimately builds task sets and submits them to the TaskScheduler, so first we need to look at how a task is defined and executed.
A Task is a unit of execution.

Task: the basic unit of work an executor runs, and the smallest unit of a Spark operation. It means essentially the same thing as a task submitted to a Java Executor.
/**
* A unit of execution. We have two kinds of Task's in Spark:
* - [[org.apache.spark.scheduler.ShuffleMapTask]]
* - [[org.apache.spark.scheduler.ResultTask]]
*
* A Spark job consists of one or more stages. The very last stage in a job consists of multiple
* ResultTasks, while earlier stages consist of ShuffleMapTasks. A ResultTask executes the task
* and sends the task output back to the driver application. A ShuffleMapTask executes the task
* and divides the task output to multiple buckets (based on the task's partitioner).
*
* @param stageId id of the stage this task belongs to
* @param partitionId index of the partition in the RDD
*/
private[spark] abstract class Task[T](val stageId: Int, var partitionId: Int) extends Serializable {
Key members:
  final def run(attemptId: Long): T = {
    context = new TaskContext(stageId, partitionId, attemptId, runningLocally = false)
    context.taskMetrics.hostname = Utils.localHostName()
    taskThread = Thread.currentThread()
    if (_killed) {
      kill(interruptThread = false)
    }
    runTask(context)
  }

  def runTask(context: TaskContext): T

  // Map output tracker epoch. Will be set by TaskScheduler.
  var epoch: Long = -1

  var metrics: Option[TaskMetrics] = None

  // Task context, to be initialized in run().
  @transient protected var context: TaskContext = _

  // The actual Thread on which the task is running, if any. Initialized in run().
  @volatile @transient private var taskThread: Thread = _
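
To see the split between run() and runTask() in isolation, here is a minimal, self-contained sketch. It is not Spark API: SimpleTask and SimpleContext are made-up names that only mimic the same template-method structure, where the final run() does the common bookkeeping and subclasses override only runTask().

// Illustrative sketch only -- mimics Task's shape, does not use Spark classes.
case class SimpleContext(stageId: Int, partitionId: Int, attemptId: Long)

abstract class SimpleTask[T](val stageId: Int, val partitionId: Int) extends Serializable {
  @transient protected var context: SimpleContext = _
  @volatile @transient private var taskThread: Thread = _

  // Final template method: every task goes through the same setup.
  final def run(attemptId: Long): T = {
    context = SimpleContext(stageId, partitionId, attemptId)
    taskThread = Thread.currentThread()
    runTask(context)
  }

  // The only hook subclasses implement; ShuffleMapTask and ResultTask differ here.
  def runTask(context: SimpleContext): T
}

object SimpleTaskDemo extends App {
  val task = new SimpleTask[Int](stageId = 0, partitionId = 3) {
    def runTask(ctx: SimpleContext): Int = ctx.partitionId * 10
  }
  println(task.run(attemptId = 1L)) // prints 30
}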

/**
* Handles transmission of tasks and their dependencies, because this can be slightly tricky. We
* need to send the list of JARs and files added to the SparkContext with each task to ensure that
* worker nodes find out about it, but we can't make it part of the Task because the user's code in
* the task might depend on one of the JARs. Thus we serialize each task as multiple objects, by
* first writing out its dependencies.
*/
private[spark] object Task {
/**
* Serialize a task and the current app dependencies (files and JARs added to the SparkContext)
*/
def serializeWithDependencies(
    task: Task[_],
    currentFiles: HashMap[String, Long],
    currentJars: HashMap[String, Long],
    serializer: SerializerInstance): ByteBuffer
/**
* Deserialize the list of dependencies in a task serialized with serializeWithDependencies,
* and return the task itself as a serialized ByteBuffer. The caller can then update its
* ClassLoaders and deserialize the task.
*
* @return (taskFiles, taskJars, taskBytes)
*/
def deserializeWithDependencies(serializedTask: ByteBuffer)
  : (HashMap[String, Long], HashMap[String, Long], ByteBuffer)
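
The comment above explains the ordering trick: dependency metadata is written before the task bytes so the executor can update its class loaders first. The following is only a sketch of that layout under assumed names (sketchSerializeWithDependencies is made up, and the real Spark wire format may encode things differently); it is not the private[spark] implementation itself.

import java.io.{ByteArrayOutputStream, DataOutputStream}
import java.nio.ByteBuffer

object DependencySerializationSketch {
  // Assumed layout: [#files][name, timestamp]* [#jars][name, timestamp]* [task bytes]
  def sketchSerializeWithDependencies(
      taskBytes: Array[Byte],
      files: Map[String, Long],   // file name -> timestamp, added via SparkContext.addFile
      jars: Map[String, Long]     // jar name -> timestamp, added via SparkContext.addJar
  ): ByteBuffer = {
    val buffer = new ByteArrayOutputStream()
    val out = new DataOutputStream(buffer)
    out.writeInt(files.size)
    for ((name, ts) <- files) { out.writeUTF(name); out.writeLong(ts) }
    out.writeInt(jars.size)
    for ((name, ts) <- jars) { out.writeUTF(name); out.writeLong(ts) }
    // The serialized task goes last, after all dependency metadata.
    out.write(taskBytes)
    out.flush()
    ByteBuffer.wrap(buffer.toByteArray)
  }
}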
ShuffleMapTask: the task corresponding to transformation operations; its main job is to produce the data that the action operation needs. In other words, it is the task the action depends on, so it has to run first.
/**
* A ShuffleMapTask divides the elements of an RDD into multiple buckets (based on a partitioner
* specified in the ShuffleDependency).
*
* See [[org.apache.spark.scheduler.Task]] for more information.
*
* @param stageId id of the stage this task belongs to
* @param taskBinary broadcast version of the RDD and the ShuffleDependency. Once deserialized,
* the type should be (RDD[_], ShuffleDependency[_, _, _]).
* @param partition partition of the RDD this task is associated with
* @param locs preferred task execution locations for locality scheduling
*/
private[spark] class ShuffleMapTask(
stageId: Int,
taskBinary: Broadcast[Array[Byte]],
partition: Partition,
@transient private var locs: Seq[TaskLocation])
extends Task[MapStatus](stageId, partition.index) with Logging {
  override def runTask(context: TaskContext): MapStatus = {
    // Deserialize the RDD using the broadcast variable.
    val ser = SparkEnv.get.closureSerializer.newInstance()
    val (rdd, dep) = ser.deserialize[(RDD[_], ShuffleDependency[_, _, _])](
      ByteBuffer.wrap(taskBinary.value), Thread.currentThread.getContextClassLoader)

    metrics = Some(context.taskMetrics)
    var writer: ShuffleWriter[Any, Any] = null
    try {
      val manager = SparkEnv.get.shuffleManager
      writer = manager.getWriter[Any, Any](dep.shuffleHandle, partitionId, context)
      writer.write(rdd.iterator(partition, context).asInstanceOf[Iterator[_ <: Product2[Any, Any]]])
      return writer.stop(success = true).get // the ShuffleWriter (e.g. HashShuffleWriter) returns the MapStatus
    } catch {
      case e: Exception =>
        if (writer != null) {
          writer.stop(success = false)
        }
        throw e
    } finally {
      context.markTaskCompleted()
    }
  }
}
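
As a driver-side illustration of when ShuffleMapTasks appear (a hypothetical example program, not code from this class): the reduceByKey below introduces a ShuffleDependency, so the first stage runs one ShuffleMapTask per input partition to write bucketed map output, and the final stage runs ResultTasks that fetch those buckets.

import org.apache.spark.{SparkConf, SparkContext}

object ShuffleMapTaskDemo {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("task-demo").setMaster("local[2]"))
    val words = sc.parallelize(Seq("a", "b", "a", "c", "b"), 2)
    // Stage 0: ShuffleMapTasks partition the (word, 1) pairs into shuffle buckets.
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _)
    // Stage 1 (final stage): ResultTasks read the shuffle output and return it to the driver.
    println(counts.collect().toSeq)
    sc.stop()
  }
}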
ResultTask: the task corresponding to an action operation; it sits at the leaf nodes of the dependency tree.
/**
* A task that sends back the output to the driver application.
*
* See [[Task]] for more information.
*
* @param stageId id of the stage this task belongs to
* @param taskBinary broadcasted version of the serialized RDD and the function to apply on each
* partition of the given RDD. Once deserialized, the type should be
* (RDD[T], (TaskContext, Iterator[T]) => U).
* @param partition partition of the RDD this task is associated with
* @param locs preferred task execution locations for locality scheduling
* @param outputId index of the task in this job (a job can launch tasks on only a subset of the
* input RDD's partitions).
*/
private[spark] class ResultTask[T, U](
stageId: Int,
taskBinary: Broadcast[Array[Byte]],
partition: Partition,
@transient locs: Seq[TaskLocation],
val outputId: Int)
extends Task[U](stageId, partition.index) with Serializable {
  override def runTask(context: TaskContext): U = {
    // Deserialize the RDD and the func using the broadcast variables.
    val ser = SparkEnv.get.closureSerializer.newInstance()
    val (rdd, func) = ser.deserialize[(RDD[T], (TaskContext, Iterator[T]) => U)](
      ByteBuffer.wrap(taskBinary.value), Thread.currentThread.getContextClassLoader)

    metrics = Some(context.taskMetrics)
    try {
      func(context, rdd.iterator(partition, context))
    } finally {
      context.markTaskCompleted()
    }
  }
}
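
The per-partition func that ResultTask.runTask applies is essentially what the driver passes to SparkContext.runJob (runJob wraps it so it also receives the TaskContext). A short illustration, assuming a SparkContext named sc as in the demo above:

val rdd = sc.parallelize(1 to 100, 4)
// One ResultTask per partition; each applies the function to its partition's iterator
// and ships the single result value back to the driver.
val partitionSums: Array[Int] = sc.runJob(rdd, (iter: Iterator[Int]) => iter.sum)
println(partitionSums.mkString(", "))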






