Recently I have been studying Spark-related topics for a project and produced some technical notes along the way; I am posting them here as a record, in English as originally written.

There are some aspects the official documentation does not explain very clearly, especially certain details. Below are some further explanations based on the experiments I ran and the source code I read over the past few days.

(The official document link is http://spark.apache.org/docs/latest/job-scheduling.html)

  1. There are two different schedulers in the current Spark implementation; FIFO is the default setting and the original way Spark was implemented.
  2. Both the FIFO and FAIR schedulers support the basic functionality of running multiple jobs simultaneously; the prerequisite is that the jobs are submitted from separate threads (i.e., within a single thread, jobs execute in order). A minimal sketch of such multi-threaded submission follows this list.
  3. In the FIFO scheduler, jobs submitted earlier have higher priority and a better chance of obtaining resources than later jobs. That does not mean the first job always executes first: later jobs may run before earlier ones if the cluster's resources are not fully occupied. However, the FIFO scheduler has a worst case: if the jobs at the head of the queue are large, later jobs may suffer significant delay.
  4. The FAIR scheduler corresponds to the Hadoop Fair Scheduler and is an enhancement over FIFO. In FIFO fashion, only one factor, priority, is considered when ordering the SchedulableQueue, while in FAIR fashion more factors are considered, including minShare, runningTasks, and weight (you can reference the comparator code below if interested). Similarly, jobs do not always strictly follow the ordering computed by FairSchedulingAlgorithm, but as a whole, in my observation through concurrent JMeter tests, the FAIR scheduler with tuned parameters greatly reduces the delay of small jobs that suffered significantly in FIFO fashion.
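
As a minimal sketch of point 2 (the object and app names here are my own, not from the original post): each action triggers a job, and two actions issued from separate threads can be scheduled concurrently, whereas the same two actions issued from one thread would run back to back.

import org.apache.spark.{SparkConf, SparkContext}

object ParallelJobsSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(
      new SparkConf().setAppName("parallel-jobs-sketch").setMaster("local[4]"))

    // Each thread submits an independent job; SparkContext is thread-safe
    // for job submission, so both jobs can be in flight at the same time.
    val threads = (1 to 2).map { i =>
      new Thread(s"submitter-$i") {
        override def run(): Unit = {
          val n = sc.parallelize(1 to 1000000, numSlices = 8).map(_ * i).count()
          println(s"job $i finished, count = $n")
        }
      }
    }
    threads.foreach(_.start())
    threads.foreach(_.join())
    sc.stop()
  }
}

The scheduler source quoted next decides the order in which such concurrently submitted jobs obtain resources.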

private[spark] class FIFOSchedulingAlgorithm extends SchedulingAlgorithm {
  override def comparator(s1: Schedulable, s2: Schedulable): Boolean = {
    val priority1 = s1.priority
    val priority2 = s2.priority
    var res = math.signum(priority1 - priority2)
    if (res == 0) {
      val stageId1 = s1.stageId
      val stageId2 = s2.stageId
      res = math.signum(stageId1 - stageId2)
    }
    if (res < 0) {
      true
    } else {
      false
    }
  }
}
private[spark] class FairSchedulingAlgorithm extends SchedulingAlgorithm {
  override def comparator(s1: Schedulable, s2: Schedulable): Boolean = {
    val minShare1 = s1.minShare
    val minShare2 = s2.minShare
    val runningTasks1 = s1.runningTasks
    val runningTasks2 = s2.runningTasks
    val s1Needy = runningTasks1 < minShare1
    val s2Needy = runningTasks2 < minShare2
    val minShareRatio1 = runningTasks1.toDouble / math.max(minShare1, 1.0).toDouble
    val minShareRatio2 = runningTasks2.toDouble / math.max(minShare2, 1.0).toDouble
    val taskToWeightRatio1 = runningTasks1.toDouble / s1.weight.toDouble
    val taskToWeightRatio2 = runningTasks2.toDouble / s2.weight.toDouble
    var compare: Int = 0
    if (s1Needy && !s2Needy) {
      return true
    } else if (!s1Needy && s2Needy) {
      return false
    } else if (s1Needy && s2Needy) {
      compare = minShareRatio1.compareTo(minShareRatio2)
    } else {
      compare = taskToWeightRatio1.compareTo(taskToWeightRatio2)
    }
    if (compare < 0) {
      true
    } else if (compare > 0) {
      false
    } else {
      s1.name < s2.name
    }
  }
}
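To make the FAIR ordering rules concrete, here is a standalone sketch that replicates the comparator above on plain case classes (PoolStats and fairLessThan are my own names, not Spark API; runnable as a Scala script or in the REPL): a pool running fewer tasks than its minShare is "needy" and is always scheduled ahead of a non-needy one; ties fall back to minShare ratios, then task-to-weight ratios, then pool names.

// Standalone sketch, not Spark API: replicate FairSchedulingAlgorithm on dummy stats.
case class PoolStats(name: String, runningTasks: Int, minShare: Int, weight: Int)

def fairLessThan(s1: PoolStats, s2: PoolStats): Boolean = {
  val s1Needy = s1.runningTasks < s1.minShare
  val s2Needy = s2.runningTasks < s2.minShare
  if (s1Needy && !s2Needy) return true   // needy pool wins outright
  if (!s1Needy && s2Needy) return false
  val compare =
    if (s1Needy && s2Needy)
      // both needy: lower runningTasks / minShare ratio wins
      (s1.runningTasks.toDouble / math.max(s1.minShare, 1))
        .compareTo(s2.runningTasks.toDouble / math.max(s2.minShare, 1))
    else
      // neither needy: lower runningTasks / weight ratio wins
      (s1.runningTasks.toDouble / s1.weight)
        .compareTo(s2.runningTasks.toDouble / s2.weight)
  if (compare != 0) compare < 0 else s1.name < s2.name
}

// Pool "a" is below its minShare, pool "b" is not, so "a" is scheduled first:
val a = PoolStats("a", runningTasks = 1, minShare = 4, weight = 1)
val b = PoolStats("b", runningTasks = 10, minShare = 2, weight = 1)
println(fairLessThan(a, b)) // true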

  5. The pools in the FIFO and FAIR schedulers: in FIFO mode all jobs run in a single default pool, while in FAIR mode jobs can be grouped into named pools, each with its own weight, minShare, and internal scheduling mode (a configuration sketch follows).
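
A hedged configuration sketch for pools (the pool name "production" and the file path are hypothetical; spark.scheduler.mode, spark.scheduler.allocation.file, and spark.scheduler.pool are Spark's documented configuration keys): pools are declared in an XML allocation file, and each submitting thread opts into a pool via a local property.

<?xml version="1.0"?>
<allocations>
  <pool name="production">
    <schedulingMode>FAIR</schedulingMode>
    <weight>2</weight>
    <minShare>3</minShare>
  </pool>
</allocations>

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setAppName("fair-pools-sketch")
  .set("spark.scheduler.mode", "FAIR")
  .set("spark.scheduler.allocation.file", "/path/to/fairscheduler.xml")
val sc = new SparkContext(conf)

// Jobs submitted from this thread go into the "production" pool.
sc.setLocalProperty("spark.scheduler.pool", "production")
sc.parallelize(1 to 100).count()

// Clearing the property sends later jobs from this thread to the default pool.
sc.setLocalProperty("spark.scheduler.pool", null)

A pool with a higher weight receives a proportionally larger share of resources, and minShare guarantees it a minimum number of cores before the fair-share calculation applies, which matches the "needy" check in the comparator quoted above.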
