Spark 2.1.1

Recently a Spark job (Spark on YARN) failed with the following error:

Diagnostics: Container [pid=5901,containerID=container_1542879939729_30802_01_000001] is running beyond physical memory limits. Current usage: 11.0 GB of 11 GB physical memory used; 12.2 GB of 23.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1542879939729_30802_01_000001 :
|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
|- 5901 5899 5901 5901 (bash) 3 4 115843072 361 /bin/bash -c LD_LIBRARY_PATH=/export/App/hadoop-2.6.1/lib/native::/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native::/export/App/hadoop-2.6.1/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native:/usr/lib/hadoop/lib/native::/export/App/hadoop-2.6.1/lib/native:/export/App/hadoop-2.6.1/lib/native /export/App/jdk1.8.0_60/bin/java -server -Xmx10240m -Djava.io.tmpdir=/export/Data/tmp/hadoop-tmp/nm-local-dir/usercache/hadoop/appcache/application_1542879939729_30802/container_1542879939729_30802_01_000001/tmp '-XX:+PrintGCDetails' '-XX:+UseG1GC' '-XX:G1HeapRegionSize=32M' '-XX:+UseGCOverheadLimit' '-XX:+ExplicitGCInvokesConcurrent' '-XX:+HeapDumpOnOutOfMemoryError' '-XX:-UseCompressedClassPointers' '-XX:CompressedClassSpaceSize=3G' '-XX:+PrintGCTimeStamps' '-Xloggc:/export/Logs/hadoop/g1gc.log' -Dspark.yarn.app.container.log.dir=/export/Logs/hadoop/userlogs/application_1542879939729_30802/container_1542879939729_30802_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class 'app.package.AppClass' --jar file:/jarpath/app.jar --properties-file /export/Data/tmp/hadoop-tmp/nm-local-dir/usercache/hadoop/appcache/application_1542879939729_30802/container_1542879939729_30802_01_000001/__spark_conf__/__spark_conf__.properties 1> /export/Logs/hadoop/userlogs/application_1542879939729_30802/container_1542879939729_30802_01_000001/stdout 2> /export/Logs/hadoop/userlogs/application_1542879939729_30802/container_1542879939729_30802_01_000001/stderr
|- 6406 5901 5901 5901 (java) 1834301 372741 13026095104 2888407 /export/App/jdk1.8.0_60/bin/java -server -Xmx10240m -Djava.io.tmpdir=/export/Data/tmp/hadoop-tmp/nm-local-dir/usercache/hadoop/appcache/application_1542879939729_30802/container_1542879939729_30802_01_000001/tmp -XX:+PrintGCDetails -XX:+UseG1GC -XX:G1HeapRegionSize=32M -XX:+UseGCOverheadLimit -XX:+ExplicitGCInvokesConcurrent -XX:+HeapDumpOnOutOfMemoryError -XX:-UseCompressedClassPointers -XX:CompressedClassSpaceSize=3G -XX:+PrintGCTimeStamps -Xloggc:/export/Logs/hadoop/g1gc.log -Dspark.yarn.app.container.log.dir=/export/Logs/hadoop/userlogs/application_1542879939729_30802/container_1542879939729_30802_01_000001 org.apache.spark.deploy.yarn.ApplicationMaster --class app.package.AppClass --jar file:/jarpath/app.jar --properties-file /export/Data/tmp/hadoop-tmp/nm-local-dir/usercache/hadoop/appcache/application_1542879939729_30802/container_1542879939729_30802_01_000001/__spark_conf__/__spark_conf__.properties
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt

From containerID=container_1542879939729_30802_01_000001 and the main class org.apache.spark.deploy.yarn.ApplicationMaster, we can tell this container is YARN's ApplicationMaster, which in cluster mode runs the Spark driver.

The puzzle: the job was submitted with --driver-memory 10g, and the launch command indeed contains -Xmx10240m, so why was the container killed for exceeding 11 GB?

Container [pid=5901,containerID=container_1542879939729_30802_01_000001] is running beyond physical memory limits. Current usage: 11.0 GB of 11 GB physical memory used;

Tracing the Spark job submission process (see https://www.cnblogs.com/barneywill/p/9820684.html for details):

org.apache.spark.launcher.SparkSubmitCommandBuilder

String tsMemory =
    isThriftServer(mainClass) ? System.getenv("SPARK_DAEMON_MEMORY") : null;
String memory = firstNonEmpty(tsMemory, config.get(SparkLauncher.DRIVER_MEMORY),
    System.getenv("SPARK_DRIVER_MEMORY"), System.getenv("SPARK_MEM"), DEFAULT_MEM);
cmd.add("-Xmx" + memory);

Here the driver memory value is resolved with a priority order, implemented by firstNonEmpty: SPARK_DAEMON_MEMORY (Thrift server only), then spark.driver.memory, then the SPARK_DRIVER_MEMORY and SPARK_MEM environment variables, then the default.
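
To make that resolution order concrete, here is a minimal Scala sketch; the firstNonEmpty helper below is a hypothetical re-implementation (the real one is a Java utility in Spark's launcher module), and the values are illustrative:

// Hypothetical re-implementation of the resolution order above.
def firstNonEmpty(candidates: String*): Option[String] =
  candidates.find(s => s != null && s.nonEmpty)

val memory = firstNonEmpty(
  null,                                           // SPARK_DAEMON_MEMORY (Thrift server only)
  "10g",                                          // spark.driver.memory / --driver-memory
  sys.env.getOrElse("SPARK_DRIVER_MEMORY", null),
  sys.env.getOrElse("SPARK_MEM", null)
).getOrElse("1g")                                 // DEFAULT_MEM

println(s"-Xmx$memory")                           // prints -Xmx10g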

org.apache.spark.deploy.SparkSubmit

// In yarn-cluster mode, use yarn.Client as a wrapper around the user class
if (isYarnCluster) {
  childMainClass = "org.apache.spark.deploy.yarn.Client"

When --master yarn is used in cluster mode, the class actually submitted is org.apache.spark.deploy.yarn.Client.

org.apache.spark.deploy.yarn.Client

// AM related configurations
private val amMemory = if (isClusterMode) {
  sparkConf.get(DRIVER_MEMORY).toInt
} else {
  sparkConf.get(AM_MEMORY).toInt
}
private val amMemoryOverhead = {
  val amMemoryOverheadEntry = if (isClusterMode) DRIVER_MEMORY_OVERHEAD else AM_MEMORY_OVERHEAD
  sparkConf.get(amMemoryOverheadEntry).getOrElse(
    math.max((MEMORY_OVERHEAD_FACTOR * amMemory).toLong, MEMORY_OVERHEAD_MIN)).toInt
}
private val amCores = if (isClusterMode) {
  sparkConf.get(DRIVER_CORES)
} else {
  sparkConf.get(AM_CORES)
}

// Executor related configurations
private val executorMemory = sparkConf.get(EXECUTOR_MEMORY)
private val executorMemoryOverhead = sparkConf.get(EXECUTOR_MEMORY_OVERHEAD).getOrElse(
  math.max((MEMORY_OVERHEAD_FACTOR * executorMemory).toLong, MEMORY_OVERHEAD_MIN)).toInt

This is where amMemoryOverhead and executorMemoryOverhead are computed.

val capability = Records.newRecord(classOf[Resource])
capability.setMemory(amMemory + amMemoryOverhead)
capability.setVirtualCores(amCores)

Spark then requests the AM container from YARN with capacity amMemory + amMemoryOverhead.
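
Plugging this job's numbers into the formula gives exactly the 11 GB limit from the error; a quick sanity check (illustration only, mirroring Client.scala's formula above):

// Sanity check with this job's numbers.
val MEMORY_OVERHEAD_FACTOR = 0.10
val MEMORY_OVERHEAD_MIN = 384L

val amMemory = 10 * 1024                  // --driver-memory 10g, in MiB
val amMemoryOverhead = math.max(
  (MEMORY_OVERHEAD_FACTOR * amMemory).toLong, MEMORY_OVERHEAD_MIN)  // 1024 MiB

println(amMemory + amMemoryOverhead)      // 11264 MiB, i.e. the "11 GB" YARN enforces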

The relevant defaults and configuration entries:

org.apache.spark.deploy.yarn.YarnSparkHadoopUtil

object YarnSparkHadoopUtil {
  // Additional memory overhead
  // 10% was arrived at experimentally. In the interest of minimizing memory waste while covering
  // the common cases. Memory overhead tends to grow with container size.
  val MEMORY_OVERHEAD_FACTOR = 0.10
  val MEMORY_OVERHEAD_MIN = 384L

org.apache.spark.deploy.yarn.config

private[spark] val DRIVER_MEMORY_OVERHEAD = ConfigBuilder("spark.yarn.driver.memoryOverhead")
  .bytesConf(ByteUnit.MiB)
  .createOptional

private[spark] val EXECUTOR_MEMORY_OVERHEAD = ConfigBuilder("spark.yarn.executor.memoryOverhead")
  .bytesConf(ByteUnit.MiB)
  .createOptional

So by default the driver container memory is determined as follows:

1 if spark.yarn.driver.memoryOverhead is configured, that value takes priority as the overhead;

2 the container request is driverMemory + overhead,

where by default overhead = math.max((0.1 * driverMemory).toLong, 384) (in MiB).

So with --driver-memory 10g, overhead = max(1024, 384) = 1024 MiB, and the container requested from YARN is 10240 + 1024 = 11264 MiB, i.e. 11 GB. The JVM heap (-Xmx10240m) plus off-heap memory (metaspace, thread stacks, direct buffers) eventually exceeded that 11 GB limit, so YARN killed the container.
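
If the driver genuinely needs more off-heap headroom, the cleaner fix is to raise the overhead rather than the heap. A hedged sketch with illustrative values; in practice these would be passed via spark-submit --conf, since the driver JVM's heap size must be fixed before it starts:

import org.apache.spark.SparkConf

// Illustrative values only: request a 10g heap + 3g overhead = 13312 MiB container
// instead of the default 10240 + 1024 = 11264 MiB.
val conf = new SparkConf()
  .set("spark.driver.memory", "10g")
  .set("spark.yarn.driver.memoryOverhead", "3072")  // MiB; Spark 2.1.x config key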
