Spark source code analysis:

https://yq.aliyun.com/articles/28400?utm_campaign=wenzhang&utm_medium=article&utm_source=QQ-qun&utm_content=m_11999

Spark shuffle:

http://blog.csdn.net/johnny_lee/article/details/22619585

Spark java.lang.OutOfMemoryError: Java heap space

My cluster: 1 master, 11 slaves, each node has 6 GB memory.

My settings:
spark.executor.memory=4g, -Dspark.akka.frameSize=512
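(For context: these settings correspond to a SparkConf along the lines of the sketch below. The app name is made up, and spark.akka.frameSize, in MB, only applies to the Akka-based 1.x releases this question concerns.)

import org.apache.spark.{SparkConf, SparkContext}

// Sketch: the equivalent of spark.executor.memory=4g and -Dspark.akka.frameSize=512
val conf = new SparkConf()
  .setAppName("imageBundleReconstruction")   // made-up application name
  .set("spark.executor.memory", "4g")        // heap per executor
  .set("spark.akka.frameSize", "512")        // MB; Spark 1.x only
val sc = new SparkContext(conf)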
Here is the problem:

First, I read some data (2.19 GB) from HDFS into an RDD:
val imageBundleRDD = sc.newAPIHadoopFile(...)
Second, do something on this RDD:

val res = imageBundleRDD.map(data => {
  val desPoints = threeDReconstruction(data._2, bg)
  (data._1, desPoints)
})
Last, output to HDFS:

res.saveAsNewAPIHadoopFile(...)
When I run my program it shows:

.....
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:24 as TID 33 on executor 9: Salve7.Hadoop (NODE_LOCAL)
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:24 as 30618515 bytes in 210 ms
14/01/15 21:42:27 INFO cluster.ClusterTaskSetManager: Starting task 1.0:36 as TID 34 on executor 2: Salve11.Hadoop (NODE_LOCAL)
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Serialized task 1.0:36 as 30618515 bytes in 449 ms
14/01/15 21:42:28 INFO cluster.ClusterTaskSetManager: Starting task 1.0:32 as TID 35 on executor 7: Salve4.Hadoop (NODE_LOCAL)
Uncaught error from thread [spark-akka.actor.default-dispatcher-3] shutting down JVM since 'akka.jvm-exit-on-fatal-error' is enabled for ActorSystem[spark]

Answer 1:

I have a few suggestions:

  • If your nodes are configured to have 6g maximum for Spark (and are leaving a little for other processes), then use 6g rather than 4g: spark.executor.memory=6g. Make sure you're using as much memory as possible by checking the UI (it will say how much memory you're using).
  • Try using more partitions; you should have 2-4 per CPU. IME increasing the number of partitions is often the easiest way to make a program more stable (and often faster). For huge amounts of data you may need way more than 4 per CPU; I've had to use 8000 partitions in some cases! (Several of these suggestions are illustrated in the sketch after this list.)
  • Decrease the fraction of memory reserved for caching, using spark.storage.memoryFraction. If you don't use cache() or persist() in your code, this might as well be 0. Its default is 0.6, which means you only get 0.4 * 4g of memory for your heap. IME reducing the memory fraction often makes OOMs go away. UPDATE: From Spark 1.6 apparently we will no longer need to play with these values, Spark will determine them automatically.
  • Similar to the above, but for the shuffle memory fraction (spark.shuffle.memoryFraction). If your job doesn't need much shuffle memory then set it to a lower value (this might cause your shuffles to spill to disk, which can have a catastrophic impact on speed). Sometimes when it's a shuffle operation that's OOMing you need to do the opposite, i.e. set it to something large, like 0.8, or make sure you allow your shuffles to spill to disk (that's the default since 1.0.0).
  • Watch out for memory leaks; these are often caused by accidentally closing over objects you don't need in your lambdas. The way to diagnose this is to look for the "task serialized as XXX bytes" lines in the logs: if XXX is more than a few KB, or into the MB range, you may have a memory leak. See http://stackoverflow.com/a/25270600/1586965
  • Related to the above: use broadcast variables if you really do need large objects.
  • If you are caching large RDDs and can sacrifice some access time, consider serialising the RDD (http://spark.apache.org/docs/latest/tuning.html#serialized-rdd-storage), or even caching them on disk (which sometimes isn't that bad if using SSDs).
  • (Advanced) Related to the above, avoid String and heavily nested structures (like Map and nested case classes). If possible, try to use only primitive types and index all non-primitives, especially if you expect a lot of duplicates. Choose WrappedArray over nested structures whenever possible. Or even roll your own serialisation: YOU will have the most information regarding how to efficiently pack your data into bytes, USE IT!
  • (Bit hacky) Again when caching, consider using a Dataset to cache your structure, as it will use more efficient serialisation. This should be regarded as a hack when compared to the previous bullet point. Building your domain knowledge into your algo/serialisation can minimise memory/cache-space by 100x or 1000x, whereas all a Dataset will likely give you is 2x-5x in memory and 10x compressed (Parquet) on disk.
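To make several of these points concrete, here is a small sketch (Scala, using the 1.x-era property names discussed above; the input data, partition count and broadcast lookup are made-up placeholders, not the asker's actual job). It shows more partitions, lowered storage/shuffle fractions, a broadcast variable instead of a large closure, and serialised caching; the Dataset-based caching from the last bullet is not shown, as it assumes the newer Dataset API.

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Sketch only: 1.x-era property names; from Spark 1.6 onwards unified memory
// management tunes the storage/shuffle split automatically.
val conf = new SparkConf()
  .setAppName("oom-tuning-sketch")
  .set("spark.executor.memory", "6g")          // use what the node can actually give
  .set("spark.storage.memoryFraction", "0.1")  // job barely uses cache()/persist()
  .set("spark.shuffle.memoryFraction", "0.2")  // raise instead if the shuffle itself OOMs
val sc = new SparkContext(conf)

// Placeholder input standing in for the question's imageBundleRDD.
val input = sc.parallelize(1 to 1000000).map(i => (i.toString, i))

// 2-4 partitions per core: e.g. 11 slaves * 6 cores * 4 (purely illustrative).
val repartitioned = input.repartition(264)

// Ship a large read-only object to executors once, instead of serialising it
// into every task closure (watch the "serialized task ... bytes" log lines).
val lookup = sc.broadcast((0 until 1000).map(i => i -> i * i).toMap)

val res = repartitioned.map { case (k, v) =>
  (k, lookup.value.getOrElse(v % 1000, 0))
}

// Cache in serialised form: cheaper to hold on the heap, slower to read back.
res.persist(StorageLevel.MEMORY_ONLY_SER)
res.count()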

http://spark.apache.org/docs/1.2.1/configuration.html

EDIT: (So I can google myself easier) The following is also indicative of this problem:

java.lang.OutOfMemoryError : GC overhead limit exceeded

Answer 2:

Have a look at the start-up scripts: a Java heap size is set there, and it looks like you're not setting this before running the Spark worker.

# Set SPARK_MEM if it isn't already set since we also use it for this process
SPARK_MEM=${SPARK_MEM:-512m}
export SPARK_MEM

# Set JAVA_OPTS to be able to load native libraries and to set heap size
JAVA_OPTS="$OUR_JAVA_OPTS"
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$SPARK_LIBRARY_PATH"
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"

You can find the documentation for the deploy scripts here.

 
