Today, while testing spark-sql running on YARN, I happened to notice a problem in the logs:

spark-sql --master yarn
// :: INFO Client: Requesting a new application from cluster with  NodeManagers
// :: INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
// :: INFO Client: Will allocate AM container, with MB memory including MB overhead
// :: INFO Client: Setting up container launch context for our AM
// :: INFO Client: Preparing resources for our AM container
// :: INFO Client: Uploading resource file:/home/spark/software/source/compile/deploy_spark/assembly/target/scala-2.10/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar -> hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0093/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
// :: INFO Client: Setting up the launch environment for our AM container

Opening a second spark-sql command line, the logs show the same thing again:

// :: INFO Client: Requesting a new application from cluster with  NodeManagers
// :: INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster ( MB per container)
// :: INFO Client: Will allocate AM container, with MB memory including MB overhead
// :: INFO Client: Setting up container launch context for our AM
// :: INFO Client: Preparing resources for our AM container
// :: INFO Client: Uploading resource file:/home/spark/software/source/compile/deploy_spark/assembly/target/scala-2.10/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar -> hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0094/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
// :: INFO Client: Setting up the launch environment for our AM container

Then look at the files on HDFS:

hadoop fs -ls hdfs://hadoop000:8020/user/spark/.sparkStaging/
drwx------   - spark supergroup          -- : hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0093
drwx------   - spark supergroup          -- : hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0094

Every application uploads its own copy of spark-assembly-x.x.x-SNAPSHOT-hadoopx.x.x-cdhx.x.x.jar, which both wastes HDFS space and hurts HDFS performance.
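To see how much space this wastes, you can list the size of each application's staging directory (the output format varies slightly across Hadoop versions):

# each listed directory holds its own full copy of the assembly jar
hadoop fs -du hdfs://hadoop000:8020/user/spark/.sparkStaging/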

The Spark documentation (http://spark.apache.org/docs/latest/running-on-yarn.html) describes the spark.yarn.jar property, which addresses exactly this: store spark-assembly-xxxxx.jar at a shared HDFS location, here hdfs://hadoop000:8020/spark_lib/.
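Assuming the assembly jar is still at the local build path shown in the upload logs above, the one-time upload could look like this:

# create the shared lib directory and upload the assembly jar once
hadoop fs -mkdir -p hdfs://hadoop000:8020/spark_lib/
hadoop fs -put /home/spark/software/source/compile/deploy_spark/assembly/target/scala-2.10/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar hdfs://hadoop000:8020/spark_lib/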

Then add the property to spark-defaults.conf:

spark.yarn.jar hdfs://hadoop000:8020/spark_lib/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar

Start spark-sql --master yarn again and watch the logs:

14/12/29 15:39:02 INFO Client: Requesting a new application from cluster with 1 NodeManagers
14/12/29 15:39:02 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (8192 MB per container)
14/12/29 15:39:02 INFO Client: Will allocate AM container, with 896 MB memory including 384 MB overhead
14/12/29 15:39:02 INFO Client: Setting up container launch context for our AM
14/12/29 15:39:02 INFO Client: Preparing resources for our AM container
14/12/29 15:39:02 INFO Client: Source and destination file systems are the same. Not copying hdfs://hadoop000:8020/spark_lib/spark-assembly-1.3.0-SNAPSHOT-hadoop2.3.0-cdh5.0.0.jar
14/12/29 15:39:02 INFO Client: Setting up the launch environment for our AM container

Check the files on HDFS again:

hadoop fs -ls hdfs://hadoop000:8020/user/spark/.sparkStaging/application_1416381870014_0097

The staging directory of this application no longer contains spark-assembly-xxxxx.jar, so the assembly upload is skipped and the HDFS space is saved.

During testing I also ran into an error like the following:

Application application_xxxxxxxxx_yyyy failed 2 times due to AM Container for application_xxxxxxxxx_yyyy exited with exitCode: -1000 due to: java.io.FileNotFoundException: File /tmp/hadoop-spark/nm-local-dir/filecache does not exist

Creating a filecache folder under /tmp/hadoop-spark/nm-local-dir fixes the error.
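Note that nm-local-dir is a NodeManager-local path, not an HDFS path, so the folder has to exist on every NodeManager host:

# run on each NodeManager host, not on HDFS
mkdir -p /tmp/hadoop-spark/nm-local-dir/filecache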
