Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task  in stage 0.0 failed  times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor  exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce

Error reported when running Hive on Spark

Solution
a. Increase spark.yarn.executor.memoryOverhead, e.g. set spark.yarn.executor.memoryOverhead=512 (the value is in MB on Spark 1.x/2.x). This is a stopgap: the YARN container limit is executor-memory plus this overhead (the overhead defaults to max(384 MB, 10% of executor memory)), and their sum must not exceed the memory available on the cluster's nodes. See the sketch after this list.
b. The failure can also come from OS-level virtual-memory accounting: the container's physical memory usage is modest, but YARN's virtual-memory check trips and the container is reported as OOM. Disabling the virtual-memory check works around this: set yarn.nodemanager.vmem-check-enabled=false (see the snippet after this list).
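
A minimal sketch of both fixes. The property names are the standard Spark-on-YARN and YARN ones; the concrete sizes below are illustrative, not tuned values, and should be adjusted to the cluster.

    -- (a) From the beeline / Hive CLI session: raise the off-heap overhead.
    --     spark.yarn.executor.memoryOverhead takes a value in MB on Spark 1.x/2.x;
    --     8g of executor heap + 1024 MB overhead asks YARN for a ~9 GB container.
    set spark.executor.memory=8g;
    set spark.yarn.executor.memoryOverhead=1024;

    <!-- (b) In yarn-site.xml on every NodeManager (NodeManager restart required):
         stop YARN from killing containers that fail only the virtual-memory check. -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

Note that fix (a) takes effect for the next Spark session Hive opens, while fix (b) is a cluster-wide NodeManager setting and also suppresses the check for every other YARN application.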
