Hive-Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce
This error was thrown while running Hive on Spark.
Solutions
a. Increase spark.yarn.executor.memoryOverhead, e.g. set spark.yarn.executor.memoryOverhead=512 (the value is a number of MB). This is a stopgap: executor memory + memoryOverhead cannot exceed what the cluster can allocate per container. A session sketch follows this list.
b. The root cause can also be OS-level virtual-memory accounting: the job occupies little physical memory, yet the virtual-memory check decides it is out of memory. In that case the check can be disabled by setting yarn.nodemanager.vmem-check-enabled=false on the NodeManagers (see the yarn-site.xml sketch below). Note that the message above reports physical memory, so this fix applies when the error cites virtual memory limits.
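For (a), a note on how the numbers combine: on YARN, each executor container requests executor memory plus spark.yarn.executor.memoryOverhead, and in the Spark 1.x/2.x releases that still use this property the overhead defaults to max(384 MB, 10% of executor memory). The "9 GB" in the message above is that sum, so either the overhead or the container ceiling has to grow. A minimal session sketch, with illustrative values that are not from the original post:

    -- Hive-on-Spark session settings (illustrative values, not from the post).
    -- Container request = executor memory + overhead = 6g + 4096m = 10g;
    -- yarn.scheduler.maximum-allocation-mb must be at least this large.
    set spark.executor.memory=6g;
    set spark.yarn.executor.memoryOverhead=4096;  -- bare number is interpreted as MB

These spark.* settings only take effect when the Spark session backing the Hive query is (re)started.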
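For (b), yarn.nodemanager.vmem-check-enabled is a NodeManager setting, not a session setting: it belongs in yarn-site.xml on every NodeManager and takes effect after a NodeManager restart. A sketch:

    <!-- yarn-site.xml: disable the NodeManager's virtual-memory check -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

A gentler alternative is to raise yarn.nodemanager.vmem-pmem-ratio (default 2.1) rather than disabling the check outright.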