Hive-Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce
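To see where the 9 GB limit in the message is likely to come from, here is a rough back-of-the-envelope reading. It assumes spark.executor.memory=8g, which is not shown in this log but is consistent with the reported limit; the default overhead formula is max(0.10 * executorMemory, 384 MB), and YARN rounds container requests up to a multiple of yarn.scheduler.minimum-allocation-mb (1024 MB by default):

    default overhead       = max(0.10 * 8192 MB, 384 MB) ≈ 819 MB
    requested container    = 8192 MB + 819 MB = 9011 MB
    rounded up by YARN     = 9216 MB = 9 GB   (the limit reported above)
    actual physical usage  = 9.2 GB  ->  container killed

The overhead covers what the executor process needs outside the JVM heap (metaspace, thread stacks, off-heap and network buffers), which is why the message suggests boosting spark.yarn.executor.memoryOverhead rather than the heap itself.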
This error was thrown while running Hive on Spark.

Solutions

a. Increase spark.yarn.executor.memoryOverhead, e.g. set spark.yarn.executor.memoryOverhead=512 (the value is in MB). This is only a stopgap: executor memory plus memoryOverhead must not exceed the memory YARN can allocate for a single container on a node. See the configuration sketch after this list.
b. The problem can also be caused by OS-level virtual-memory allocation: the container's physical memory usage stays modest, but YARN's virtual-memory check sees an over-allocation and kills the container as OOM. In that case the check can be disabled by setting yarn.nodemanager.vmem-check-enabled=false, as shown below.
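A minimal configuration sketch of both fixes follows. The sizes are illustrative, not taken from this log, and need to be tuned to the actual node capacity.

Fix a, applied from the Hive (Beeline) session before the first query, since Hive on Spark picks up spark.* settings when it creates the Spark session:

    set spark.executor.memory=8g;
    -- overhead is given in MB; executor memory + overhead must still fit in one YARN container
    set spark.yarn.executor.memoryOverhead=2048;

Fix b, in yarn-site.xml on every NodeManager (the NodeManagers must be restarted for it to take effect):

    <!-- stop the NodeManager from killing containers on the virtual-memory check -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

Note that the message in this log reports physical memory, which is governed by the separate yarn.nodemanager.pmem-check-enabled check; disabling checks only hides the symptom, so raising the overhead (or lowering executor memory) is usually the safer first step.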