Hive-Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times, most recent failure: Lost task 3.3 in stage 0.0 (TID , hadoop7, executor ): ExecutorLostFailure (executor exited caused by one of the running tasks) Reason: Container killed by YARN for exceeding memory limits. 9.2 GB of 9 GB physical memory used. Consider boosting spark.yarn.executor.memoryOverhead.
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
ERROR : FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory.
INFO : Completed executing command(queryId=hive_20190529100107_063ed2a4-e3b0-48a9-9bcc-49acd51925c1); Time taken: 1441.753 seconds
Error: Error while processing statement: FAILED: Execution Error, return code from org.apache.hadoop.hive.ql.exec.spark.SparkTask. Spark job failed because of out of memory. (state=,code=)
Closing: : jdbc:hive2://hadoop1:10000/pdw_nameonce
This error occurred while running Hive on Spark.
Solutions
a. Increase spark.yarn.executor.memoryOverhead (for example to 512 MB) as a stopgap. Note that executor-memory + memoryOverhead cannot exceed the memory the cluster can allocate to a single container; see the session-settings sketch after this list.
b. The cause lies in OS-level virtual memory allocation: physical memory usage is not actually high, but the virtual-memory check still reports the container as out of memory, so the problem can also be solved by disabling that check with yarn.nodemanager.vmem-check-enabled=false; see the yarn-site.xml sketch below.
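A minimal sketch of the session-level settings for option (a), issued from beeline before re-running the query. The sizes below are illustrative assumptions, not values taken from the post; executor memory plus overhead must stay below yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb.

-- run in beeline before the failing query; sizes are examples only
set hive.execution.engine=spark;
-- per-executor JVM heap
set spark.executor.memory=6g;
-- per-executor off-heap overhead reserved in the YARN container, in MB
-- (this is the setting the error message suggests raising)
set spark.yarn.executor.memoryOverhead=2048;

With these example values each executor container is requested at roughly 6 GB + 2 GB = 8 GB, and that total is what has to fit under the per-container limit that the error reports as 9 GB.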
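For option (b), the property goes into yarn-site.xml on every NodeManager, followed by a NodeManager restart. A sketch, with the surrounding file contents assumed rather than taken from the post:

<property>
  <name>yarn.nodemanager.vmem-check-enabled</name>
  <value>false</value>
  <description>Do not kill containers for exceeding the virtual memory limit.</description>
</property>

A less drastic variant of the same idea is to raise yarn.nodemanager.vmem-pmem-ratio (default 2.1) instead of disabling the check entirely.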