[Error] Caused by: org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0
Today, while running a Spark SQL job, I hit the following error:
Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 50.0 failed 4 times, most recent failure: Lost task 3.3 in stage 50.0 (TID 9204, bjhw-feed-shunyi2513.bjhw.baidu.com, executor 20): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:165)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:93)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:88)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0
at org.apache.spark.memory.MemoryConsumer.throwOom(MemoryConsumer.java:157)
at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:98)
at org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.<init>(UnsafeInMemorySorter.java:128)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.<init>(UnsafeExternalSorter.java:161)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.create(UnsafeExternalSorter.java:128)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.<init>(UnsafeExternalRowSorter.java:108)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.create(UnsafeExternalRowSorter.java:93)
at org.apache.spark.sql.execution.SortExec.createSorter(SortExec.scala:87)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage22.init(Unknown Source)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(WholeStageCodegenExec.scala:611)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(WholeStageCodegenExec.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:847)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:847)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
The relevant Spark SQL code is as follows:
spark.sql(
  s"""
     |select
     |$baseDay,
     |T1.type,
     |T1.app_id,
     |T1.name,
     |T1.level,
     |T1.business_line_id,
     |T1.status,
     |T1.quota,
     |T1.real_flow,
     |T1.real_show,
     |T1.article_num,
     |T1.baoliang_num,
     |T1.baoliang_finish_num,
     |T1.begin_time,
     |T1.end_time,
     |T2.fawen_num_bjh,
     |T3.fawen_num_tth,
     |T4.fenfa_num_bjh,
     |T5.fenfa_num_tth,
     |T6.fensi_num_bjh,
     |T7.fensi_num_tth
     |from
     |base_table T1
     |left outer join bjh_fawen T2 on T1.app_id=T2.app_id
     |left outer join tth_fawen T3 on T1.app_id=T3.app_id
     |left outer join bjh_fenfa T4 on T1.app_id=T4.app_id
     |left outer join tth_fenfa T5 on T1.app_id=T5.app_id
     |left outer join bjh_fensi_add T6 on T1.app_id=T6.app_id
     |left outer join tth_fensi_add T7 on T1.app_id=T7.app_id
   """.stripMargin)
  .rdd
  .map(r => r.mkString("\t"))
  .coalesce(5)
  .saveAsTextFile(s"xxx")
Solution: remove the coalesce call.

Why this helps: coalesce(5) merges partitions without inserting a shuffle, so Spark runs the entire final stage, including the sorts that back the six left outer joins (see the SortExec frame in the stack trace), with only 5 tasks. Each task therefore has to sort far more data than intended, and the executor's memory manager cannot grant the sorter even its initial 64 KB allocation, hence "Unable to acquire 65536 bytes of memory, got 0". Dropping coalesce restores the stage's original parallelism and the per-task memory footprint.
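If a small number of output files is still required, a common alternative is to swap coalesce for repartition. The following is a minimal sketch, not code from the original job: it assumes the SQL string built above is held in a variable named query (hypothetical), and it keeps the same elided output path. Unlike coalesce, repartition(5) inserts a shuffle boundary, so the join/sort stage keeps its original parallelism and only the final write runs with 5 tasks.

// Sketch: same pipeline as above, but with repartition instead of coalesce.
// `query` is a placeholder for the SQL string shown earlier.
spark.sql(query)
  .rdd
  .map(r => r.mkString("\t"))
  .repartition(5)          // forces a shuffle; upstream stage parallelism is preserved
  .saveAsTextFile(s"xxx")  // same elided output path as the original

The trade-off is one extra shuffle of the final result set, which is usually cheap next to the joins themselves. If the error persists even without coalesce, raising spark.executor.memory or spark.sql.shuffle.partitions gives each sort task more room, though that was not needed in this case.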
Reference:
https://www.e-learn.cn/content/wangluowenzhang/700757