Today, while running a Spark SQL job, I ran into the following error:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 50.0 failed 4 times, most recent failure: Lost task 3.3 in stage 50.0 (TID 9204, bjhw-feed-shunyi2513.bjhw.baidu.com, executor 20): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:165)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:93)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:88)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0
at org.apache.spark.memory.MemoryConsumer.throwOom(MemoryConsumer.java:157)
at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:98)
at org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.<init>(UnsafeInMemorySorter.java:128)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.<init>(UnsafeExternalSorter.java:161)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.create(UnsafeExternalSorter.java:128)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.<init>(UnsafeExternalRowSorter.java:108)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.create(UnsafeExternalRowSorter.java:93)
at org.apache.spark.sql.execution.SortExec.createSorter(SortExec.scala:87)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage22.init(Unknown Source)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(WholeStageCodegenExec.scala:611)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(WholeStageCodegenExec.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:847)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:847)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)

The key Spark SQL code is as follows:

spark.sql(
  s"""
     |select
     |  $baseDay,
     |  T1.type,
     |  T1.app_id,
     |  T1.name,
     |  T1.level,
     |  T1.business_line_id,
     |  T1.status,
     |  T1.quota,
     |  T1.real_flow,
     |  T1.real_show,
     |  T1.article_num,
     |  T1.baoliang_num,
     |  T1.baoliang_finish_num,
     |  T1.begin_time,
     |  T1.end_time,
     |  T2.fawen_num_bjh,
     |  T3.fawen_num_tth,
     |  T4.fenfa_num_bjh,
     |  T5.fenfa_num_tth,
     |  T6.fensi_num_bjh,
     |  T7.fensi_num_tth
     |from base_table T1
     |left outer join bjh_fawen T2 on T1.app_id = T2.app_id
     |left outer join tth_fawen T3 on T1.app_id = T3.app_id
     |left outer join bjh_fenfa T4 on T1.app_id = T4.app_id
     |left outer join tth_fenfa T5 on T1.app_id = T5.app_id
     |left outer join bjh_fensi_add T6 on T1.app_id = T6.app_id
     |left outer join tth_fensi_add T7 on T1.app_id = T7.app_id
   """.stripMargin)
  .rdd
  .map(_.mkString("\t"))
  .coalesce(5) // merge the output down to 5 partitions without a shuffle
  .saveAsTextFile(s"xxx")

Solution:

Remove the coalesce. Because coalesce(5) reduces the partition count without a shuffle, Spark folds it back into the preceding stage: the entire query, including all six joins and the sorts they require (note SortExec in the stack trace), ends up running in only 5 tasks. Each of those tasks then has to sort far more data than intended, until UnsafeExternalSorter can no longer acquire execution memory. Dropping the coalesce restores the stage's original parallelism; if only a few output files are needed, repartition(5) keeps the file count while inserting a shuffle boundary, as sketched below.
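
For reference, here is a minimal sketch of the two working variants. It assumes the SQL string above is held in a val named query (a name introduced here for illustration); the output path s"xxx" is the placeholder from the original code. Run one variant or the other, not both, since they write to the same path.

// Variant 1: drop coalesce entirely. The joins keep their natural
// parallelism, so no single task has to sort an oversized slice of data.
spark.sql(query).rdd
  .map(_.mkString("\t"))
  .saveAsTextFile(s"xxx")

// Variant 2: still produce only 5 output files, but via repartition,
// which inserts a shuffle boundary instead of folding the whole query
// into 5 tasks the way coalesce(5) does.
spark.sql(query).rdd
  .map(_.mkString("\t"))
  .repartition(5)
  .saveAsTextFile(s"xxx")

Variant 2 pays for an extra shuffle, but that is usually much cheaper than running a six-way join at a parallelism of 5.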

Reference:

https://www.e-learn.cn/content/wangluowenzhang/700757
