Today, while running a Spark SQL job, I ran into the following error:

Caused by: org.apache.spark.SparkException: Job aborted due to stage failure: Task 3 in stage 50.0 failed 4 times, most recent failure: Lost task 3.3 in stage 50.0 (TID 9204, bjhw-feed-shunyi2513.bjhw.baidu.com, executor 20): org.apache.spark.SparkException: Task failed while writing rows
at org.apache.spark.internal.io.SparkHadoopWriter$.org$apache$spark$internal$io$SparkHadoopWriter$$executeTask(SparkHadoopWriter.scala:165)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:93)
at org.apache.spark.internal.io.SparkHadoopWriter$$anonfun$3.apply(SparkHadoopWriter.scala:88)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
at org.apache.spark.scheduler.Task.run(Task.scala:112)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.memory.SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0
at org.apache.spark.memory.MemoryConsumer.throwOom(MemoryConsumer.java:157)
at org.apache.spark.memory.MemoryConsumer.allocateArray(MemoryConsumer.java:98)
at org.apache.spark.util.collection.unsafe.sort.UnsafeInMemorySorter.<init>(UnsafeInMemorySorter.java:128)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.<init>(UnsafeExternalSorter.java:161)
at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.create(UnsafeExternalSorter.java:128)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.<init>(UnsafeExternalRowSorter.java:108)
at org.apache.spark.sql.execution.UnsafeExternalRowSorter.create(UnsafeExternalRowSorter.java:93)
at org.apache.spark.sql.execution.SortExec.createSorter(SortExec.scala:87)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage22.init(Unknown Source)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(WholeStageCodegenExec.scala:611)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$10.apply(WholeStageCodegenExec.scala:608)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:847)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsWithIndex$1$$anonfun$apply$26.apply(RDD.scala:847)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)

The relevant Spark SQL code is as follows:

spark.sql(
  s"""
     |select
     |  $baseDay,
     |  T1.type,
     |  T1.app_id,
     |  T1.name,
     |  T1.level,
     |  T1.business_line_id,
     |  T1.status,
     |  T1.quota,
     |  T1.real_flow,
     |  T1.real_show,
     |  T1.article_num,
     |  T1.baoliang_num,
     |  T1.baoliang_finish_num,
     |  T1.begin_time,
     |  T1.end_time,
     |  T2.fawen_num_bjh,
     |  T3.fawen_num_tth,
     |  T4.fenfa_num_bjh,
     |  T5.fenfa_num_tth,
     |  T6.fensi_num_bjh,
     |  T7.fensi_num_tth
     |from base_table T1
     |left outer join bjh_fawen     T2 on T1.app_id = T2.app_id
     |left outer join tth_fawen     T3 on T1.app_id = T3.app_id
     |left outer join bjh_fenfa     T4 on T1.app_id = T4.app_id
     |left outer join tth_fenfa     T5 on T1.app_id = T5.app_id
     |left outer join bjh_fensi_add T6 on T1.app_id = T6.app_id
     |left outer join tth_fensi_add T7 on T1.app_id = T7.app_id
  """.stripMargin)
  .rdd
  .map(_.mkString("\t"))   // one tab-separated line per row
  .coalesce(5)             // <- the culprit (see the solution below)
  .saveAsTextFile(s"xxx")  // output path elided in the original post
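
A quick way to see why coalesce matters here is to print the lineage of the write-side RDD. This is a hypothetical diagnostic snippet, with query standing in for the SQL string above; coalesce creates a narrow dependency, so no shuffle boundary separates it from the upstream work:

// Hypothetical diagnostic: inspect the lineage of the RDD being written.
// `query` stands for the SQL string shown above.
val out = spark.sql(query).rdd
  .map(_.mkString("\t"))
  .coalesce(5)

// The CoalescedRDD sits directly on top of the MapPartitionsRDDs -- there
// is no ShuffledRDD in between, so the joins, the sort, and the write all
// run inside the same 5 tasks.
println(out.toDebugString)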

Solution:

Remove the coalesce. coalesce(5) does not introduce a shuffle: it merely narrows the partitioning, so Spark collapses the entire upstream computation -- the six left outer joins and the sorts they require (the SortExec / UnsafeExternalSorter visible in the stack trace) -- into just 5 tasks. Each task then has to sort a far larger slice of the data, cannot acquire enough execution memory, and fails with SparkOutOfMemoryError: Unable to acquire 65536 bytes of memory, got 0. Dropping coalesce lets the heavy stage run at its natural parallelism; if a small number of output files is still needed, repartition(5) is the safer option, because it puts a shuffle boundary between the expensive computation and the narrow write.
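
A minimal sketch of the fixed write path, under the same assumptions (query is the SQL string shown earlier, introduced here for brevity; s"xxx" is the placeholder output path from the original post):

// Fix as described above: drop coalesce, so the joins and sorts keep the
// stage's natural parallelism and no single task has to sort an
// oversized partition.
spark.sql(query).rdd
  .map(_.mkString("\t"))
  .saveAsTextFile(s"xxx")

// Alternative if a small number of output files is still required:
// repartition(5) inserts a shuffle boundary, so the expensive upstream
// stage still runs at full parallelism and only the final write is
// squeezed into 5 tasks (at the cost of one extra shuffle).
spark.sql(query).rdd
  .map(_.mkString("\t"))
  .repartition(5)
  .saveAsTextFile(s"xxx")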

Reference:

https://www.e-learn.cn/content/wangluowenzhang/700757
