1 Detailed exception

ERROR scheduler.JobScheduler: Error running job streaming job  ms.
org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times,
most recent failure: Lost task 0.3 in stage 0.0 (TID , , executor ): ExecutorLostFailure (executor exited caused by one of the running tasks) Reason: Executor heartbeat timed out after 140927 ms
Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$.apply(RDD.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.RDD.foreachPartition(RDD.scala:)
at com.wm.bigdata.phoenix.etl.WmPhoniexEtlToHbase$$anonfun$main$.apply(WmPhoniexEtlToHbase.scala:)
at com.wm.bigdata.phoenix.etl.WmPhoniexEtlToHbase$$anonfun$main$.apply(WmPhoniexEtlToHbase.scala:)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$$$anonfun$apply$mcV$sp$.apply(DStream.scala:)
at org.apache.spark.streaming.dstream.DStream$$anonfun$foreachRDD$$$anonfun$apply$mcV$sp$.apply(DStream.scala:)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$$$anonfun$apply$mcV$sp$.apply$mcV$sp(ForEachDStream.scala:)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$$$anonfun$apply$mcV$sp$.apply(ForEachDStream.scala:)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$$$anonfun$apply$mcV$sp$.apply(ForEachDStream.scala:)
at org.apache.spark.streaming.dstream.DStream.createRDDWithLocalProperties(DStream.scala:)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$.apply$mcV$sp(ForEachDStream.scala:)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$.apply(ForEachDStream.scala:)
at org.apache.spark.streaming.dstream.ForEachDStream$$anonfun$.apply(ForEachDStream.scala:)
at scala.util.Try$.apply(Try.scala:)
at org.apache.spark.streaming.scheduler.Job.run(Job.scala:)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$.apply$mcV$sp(JobScheduler.scala:)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$.apply(JobScheduler.scala:)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler$$anonfun$run$.apply(JobScheduler.scala:)
at scala.util.DynamicVariable.withValue(DynamicVariable.scala:)
at org.apache.spark.streaming.scheduler.JobScheduler$JobHandler.run(JobScheduler.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
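
The com.wm.bigdata.phoenix.etl.WmPhoniexEtlToHbase frames show where the failure surfaced: a DStream.foreachRDD whose body calls rdd.foreachPartition, i.e. Spark Streaming writing each partition out to Phoenix/HBase. A minimal sketch of that shape — the names (writeToHbase, stream) and the write step are assumptions for illustration, not the original source:

import org.apache.spark.streaming.dstream.DStream

// Hypothetical reconstruction of the job shape seen in the stack trace.
def writeToHbase(stream: DStream[String]): Unit = {
  stream.foreachRDD { rdd =>
    rdd.foreachPartition { records =>
      // A heavy per-partition write can load the executor (long GC pauses,
      // saturated CPU) enough to delay its heartbeats past the timeout,
      // at which point the driver reports ExecutorLostFailure.
      records.foreach { record =>
        // upsert `record` into Phoenix/HBase here
      }
    }
  }
}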

2 Searching Stack Overflow for related Q&A

(Screenshot of the relevant Stack Overflow thread omitted.)
3 Solution
What happened: the executor failed to send a heartbeat to the driver within the allowed window, so the driver declared it lost and failed its running tasks. When submitting the job with spark-submit, increase the relevant timeout settings:
--conf spark.network.timeout  --conf spark.executor.heartbeatInterval=   --conf spark.driver.maxResultSize=4g
Note that spark.network.timeout (default 120s) is the window the driver allows between executor heartbeats, and spark.executor.heartbeatInterval (default 10s) must stay well below it.
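
If it is more convenient to set these in code instead of on the command line, the same properties can go on the SparkConf. The values below are illustrative assumptions, not the ones from the original submission; tune them to your workload:

import org.apache.spark.SparkConf

// Illustrative values only. Keep spark.executor.heartbeatInterval well
// below spark.network.timeout, or recent Spark versions reject the config.
val conf = new SparkConf()
  .set("spark.network.timeout", "600s")
  .set("spark.executor.heartbeatInterval", "60s")
  .set("spark.driver.maxResultSize", "4g")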
