Handling Spark Program Errors in IDEA
Error 1:
// :: ERROR Executor: Exception in task 0.0 in stage 0.0 (TID )
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
// :: ERROR TaskSetManager: Task in stage 0.0 failed times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times, most recent failure: Lost task 0.0 in stage 0.0 (TID , localhost, executor driver): java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:) Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$head$.apply(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$head$.apply(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$.apply(Dataset.scala:)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:)
at org.apache.spark.sql.Dataset.head(Dataset.scala:)
at org.apache.spark.sql.Dataset.take(Dataset.scala:)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at RDD_To_DataFrame$.main(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame.main(RDD_To_DataFrame.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
Caused by: java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Fix: switch the Scala SDK in IDEA to version 2.10.4.
This problem mainly appears when a Spark program uses a case class. A NoSuchMethodError on scala.Product.$init$ means the class was compiled against a different major Scala version than the scala-library on the runtime classpath, so the real requirement is that the project's Scala SDK match the Scala build of your Spark dependency.
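A minimal sketch of the pattern that triggers this error; the Person case class and the RDD_To_DataFrame object are taken from the stack trace, while the input file and field layout are assumptions:

    import org.apache.spark.sql.SparkSession

    case class Person(name: String, age: Int)

    object RDD_To_DataFrame {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("RDD_To_DataFrame")
          .master("local[*]")
          .getOrCreate()
        import spark.implicits._
        // Constructing a Person calls scala.Product.$init$ inside the generated
        // constructor; if the scala-library on the classpath is an older major
        // version than the compiler that produced this class, that call fails
        // at runtime with NoSuchMethodError.
        val peopleDF = spark.sparkContext
          .textFile("people.txt")                          // hypothetical input
          .map(_.split(","))
          .map(attrs => Person(attrs(0), attrs(1).trim.toInt))
          .toDF()
        peopleDF.show()
      }
    }

In practice this means keeping three things on the same Scala major version: the Scala SDK in IDEA, the scalaVersion in the build file, and the _2.xx suffix of the Spark artifacts (for example spark-core_2.11).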
Error 2:
Error:(, ) No TypeTag available for (Array[String],)
val documentDF= spark.createDataFrame(Seq(
Fix: switch the Scala SDK in IDEA to version 2.12.3.
This problem mainly appears when a Spark program passes a Seq of tuples to createDataFrame, which requires an implicit TypeTag for the element type at compile time. For example:
    // The leading numeric ids were lost from the original listing;
    // the values 0-4 below are placeholders restored so the snippet compiles.
    val df = spark.createDataFrame(Seq(
      (0, Array("soyo", "spark", "soyo2", "soyo", "")),
      (1, Array("soyo", "hadoop", "soyo", "hadoop", "xiaozhou", "soyo2", "spark", "", "")),
      (2, Array("soyo", "spark", "soyo2", "hadoop", "soyo3", "")),
      (3, Array("soyo", "spark", "soyo20", "hadoop", "soyo2", "", "")),
      (4, Array("soyo", "", "spark", "", "spark", "spark", ""))
    )).toDF("id", "words")
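If switching the Scala version is not an option, one workaround (a sketch under assumed names, not from the original post) is to sidestep the TypeTag requirement by building the DataFrame from an RDD[Row] with an explicit schema, which is resolved at runtime rather than at compile time:

    import org.apache.spark.sql.{Row, SparkSession}
    import org.apache.spark.sql.types.{ArrayType, IntegerType, StringType, StructField, StructType}

    val spark = SparkSession.builder().appName("NoTypeTag").master("local[*]").getOrCreate()

    // createDataFrame(RDD[Row], StructType) takes the schema as a plain value,
    // so no compile-time TypeTag for (Int, Array[String]) is needed.
    val schema = StructType(Seq(
      StructField("id", IntegerType, nullable = false),
      StructField("words", ArrayType(StringType), nullable = true)
    ))
    val rows = spark.sparkContext.parallelize(Seq(
      Row(0, Seq("soyo", "spark", "soyo2")),
      Row(1, Seq("soyo", "hadoop", "xiaozhou"))
    ))
    val df = spark.createDataFrame(rows, schema)
    df.show()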