Handling Spark Program Errors in IDEA
Error 1:
// :: ERROR Executor: Exception in task 0.0 in stage 0.0 (TID )
java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
// :: ERROR TaskSetManager: Task in stage 0.0 failed times; aborting job
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Task in stage 0.0 failed times, most recent failure: Lost task 0.0 in stage 0.0 (TID , localhost, executor driver): java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:) Driver stacktrace:
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$.apply(DAGScheduler.scala:)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$.apply(DAGScheduler.scala:)
at scala.Option.foreach(Option.scala:)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:)
at org.apache.spark.util.EventLoop$$anon$.run(EventLoop.scala:)
at org.apache.spark.scheduler.DAGScheduler.runJob(DAGScheduler.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.SparkContext.runJob(SparkContext.scala:)
at org.apache.spark.sql.execution.SparkPlan.executeTake(SparkPlan.scala:)
at org.apache.spark.sql.execution.CollectLimitExec.executeCollect(limit.scala:)
at org.apache.spark.sql.Dataset.org$apache$spark$sql$Dataset$$collectFromPlan(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$head$.apply(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$head$.apply(Dataset.scala:)
at org.apache.spark.sql.Dataset$$anonfun$.apply(Dataset.scala:)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:)
at org.apache.spark.sql.Dataset.withAction(Dataset.scala:)
at org.apache.spark.sql.Dataset.head(Dataset.scala:)
at org.apache.spark.sql.Dataset.take(Dataset.scala:)
at org.apache.spark.sql.Dataset.showString(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at org.apache.spark.sql.Dataset.show(Dataset.scala:)
at RDD_To_DataFrame$.main(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame.main(RDD_To_DataFrame.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
Caused by: java.lang.NoSuchMethodError: scala.Product.$init$(Lscala/Product;)V
at Person.<init>(RDD_To_DataFrame.scala:)
at RDD_To_DataFrame$.$anonfun$main$(RDD_To_DataFrame.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at scala.collection.Iterator$$anon$.next(Iterator.scala:)
at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIterator.processNext(Unknown Source)
at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:)
at org.apache.spark.sql.execution.WholeStageCodegenExec$$anonfun$$$anon$.hasNext(WholeStageCodegenExec.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$.apply(SparkPlan.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$mapPartitionsInternal$$$anonfun$apply$.apply(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:)
at org.apache.spark.scheduler.Task.run(Task.scala:)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:)
at java.lang.Thread.run(Thread.java:)
Fix: change the Scala SDK in IDEA to 2.10.4, i.e. to the same Scala version your Spark distribution was built against. A java.lang.NoSuchMethodError on scala.Product.$init$ means the bytecode was compiled against one Scala major version but is running on another: the static $init$ method on traits such as scala.Product exists only in the Scala 2.12 trait encoding, so classes compiled with 2.12 crash on a 2.10/2.11 runtime.
This error mainly shows up when a Spark program uses a case class, because every case class mixes in scala.Product and its generated constructor is what calls $init$ (see the sketch below).
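For context, here is a minimal sketch of the kind of program that hits this error. The Person fields and the people.txt path are assumptions reconstructed from the stack trace above, not the original source:

import org.apache.spark.sql.SparkSession

// A case class mixes in scala.Product; in Scala 2.12 bytecode its
// constructor calls the static scala.Product.$init$, which a 2.10/2.11
// runtime does not have -- hence the NoSuchMethodError at Person.<init>.
case class Person(name: String, age: Int)

object RDD_To_DataFrame {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RDD_To_DataFrame")
      .master("local[*]")
      .getOrCreate()
    import spark.implicits._

    // Parse each line into a Person, then convert the RDD to a DataFrame.
    val peopleDF = spark.sparkContext
      .textFile("people.txt")
      .map(_.split(","))
      .map(attrs => Person(attrs(0), attrs(1).trim.toInt))
      .toDF()

    peopleDF.show() // the error surfaces here, once tasks actually run

    spark.stop()
  }
}

If the project is built with sbt rather than through IDEA's Scala SDK setting, the equivalent fix is pinning scalaVersion to match the Spark artifacts (the exact versions below are illustrative, not from the original post):

scalaVersion := "2.11.8"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0"

The %% operator appends the Scala binary version to the artifact name, which keeps the compiler and the Spark jars on the same Scala major version.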
Error 2:
Error:(, ) No TypeTag available for (Array[String],)
val documentDF= spark.createDataFrame(Seq(
Fix: change the Scala SDK in IDEA to 2.12.3, again keeping it consistent with the Scala version of the Spark libraries on the classpath. createDataFrame over a Seq of tuples requires the compiler to materialize an implicit TypeTag for the tuple type, and with a mismatched or too-old Scala SDK that fails with "No TypeTag available".
This problem mainly appears when a Spark program passes a Seq of tuples to createDataFrame.
For example:
val df = spark.createDataFrame(Seq(
  // The original id values were lost from the post; 1..5 are placeholders.
  (1, Array("soyo", "spark", "soyo2", "soyo", "")),
  (2, Array("soyo", "hadoop", "soyo", "hadoop", "xiaozhou", "soyo2", "spark", "", "")),
  (3, Array("soyo", "spark", "soyo2", "hadoop", "soyo3", "")),
  (4, Array("soyo", "spark", "soyo20", "hadoop", "soyo2", "", "")),
  (5, Array("soyo", "", "spark", "", "spark", "spark", ""))
)).toDF("id", "words")
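If changing the Scala SDK is not an option, one workaround (a sketch of an alternative, not from the original post) is to sidestep the tuple encoder entirely and build the DataFrame from Row objects with an explicit schema, which does not require a TypeTag:

import org.apache.spark.sql.{Row, SparkSession}
import org.apache.spark.sql.types.{ArrayType, IntegerType, StringType, StructField, StructType}

val spark = SparkSession.builder().appName("demo").master("local[*]").getOrCreate()

// Declare the two columns explicitly instead of inferring them from a tuple type.
val schema = StructType(Seq(
  StructField("id", IntegerType, nullable = false),
  StructField("words", ArrayType(StringType), nullable = false)
))

val rows = spark.sparkContext.parallelize(Seq(
  Row(1, Array("soyo", "spark", "soyo2")),
  Row(2, Array("soyo", "hadoop", "spark"))
))

val df = spark.createDataFrame(rows, schema) // no TypeTag needed for Row + schema
df.show()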