Details

User class threw exception: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.

This stopped SparkContext was created at:
 
org.apache.spark.SparkContext.<init>(SparkContext.scala:76)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:227)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
 
The currently active SparkContext was created at:
 
(No active SparkContext.)
 
at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:100)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1186)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1185)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1185)
at org.apache.phoenix.spark.PhoenixRDD.<init>(PhoenixRDD.scala:49)
at org.apache.phoenix.spark.PhoenixRelation.buildScan(PhoenixRelation.scala:39)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:326)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:325)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProjectRaw(DataSourceStrategy.scala:381)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProject(DataSourceStrategy.scala:321)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.apply(DataSourceStrategy.scala:289)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:99)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:207)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:75)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3346)
at org.apache.spark.sql.Dataset.foreach(Dataset.scala:2716)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:284)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:263)
at scala.collection.immutable.List.foreach(List.scala:392)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply$mcVJ$sp(RentOrderDistributedEtl.scala:263)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
 
Root cause

The resource cleanup calls were written inside the task loop:

sqlContext.sparkSession.close()
sc.stop()

The first iteration stopped the SparkContext, so every subsequent iteration failed its assertNotStopped check and threw the IllegalStateException above. Check carefully where the shutdown calls are placed and move them to the very end of the job, after all loops have finished. That resolves the problem. A basic mistake.
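For reference, a minimal sketch of the wrong and the corrected placement. This is not the original job: `batches` and `processBatch` are hypothetical stand-ins for the per-batch loop visible in the stack trace (RentOrderDistributedEtl.scala:262-284); only the position of `stop()` matters.

```scala
import org.apache.spark.sql.SparkSession

object RentOrderEtlSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("RentOrderDistributedEtl")
      .getOrCreate()

    // Hypothetical stand-in for the real list the job iterates over.
    val batches = Seq(1L, 2L, 3L)

    batches.foreach { batch =>
      processBatch(spark, batch)
      // BUG (original code): spark.sparkSession.close() / sc.stop() were
      // called here, inside the loop. The second iteration then ran a
      // Dataset action against a stopped SparkContext and threw
      // IllegalStateException from assertNotStopped.
    }

    // FIX: stop exactly once, after all iterations have completed.
    // SparkSession.stop() also stops the underlying SparkContext.
    spark.stop()
  }

  // Hypothetical per-batch work, e.g. reading from Phoenix and running
  // the Dataset.foreach seen at RentOrderDistributedEtl.scala:284.
  def processBatch(spark: SparkSession, batch: Long): Unit = ()
}
```

The general rule: a SparkContext cannot be restarted once stopped, so shutdown belongs after the last action of the application, typically in a finally block wrapping the whole of main.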
