Details

User class threw exception: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.

This stopped SparkContext was created at:
 
org.apache.spark.SparkContext.<init>(SparkContext.scala:76)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:227)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
 
The currently active SparkContext was created at:
 
(No active SparkContext.)
 
at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:100)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1186)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1185)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1185)
at org.apache.phoenix.spark.PhoenixRDD.<init>(PhoenixRDD.scala:49)
at org.apache.phoenix.spark.PhoenixRelation.buildScan(PhoenixRelation.scala:39)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:326)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:325)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProjectRaw(DataSourceStrategy.scala:381)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProject(DataSourceStrategy.scala:321)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.apply(DataSourceStrategy.scala:289)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
 ... (the 13 frames above, from QueryPlanner$$anonfun$2$$anonfun$apply$2 through QueryPlanner.plan, repeat five more times as plan() recurses through nested strategies) ...
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:99)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:207)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:75)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3346)
at org.apache.spark.sql.Dataset.foreach(Dataset.scala:2716)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:284)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:263)
at scala.collection.immutable.List.foreach(List.scala:392)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply$mcVJ$sp(RentOrderDistributedEtl.scala:263)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
 
Root cause

The resource-cleanup calls were written inside the per-task loop:

sqlContext.sparkSession.close()
sc.stop()

The first iteration stopped the SparkContext, so the next iteration's Dataset.foreach hit a dead context and threw the exception above. Check carefully where the shutdown code sits and move it to the very end of main(), after the loop finishes; that resolves the problem. A trivial oversight, but an easy one to make.
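For illustration, a minimal Scala sketch of the failure mode and the fix. The app name, batch list, and loop body here are hypothetical stand-ins for the actual ETL code, not a reconstruction of it:

import org.apache.spark.sql.SparkSession

object EtlJobSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rent-order-etl").getOrCreate()
    val sc = spark.sparkContext

    // Hypothetical list of work items standing in for the real task loop.
    val batches = Seq("2019-06", "2019-07", "2019-08")

    batches.foreach { batch =>
      // ... load the batch from Phoenix, transform, Dataset.foreach(...) ...

      // BUG: stopping here kills the context for every later iteration,
      // producing "Cannot call methods on a stopped SparkContext":
      //   spark.close()
      //   sc.stop()
    }

    // FIX: shut down exactly once, after all batches are processed.
    spark.close() // SparkSession.close() delegates to stop()
    sc.stop()     // redundant after close(), but stop() is idempotent
  }
}

Wrapping the loop in try { ... } finally { spark.close() } makes the single shutdown point explicit and guarantees cleanup even when a batch fails.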
