1 Details
User class threw exception: java.lang.IllegalStateException: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:
org.apache.spark.SparkContext.<init>(SparkContext.scala:76)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:227)
com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
The currently active SparkContext was created at:
(No active SparkContext.)
at org.apache.spark.SparkContext.assertNotStopped(SparkContext.scala:100)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1186)
at org.apache.spark.SparkContext$$anonfun$newAPIHadoopRDD$1.apply(SparkContext.scala:1185)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
at org.apache.spark.SparkContext.withScope(SparkContext.scala:699)
at org.apache.spark.SparkContext.newAPIHadoopRDD(SparkContext.scala:1185)
at org.apache.phoenix.spark.PhoenixRDD.<init>(PhoenixRDD.scala:49)
at org.apache.phoenix.spark.PhoenixRelation.buildScan(PhoenixRelation.scala:39)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$10.apply(DataSourceStrategy.scala:293)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:326)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy$$anonfun$pruneFilterProject$1.apply(DataSourceStrategy.scala:325)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProjectRaw(DataSourceStrategy.scala:381)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.pruneFilterProject(DataSourceStrategy.scala:321)
at org.apache.spark.sql.execution.datasources.DataSourceStrategy.apply(DataSourceStrategy.scala:289)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$1.apply(QueryPlanner.scala:63)
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:78)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2$$anonfun$apply$2.apply(QueryPlanner.scala:75)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.TraversableOnce$$anonfun$foldLeft$1.apply(TraversableOnce.scala:157)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.TraversableOnce$class.foldLeft(TraversableOnce.scala:157)
at scala.collection.AbstractIterator.foldLeft(Iterator.scala:1334)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:75)
at org.apache.spark.sql.catalyst.planning.QueryPlanner$$anonfun$2.apply(QueryPlanner.scala:67)
... (the recursion cycle above — QueryPlanner.plan descending through Scala's Iterator/foreach/foldLeft machinery back into QueryPlanner$$anonfun$2.apply — repeats verbatim five more times; those 65 duplicate frames are omitted here) ...
at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:435)
at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:441)
at org.apache.spark.sql.catalyst.planning.QueryPlanner.plan(QueryPlanner.scala:93)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan$lzycompute(QueryExecution.scala:72)
at org.apache.spark.sql.execution.QueryExecution.sparkPlan(QueryExecution.scala:68)
at org.apache.spark.sql.execution.QueryExecution.executedPlan$lzycompute(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution.executedPlan(QueryExecution.scala:77)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution$$anonfun$toString$3.apply(QueryExecution.scala:207)
at org.apache.spark.sql.execution.QueryExecution.stringOrError(QueryExecution.scala:99)
at org.apache.spark.sql.execution.QueryExecution.toString(QueryExecution.scala:207)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:75)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.Dataset.withNewRDDExecutionId(Dataset.scala:3346)
at org.apache.spark.sql.Dataset.foreach(Dataset.scala:2716)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:284)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1$$anonfun$apply$mcVJ$sp$1.apply(RentOrderDistributedEtl.scala:263)
at scala.collection.immutable.List.foreach(List.scala:392)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply$mcVJ$sp(RentOrderDistributedEtl.scala:263)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$$anonfun$main$1.apply(RentOrderDistributedEtl.scala:262)
at scala.collection.Iterator$class.foreach(Iterator.scala:891)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1334)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl$.main(RentOrderDistributedEtl.scala:262)
at com.wm.bigdata.spark.etl.RentOrderDistributedEtl.main(RentOrderDistributedEtl.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:678)
Cause of the error
The shutdown calls for the Spark resources had been written inside the task loop:
sqlContext.sparkSession.close()
sc.stop()
The first pass through the loop stops the SparkContext, so every subsequent iteration that touches it fails with the exception above. Check carefully where the shutdown code sits and move it to the very end, after all loop iterations have finished; that resolves the problem. A careless, low-level mistake.
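A minimal sketch of the mistake and the fix, in the spirit of the loop at RentOrderDistributedEtl.scala:262 (the object name, partition values, and the stand-in DataFrame below are hypothetical; the real job reads from Phoenix):

import org.apache.spark.sql.SparkSession

object StopPlacementSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("StopPlacementSketch").getOrCreate()

    // Hypothetical stand-in for the time partitions the ETL loops over.
    val partitions = Seq(20190101L, 20190102L, 20190103L)

    partitions.foreach { p =>
      // Stand-in for the per-partition read + transform + write.
      val df = spark.range(10).toDF("id")
      df.foreach(_ => ())

      // WRONG: stopping Spark inside the loop kills the SparkContext on the
      // first pass, so the next iteration throws
      // "Cannot call methods on a stopped SparkContext".
      // spark.close()
      // spark.sparkContext.stop()
    }

    // RIGHT: shut down exactly once, after all partitions are processed.
    spark.stop()
  }
}

Note that SparkSession.stop() also stops the underlying SparkContext, so a separate sc.stop() after it is redundant.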