Running a Spark-on-HBase query jar from Java fails with: Job aborted due to stage failure: Master removed our application: FAILED
An exception is thrown when Java invokes the jar packaged from the Scala code:
15/04/14 23:57:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:39 INFO AppClient$ClientActor: Executor updated: app-20150414235011-0003/9 is now EXITED (Command exited with code 1)
15/04/14 23:57:39 INFO SparkDeploySchedulerBackend: Executor app-20150414235011-0003/9 removed: Command exited with code 1
15/04/14 23:57:39 ERROR SparkDeploySchedulerBackend: Application has been killed. Reason: Master removed our application: FAILED
15/04/14 23:57:39 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
15/04/14 23:57:39 INFO TaskSchedulerImpl: Cancelling stage 0
15/04/14 23:57:39 INFO DAGScheduler: Failed to run count at SparkSelect03.scala:55
Exception in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1049)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1033)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1031)
at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1031)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:635)
at scala.Option.foreach(Option.scala:236)
at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:635)
at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1234)
at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
at akka.actor.ActorCell.invoke(ActorCell.scala:456)
at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
at akka.dispatch.Mailbox.run(Mailbox.scala:219)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Question 1:
15/04/14 23:57:08 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:23 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
15/04/14 23:57:38 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient memory
Analysis: is this caused by insufficient memory?
My spark-env.sh configuration is as follows:
export JAVA_HOME=/home/hadoop/jdk1.7.0_75
export SCALA_HOME=/home/hadoop/scala-2.11.6
export HADOOP_HOME=/home/hadoop/hadoop-2.3.0-cdh5.0.2
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.3.0-cdh5.0.2/etc/hadoop
export SPARK_CLASSPATH=/home/hadoop/hbase-0.96.1.1-cdh5.0.2/lib/*
export SPARK_MASTER_IP=master
export SPARK_MASTER_PORT=17077
export SPARK_MASTER_WEBUI_PORT=18080
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_WEBUI_PORT=18081
export SPARK_WORKER_INSTANCES=1
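From what I understand, this warning usually means the application is asking for more cores or memory than the registered workers can offer, or that no executor manages to stay up at all. A minimal sketch of driver-side settings that keep the request within the 1-core / 1g worker configured above (the config keys are standard Spark ones; the values are only examples, not taken from my actual code):

import org.apache.spark.{SparkConf, SparkContext}

// Sketch only: keep the per-executor request inside SPARK_WORKER_MEMORY (1g)
// and SPARK_WORKER_CORES (1) from spark-env.sh, otherwise the job just waits
// and keeps printing "Initial job has not accepted any resources".
val conf = new SparkConf()
  .setAppName("SparkSelect03")
  .setMaster("spark://master:17077")    // matches SPARK_MASTER_IP / SPARK_MASTER_PORT above
  .set("spark.executor.memory", "512m") // must fit inside the 1g worker memory
  .set("spark.cores.max", "1")          // must not exceed the total cores the workers offer
val sc = new SparkContext(conf)

In the log above, though, the executors do get launched and then exit with code 1, so the executor stderr for app-20150414235011-0003 (under the worker's work/ directory in standalone mode) is probably the more telling place to look.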
Question 2:
15/04/14 23:57:39 INFO DAGScheduler: Failed to run count at SparkSelect03.scala:55
The code at that line is:
val count = hbaseRDD.count()
println("HBase RDD Count:" + count)
hbaseRDD.cache()
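Line 55 is just the count() action itself. For reference, a typical way such an hbaseRDD is built with the standard HBase TableInputFormat — a simplified sketch, not my exact code, and the table name is a placeholder:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat
import org.apache.spark.{SparkConf, SparkContext}

// Simplified sketch of an HBase scan RDD; "test_table" is a placeholder name.
val sc = new SparkContext(new SparkConf().setAppName("SparkSelect03"))
val hbaseConf = HBaseConfiguration.create()
hbaseConf.set(TableInputFormat.INPUT_TABLE, "test_table")
val hbaseRDD = sc.newAPIHadoopRDD(hbaseConf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])
hbaseRDD.cache()                        // mark for caching before the first action
val count = hbaseRDD.count()
println("HBase RDD Count: " + count)

One side note: since caching only takes effect when an action runs afterwards, calling cache() after count() as in my code means that first count was already computed without caching; moving hbaseRDD.cache() above the count() is what makes the cache useful for later actions.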
Question 3:
in thread "main" org.apache.spark.SparkException: Job aborted due to stage failure: Master removed our application: FAILED
If you have run into something similar, or know how to fix it, please leave a comment.