After setting up a Spark environment, the following warning appeared while test-running the Spark examples:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

[hadoop@gpmaster bin]$ ./run-example org.apache.spark.examples.SparkPi
15/10/01 08:59:33 INFO spark.SparkContext: Running Spark version 1.5.0
.......................
15/10/01 08:59:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.128:17514]
15/10/01 08:59:35 INFO util.Utils: Successfully started service 'sparkDriver' on port 17514.
.......................
15/10/01 08:59:36 INFO ui.SparkUI: Started SparkUI at http://192.168.1.128:4040
15/10/01 08:59:37 INFO spark.SparkContext: Added JAR file:/home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar at http://192.168.1.128:36471/jars/spark-examples-1.5.0-hadoop2.6.0.jar with timestamp 1443661177865
15/10/01 08:59:37 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/01 08:59:38 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.1.128:7077...
15/10/01 08:59:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151001085938-0000
.................................
15/10/01 08:59:40 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/10/01 08:59:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
.................................
(the same warning repeats every 15 seconds)

The warning tells us roughly this:
the job could not obtain any resources during initialization; it suggests checking the cluster UI to make sure the workers are registered and have sufficient resources (CPU cores and memory).

There are several possible causes; check them one by one:
1. Are the hostname and IP configured correctly?
First check that the mappings in /etc/hosts are correct, as sketched below.
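For illustration, assuming a master at 192.168.1.128 (the address in the logs above) and two hypothetical workers, a consistent /etc/hosts on every node might look like this:

192.168.1.128   gpmaster
192.168.1.129   gpslave1    # hypothetical worker
192.168.1.130   gpslave2    # hypothetical worker

Every node should resolve every other node's hostname to the same address; a leftover line mapping the master's hostname to 127.0.0.1 is a common reason workers fail to register.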

You can also use spark-shell to inspect the configuration that the SparkContext actually picked up:
[hadoop@gpmaster bin]$ ./spark-shell
........
scala> sc.getConf.getAll.foreach(println)
(spark.fileserver.uri,http://192.168.1.128:34634)
(spark.app.name,Spark shell)
(spark.driver.port,25392)
(spark.app.id,app-20151001090322-0001)
(spark.repl.class.uri,http://192.168.1.128:24988)
(spark.externalBlockStore.folderName,spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc)
(spark.jars,)
(spark.executor.id,driver)
(spark.submit.deployMode,client)
(spark.driver.host,192.168.1.128)
(spark.master,spark://192.168.1.128:7077)

scala> sc.getConf.toDebugString
res8: String = 
spark.app.id=app-20151001090322-0001
spark.app.name=Spark shell
spark.driver.host=192.168.1.128
spark.driver.port=25392
spark.executor.id=driver
spark.externalBlockStore.folderName=spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc
spark.fileserver.uri=http://192.168.1.128:34634
spark.jars=
spark.master=spark://192.168.1.128:7077
spark.repl.class.uri=http://192.168.1.128:24988
spark.submit.deployMode=client
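In this output, check that spark.master and spark.driver.host are the addresses you expect. The workers must be able to connect back to the driver on spark.driver.port; if the driver advertises an address or port the workers cannot reach, executors never report in and the same warning appears.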

2. Insufficient memory
In my environment this was the actual cause.
My cluster's spark-env.sh was configured as follows:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=192.168.1.128
export SPARK_WORKER_MEMORY=100m
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export MASTER=spark://192.168.1.128:7077

In my cluster, each node had only about 500 MB of memory left. Because SPARK_EXECUTOR_MEMORY was not set, each executor requested the default of 1 GB, more than any worker could offer, so no executor was ever launched and the scheduler kept logging the warning above.

The fix is to request an amount of executor memory the workers can actually provide, by adding the following parameter (note that SPARK_WORKER_MEMORY must be at least as large as the executor memory, so the 100m above needs raising as well):

export SPARK_EXECUTOR_MEMORY=512m
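The same setting can also be supplied per job instead of cluster-wide. A minimal sketch with spark-submit, reusing the example class and jar from the log above:

./bin/spark-submit \
  --master spark://192.168.1.128:7077 \
  --executor-memory 512m \
  --class org.apache.spark.examples.SparkPi \
  lib/spark-examples-1.5.0-hadoop2.6.0.jar

--executor-memory sets the spark.executor.memory property, so --conf spark.executor.memory=512m is equivalent.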

3. A port is occupied, or an earlier application is still running.
A previously submitted application that is still running (or hung) may be holding all of the cluster's cores and memory, leaving nothing for the new job; a stale process can likewise occupy the master or driver ports. Check the running applications in the master web UI and the ports as sketched below.
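A quick check, assuming the default standalone ports (7077 for the master, 8080 for its web UI):

# see which process is holding the master port
netstat -tlnp | grep 7077
# list applications still registered with the master
curl -s http://192.168.1.128:8080 | grep -i application

Killing the stale application (or stopping it from the master UI) frees its cores and memory for new jobs.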
