After setting up a Spark environment, the following error appeared while testing the Spark examples:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

[hadoop@gpmaster bin]$ ./run-example org.apache.spark.examples.SparkPi
15/10/01 08:59:33 INFO spark.SparkContext: Running Spark version 1.5.0
.......................
15/10/01 08:59:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.128:17514]
15/10/01 08:59:35 INFO util.Utils: Successfully started service 'sparkDriver' on port 17514.
.......................
15/10/01 08:59:36 INFO ui.SparkUI: Started SparkUI at http://192.168.1.128:4040
15/10/01 08:59:37 INFO spark.SparkContext: Added JAR file:/home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar at http://192.168.1.128:36471/jars/spark-examples-1.5.0-hadoop2.6.0.jar with timestamp 1443661177865
15/10/01 08:59:37 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/01 08:59:38 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.1.128:7077...
15/10/01 08:59:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151001085938-0000
.................................
15/10/01 08:59:40 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/10/01 08:59:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
.................................
(the same warning keeps repeating every 15 seconds)

From the warning message we can roughly tell that the job was given no resources during initialization, and we are asked to check the cluster UI to make sure the workers are registered and have sufficient resources. (The scheduler re-checks and repeats this warning every 15 seconds, which matches the timestamps in the log above.)

There are several possible causes; check them one by one:
1. Incorrect hostname or IP configuration
First check that the /etc/hosts file is configured correctly.
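
For reference, a minimal /etc/hosts for this setup might look like the sketch below; the gpmaster name comes from the shell prompt above, while the worker names and addresses are assumptions, so substitute your own nodes:

127.0.0.1       localhost
192.168.1.128   gpmaster
192.168.1.129   gpslave1
192.168.1.130   gpslave2

Every node should resolve every other node's hostname to the same address, and running hostname on each node should return the name listed here.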

You can also use spark-shell to check the context information that SparkContext has picked up, as follows; in the output, verify that spark.master and spark.driver.host point to the addresses you expect:
[hadoop@gpmaster bin]$ ./spark-shell
........
scala> sc.getConf.getAll.foreach(println)
(spark.fileserver.uri,http://192.168.1.128:34634)
(spark.app.name,Spark shell)
(spark.driver.port,25392)
(spark.app.id,app-20151001090322-0001)
(spark.repl.class.uri,http://192.168.1.128:24988)
(spark.externalBlockStore.folderName,spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc)
(spark.jars,)
(spark.executor.id,driver)
(spark.submit.deployMode,client)
(spark.driver.host,192.168.1.128)
(spark.master,spark://192.168.1.128:7077)

scala> sc.getConf.toDebugString
res8: String = 
spark.app.id=app-20151001090322-0001
spark.app.name=Spark shell
spark.driver.host=192.168.1.128
spark.driver.port=25392
spark.executor.id=driver
spark.externalBlockStore.folderName=spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc
spark.fileserver.uri=http://192.168.1.128:34634
spark.jars=
spark.master=spark://192.168.1.128:7077
spark.repl.class.uri=http://192.168.1.128:24988
spark.submit.deployMode=client

2. Insufficient memory
In my environment, memory was the actual cause.
In my cluster, the spark-env.sh file was configured as follows:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=192.168.1.128
export SPARK_WORKER_MEMORY=100m
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export MASTER=spark://192.168.1.128:7077

In my cluster each node had only about 500MB of memory left, and because I had not set the SPARK_EXECUTOR_MEMORY parameter, Spark requested the default of 1G per executor. No worker could satisfy that request, which is why the warning above kept appearing in the log.

So the fix is to add the following parameter:

export SPARK_EXECUTOR_MEMORY=512m
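
One caveat: in standalone mode a worker never offers more memory than its SPARK_WORKER_MEMORY, so the 100m configured above also needs to be raised to at least 512m before the executor can actually be launched. The executor memory can also be set per application at submit time instead of globally in spark-env.sh; a sketch using spark-submit, with the jar path taken from the log above (adjust it to your installation):

[hadoop@gpmaster bin]$ ./spark-submit --master spark://192.168.1.128:7077 \
  --executor-memory 512m \
  --class org.apache.spark.examples.SparkPi \
  /home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar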

3. The port is already occupied because a previously launched program is still running.
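
In standalone mode an application keeps the cores it was granted until it exits, so a forgotten spark-shell or an earlier job still registered with the master will starve every new submission. Some quick checks, as a sketch (8080 and 4040 are Spark's default ports and may differ in your setup):

[hadoop@gpmaster bin]$ jps                          # look for leftover SparkSubmit processes
[hadoop@gpmaster bin]$ netstat -tlnp | grep ':404'  # a second driver UI falls back to 4041, 4042, ... when 4040 is taken

The master web UI at http://192.168.1.128:8080 also lists running applications along with the cores and memory each one is holding.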
