After setting up a Spark environment and test-running the Spark examples, the following error appeared:
WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

[hadoop@gpmaster bin]$ ./run-example org.apache.spark.examples.SparkPi
15/10/01 08:59:33 INFO spark.SparkContext: Running Spark version 1.5.0
.......................
15/10/01 08:59:35 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@192.168.1.128:17514]
15/10/01 08:59:35 INFO util.Utils: Successfully started service 'sparkDriver' on port 17514.
.......................
15/10/01 08:59:36 INFO ui.SparkUI: Started SparkUI at http://192.168.1.128:4040
15/10/01 08:59:37 INFO spark.SparkContext: Added JAR file:/home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar at http://192.168.1.128:36471/jars/spark-examples-1.5.0-hadoop2.6.0.jar with timestamp 1443661177865
15/10/01 08:59:37 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
15/10/01 08:59:38 INFO client.AppClient$ClientEndpoint: Connecting to master spark://192.168.1.128:7077...
15/10/01 08:59:38 INFO cluster.SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20151001085938-0000
.................................
15/10/01 08:59:40 INFO scheduler.TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/10/01 08:59:55 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/10/01 09:00:10 WARN scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
.................................
(The same warning repeats every 15 seconds until the job is killed or resources become available.)

The warning tells us roughly this:
the initial job did not acquire any resources, and we are advised to check the cluster UI to make sure the workers are registered and have sufficient resources.
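For a standalone cluster, the "cluster UI" is the master's web UI, which listens on port 8080 by default. A quick headless sanity check from the shell (the grep pattern is only illustrative; the page lists each registered worker together with its cores and memory):

[hadoop@gpmaster bin]$ curl -s http://192.168.1.128:8080 | grep -iE 'worker|memory'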

There are several possible causes; check them one by one:
1. Are the hostname and IP configured correctly?
First, check whether the /etc/hosts file is configured correctly, for example:
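A minimal sketch of what /etc/hosts should contain on every node. The master IP and hostname come from this environment; the worker entries are assumptions for illustration:

192.168.1.128   gpmaster
192.168.1.129   gpworker1    # assumed worker entry; use your real worker IPs and hostnames
192.168.1.130   gpworker2    # assumed worker entry

Every node should resolve every other node's hostname to the same address; a stale 127.0.0.1 line mapped to the machine's hostname is a common reason for workers registering with an address the driver cannot reach.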

You can also use spark-shell to inspect the configuration that SparkContext actually picked up, like this:
[hadoop@gpmaster bin]$ ./spark-shell
........
scala> sc.getConf.getAll.foreach(println)
(spark.fileserver.uri,http://192.168.1.128:34634)
(spark.app.name,Spark shell)
(spark.driver.port,25392)
(spark.app.id,app-20151001090322-0001)
(spark.repl.class.uri,http://192.168.1.128:24988)
(spark.externalBlockStore.folderName,spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc)
(spark.jars,)
(spark.executor.id,driver)
(spark.submit.deployMode,client)
(spark.driver.host,192.168.1.128)
(spark.master,spark://192.168.1.128:7077)

scala> sc.getConf.toDebugString
res8: String = 
spark.app.id=app-20151001090322-0001
spark.app.name=Spark shell
spark.driver.host=192.168.1.128
spark.driver.port=25392
spark.executor.id=driver
spark.externalBlockStore.folderName=spark-1254a794-fbfa-4b4c-9757-b5a94dc26ffc
spark.fileserver.uri=http://192.168.1.128:34634
spark.jars=
spark.master=spark://192.168.1.128:7077
spark.repl.class.uri=http://192.168.1.128:24988
spark.submit.deployMode=client
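From the same spark-shell session you can also check whether any executor has actually registered with the driver; a quick sketch (the exact output shape will differ):

scala> sc.getExecutorMemoryStatus.foreach(println)

If the only entry printed is the driver's own address, no worker-side executor has joined the application, which is exactly the situation the warning describes.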

2. Insufficient memory
In my case, memory was exactly the cause.
In my cluster, the spark-env.sh file was configured as follows:
export JAVA_HOME=/usr/java/jdk1.7.0_60
export SCALA_HOME=/usr/local/scala
export SPARK_MASTER_IP=192.168.1.128
export SPARK_WORKER_MEMORY=100m
export HADOOP_CONF_DIR=/home/hadoop/hadoop-2.6.0/etc/hadoop
export MASTER=spark://192.168.1.128:7077

Each node in my cluster had only about 500 MB of memory left. Since I had not set SPARK_EXECUTOR_MEMORY, each executor requested the 1 GB default, which no worker could satisfy, so no executor was ever launched and the scheduler kept printing the warning shown above. (Note that SPARK_WORKER_MEMORY=100m above caps what a worker can offer to executors, which by itself already makes a 1 GB executor impossible to place.)

So the fix is to add the following setting (and, since an executor cannot be larger than what a worker offers, raise SPARK_WORKER_MEMORY to at least the same value):

export SPARK_EXECUTOR_MEMORY=512m
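Changes to spark-env.sh take effect only after the standalone daemons are restarted. Alternatively, the executor memory can be set per job at submit time; a sketch using the example jar from this environment (the 512m value is illustrative):

[hadoop@gpmaster bin]$ ../sbin/stop-all.sh && ../sbin/start-all.sh
[hadoop@gpmaster bin]$ ./spark-submit --master spark://192.168.1.128:7077 \
    --executor-memory 512m \
    --class org.apache.spark.examples.SparkPi \
    /home/hadoop/spark/lib/spark-examples-1.5.0-hadoop2.6.0.jar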

3. The cluster's resources are already held by a previously submitted application that is still running. A forgotten spark-shell session is a typical culprit: by default, a standalone application grabs all available cores, leaving nothing for the next job.
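A quick way to look for such a leftover application (jps ships with the JDK; the master web UI on port 8080 lists running applications and lets you kill them):

[hadoop@gpmaster bin]$ jps        # look for a lingering SparkSubmit process
[hadoop@gpmaster bin]$ kill <pid> # stop it, or kill the application from the master web UI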
