SparkR error: Cannot run program "Rscript"
> sc <- sparkR.init()
Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> sqlContext <- sparkRSQL.init(sc)
> df <- createDataFrame(sqlContext, faithful)
17/03/01 15:05:56 INFO SparkContext: Starting job: collectPartitions at NativeMethodAccessorImpl.java:-2
17/03/01 15:05:56 INFO DAGScheduler: Got job 0 (collectPartitions at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/03/01 15:05:56 INFO DAGScheduler: Final stage: ResultStage 0 (collectPartitions at NativeMethodAccessorImpl.java:-2)
17/03/01 15:05:56 INFO DAGScheduler: Parents of final stage: List()
17/03/01 15:05:56 INFO DAGScheduler: Missing parents: List()
17/03/01 15:05:56 INFO DAGScheduler: Submitting ResultStage 0 (ParallelCollectionRDD[0] at parallelize at RRDD.scala:460), which has no missing parents
17/03/01 15:05:56 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1280.0 B, free 1280.0 B)
17/03/01 15:05:56 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 854.0 B, free 2.1 KB)
17/03/01 15:05:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.16.31.137:49150 (size: 854.0 B, free: 511.5 MB)
17/03/01 15:05:56 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
17/03/01 15:05:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (ParallelCollectionRDD[0] at parallelize at RRDD.scala:460)
17/03/01 15:05:56 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/03/01 15:05:56 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, test3, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:05:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on test3:50531 (size: 854.0 B, free: 511.5 MB)
17/03/01 15:05:56 INFO DAGScheduler: ResultStage 0 (collectPartitions at NativeMethodAccessorImpl.java:-2) finished in 0.396 s
17/03/01 15:05:56 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 389 ms on test3 (1/1)
17/03/01 15:05:56 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/03/01 15:05:56 INFO DAGScheduler: Job 0 finished: collectPartitions at NativeMethodAccessorImpl.java:-2, took 0.526915 s
> showDF(df)
17/03/01 15:06:02 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:-2
17/03/01 15:06:02 INFO DAGScheduler: Got job 1 (showString at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/03/01 15:06:02 INFO DAGScheduler: Final stage: ResultStage 1 (showString at NativeMethodAccessorImpl.java:-2)
17/03/01 15:06:02 INFO DAGScheduler: Parents of final stage: List()
17/03/01 15:06:02 INFO DAGScheduler: Missing parents: List()
17/03/01 15:06:02 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
17/03/01 15:06:02 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 8.7 KB, free 10.8 KB)
17/03/01 15:06:02 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.5 KB, free 14.4 KB)
17/03/01 15:06:02 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.16.31.137:49150 (size: 3.5 KB, free: 511.5 MB)
17/03/01 15:06:02 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/03/01 15:06:02 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2)
17/03/01 15:06:02 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/03/01 15:06:02 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, test2, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:03 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on test2:57552 (size: 3.5 KB, free: 511.5 MB)
17/03/01 15:06:04 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, test2): java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.r.RRDD$.createRProcess(RRDD.scala:413)
at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:429)
at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 20 more
17/03/01 15:06:04 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 2, test2, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:04 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 2) on executor test2: java.io.IOException (Cannot run program "Rscript": error=2, No such file or directory) [duplicate 1]
17/03/01 15:06:04 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 3, test3, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on test3:50531 (size: 3.5 KB, free: 511.5 MB)
17/03/01 15:06:04 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 3) on executor test3: java.io.IOException (Cannot run program "Rscript": error=2, No such file or directory) [duplicate 2]
17/03/01 15:06:04 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 4, test3, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:04 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 4) on executor test3: java.io.IOException (Cannot run program "Rscript": error=2, No such file or directory) [duplicate 3]
17/03/01 15:06:04 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job
17/03/01 15:06:04 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/03/01 15:06:04 INFO TaskSchedulerImpl: Cancelling stage 1
17/03/01 15:06:04 INFO DAGScheduler: ResultStage 1 (showString at NativeMethodAccessorImpl.java:-2) failed in 2.007 s
17/03/01 15:06:04 INFO DAGScheduler: Job 1 failed: showString at NativeMethodAccessorImpl.java:-2, took 2.027519 s
17/03/01 15:06:04 ERROR RBackendHandler: showString on 15 failed
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, test3): java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.r.RRDD$.createRProcess(RRDD.scala:413)
at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:429)
at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.R
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, test3): java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory

The line above is the key one.
With this error, after creating an object in SparkR whose class is

class(df)
[1] "DataFrame"
attr(,"package")
[1] "SparkR"

you can still inspect it with class, names, and show, but showDF and head fail with the error above; the data cannot be read. The difference is that class, names, and show only touch metadata held on the driver, whereas showDF and head trigger a Spark job whose tasks fork an R worker process (Rscript) on the executors.

The key error line shows that the other nodes (test2, test3) do not have Rscript on the PATH. The fix: log in to each of the other machines and copy (or symlink) Rscript into /usr/bin.
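A minimal check you can run on each worker node; `command -v` mirrors the PATH lookup the executor's ProcessBuilder performs when it forks the R worker:

```shell
# Check whether a forked process on this node could resolve "Rscript" from
# PATH - the same lookup that failed in the stack trace above.
if command -v Rscript >/dev/null 2>&1; then
  echo "Rscript found: $(command -v Rscript)"
else
  echo "Rscript NOT found: copy or symlink it into /usr/bin on this node"
fi
```

Run this on every executor host (here test2 and test3), not just the driver; the driver having Rscript is why createDataFrame and show work while showDF fails.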
Alternatively, switch to single-node mode: when launching SparkR, drop the --master option:
sparkR --driver-class-path /data1/mysql-connector-java-5.1.18.jar
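A further alternative (an assumption based on the Spark configuration documentation; the property name and availability vary by Spark version, so verify against your release) is to point the executors at an explicit R binary instead of relying on PATH:

```shell
# Hypothetical R install path - adjust to where R actually lives on the workers.
# spark.r.command sets the executable Spark uses to launch R worker scripts.
sparkR --driver-class-path /data1/mysql-connector-java-5.1.18.jar \
       --conf spark.r.command=/usr/local/R/bin/Rscript
```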