> sc <- sparkR.init()
Re-using existing Spark Context. Please stop SparkR with sparkR.stop() or restart R to create a new Spark Context
> sqlContext <- sparkRSQL.init(sc)
> df <- createDataFrame(sqlContext, faithful)
17/03/01 15:05:56 INFO SparkContext: Starting job: collectPartitions at NativeMethodAccessorImpl.java:-2
17/03/01 15:05:56 INFO DAGScheduler: Got job 0 (collectPartitions at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/03/01 15:05:56 INFO DAGScheduler: Final stage: ResultStage 0 (collectPartitions at NativeMethodAccessorImpl.java:-2)
17/03/01 15:05:56 INFO DAGScheduler: Parents of final stage: List()
17/03/01 15:05:56 INFO DAGScheduler: Missing parents: List()
17/03/01 15:05:56 INFO DAGScheduler: Submitting ResultStage 0 (ParallelCollectionRDD[0] at parallelize at RRDD.scala:460), which has no missing parents
17/03/01 15:05:56 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 1280.0 B, free 1280.0 B)
17/03/01 15:05:56 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 854.0 B, free 2.1 KB)
17/03/01 15:05:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.16.31.137:49150 (size: 854.0 B, free: 511.5 MB)
17/03/01 15:05:56 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:1006
17/03/01 15:05:56 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (ParallelCollectionRDD[0] at parallelize at RRDD.scala:460)
17/03/01 15:05:56 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/03/01 15:05:56 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, test3, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:05:56 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on test3:50531 (size: 854.0 B, free: 511.5 MB)
17/03/01 15:05:56 INFO DAGScheduler: ResultStage 0 (collectPartitions at NativeMethodAccessorImpl.java:-2) finished in 0.396 s
17/03/01 15:05:56 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 389 ms on test3 (1/1)
17/03/01 15:05:56 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/03/01 15:05:56 INFO DAGScheduler: Job 0 finished: collectPartitions at NativeMethodAccessorImpl.java:-2, took 0.526915 s
> showDF(df)
17/03/01 15:06:02 INFO SparkContext: Starting job: showString at NativeMethodAccessorImpl.java:-2
17/03/01 15:06:02 INFO DAGScheduler: Got job 1 (showString at NativeMethodAccessorImpl.java:-2) with 1 output partitions
17/03/01 15:06:02 INFO DAGScheduler: Final stage: ResultStage 1 (showString at NativeMethodAccessorImpl.java:-2)
17/03/01 15:06:02 INFO DAGScheduler: Parents of final stage: List()
17/03/01 15:06:02 INFO DAGScheduler: Missing parents: List()
17/03/01 15:06:02 INFO DAGScheduler: Submitting ResultStage 1 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2), which has no missing parents
17/03/01 15:06:02 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 8.7 KB, free 10.8 KB)
17/03/01 15:06:02 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 3.5 KB, free 14.4 KB)
17/03/01 15:06:02 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.16.31.137:49150 (size: 3.5 KB, free: 511.5 MB)
17/03/01 15:06:02 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:1006
17/03/01 15:06:02 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 1 (MapPartitionsRDD[4] at showString at NativeMethodAccessorImpl.java:-2)
17/03/01 15:06:02 INFO TaskSchedulerImpl: Adding task set 1.0 with 1 tasks
17/03/01 15:06:02 INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID 1, test2, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:03 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on test2:57552 (size: 3.5 KB, free: 511.5 MB)
17/03/01 15:06:04 WARN TaskSetManager: Lost task 0.0 in stage 1.0 (TID 1, test2): java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.r.RRDD$.createRProcess(RRDD.scala:413)
at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:429)
at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:66)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.IOException: error=2, No such file or directory
at java.lang.UNIXProcess.forkAndExec(Native Method)
at java.lang.UNIXProcess.<init>(UNIXProcess.java:187)
at java.lang.ProcessImpl.start(ProcessImpl.java:130)
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1028)
... 20 more
17/03/01 15:06:04 INFO TaskSetManager: Starting task 0.1 in stage 1.0 (TID 2, test2, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:04 INFO TaskSetManager: Lost task 0.1 in stage 1.0 (TID 2) on executor test2: java.io.IOException (Cannot run program "Rscript": error=2, No such file or directory) [duplicate 1]
17/03/01 15:06:04 INFO TaskSetManager: Starting task 0.2 in stage 1.0 (TID 3, test3, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:04 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on test3:50531 (size: 3.5 KB, free: 511.5 MB)
17/03/01 15:06:04 INFO TaskSetManager: Lost task 0.2 in stage 1.0 (TID 3) on executor test3: java.io.IOException (Cannot run program "Rscript": error=2, No such file or directory) [duplicate 2]
17/03/01 15:06:04 INFO TaskSetManager: Starting task 0.3 in stage 1.0 (TID 4, test3, partition 0,PROCESS_LOCAL, 12976 bytes)
17/03/01 15:06:04 INFO TaskSetManager: Lost task 0.3 in stage 1.0 (TID 4) on executor test3: java.io.IOException (Cannot run program "Rscript": error=2, No such file or directory) [duplicate 3]
17/03/01 15:06:04 ERROR TaskSetManager: Task 0 in stage 1.0 failed 4 times; aborting job
17/03/01 15:06:04 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
17/03/01 15:06:04 INFO TaskSchedulerImpl: Cancelling stage 1
17/03/01 15:06:04 INFO DAGScheduler: ResultStage 1 (showString at NativeMethodAccessorImpl.java:-2) failed in 2.007 s
17/03/01 15:06:04 INFO DAGScheduler: Job 1 failed: showString at NativeMethodAccessorImpl.java:-2, took 2.027519 s
17/03/01 15:06:04 ERROR RBackendHandler: showString on 15 failed
Error in invokeJava(isStatic = FALSE, objId$id, methodName, ...) :
org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, test3): java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory
at java.lang.ProcessBuilder.start(ProcessBuilder.java:1047)
at org.apache.spark.api.r.RRDD$.createRProcess(RRDD.scala:413)
at org.apache.spark.api.r.RRDD$.createRWorker(RRDD.scala:429)
at org.apache.spark.api.r.BaseRRDD.compute(RRDD.scala:63)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.R
  org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 1.0 failed 4 times, most recent failure: Lost task 0.3 in stage 1.0 (TID 4, test3): java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory

The key line is the SparkException shown above.
Because of this error, once you have created an object in SparkR whose class is

class(df)
[1] "DataFrame"
attr(,"package")
[1] "SparkR"
you can still inspect it with class, names, and show,

but showDF and head fail with the error above, i.e. the data itself cannot be read. The reason is that class, names, and show only touch metadata held on the driver, whereas showDF and head actually run tasks on the executors, and each executor has to spawn an Rscript worker process to compute the R-backed partitions.

Looking closely at the key error line, it is clear that the other worker nodes cannot find

Rscript

The fix is to log in to each of the other machines and copy (or symlink) Rscript into /usr/bin.
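A minimal sketch of that fix, assuming R is already installed on every worker and that passwordless ssh (and sudo) is available from the driver. The hostnames test2 and test3 come from the logs above; the /usr/local/lib64/R/bin path is only an assumption, so locate the real Rscript binary on a worker first with which Rscript or find / -name Rscript.

for host in test2 test3; do
  # Skip hosts that already have Rscript on the PATH; otherwise symlink it into /usr/bin.
  ssh "$host" 'command -v Rscript >/dev/null || sudo ln -s /usr/local/lib64/R/bin/Rscript /usr/bin/Rscript'
done

After that, restart the SparkR session and re-run showDF(df); the executors should now be able to start the R worker process.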

Alternatively, switch to single-node (local) mode:

that is, drop the --master option when launching, for example

sparkR --driver-class-path /data1/mysql-connector-java-5.1.18.jar
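If Rscript cannot be placed in /usr/bin on the workers, another option worth trying (not verified in this post) is to keep the cluster master and tell Spark where Rscript actually lives via the spark.r.command configuration property (available since Spark 1.5.3). The master URL and the Rscript path below are placeholders:

# Keep the cluster master, but point the R worker launcher at the real Rscript path.
sparkR --master <master-url> \
  --conf spark.r.command=/usr/local/lib64/R/bin/Rscript \
  --driver-class-path /data1/mysql-connector-java-5.1.18.jar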
