Run

import org.apache.log4j.{Level, Logger}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Created by Lee_Rz on 2017/8/30.
  */
object SparkDemo {
  def main(args: Array[String]) {
    Logger.getLogger("org.apache.spark").setLevel(Level.OFF)
    val sc: SparkContext = new SparkContext(new SparkConf().setAppName(this.getClass.getName).setMaster("local[2]"))
    val rdd1: RDD[String] = sc.textFile("C:\\Users\\166\\Desktop\\text.txt") // reads the file line by line; a lazy transformation
    val key: RDD[(String, Int)] = rdd1.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
    println(key.collect().toBuffer) // collect the results back to the Driver
  }
}

Error

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
// :: WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.0.166:51388]
// :: ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$$$anonfun$.apply(SparkContext.scala:)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$$$anonfun$.apply(SparkContext.scala:)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$.apply(HadoopRDD.scala:)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$.apply(HadoopRDD.scala:)
at scala.Option.map(Option.scala:)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$.apply(PairRDDFunctions.scala:)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$.apply(PairRDDFunctions.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:)
at zx.SparkDemo$.main(SparkDemo.scala:)
at zx.SparkDemo.main(SparkDemo.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
// :: INFO FileInputFormat: Total input paths to process :
// :: INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
// :: INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
// :: INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
// :: INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
// :: INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
ArrayBuffer((are,), (hello,), (any,), (ok,), (world,), (me,), (alone,), (you,), (no,), (believie,), (more,))
// :: INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.

Process finished with exit code

On inspection, winutils.exe was already present under Hadoop's bin directory. Checking the Hadoop environment variable showed it had not been created in the strict expected format; the correct form is HADOOP_HOME=...... Many frameworks in the Hadoop ecosystem depend on Hadoop, so their configuration files export the Hadoop path under the variable name HADOOP_HOME by default, which is why the variable must be named exactly that.
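If restarting the machine or IDE to pick up a freshly created HADOOP_HOME is inconvenient, the same setting can be supplied from inside the program before the SparkContext is created. A minimal sketch, assuming winutils.exe sits under C:\hadoop\bin (the install path and the object name HadoopHomeFix are assumptions; substitute wherever your Hadoop actually lives):

import org.apache.spark.{SparkConf, SparkContext}

object HadoopHomeFix {
  def main(args: Array[String]): Unit = {
    // Hadoop's Shell.java checks the hadoop.home.dir system property before
    // falling back to the HADOOP_HOME environment variable, so this line has
    // the same effect as setting HADOOP_HOME=C:\hadoop for this JVM only.
    System.setProperty("hadoop.home.dir", "C:\\hadoop") // assumed install path
    val sc = new SparkContext(new SparkConf().setAppName("HadoopHomeFix").setMaster("local[2]"))
    // With bin\winutils.exe reachable under that directory, the
    // "Could not locate executable null\bin\winutils.exe" error goes away.
    sc.stop()
  }
}

Note that this only patches the current JVM; creating the HADOOP_HOME environment variable properly remains the permanent fix.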
