Spark - ERROR Shell: Failed to locate the winutils binary in the hadoop binary path (java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.)
Run the following code:
import org.apache.log4j.{Level, Logger}
import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}
/**
* Created by Lee_Rz on 2017/8/30.
*/
object SparkDemo {
def main(args: Array[String]) {
Logger.getLogger("org.apache.spark").setLevel(Level.OFF)
val sc: SparkContext = new SparkContext(new SparkConf().setAppName(this.getClass().getName()).setMaster("local[2]"))
val rdd1: RDD[String] = sc.textFile("C:\\Users\\166\\Desktop\\text.txt") // reads the file line by line; a lazy transformation
val key: RDD[(String, Int)] = rdd1.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)
println(key.collect().toBuffer) // collect the results to the Driver
}
}
The error reported:
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO Slf4jLogger: Slf4jLogger started
INFO Remoting: Starting remoting
INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriverActorSystem@192.168.0.166:51388]
ERROR Shell: Failed to locate the winutils binary in the hadoop binary path
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:)
at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:)
at org.apache.hadoop.util.Shell.<clinit>(Shell.java:)
at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:)
at org.apache.hadoop.mapred.FileInputFormat.setInputPaths(FileInputFormat.java:)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$$$anonfun$.apply(SparkContext.scala:)
at org.apache.spark.SparkContext$$anonfun$hadoopFile$$$anonfun$.apply(SparkContext.scala:)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$.apply(HadoopRDD.scala:)
at org.apache.spark.rdd.HadoopRDD$$anonfun$getJobConf$.apply(HadoopRDD.scala:)
at scala.Option.map(Option.scala:)
at org.apache.spark.rdd.HadoopRDD.getJobConf(HadoopRDD.scala:)
at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at org.apache.spark.rdd.RDD$$anonfun$partitions$.apply(RDD.scala:)
at scala.Option.getOrElse(Option.scala:)
at org.apache.spark.rdd.RDD.partitions(RDD.scala:)
at org.apache.spark.Partitioner$.defaultPartitioner(Partitioner.scala:)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$.apply(PairRDDFunctions.scala:)
at org.apache.spark.rdd.PairRDDFunctions$$anonfun$reduceByKey$.apply(PairRDDFunctions.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:)
at org.apache.spark.rdd.RDD.withScope(RDD.scala:)
at org.apache.spark.rdd.PairRDDFunctions.reduceByKey(PairRDDFunctions.scala:)
at zx.SparkDemo$.main(SparkDemo.scala:)
at zx.SparkDemo.main(SparkDemo.scala)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:)
at java.lang.reflect.Method.invoke(Method.java:)
at com.intellij.rt.execution.application.AppMain.main(AppMain.java:)
INFO FileInputFormat: Total input paths to process :
INFO deprecation: mapred.tip.id is deprecated. Instead, use mapreduce.task.id
INFO deprecation: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
INFO deprecation: mapred.task.is.map is deprecated. Instead, use mapreduce.task.ismap
INFO deprecation: mapred.task.partition is deprecated. Instead, use mapreduce.task.partition
INFO deprecation: mapred.job.id is deprecated. Instead, use mapreduce.job.id
ArrayBuffer((are,), (hello,), (any,), (ok,), (world,), (me,), (alone,), (you,), (no,), (believie,), (more,))
INFO RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.

Process finished with exit code
On inspection, winutils.exe was already present under Hadoop's bin directory, so the binary itself was not missing. Checking the Hadoop entry among the environment variables showed it had not been created in the strictly required format: the correct form is HADOOP_HOME=&lt;Hadoop installation directory&gt;. Many frameworks in the Hadoop ecosystem depend on Hadoop, so their configuration files export the Hadoop path under the default name HADOOP_HOME. When that variable is missing or misnamed, Hadoop resolves its home directory to null, which is exactly why the exception complains about null\bin\winutils.exe.
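If editing the environment variable (and restarting the IDE so it picks up the change) is inconvenient, the Hadoop home can also be supplied from code before the SparkContext is created: Hadoop's Shell class consults the hadoop.home.dir system property before falling back to the HADOOP_HOME environment variable. A minimal sketch, assuming Hadoop is unpacked at C:\hadoop (a placeholder path, not from the original post) with winutils.exe under its bin directory:

import org.apache.spark.{SparkConf, SparkContext}

object SparkDemoFixed { // illustrative name, not the original class
  def main(args: Array[String]): Unit = {
    // Set this before the first Hadoop class loads, so Shell's static
    // initializer can locate bin\winutils.exe under that directory.
    // "C:\\hadoop" is a placeholder; substitute your own Hadoop path.
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val sc = new SparkContext(
      new SparkConf().setAppName(this.getClass.getName).setMaster("local[2]"))
    val counts = sc.textFile("C:\\Users\\166\\Desktop\\text.txt")
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    println(counts.collect().toBuffer)
    sc.stop()
  }
}

Either way the requirement is the same: Hadoop must resolve a non-null home directory whose bin folder actually contains winutils.exe.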