Setting Up a Hadoop Development Environment with Eclipse
I. Prerequisites
1. Eclipse download: http://pan.baidu.com/s/1slArxAP
2. JDK 1.8 download: http://pan.baidu.com/s/1i5iNyTZ
II. Setting up the Hadoop development environment on Windows 10
1. Download the Hadoop plugin hadoop-eclipse-plugin-2.7.3.jar and place it in the eclipse\dropins directory.
2. Extract hadoop-2.7.3.tar.gz somewhere on Windows.
3. Configure Hadoop Map/Reduce: in Eclipse, open Window -> Preferences -> Hadoop Map/Reduce and point the plugin at the directory where Hadoop was extracted.
4. Open Window -> Show View -> Other..., and under MapReduce Tools select Map/Reduce Locations.
5. Verify that the connection succeeds.
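When defining a new location, two endpoints matter. The values below are assumptions inferred from the HDFS URLs used later in this guide, so match them to your own cluster:
- DFS Master: Host 192.168.168.200, Port 9000 (must agree with fs.defaultFS in core-site.xml)
- Map/Reduce(V2) Master: Host 192.168.168.200; Port 9001 is a common tutorial value, but use whatever your cluster is configured with
Once the location is correct, the DFS Locations tree in the Project Explorer lists the HDFS directories, which confirms the connection.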
III. Create and Run a WordCount Project
1. Right-click in the Project Explorer and choose New -> Map/Reduce Project.
2. Create the text to be counted in the HDFS input directory:
- bin/hadoop fs -mkdir -p hdfs://192.168.168.200:9000/input
- bin/hadoop fs -mkdir -p hdfs://192.168.168.200:9000/output
- bin/hadoop fs -put words.txt /input
The contents of words.txt:
- Hello Hadoop
- Hello BigData
- Hello Spark
- Hello Flume
- Hello Kafka
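To confirm the upload succeeded, you can read the file straight back from HDFS; this should print the five lines above:
- bin/hadoop fs -cat /input/words.txt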
3. Create WordCount.java:
- import java.io.IOException;
- import java.util.StringTokenizer;
- import org.apache.hadoop.conf.Configuration;
- import org.apache.hadoop.fs.Path;
- import org.apache.hadoop.io.IntWritable;
- import org.apache.hadoop.io.Text;
- import org.apache.hadoop.mapreduce.Job;
- import org.apache.hadoop.mapreduce.Mapper;
- import org.apache.hadoop.mapreduce.Reducer;
- import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
- import org.apache.hadoop.mapreduce.lib.input.NLineInputFormat;
- import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
- /**
-  * A first MapReduce program.
-  *
-  * @author sunchen
-  */
- public class WordCount {
-     public static class TokenizerMapper extends
-             Mapper<Object, Text, Text, IntWritable> {
-         private final static IntWritable one = new IntWritable(1);
-         private Text word = new Text();
-         public void map(Object key, Text value, Context context)
-                 throws IOException, InterruptedException {
-             // Split the line on whitespace and emit (word, 1) for each token
-             StringTokenizer itr = new StringTokenizer(value.toString());
-             while (itr.hasMoreTokens()) {
-                 word.set(itr.nextToken());
-                 context.write(word, one);
-             }
-         }
-     }
-     public static class IntSumReducer extends
-             Reducer<Text, IntWritable, Text, IntWritable> {
-         private IntWritable result = new IntWritable();
-         public void reduce(Text key, Iterable<IntWritable> values,
-                 Context context) throws IOException, InterruptedException {
-             // Sum all the counts received for the same word
-             int sum = 0;
-             for (IntWritable val : values) {
-                 sum += val.get();
-             }
-             result.set(sum);
-             context.write(key, result);
-         }
-     }
-     public static void main(String[] args) throws Exception {
-         Configuration conf = new Configuration();
-         Job job = Job.getInstance(conf, "word count");
-         job.setJarByClass(WordCount.class);
-         job.setMapperClass(TokenizerMapper.class);
-         // The reducer doubles as a combiner: summation is associative
-         job.setCombinerClass(IntSumReducer.class);
-         job.setReducerClass(IntSumReducer.class);
-         job.setOutputKeyClass(Text.class);
-         job.setOutputValueClass(IntWritable.class);
-         // NLineInputFormat feeds each mapper N input lines (1 by default)
-         job.setInputFormatClass(NLineInputFormat.class);
-         // Input path
-         FileInputFormat.addInputPath(job, new Path(
-                 "hdfs://192.168.168.200:9000/input/words.txt"));
-         // Output path (must not already exist)
-         FileOutputFormat.setOutputPath(job, new Path(
-                 "hdfs://192.168.168.200:9000/output/wordcount"));
-         System.exit(job.waitForCompletion(true) ? 0 : 1);
-     }
- }
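The mapper's splitting behavior is easy to sanity-check outside Hadoop. Below is a minimal standalone sketch (the class name TokenizeCheck is made up for illustration; it is not part of the job):
- import java.util.StringTokenizer;
- public class TokenizeCheck {
-     public static void main(String[] args) {
-         // Mirrors the mapper: split on whitespace and emit (word, 1) pairs
-         StringTokenizer itr = new StringTokenizer("Hello Hadoop Hello Spark");
-         while (itr.hasMoreTokens()) {
-             System.out.println(itr.nextToken() + "\t1");
-         }
-     }
- }
Note that StringTokenizer splits on whitespace only, so punctuation would stay attached to words; that is fine for this demo input.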
4. Configure JDK 1.8
Because hadoop-eclipse-plugin-2.7.3.jar was compiled with JDK 1.8, running the project under an older JDK produces the following error:
java.lang.UnsupportedClassVersionError: WordCount : Unsupported major.minor version 52.0
Cause: the JDK is too old; class-file major version 52 corresponds to Java 8, so you must switch to JDK 1.8.
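A quick way to see which JVM actually runs your code is a trivial standalone class (VersionCheck is a made-up name, not from the original guide):
- public class VersionCheck {
-     public static void main(String[] args) {
-         // Prints e.g. 1.8.0_131 when running on JDK 1.8
-         System.out.println(System.getProperty("java.version"));
-     }
- }
Also make sure both the project's compiler compliance level and the JRE in the run configuration point at 1.8.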
5. Create a file named log4j.properties under the project's src folder
The file contents:
- ### Log level and appenders ###
- #log4j.rootLogger=DEBUG, Console
- log4j.rootLogger=info,consolePrint,errorFile,logFile
- #log4j.rootLogger=DEBUG,consolePrint,errorFile,logFile,Console
- ### Console output ###
- log4j.appender.consolePrint.Encoding = UTF-8
- log4j.appender.consolePrint = org.apache.log4j.ConsoleAppender
- log4j.appender.consolePrint.Target = System.out
- log4j.appender.consolePrint.layout = org.apache.log4j.PatternLayout
- log4j.appender.consolePrint.layout.ConversionPattern=%d %p [%c] - %m%n
- ### Log file output ###
- log4j.appender.logFile.Encoding = UTF-8
- log4j.appender.logFile = org.apache.log4j.DailyRollingFileAppender
- log4j.appender.logFile.File = D:/RUN_Data/log/dajiangtai_ok.log
- log4j.appender.logFile.Append = true
- log4j.appender.logFile.Threshold = info
- log4j.appender.logFile.layout = org.apache.log4j.PatternLayout
- log4j.appender.logFile.layout.ConversionPattern = %-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
- ### Errors to a separate file ###
- log4j.appender.errorFile.Encoding = UTF-8
- log4j.appender.errorFile = org.apache.log4j.DailyRollingFileAppender
- log4j.appender.errorFile.File = D:/RUN_Data/log/dajiangtai_error.log
- log4j.appender.errorFile.Append = true
- log4j.appender.errorFile.Threshold = ERROR
- log4j.appender.errorFile.layout = org.apache.log4j.PatternLayout
- log4j.appender.errorFile.layout.ConversionPattern =%-d{yyyy-MM-dd HH:mm:ss} [ %t:%r ] - [ %p ] %m%n
- # Console appender (used only by the commented-out rootLogger lines above)
- log4j.appender.Console=org.apache.log4j.ConsoleAppender
- log4j.appender.Console.layout=org.apache.log4j.PatternLayout
- log4j.appender.Console.layout.ConversionPattern=%d [%t] %-5p [%c] - %m%n
- log4j.logger.java.sql.ResultSet=INFO
- log4j.logger.org.apache=INFO
- log4j.logger.java.sql.Connection=DEBUG
- log4j.logger.java.sql.Statement=DEBUG
- log4j.logger.java.sql.PreparedStatement=DEBUG
- #log4j.logger.com.dajiangtai.dao=DEBUG,TRACE
- log4j.logger.com.dajiangtai.dao.IFollowDao=DEBUG
Without log4j.properties no log output is produced, and you get these warnings:
- log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
- log4j:WARN Please initialize the log4j system properly.
- log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
6. Configure the Hadoop environment variables
Add the environment variable HADOOP_HOME=D:\hadoop-2.7.3
Append %HADOOP_HOME%\bin to the Path variable.
If the change does not take effect, restart Eclipse; if it still does not, reboot the machine.
If the Hadoop environment variables are missing, you will see the following error:
Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
- 2017-07-08 15:53:03,783 ERROR [org.apache.hadoop.util.Shell] - Failed to locate the winutils binary in the hadoop binary path
- java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
- at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
- at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
- at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
- at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
- at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:610)
- at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
- at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
- at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
- at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
- at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
- at org.apache.hadoop.mapreduce.task.JobContextImpl.<init>(JobContextImpl.java:72)
- at org.apache.hadoop.mapreduce.Job.<init>(Job.java:142)
- at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:185)
- at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:204)
- at WordCount.main(WordCount.java:56)
Tracing through the Hadoop source shows this is a HADOOP_HOME problem: when HADOOP_HOME is null, fullExeName inevitably becomes null\bin\winutils.exe. The fix is simple: set the environment variable.
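If you prefer not to touch system-wide environment variables, Hadoop's Shell class reads the hadoop.home.dir JVM property before falling back to HADOOP_HOME, so a line like the following at the very top of main() also works (the path assumes the extraction directory from step 2 above):
- // Alternative to setting HADOOP_HOME: must run before any Hadoop class loads
- System.setProperty("hadoop.home.dir", "D:\\hadoop-2.7.3");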
7. Download winutils.exe and hadoop.dll and copy them into the %HADOOP_HOME%\bin directory
If winutils.exe is not present there, the job fails with:
java.io.IOException: Could not locate executable D:\hadoop-2.7.3\bin\winutils.exe in the Hadoop binaries.
- 2017-07-08 16:17:13,272 ERROR [org.apache.hadoop.util.Shell] - Failed to locate the winutils binary in the hadoop binary path
- java.io.IOException: Could not locate executable D:\hadoop-2.7.3\bin\winutils.exe in the Hadoop binaries.
- at org.apache.hadoop.util.Shell.getQualifiedBinPath(Shell.java:379)
- at org.apache.hadoop.util.Shell.getWinUtilsPath(Shell.java:394)
- at org.apache.hadoop.util.Shell.<clinit>(Shell.java:387)
- at org.apache.hadoop.util.StringUtils.<clinit>(StringUtils.java:80)
- at org.apache.hadoop.security.SecurityUtil.getAuthenticationMethod(SecurityUtil.java:610)
- at org.apache.hadoop.security.UserGroupInformation.initialize(UserGroupInformation.java:273)
- at org.apache.hadoop.security.UserGroupInformation.ensureInitialized(UserGroupInformation.java:261)
- at org.apache.hadoop.security.UserGroupInformation.loginUserFromSubject(UserGroupInformation.java:791)
- at org.apache.hadoop.security.UserGroupInformation.getLoginUser(UserGroupInformation.java:761)
- at org.apache.hadoop.security.UserGroupInformation.getCurrentUser(UserGroupInformation.java:634)
- at org.apache.hadoop.mapreduce.task.JobContextImpl.<init>(JobContextImpl.java:72)
- at org.apache.hadoop.mapreduce.Job.<init>(Job.java:142)
- at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:185)
- at org.apache.hadoop.mapreduce.Job.getInstance(Job.java:204)
- at WordCount.main(WordCount.java:56)
If hadoop.dll is missing, the following warning appears:
- 2017-07-08 16:34:27,170 WARN [org.apache.hadoop.util.NativeCodeLoader] - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
8. Right-click WordCount.java -> Run As -> Run on Hadoop
Run it, and the job's word-count results are written to /output/wordcount on HDFS.
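Assuming the words.txt shown earlier, reading the result back should print something like this (part-r-00000 is the standard name of the first reducer's output file):
- bin/hadoop fs -cat /output/wordcount/part-r-00000
- BigData	1
- Flume	1
- Hadoop	1
- Hello	5
- Kafka	1
- Spark	1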
With that, the environment is fully set up. Nice!