1. Download hadoop-eclipse-plugin-1.2.1.jar and copy it into eclipse/plugins.

2. Open the Map/Reduce perspective

In Eclipse, go to Window -> Open Perspective -> Other and select Map/Reduce.

3. On the Map/Reduce Locations tab, create a new Location.

4. In the Project Explorer, you can now browse the file system of the location you just defined.

5. Prepare the test data and upload it to HDFS.

liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -mkdir in

liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -copyFromLocal maxTemp.txt in

liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -ls in

Found 1 items

-rw-r--r--   1 liaoliuqing supergroup        953 2014-12-14 09:47 /user/liaoliuqing/in/maxTemp.txt

The contents of maxTemp.txt are as follows:

123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356

123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456

123456798676231190201234567986762311901012345679867623119010123456798676231190101234562+02120234567893456

123456798676231190401234567986762311901012345679867623119010123456798676231190101234561+00321234567803456

123456798676231190101234567986762311902012345679867623119010123456798676231190101234561+00429234567903456

123456798676231190501234567986762311902012345679867623119010123456798676231190101234561+01021134568903456

123456798676231190201234567986762311902012345679867623119010123456798676231190101234561+01124234578903456

123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+04121234678903456

123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+00821235678903456
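Each record above is fixed-width: judging from the sample lines, the year sits at character offsets 15-19 and the signed air temperature (in tenths of a degree) at offsets 87-92, as in the classic NCDC max-temperature format. A plain-Java sketch of that parsing (offsets inferred from the sample data; the actual mapper is in the program linked in step 6):

```java
// Plain-Java sketch of the maxTemp.txt record layout (offsets inferred
// from the sample lines above, not taken from the linked program).
// Temperatures are stored in tenths of a degree.
public class MaxTempParser {

    // Year: characters 15-19 of the fixed-width record.
    static int parseYear(String line) {
        return Integer.parseInt(line.substring(15, 19));
    }

    // Signed temperature: characters 87-92; strip the leading '+'
    // on positive records before parsing.
    static int parseTemp(String line) {
        return line.charAt(87) == '+'
                ? Integer.parseInt(line.substring(88, 92))
                : Integer.parseInt(line.substring(87, 92));
    }

    public static void main(String[] args) {
        // First sample record from maxTemp.txt, split for readability.
        String line = "12345679867623119010123456798676231190101234567986"
                + "7623119010123456798676231190101234561+00121534567890356";
        System.out.println(parseYear(line) + " " + parseTemp(line)); // 1901 12
    }
}
```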

6. Prepare the MapReduce program

The program is available at http://blog.csdn.net/jediael_lu/article/details/37596469
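The job's logic can be summarized without Hadoop: the map phase emits (year, temperature) pairs and the reduce phase keeps the maximum temperature per year. A plain-Java sketch of that reduction (illustrative only, not the linked program):

```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Illustrative sketch, not the linked MaxTemperature program: given the
// (year, temperature) pairs the map phase would emit, keep the maximum
// per year, which is what the reduce phase computes.
public class MaxPerYear {

    static Map<String, Integer> maxPerYear(List<Map.Entry<String, Integer>> pairs) {
        Map<String, Integer> max = new TreeMap<>(); // sorted by year, like reducer output
        for (Map.Entry<String, Integer> p : pairs)
            max.merge(p.getKey(), p.getValue(), Math::max);
        return max;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> pairs = List.of(
                Map.entry("1901", 12), Map.entry("1901", 42),
                Map.entry("1902", 212), Map.entry("1902", 112));
        System.out.println(maxPerYear(pairs)); // {1901=42, 1902=212}
    }
}
```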

7. Run the program

Select MaxTemperature.java -> Run As -> Run Configurations.

Under Arguments, fill in the input and output directories, then run.

Here the program runs against HDFS. It can also run against the local file system instead, which is a convenient way to debug it. For example, pass the arguments:

/Users/liaoliuqing/in   /Users/liaoliuqing/out

8. The following is the output in the Eclipse console:

14/12/14 10:52:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

14/12/14 10:52:05 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.

14/12/14 10:52:05 WARN mapred.JobClient: No job jar file set.  User classes may not be found. See JobConf(Class) or JobConf#setJar(String).

14/12/14 10:52:05 INFO input.FileInputFormat: Total input paths to process : 1

14/12/14 10:52:05 WARN snappy.LoadSnappy: Snappy native library not loaded

14/12/14 10:52:06 INFO mapred.JobClient: Running job: job_local1815770300_0001

14/12/14 10:52:06 INFO mapred.LocalJobRunner: Waiting for map tasks

14/12/14 10:52:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1815770300_0001_m_000000_0

14/12/14 10:52:06 INFO mapred.Task:  Using ResourceCalculatorPlugin : null

14/12/14 10:52:06 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/liaoliuqing/in/maxTemp.txt:0+953

14/12/14 10:52:06 INFO mapred.MapTask: io.sort.mb = 100

14/12/14 10:52:06 INFO mapred.MapTask: data buffer = 79691776/99614720

14/12/14 10:52:06 INFO mapred.MapTask: record buffer = 262144/327680

14/12/14 10:52:06 INFO mapred.MapTask: Starting flush of map output

14/12/14 10:52:06 INFO mapred.MapTask: Finished spill 0

14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_m_000000_0 is done. And is in the process of commiting

14/12/14 10:52:06 INFO mapred.LocalJobRunner:

14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_m_000000_0' done.

14/12/14 10:52:06 INFO mapred.LocalJobRunner: Finishing task: attempt_local1815770300_0001_m_000000_0

14/12/14 10:52:06 INFO mapred.LocalJobRunner: Map task executor complete.

14/12/14 10:52:06 INFO mapred.Task:  Using ResourceCalculatorPlugin : null

14/12/14 10:52:06 INFO mapred.LocalJobRunner:

14/12/14 10:52:06 INFO mapred.Merger: Merging 1 sorted segments

14/12/14 10:52:06 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 90 bytes

14/12/14 10:52:06 INFO mapred.LocalJobRunner:

14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_r_000000_0 is done. And is in the process of commiting

14/12/14 10:52:06 INFO mapred.LocalJobRunner:

14/12/14 10:52:06 INFO mapred.Task: Task attempt_local1815770300_0001_r_000000_0 is allowed to commit now

14/12/14 10:52:06 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1815770300_0001_r_000000_0' to hdfs://localhost:9000/user/liaoliuqing/out

14/12/14 10:52:06 INFO mapred.LocalJobRunner: reduce > reduce

14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_r_000000_0' done.

14/12/14 10:52:07 INFO mapred.JobClient:  map 100% reduce 100%

14/12/14 10:52:07 INFO mapred.JobClient: Job complete: job_local1815770300_0001

14/12/14 10:52:07 INFO mapred.JobClient: Counters: 19

14/12/14 10:52:07 INFO mapred.JobClient:   File Output Format Counters

14/12/14 10:52:07 INFO mapred.JobClient:     Bytes Written=43

14/12/14 10:52:07 INFO mapred.JobClient:   File Input Format Counters

14/12/14 10:52:07 INFO mapred.JobClient:     Bytes Read=953

14/12/14 10:52:07 INFO mapred.JobClient:   FileSystemCounters

14/12/14 10:52:07 INFO mapred.JobClient:     FILE_BYTES_READ=450

14/12/14 10:52:07 INFO mapred.JobClient:     HDFS_BYTES_READ=1906

14/12/14 10:52:07 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=135618

14/12/14 10:52:07 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=43

14/12/14 10:52:07 INFO mapred.JobClient:   Map-Reduce Framework

14/12/14 10:52:07 INFO mapred.JobClient:     Reduce input groups=5

14/12/14 10:52:07 INFO mapred.JobClient:     Map output materialized bytes=94

14/12/14 10:52:07 INFO mapred.JobClient:     Combine output records=0

14/12/14 10:52:07 INFO mapred.JobClient:     Map input records=9

14/12/14 10:52:07 INFO mapred.JobClient:     Reduce shuffle bytes=0

14/12/14 10:52:07 INFO mapred.JobClient:     Reduce output records=5

14/12/14 10:52:07 INFO mapred.JobClient:     Spilled Records=16

14/12/14 10:52:07 INFO mapred.JobClient:     Map output bytes=72

14/12/14 10:52:07 INFO mapred.JobClient:     Total committed heap usage (bytes)=329252864

14/12/14 10:52:07 INFO mapred.JobClient:     SPLIT_RAW_BYTES=118

14/12/14 10:52:07 INFO mapred.JobClient:     Map output records=8

14/12/14 10:52:07 INFO mapred.JobClient:     Combine input records=0

14/12/14 10:52:07 INFO mapred.JobClient:     Reduce input records=8
