Running a Hadoop Program in Eclipse
1. Download hadoop-eclipse-plugin-1.2.1.jar and copy it into the eclipse/plugins directory.
2. Open the Map/Reduce perspective
In Eclipse, go to Window -> Open Perspective -> Other and select Map/Reduce.
3. Switch to the Map/Reduce Locations tab and create a new location.
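The exact fields in the New Hadoop Location dialog depend on the plugin version. For a single-node setup like the one in this post, the two endpoints would typically look something like the following; the DFS port 9000 matches the hdfs://localhost:9000 URIs in the console output below, while 9001 for the JobTracker is only the common default and is an assumption here:
Map/Reduce Master: localhost : 9001   (JobTracker address, mapred.job.tracker)
DFS Master:        localhost : 9000   (NameNode address, fs.default.name)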
4. In the Project Explorer you can now browse the file system of the location just defined.
5. Prepare the test data and upload it to HDFS.
liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -mkdir in
liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -copyFromLocal maxTemp.txt in
liaoliuqingdeMacBook-Air:Downloads liaoliuqing$ hadoop fs -ls in
Found 1 items
-rw-r--r-- 1 liaoliuqing supergroup 953 2014-12-14 09:47 /user/liaoliuqing/in/maxTemp.txt
The contents of maxTemp.txt are as follows:
123456798676231190101234567986762311901012345679867623119010123456798676231190101234561+00121534567890356
123456798676231190101234567986762311901012345679867623119010123456798676231190101234562+01122934567890456
123456798676231190201234567986762311901012345679867623119010123456798676231190101234562+02120234567893456
123456798676231190401234567986762311901012345679867623119010123456798676231190101234561+00321234567803456
123456798676231190101234567986762311902012345679867623119010123456798676231190101234561+00429234567903456
123456798676231190501234567986762311902012345679867623119010123456798676231190101234561+01021134568903456
123456798676231190201234567986762311902012345679867623119010123456798676231190101234561+01124234578903456
123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+04121234678903456
123456798676231190301234567986762311905012345679867623119010123456798676231190101234561+00821235678903456
6. Prepare the MapReduce program
The program is available at http://blog.csdn.net/jediael_lu/article/details/37596469
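The linked post has the full source. For reference, below is a minimal sketch of a max-temperature job against the new org.apache.hadoop.mapreduce API in hadoop-core 1.2.1 (the API the console log below uses). The column offsets — year at characters 15-19, signed temperature at 87-92, quality flag at 92 — are assumptions based on the NCDC-style records above and happen to be consistent with the counters in the log (8 map output records, 5 reduce groups); the actual program at the URL may differ in detail.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MaxTemperature {

    // Mapper: extract (year, temperature) from each fixed-width record.
    // Column offsets are assumed from the sample data above.
    public static class MaxTemperatureMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final int MISSING = 9999;

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String line = value.toString();
            String year = line.substring(15, 19);            // assumed year columns
            int airTemperature;
            if (line.charAt(87) == '+') {                    // signed temperature field
                airTemperature = Integer.parseInt(line.substring(88, 92));
            } else {
                airTemperature = Integer.parseInt(line.substring(87, 92));
            }
            String quality = line.substring(92, 93);          // assumed quality flag
            if (airTemperature != MISSING && quality.matches("[01459]")) {
                context.write(new Text(year), new IntWritable(airTemperature));
            }
        }
    }

    // Reducer: keep the maximum temperature seen for each year.
    public static class MaxTemperatureReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int maxValue = Integer.MIN_VALUE;
            for (IntWritable value : values) {
                maxValue = Math.max(maxValue, value.get());
            }
            context.write(key, new IntWritable(maxValue));
        }
    }

    // Driver: args[0] is the input directory, args[1] the output directory.
    public static void main(String[] args) throws Exception {
        Job job = new Job(new Configuration(), "Max temperature");
        job.setJarByClass(MaxTemperature.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MaxTemperatureMapper.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}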
7. Run the program
MaxTemperature.java -> Run As -> Run Configurations
Enter the input and output directories in the Arguments tab, then run.
This runs the job against HDFS. The program can also be run against the local file system, which is a convenient way to debug it. For example, set the arguments to:
/Users/liaoliuqing/in /Users/liaoliuqing/out
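For the HDFS run whose console output follows, the arguments point at the HDFS paths instead. The explicit-URI form below is the safer assumption; the short relative form (in out) only works if the Hadoop configuration with fs.default.name is visible on the Eclipse run's classpath:
hdfs://localhost:9000/user/liaoliuqing/in hdfs://localhost:9000/user/liaoliuqing/out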
8. Output in the Eclipse console:
14/12/14 10:52:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/12/14 10:52:05 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
14/12/14 10:52:05 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/12/14 10:52:05 INFO input.FileInputFormat: Total input paths to process : 1
14/12/14 10:52:05 WARN snappy.LoadSnappy: Snappy native library not loaded
14/12/14 10:52:06 INFO mapred.JobClient: Running job: job_local1815770300_0001
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Waiting for map tasks
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Starting task: attempt_local1815770300_0001_m_000000_0
14/12/14 10:52:06 INFO mapred.Task: Using ResourceCalculatorPlugin : null
14/12/14 10:52:06 INFO mapred.MapTask: Processing split: hdfs://localhost:9000/user/liaoliuqing/in/maxTemp.txt:0+953
14/12/14 10:52:06 INFO mapred.MapTask: io.sort.mb = 100
14/12/14 10:52:06 INFO mapred.MapTask: data buffer = 79691776/99614720
14/12/14 10:52:06 INFO mapred.MapTask: record buffer = 262144/327680
14/12/14 10:52:06 INFO mapred.MapTask: Starting flush of map output
14/12/14 10:52:06 INFO mapred.MapTask: Finished spill 0
14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_m_000000_0 is done. And is in the process of commiting
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_m_000000_0' done.
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Finishing task: attempt_local1815770300_0001_m_000000_0
14/12/14 10:52:06 INFO mapred.LocalJobRunner: Map task executor complete.
14/12/14 10:52:06 INFO mapred.Task: Using ResourceCalculatorPlugin : null
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Merger: Merging 1 sorted segments
14/12/14 10:52:06 INFO mapred.Merger: Down to the last merge-pass, with 1 segments left of total size: 90 bytes
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Task: Task:attempt_local1815770300_0001_r_000000_0 is done. And is in the process of commiting
14/12/14 10:52:06 INFO mapred.LocalJobRunner:
14/12/14 10:52:06 INFO mapred.Task: Task attempt_local1815770300_0001_r_000000_0 is allowed to commit now
14/12/14 10:52:06 INFO output.FileOutputCommitter: Saved output of task 'attempt_local1815770300_0001_r_000000_0' to hdfs://localhost:9000/user/liaoliuqing/out
14/12/14 10:52:06 INFO mapred.LocalJobRunner: reduce > reduce
14/12/14 10:52:06 INFO mapred.Task: Task 'attempt_local1815770300_0001_r_000000_0' done.
14/12/14 10:52:07 INFO mapred.JobClient: map 100% reduce 100%
14/12/14 10:52:07 INFO mapred.JobClient: Job complete: job_local1815770300_0001
14/12/14 10:52:07 INFO mapred.JobClient: Counters: 19
14/12/14 10:52:07 INFO mapred.JobClient: File Output Format Counters
14/12/14 10:52:07 INFO mapred.JobClient: Bytes Written=43
14/12/14 10:52:07 INFO mapred.JobClient: File Input Format Counters
14/12/14 10:52:07 INFO mapred.JobClient: Bytes Read=953
14/12/14 10:52:07 INFO mapred.JobClient: FileSystemCounters
14/12/14 10:52:07 INFO mapred.JobClient: FILE_BYTES_READ=450
14/12/14 10:52:07 INFO mapred.JobClient: HDFS_BYTES_READ=1906
14/12/14 10:52:07 INFO mapred.JobClient: FILE_BYTES_WRITTEN=135618
14/12/14 10:52:07 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=43
14/12/14 10:52:07 INFO mapred.JobClient: Map-Reduce Framework
14/12/14 10:52:07 INFO mapred.JobClient: Reduce input groups=5
14/12/14 10:52:07 INFO mapred.JobClient: Map output materialized bytes=94
14/12/14 10:52:07 INFO mapred.JobClient: Combine output records=0
14/12/14 10:52:07 INFO mapred.JobClient: Map input records=9
14/12/14 10:52:07 INFO mapred.JobClient: Reduce shuffle bytes=0
14/12/14 10:52:07 INFO mapred.JobClient: Reduce output records=5
14/12/14 10:52:07 INFO mapred.JobClient: Spilled Records=16
14/12/14 10:52:07 INFO mapred.JobClient: Map output bytes=72
14/12/14 10:52:07 INFO mapred.JobClient: Total committed heap usage (bytes)=329252864
14/12/14 10:52:07 INFO mapred.JobClient: SPLIT_RAW_BYTES=118
14/12/14 10:52:07 INFO mapred.JobClient: Map output records=8
14/12/14 10:52:07 INFO mapred.JobClient: Combine input records=0
14/12/14 10:52:07 INFO mapred.JobClient: Reduce input records=8
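To check the result, print the reducer output file (part-r-00000 is the usual name when a single reducer runs):

hadoop fs -cat out/part-r-00000

With the parsing assumed in the sketch above, this should print five year / maximum-temperature pairs, one per year in the sample data, consistent with Reduce output records=5 and HDFS_BYTES_WRITTEN=43 in the counters.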