Compiling and Running the Hadoop WordCount Example Program
First, make sure Hadoop is installed correctly and running.
Copy WordCount.java out of the Hadoop source tree:
$ cp ./src/examples/org/apache/hadoop/examples/WordCount.java /home/hadoop/
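For reference, the WordCount example shipped with Hadoop 0.20 looks roughly like the sketch below; the file you just copied is authoritative, and the comments here are added only for explanation.

package org.apache.hadoop.examples;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class WordCount {

  // Mapper: split each input line into tokens and emit (word, 1) for each token.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also used as combiner): sum the counts collected for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
      System.err.println("Usage: wordcount <in> <out>");
      System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);   // combine partial counts on the map side
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1])); // HDFS output directory (must not exist)
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

Because IntSumReducer is also registered as the combiner, the job counters shown later include Combine input/output records: partial sums are computed on the map side before the shuffle.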
Create a folder in the current directory to hold the compiled WordCount class files:
$ mkdir class
Compile WordCount.java against the Hadoop core jar and commons-cli (adjust the paths and version numbers to match your installation):
$ javac -classpath /usr/local/hadoop/hadoop-core-0.20.203.0.jar:/usr/local/hadoop/lib/commons-cli-1.2.jar WordCount.java -d class
After compilation, an org directory appears under the class folder:
$ ls class
org
Package the compiled classes into a jar:
$ cd class
$ jar cvf WordCount.jar *
added manifest
adding: org/(in = 0) (out= 0)(stored 0%)
adding: org/apache/(in = 0) (out= 0)(stored 0%)
adding: org/apache/hadoop/(in = 0) (out= 0)(stored 0%)
adding: org/apache/hadoop/examples/(in = 0) (out= 0)(stored 0%)
adding: org/apache/hadoop/examples/WordCount$TokenizerMapper.class(in = 1790) (out= 765)(deflated 57%)
adding: org/apache/hadoop/examples/WordCount$IntSumReducer.class(in = 1793) (out= 746)(deflated 58%)
adding: org/apache/hadoop/examples/WordCount.class(in = 1911) (out= 996)(deflated 47%)
This completes the compilation of the Java source.
Next, prepare a test file and start Hadoop.
Because the input paths given to a Hadoop job must refer to files in HDFS, the test file has to be copied from the local file system into HDFS first.
$ hadoop fs -mkdir input
$ hadoop fs -ls
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2014-03-26 10:39 /user/hadoop/input
$ hadoop fs -put file input
$ hadoop fs -ls input
Found 1 items
-rw-r--r-- 2 hadoop supergroup 75 2014-03-26 10:40 /user/hadoop/input/file
Run the program:
$ cd class
$ ls
org WordCount.jar
$ hadoop jar WordCount.jar org.apache.hadoop.examples.WordCount input output
14/03/26 10:57:39 INFO input.FileInputFormat: Total input paths to process : 1
14/03/26 10:57:40 INFO mapred.JobClient: Running job: job_201403261015_0001
14/03/26 10:57:41 INFO mapred.JobClient: map 0% reduce 0%
14/03/26 10:57:54 INFO mapred.JobClient: map 100% reduce 0%
14/03/26 10:58:06 INFO mapred.JobClient: map 100% reduce 100%
14/03/26 10:58:11 INFO mapred.JobClient: Job complete: job_201403261015_0001
14/03/26 10:58:11 INFO mapred.JobClient: Counters: 25
14/03/26 10:58:11 INFO mapred.JobClient: Job Counters
14/03/26 10:58:11 INFO mapred.JobClient: Launched reduce tasks=1
14/03/26 10:58:11 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=12321
14/03/26 10:58:11 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/26 10:58:11 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/03/26 10:58:11 INFO mapred.JobClient: Launched map tasks=1
14/03/26 10:58:11 INFO mapred.JobClient: Data-local map tasks=1
14/03/26 10:58:11 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10303
14/03/26 10:58:11 INFO mapred.JobClient: File Output Format Counters
14/03/26 10:58:11 INFO mapred.JobClient: Bytes Written=51
14/03/26 10:58:11 INFO mapred.JobClient: FileSystemCounters
14/03/26 10:58:11 INFO mapred.JobClient: FILE_BYTES_READ=85
14/03/26 10:58:11 INFO mapred.JobClient: HDFS_BYTES_READ=184
14/03/26 10:58:11 INFO mapred.JobClient: FILE_BYTES_WRITTEN=42541
14/03/26 10:58:11 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=51
14/03/26 10:58:11 INFO mapred.JobClient: File Input Format Counters
14/03/26 10:58:11 INFO mapred.JobClient: Bytes Read=75
14/03/26 10:58:11 INFO mapred.JobClient: Map-Reduce Framework
14/03/26 10:58:11 INFO mapred.JobClient: Reduce input groups=7
14/03/26 10:58:11 INFO mapred.JobClient: Map output materialized bytes=85
14/03/26 10:58:11 INFO mapred.JobClient: Combine output records=7
14/03/26 10:58:11 INFO mapred.JobClient: Map input records=1
14/03/26 10:58:11 INFO mapred.JobClient: Reduce shuffle bytes=0
14/03/26 10:58:11 INFO mapred.JobClient: Reduce output records=7
14/03/26 10:58:11 INFO mapred.JobClient: Spilled Records=14
14/03/26 10:58:11 INFO mapred.JobClient: Map output bytes=131
14/03/26 10:58:11 INFO mapred.JobClient: Combine input records=14
14/03/26 10:58:11 INFO mapred.JobClient: Map output records=14
14/03/26 10:58:11 INFO mapred.JobClient: SPLIT_RAW_BYTES=109
14/03/26 10:58:11 INFO mapred.JobClient: Reduce input records=7
Check the results:
$ hadoop fs -ls
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2014-03-26 10:40 /user/hadoop/input
drwxr-xr-x - hadoop supergroup 0 2014-03-26 10:58 /user/hadoop/output
An output directory has appeared in HDFS. List its contents:
$ hadoop fs -ls output
Found 3 items
-rw-r--r-- 2 hadoop supergroup 0 2014-03-26 11:04 /user/hadoop/output/_SUCCESS
drwxr-xr-x - hadoop supergroup 0 2014-03-26 11:04 /user/hadoop/output/_logs
-rw-r--r-- 2 hadoop supergroup 65 2014-03-26 11:04 /user/hadoop/output/part-r-00000
View the word-count output:
$ hadoop fs -cat output/part-r-00000
Bye 3
Hello 3
Word 1
World 3
bye 1
hello 2
world 1
At this point the WordCount example run on Hadoop is finished.
To run the job again, the output directory must be deleted first; otherwise the job aborts with an exception like the following:
14/03/26 11:41:30 INFO mapred.JobClient: Cleaning up the staging area hdfs://localhost:9000/tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201403261015_0003
Exception in thread "main" org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory output already exists
at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:134)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:830)
at org.apache.hadoop.mapred.JobClient$2.run(JobClient.java:791)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1059)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:791)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:465)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:494)
at org.apache.hadoop.examples.WordCount.main(WordCount.java:67)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at org.apache.hadoop.util.RunJar.main(RunJar.java:156)
Delete the output directory as follows:
$ hadoop fs -rmr output
Deleted hdfs://localhost:9000/user/hadoop/output
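If you would rather handle this from the driver code than from the shell, the output directory can also be removed through the HDFS API before the job is submitted. The helper below is a minimal sketch — the class and method names are hypothetical, not part of the Hadoop examples — built on the standard org.apache.hadoop.fs.FileSystem calls; it does the same thing as hadoop fs -rmr output.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Hypothetical helper: remove a job's previous output directory so the job
// can be resubmitted without hitting FileAlreadyExistsException.
public class OutputCleaner {
  public static void deleteIfExists(Configuration conf, String dir) throws IOException {
    Path outputPath = new Path(dir);
    FileSystem fs = FileSystem.get(conf);
    if (fs.exists(outputPath)) {
      fs.delete(outputPath, true);   // true = delete recursively, like "hadoop fs -rmr"
    }
  }
}

Calling OutputCleaner.deleteIfExists(conf, otherArgs[1]) in WordCount.main just before FileOutputFormat.setOutputPath would let repeated runs overwrite the previous output; whether silently discarding old results is acceptable depends on your use case.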
You can also run the precompiled examples jar that ships with Hadoop directly (in the run below the input directory contained two files, which is why the counters differ from the earlier run):
$ hadoop jar /usr/local/hadoop/hadoop-examples-0.20.203.0.jar wordcount input output
14/03/28 17:02:33 INFO input.FileInputFormat: Total input paths to process : 2
14/03/28 17:02:33 INFO mapred.JobClient: Running job: job_201403281439_0004
14/03/28 17:02:34 INFO mapred.JobClient: map 0% reduce 0%
14/03/28 17:02:49 INFO mapred.JobClient: map 100% reduce 0%
14/03/28 17:03:01 INFO mapred.JobClient: map 100% reduce 100%
14/03/28 17:03:06 INFO mapred.JobClient: Job complete: job_201403281439_0004
14/03/28 17:03:06 INFO mapred.JobClient: Counters: 25
14/03/28 17:03:06 INFO mapred.JobClient: Job Counters
14/03/28 17:03:06 INFO mapred.JobClient: Launched reduce tasks=1
14/03/28 17:03:06 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=17219
14/03/28 17:03:06 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
14/03/28 17:03:06 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
14/03/28 17:03:06 INFO mapred.JobClient: Launched map tasks=2
14/03/28 17:03:06 INFO mapred.JobClient: Data-local map tasks=2
14/03/28 17:03:06 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=10398
14/03/28 17:03:06 INFO mapred.JobClient: File Output Format Counters
14/03/28 17:03:06 INFO mapred.JobClient: Bytes Written=65
14/03/28 17:03:06 INFO mapred.JobClient: FileSystemCounters
14/03/28 17:03:06 INFO mapred.JobClient: FILE_BYTES_READ=131
14/03/28 17:03:06 INFO mapred.JobClient: HDFS_BYTES_READ=343
14/03/28 17:03:06 INFO mapred.JobClient: FILE_BYTES_WRITTEN=63840
14/03/28 17:03:06 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=65
14/03/28 17:03:06 INFO mapred.JobClient: File Input Format Counters
14/03/28 17:03:06 INFO mapred.JobClient: Bytes Read=124
14/03/28 17:03:06 INFO mapred.JobClient: Map-Reduce Framework
14/03/28 17:03:06 INFO mapred.JobClient: Reduce input groups=9
14/03/28 17:03:06 INFO mapred.JobClient: Map output materialized bytes=137
14/03/28 17:03:06 INFO mapred.JobClient: Combine output records=11
14/03/28 17:03:06 INFO mapred.JobClient: Map input records=2
14/03/28 17:03:06 INFO mapred.JobClient: Reduce shuffle bytes=85
14/03/28 17:03:06 INFO mapred.JobClient: Reduce output records=9
14/03/28 17:03:06 INFO mapred.JobClient: Spilled Records=22
14/03/28 17:03:06 INFO mapred.JobClient: Map output bytes=216
14/03/28 17:03:06 INFO mapred.JobClient: Combine input records=23
14/03/28 17:03:06 INFO mapred.JobClient: Map output records=23
14/03/28 17:03:06 INFO mapred.JobClient: SPLIT_RAW_BYTES=219
14/03/28 17:03:06 INFO mapred.JobClient: Reduce input records=11