In the last post we installed Hadoop 2.2.0 on Ubuntu. Now we'll see how to launch an example MapReduce job on Hadoop.

In the Hadoop directory (which you should find at /opt/hadoop-2.2.0) there is a JAR containing some examples: the exact path is $HADOOP_COMMON_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar.
This JAR contains several example MapReduce programs. We'll launch the WordCount program, which is the equivalent of "Hello, world!" for MapReduce: it simply counts the occurrences of every single word in the file given as input.
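
Before running it, it's worth a look at what the program actually does. The bundled example is essentially the classic WordCount from the Hadoop MapReduce tutorial; the following is a minimal sketch of it (not necessarily the exact source shipped in the JAR, but the same structure): the mapper emits a (word, 1) pair for every token of every input line, a combiner pre-aggregates those pairs on the map side, and the reducer sums the counts for each word.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Map phase: for every token of the input line, emit the pair (word, 1)
    public static class TokenizerMapper
            extends Mapper<Object, Text, Text, IntWritable> {

        private final static IntWritable one = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, one);
            }
        }
    }

    // Reduce phase: sum all the 1s emitted for the same word
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {

        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // local pre-aggregation on the map side
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // input directory on HDFS
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory on HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}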
To run this example we need a bit of preparation. We assume the HDFS service is running; if we haven't created a home directory for our user yet, we have to do it now (assuming the Hadoop user we're using is mapred):

$ hadoop fs -mkdir -p /user/mapred

When we pass "fs" as the first argument to the hadoop command, we're telling Hadoop to operate on the HDFS filesystem; in this case we used the "-mkdir" switch to create a new directory on HDFS (the -p flag creates parent directories as needed).
Now that our user has a home directory, we can create a directory that we'll use to hold the input file for the MapReduce program:

$ hadoop fs -mkdir inputdir

We can check the result by issuing an "-ls" command on HDFS:

$ hadoop fs -ls
Found 1 items
drwxr-xr-x - mapred mrusers 0 2014-02-11 22:54 inputdir

Now we can choose the file whose words we'll count; in this example I'll use the text of the novella Flatland by Edwin Abbott, which is freely available for download from Project Gutenberg:

$ wget http://www.gutenberg.org/cache/epub/201/pg201.txt

Now we can put this file onto HDFS, more precisely into the inputdir directory we created a moment ago:

$ hadoop fs -put pg201.txt inputdir

The "-put" switch tells Hadoop to take the file from the local filesystem and copy it onto HDFS. We can check that the file is really there:

$ hadoop fs -ls inputdir
Found 1 items
-rw-r--r-- 1 mapred mrusers 227368 2014-02-11 22:59 inputdir/pg201.txt

Now we're ready to execute the MapReduce program. The Hadoop tarball comes with a JAR containing the WordCount example; we can launch Hadoop with the following parameters (the complete command is shown after the list):

  • jar: tells Hadoop we want to execute a MapReduce program contained in a JAR
  • /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar: this is the absolute path and filename of the JAR
  • wordcount: tells Hadoop which of the many examples contained in the JAR to run
  • inputdir: the directory on HDFS in which Hadoop can find the input file(s)
  • outputdir: the directory on HDFS in which Hadoop must write the result of the program; note that this directory must not already exist, or Hadoop will refuse to run the job

$ hadoop jar /opt/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount inputdir outputdir

and the output is:

14/02/11 23:16:19 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
14/02/11 23:16:20 INFO input.FileInputFormat: Total input paths to process : 1
14/02/11 23:16:20 INFO mapreduce.JobSubmitter: number of splits:1
14/02/11 23:16:21 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/02/11 23:16:21 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/02/11 23:16:21 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/02/11 23:16:21 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/02/11 23:16:21 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/02/11 23:16:21 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1392155226604_0001
14/02/11 23:16:22 INFO impl.YarnClientImpl: Submitted application application_1392155226604_0001 to ResourceManager at /0.0.0.0:8032
14/02/11 23:16:23 INFO mapreduce.Job: The url to track the job: http://hadoop-VirtualBox:8088/proxy/application_1392155226604_0001/
14/02/11 23:16:23 INFO mapreduce.Job: Running job: job_1392155226604_0001
14/02/11 23:16:38 INFO mapreduce.Job: Job job_1392155226604_0001 running in uber mode : false
14/02/11 23:16:38 INFO mapreduce.Job: map 0% reduce 0%
14/02/11 23:16:47 INFO mapreduce.Job: map 100% reduce 0%
14/02/11 23:16:57 INFO mapreduce.Job: map 100% reduce 100%
14/02/11 23:16:58 INFO mapreduce.Job: Job job_1392155226604_0001 completed successfully
14/02/11 23:16:58 INFO mapreduce.Job: Counters: 43
File System Counters
        FILE: Number of bytes read=121375
        FILE: Number of bytes written=401139
        FILE: Number of read operations=0
        FILE: Number of large read operations=0
        FILE: Number of write operations=0
        HDFS: Number of bytes read=227485
        HDFS: Number of bytes written=88461
        HDFS: Number of read operations=6
        HDFS: Number of large read operations=0
        HDFS: Number of write operations=2
Job Counters
        Launched map tasks=1
        Launched reduce tasks=1
        Data-local map tasks=1
        Total time spent by all maps in occupied slots (ms)=7693
        Total time spent by all reduces in occupied slots (ms)=7383
Map-Reduce Framework
        Map input records=4239
        Map output records=37680
        Map output bytes=366902
        Map output materialized bytes=121375
        Input split bytes=117
        Combine input records=37680
        Combine output records=8341
        Reduce input groups=8341
        Reduce shuffle bytes=121375
        Reduce input records=8341
        Reduce output records=8341
        Spilled Records=16682
        Shuffled Maps =1
        Failed Shuffles=0
        Merged Map outputs=1
        GC time elapsed (ms)=150
        CPU time spent (ms)=5490
        Physical memory (bytes) snapshot=399077376
        Virtual memory (bytes) snapshot=1674149888
        Total committed heap usage (bytes)=314048512
Shuffle Errors
        BAD_ID=0
        CONNECTION=0
        IO_ERROR=0
        WRONG_LENGTH=0
        WRONG_MAP=0
        WRONG_REDUCE=0
File Input Format Counters
        Bytes Read=227368
File Output Format Counters
        Bytes Written=88461

The last part of the output is a summary of the execution of the MapReduce program; just before it, we can spot the "Job job_1392155226604_0001 completed successfully" line, which tells us the job ran to completion. The counters also give a first glimpse of the result: for instance, "Reduce output records=8341" means the program emitted 8,341 distinct words. As requested, Hadoop wrote the output into the outputdir directory on HDFS; let's see what's inside this directory:

$ hadoop fs -ls outputdir
Found 2 items
-rw-r--r-- 1 mapred mrusers 0 2014-02-11 23:16 outputdir/_SUCCESS
-rw-r--r-- 1 mapred mrusers 88461 2014-02-11 23:16 outputdir/part-r-00000

The presence of the (empty) _SUCCESS file confirms that the job completed successfully; the actual result was written to the part-r-00000 file. We can copy the file to our machine's local filesystem using the "-get" switch:

$ hadoop fs -get outputdir/part-r-00000 .
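
Alternatively, if we just want a quick look without copying the file locally, the "-cat" switch prints it straight from HDFS:

$ hadoop fs -cat outputdir/part-r-00000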

Now we can see the content of the file (this is a small subset of the whole file):

...
leading 2
leagues 1
leaning 1
leap 1
leaped 1
learn 7
learned 1
least 23
least. 1
leave 3
leaves 3
leaving 2
lecture 1
led 4
left 9
...

The wordcount program simply counts the occurrences of every single word and writes each word with its count on its own line.
Well, we've successfully run our first MapReduce job on our Hadoop installation!

from: http://andreaiacono.blogspot.com/2014/02/running-hadoop-example.html
