1. Create a folder named input under /usr/local, the directory where Hadoop is installed

root@ubuntu:/usr/local# mkdir input

2. In the input folder, create two text files, file1.txt and file2.txt. file1.txt contains "hello word"; file2.txt contains "hello hadoop" and "hello mapreduce" on two separate lines.

root@ubuntu:/usr/local# cd input
root@ubuntu:/usr/local/input# echo "hello word" > file1.txt
root@ubuntu:/usr/local/input# echo "hello hadoop" > file2.txt
root@ubuntu:/usr/local/input# echo "hello mapreduce" >> file2.txt   (note the >>: a single > would overwrite the "hello hadoop" line written above; alternatively, edit file2.txt with gedit)
root@ubuntu:/usr/local/input# ls
file1.txt file2.txt

To display the file contents:

root@ubuntu:/usr/local/input# more file1.txt
hello word
root@ubuntu:/usr/local/input# more file2.txt
hello hadoop
hello mapreduce
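As an alternative to two echo commands (or gedit), printf can write both lines of file2.txt in one step; a minimal sketch, runnable in any directory:

```shell
# printf expands \n, so a single command writes both lines;
# '>' truncates the file first, then both lines are written.
printf 'hello hadoop\nhello mapreduce\n' > file2.txt
cat file2.txt
```

cat should then show "hello hadoop" on the first line and "hello mapreduce" on the second.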

3. Create an input folder wc_input on HDFS, then upload the two text files from the local input folder into wc_input on the cluster

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -mkdir wc_input

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -put /usr/local/input/file* wc_input

List the files in wc_input:

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -ls wc_input
Found 2 items
-rw-r--r-- 1 root supergroup 11 2014-03-13 01:19 /user/root/wc_input/file1.txt
-rw-r--r-- 1 root supergroup 29 2014-03-13 01:19 /user/root/wc_input/file2.txt

4. Start all Hadoop daemons and check that they are running:

root@ubuntu:/# ssh localhost   (verifies passwordless SSH login to localhost; if it works you will see output like the following, otherwise set it up first — see http://blog.csdn.net/joe_007/article/details/8298814 for the steps)

Welcome to Ubuntu 12.04.3 LTS (GNU/Linux 3.2.0-24-generic-pae i686)

* Documentation: https://help.ubuntu.com/

Last login: Mon Mar 3 04:44:23 2014 from localhost

root@ubuntu:~# exit
logout
Connection to localhost closed.

root@ubuntu:/usr/local/hadoop-1.2.1/bin# ./start-all.sh

starting namenode, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-namenode-ubuntu.out
localhost: starting datanode, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-datanode-ubuntu.out
localhost: starting secondarynamenode, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-secondarynamenode-ubuntu.out
starting jobtracker, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-jobtracker-ubuntu.out
localhost: starting tasktracker, logging to /usr/local/hadoop-1.2.1/libexec/../logs/hadoop-root-tasktracker-ubuntu.out

root@ubuntu:/usr/local/hadoop-1.2.1/bin# jps
7847 SecondaryNameNode
4196
7634 DataNode
7423 NameNode
8319 Jps
7938 JobTracker
8157 TaskTracker

Run the wordcount example jar that ships with Hadoop (note: before re-running the job, you must delete the previous run's output folder, e.g. bin/hadoop fs -rmr wc_output, or the job will fail because the output path already exists)

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop jar ./hadoop-examples-1.2.1.jar wordcount wc_input wc_output
14/03/13 01:48:40 INFO input.FileInputFormat: Total input paths to process : 2
14/03/13 01:48:40 INFO util.NativeCodeLoader: Loaded the native-hadoop library
14/03/13 01:48:40 WARN snappy.LoadSnappy: Snappy native library not loaded
14/03/13 01:48:42 INFO mapred.JobClient: Running job: job_201403130031_0001
14/03/13 01:48:44 INFO mapred.JobClient: map 0% reduce 0%
14/03/13 01:52:47 INFO mapred.JobClient: map 50% reduce 0%
14/03/13 01:53:50 INFO mapred.JobClient: map 100% reduce 0%
14/03/13 01:54:14 INFO mapred.JobClient: map 100% reduce 100%

... ...

5. Inspect the output folder

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -ls wc_output
Found 3 items
-rw-r--r-- 1 root supergroup 0 2014-03-13 01:54 /user/root/wc_output/_SUCCESS
drwxr-xr-x - root supergroup 0 2014-03-13 01:48 /user/root/wc_output/_logs
-rw-r--r-- 1 root supergroup 36 2014-03-13 01:54 /user/root/wc_output/part-r-00000   (the actual results are in part-r-00000)

6. View the contents of the output file part-r-00000

root@ubuntu:/usr/local/hadoop-1.2.1# bin/hadoop fs -cat /user/root/wc_output/part-r-00000
hadoop 1
hello 3
mapreduce 1
word 1
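The same counts can be reproduced locally with standard Unix tools as a quick sanity check on the job's result; a sketch, assuming file1.txt and file2.txt from step 2 are in the current directory:

```shell
# tr splits each line into one word per line (the "map" step),
# sort groups identical words together, and uniq -c sums each
# group (the "reduce" step) -- the same shape as wordcount.
cat file1.txt file2.txt | tr ' ' '\n' | sort | uniq -c
```

This prints hadoop 1, hello 3, mapreduce 1, and word 1 (with uniq -c the count comes first), matching part-r-00000 up to formatting.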

7. Stop all daemons

root@ubuntu:/usr/local/hadoop-1.2.1/bin# ./stop-all.sh
stopping jobtracker
localhost: stopping tasktracker
stopping namenode
localhost: stopping datanode
localhost: stopping secondarynamenode
