1. JDK Installation
Download URL:
http://www.oracle.com/technetwork/java/javase/downloads/jdk-6u29-download-513648.html
If you already have the installer locally, connect to the Linux machine with SecureCRT and upload the file with the rz command.

The download yields the file jdk-6u29-linux-i586-rpm.bin. Install it with sh jdk-6u29-linux-i586-rpm.bin and wait for the installation to finish; by default the JDK is installed under /usr/java.

Run vi /etc/profile and add the following lines:
export JAVA_HOME=/usr/java/jdk1.6.0_29
export JAVA_BIN=/usr/java/jdk1.6.0_29/bin
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH

Go to the /usr/bin directory and create symlinks for the java and javac binaries:
cd /usr/bin
ln -s -f /usr/java/jdk1.6.0_29/jre/bin/java
ln -s -f /usr/java/jdk1.6.0_29/bin/javac
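Note that ln -s -f with only a target path creates (or replaces) a link named after the target's basename in the current directory, which is why no link name is given above. A minimal illustration using made-up scratch paths (no JDK required):

```shell
# Scratch demo: /tmp/ln_demo and the fake 'java' script are invented for illustration.
mkdir -p /tmp/ln_demo/bin /tmp/ln_demo/jdk
printf '#!/bin/sh\necho fake-java\n' > /tmp/ln_demo/jdk/java
chmod +x /tmp/ln_demo/jdk/java
cd /tmp/ln_demo/bin
# No explicit link name: ln names the symlink after the target's basename ('java').
ln -s -f /tmp/ln_demo/jdk/java
cd - > /dev/null
```

The -f flag lets the command be re-run safely, replacing any existing link.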
Run java -version on the command line. If the screen prints something like:
java version "1.6.0_29"
Java(TM) SE Runtime Environment (build 1.6.0_29)
Java HotSpot(TM) Client VM (build 1.6.0_29, mixed mode)
then JDK 1.6 has been installed successfully.
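What the profile exports above do can be sketched with plain variable expansion; the snippet below only manipulates strings, so it runs even without the JDK installed:

```shell
JAVA_HOME=/usr/java/jdk1.6.0_29
OLD_PATH="$PATH"
# PATH is appended to, not replaced, so existing commands keep working.
PATH="$PATH:$JAVA_HOME/bin"
# CLASSPATH starts with '.' so classes in the current directory are found.
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
```

Because /etc/profile is read at login, the exports take effect for new login shells (or after sourcing the file).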

2. Hadoop Installation
Download URL: http://www.apache.org/dyn/closer.cgi/hadoop/common/
If you already have the package locally, connect to the Linux machine with SecureCRT and upload the file with the rz command.

The download yields the file hadoop-0.21.0.tar.gz.

Extract it: tar zxvf hadoop-0.21.0.tar.gz
(For reference, to compress: tar zcvf hadoop-0.21.0.tar.gz <directory>)

Run vi /etc/profile and add the following lines (note: shell assignments must not have spaces around '='):
export hadoop_home=/usr/george/dev/install/hadoop-0.21.0
export JAVA_HOME=/usr/java/jdk1.6.0_29
export JAVA_BIN=/usr/java/jdk1.6.0_29/bin
export PATH=$PATH:$JAVA_HOME/bin:$hadoop_home/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JAVA_BIN PATH CLASSPATH

Log out or restart the VM; after that the hadoop command can be run directly.
3. WordCount Example Code
3.1 Java code:

import java.io.IOException; 
import java.util.Iterator; 
import java.util.StringTokenizer;

import org.apache.hadoop.fs.Path; 
import org.apache.hadoop.io.IntWritable; 
import org.apache.hadoop.io.LongWritable; 
import org.apache.hadoop.io.Text; 
import org.apache.hadoop.mapred.FileInputFormat; 
import org.apache.hadoop.mapred.FileOutputFormat; 
import org.apache.hadoop.mapred.JobClient; 
import org.apache.hadoop.mapred.JobConf; 
import org.apache.hadoop.mapred.MapReduceBase; 
import org.apache.hadoop.mapred.Mapper; 
import org.apache.hadoop.mapred.OutputCollector; 
import org.apache.hadoop.mapred.Reducer; 
import org.apache.hadoop.mapred.Reporter; 
import org.apache.hadoop.mapred.TextInputFormat; 
import org.apache.hadoop.mapred.TextOutputFormat;

public class WordCount {
    public static class Map extends MapReduceBase implements
            Mapper<LongWritable, Text, Text, IntWritable> {
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        public void map(LongWritable key, Text value,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            String line = value.toString();
            StringTokenizer tokenizer = new StringTokenizer(line);
            while (tokenizer.hasMoreTokens()) {
                word.set(tokenizer.nextToken());
                output.collect(word, one);
            }
        }
    }

    public static class Reduce extends MapReduceBase implements
            Reducer<Text, IntWritable, Text, IntWritable> {
        public void reduce(Text key, Iterator<IntWritable> values,
                OutputCollector<Text, IntWritable> output, Reporter reporter)
                throws IOException {
            int sum = 0;
            while (values.hasNext()) {
                sum += values.next().get();
            }
            output.collect(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        JobConf conf = new JobConf(WordCount.class);
        conf.setJobName("wordcount");

        conf.setOutputKeyClass(Text.class);
        conf.setOutputValueClass(IntWritable.class);

        conf.setMapperClass(Map.class);
        conf.setCombinerClass(Reduce.class);
        conf.setReducerClass(Reduce.class);

        conf.setInputFormat(TextInputFormat.class);
        conf.setOutputFormat(TextOutputFormat.class);

        FileInputFormat.setInputPaths(conf, new Path(args[0]));
        FileOutputFormat.setOutputPath(conf, new Path(args[1]));

        JobClient.runJob(conf);
    }
}

3.2 Compile:
javac -classpath /usr/george/dev/install/hadoop-0.21.0/hadoop-hdfs-0.21.0.jar:/usr/george/dev/install/hadoop-0.21.0/hadoop-mapred-0.21.0.jar:/usr/george/dev/install/hadoop-0.21.0/hadoop-common-0.21.0.jar WordCount.java -d /usr/george/dev/wkspace/hadoop/wordcount/classes
On Windows, multiple classpath entries are separated with ';'; on Linux they are separated with ':'.
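A small sketch of assembling such a classpath string programmatically; the separator lives in one variable so the same loop works on either platform (the loop below uses the jar names from the compile command, but any list would do):

```shell
SEP=':'   # would be ';' on Windows
CP=''
for jar in hadoop-hdfs-0.21.0.jar hadoop-mapred-0.21.0.jar hadoop-common-0.21.0.jar; do
  # Join with the separator, avoiding a leading separator on the first entry.
  if [ -z "$CP" ]; then
    CP="$jar"
  else
    CP="$CP$SEP$jar"
  fi
done
```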

After compiling, three class files are generated under /usr/george/dev/wkspace/hadoop/wordcount/classes:
WordCount.class  WordCount$Map.class  WordCount$Reduce.class

3.3 Package the class files into a jar
Go to the /usr/george/dev/wkspace/hadoop/wordcount/classes directory and run jar cvf WordCount.jar *.class, which produces:
WordCount.class  WordCount.jar  WordCount$Map.class  WordCount$Reduce.class

3.4 Create the input data:
Create the directory /usr/george/dev/wkspace/hadoop/wordcount/datas and create input1.txt and input2.txt in it:
touch input1.txt
vi input1.txt

The file contents are as follows:
i love china
are you ok?

Create input2.txt the same way, with the following contents:
hello, i love word
You are ok

After creating them, you can check the contents with cat input1.txt and cat input2.txt.
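Before running the job, the expected result can be approximated locally with standard Unix tools, since WordCount splits each line on whitespace and counts identical tokens; this sketch uses the same sample text as above and needs no Hadoop:

```shell
# Recreate the two sample files in a scratch directory (path chosen for the demo).
mkdir -p /tmp/wc_demo
printf 'i love china\nare you ok?\n' > /tmp/wc_demo/input1.txt
printf 'hello, i love word\nYou are ok\n' > /tmp/wc_demo/input2.txt
# One token per line, then count duplicates -- roughly what map + reduce do.
cat /tmp/wc_demo/input1.txt /tmp/wc_demo/input2.txt \
  | tr ' ' '\n' \
  | sort \
  | uniq -c > /tmp/wc_demo/counts.txt
```

Note that, like StringTokenizer, this treats 'ok' and 'ok?' (and 'You' and 'you') as different tokens, since no punctuation stripping or case folding is done.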

3.5 Create the Hadoop input and output directories:
hadoop fs -mkdir wordcount/input
hadoop fs -mkdir wordcount/output
hadoop fs -put input1.txt wordcount/input/
hadoop fs -put input2.txt wordcount/input/

PS: you need not create the output directory; if it already exists, running WordCount fails with an error that the output path exists. That is why the command line below uses output1 as the output directory.
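The "output already exists" failure mirrors how plain mkdir behaves on an existing directory, which the following local sketch shows (the scratch path is invented for the demo; HDFS enforces the same rule for a job's output path):

```shell
rm -rf /tmp/out_demo && mkdir -p /tmp/out_demo
mkdir /tmp/out_demo/output1            # first creation succeeds
if mkdir /tmp/out_demo/output1 2>/dev/null; then
  SECOND=succeeded
else
  SECOND=failed                        # directory already exists
fi
```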
3.6 Run
Go to the /usr/george/dev/wkspace/hadoop/wordcount/classes directory and run:
[root@localhost classes]# hadoop jar WordCount.jar WordCount wordcount/input wordcount/output1 
11/12/02 05:53:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000 
11/12/02 05:53:59 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id 
11/12/02 05:53:59 WARN mapreduce.JobSubmitter: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same. 
11/12/02 05:53:59 INFO mapred.FileInputFormat: Total input paths to process : 2 
11/12/02 05:54:00 WARN conf.Configuration: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps 
11/12/02 05:54:00 INFO mapreduce.JobSubmitter: number of splits:2 
11/12/02 05:54:00 INFO mapreduce.JobSubmitter: adding the following namenodes' delegation tokens:null 
11/12/02 05:54:00 INFO mapreduce.Job: Running job: job_201112020429_0003 
11/12/02 05:54:01 INFO mapreduce.Job:  map 0% reduce 0% 
11/12/02 05:54:20 INFO mapreduce.Job:  map 50% reduce 0% 
11/12/02 05:54:23 INFO mapreduce.Job:  map 100% reduce 0% 
11/12/02 05:54:29 INFO mapreduce.Job:  map 100% reduce 100% 
11/12/02 05:54:32 INFO mapreduce.Job: Job complete: job_201112020429_0003 
11/12/02 05:54:32 INFO mapreduce.Job: Counters: 33 
        FileInputFormatCounters 
                BYTES_READ=54 
        FileSystemCounters 
                FILE_BYTES_READ=132 
                FILE_BYTES_WRITTEN=334 
                HDFS_BYTES_READ=274 
                HDFS_BYTES_WRITTEN=65 
        Shuffle Errors 
                BAD_ID=0 
                CONNECTION=0 
                IO_ERROR=0 
                WRONG_LENGTH=0 
                WRONG_MAP=0 
                WRONG_REDUCE=0 
        Job Counters 
                Data-local map tasks=2 
                Total time spent by all maps waiting after reserving slots (ms)=0 
                Total time spent by all reduces waiting after reserving slots (ms)=0 
                SLOTS_MILLIS_MAPS=24824 
                SLOTS_MILLIS_REDUCES=6870 
                Launched map tasks=2 
                Launched reduce tasks=1 
        Map-Reduce Framework 
                Combine input records=12 
                Combine output records=12 
                Failed Shuffles=0 
                GC time elapsed (ms)=291 
                Map input records=4 
                Map output bytes=102 
                Map output records=12 
                Merged Map outputs=2 
                Reduce input groups=10 
                Reduce input records=12 
                Reduce output records=10 
                Reduce shuffle bytes=138 
                Shuffled Maps =2 
                Spilled Records=24 
                SPLIT_RAW_BYTES=220

3.7 Inspect the output directory
[root@localhost classes]# hadoop fs -ls wordcount/output1 
11/12/02 05:54:59 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000 
11/12/02 05:55:00 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id 
Found 2 items 
-rw-r--r--   1 root supergroup          0 2011-12-02 05:54 /user/root/wordcount/output1/_SUCCESS 
-rw-r--r--   1 root supergroup         65 2011-12-02 05:54 /user/root/wordcount/output1/part-00000

[root@localhost classes]# hadoop fs -cat /user/root/wordcount/output1/part-00000 
11/12/02 05:56:05 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000 
11/12/02 05:56:05 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id 
You     1 
are     2 
china   1 
hello,i 1 
i       1 
love    2 
ok      1 
ok?     1 
word    1 
you     1
