I. Requirements

data: merge records that share the same name into one and compute the average score for each name

tom
小明
jerry
2哈
tom
tom
小明
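
Judging from the Mapper below, each line of the data file holds a name and a score separated by whitespace. A minimal sketch of the expected layout (the scores shown here are hypothetical placeholders, not the original values):

tom 80
小明 90
jerry 70
2哈 60
tom 85
tom 75
小明 95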

II. Coding

1. Import the Hadoop jar packages

2. Write the code

2.1 Writing the Mapper

package com.wzy.studentscore;

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

/**
 * @author 吴兆跃
 * @version Created: 2018-06-05 17:58:55
 * Description: splits each input line into a name and a score and emits the pair.
 */
public class ScoreMap extends Mapper<LongWritable, Text, Text, IntWritable> {
    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();                        // one line of input
        StringTokenizer tokenizerArticle = new StringTokenizer(line, "\n");
        System.out.println("key: " + key);
        System.out.println("value-line: " + line);
        System.out.println("count: " + tokenizerArticle.countTokens());
        while (tokenizerArticle.hasMoreTokens()) {
            String token = tokenizerArticle.nextToken();
            System.out.println("token: " + token);
            StringTokenizer tokenizerLine = new StringTokenizer(token);
            String strName = tokenizerLine.nextToken();        // first token: the name
            String strScore = tokenizerLine.nextToken();       // second token: the score
            Text name = new Text(strName);
            int scoreInt = Integer.parseInt(strScore);
            context.write(name, new IntWritable(scoreInt));    // emit (name, score)
        }
        System.out.println("context: " + context.toString());
    }
}
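
The two nested StringTokenizer calls simply split each line on whitespace into a name token and a score token. A minimal, Hadoop-free sketch for sanity-checking that parsing against a few sample lines (the sample scores are made up for illustration):

import java.util.StringTokenizer;

public class ParseCheck {
    public static void main(String[] args) {
        // Hypothetical sample lines in the "name score" layout the Mapper expects.
        String[] lines = {"tom 80", "小明 90", "jerry 70"};
        for (String line : lines) {
            StringTokenizer tokenizerLine = new StringTokenizer(line);
            String name = tokenizerLine.nextToken();                   // first token: the name
            int score = Integer.parseInt(tokenizerLine.nextToken());   // second token: the score
            System.out.println(name + " -> " + score);
        }
    }
}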

2.2 Writing the Reducer

package com.wzy.studentscore;

import java.io.IOException;
import java.util.Iterator;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

/**
 * @author 吴兆跃
 * @version Created: 2018-06-05 18:50:28
 * Description: averages all scores collected for one name.
 */
public class ScoreReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        int count = 0;
        Iterator<IntWritable> iterator = values.iterator();
        while (iterator.hasNext()) {
            sum += iterator.next().get();    // sum this name's scores
            count++;
        }
        int average = sum / count;           // integer average (fraction is truncated)
        context.write(key, new IntWritable(average));
    }
}
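
sum / count above is integer division, so the fractional part of the average is discarded. A hedged sketch of an alternative reducer (an assumption for illustration, not part of the original code) that keeps the fraction by emitting a DoubleWritable; the driver would then need job.setOutputValueClass(DoubleWritable.class), and this class could not be reused as the combiner because a combiner's output value type must match the map output type (IntWritable here):

package com.wzy.studentscore;

import java.io.IOException;

import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Hypothetical variant of ScoreReduce that keeps the fractional part of the average.
public class ScoreAvgReduce extends Reducer<Text, IntWritable, Text, DoubleWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        double sum = 0;
        int count = 0;
        for (IntWritable value : values) {
            sum += value.get();    // accumulate this name's scores
            count++;
        }
        context.write(key, new DoubleWritable(sum / count));   // floating-point average
    }
}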

2.3 Writing the driver class

package com.wzy.studentscore;

import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

/**
 * @author 吴兆跃
 * @version Created: 2018-06-05 18:59:29
 * Description: configures and submits the score-averaging job.
 */
public class ScoreProcess extends Configured implements Tool {

    public static void main(String[] args) throws Exception {
        // Read from ./input and write to ./output relative to the working directory.
        int ret = ToolRunner.run(new ScoreProcess(), new String[]{"input", "output"});
        System.exit(ret);
    }

    @Override
    public int run(String[] args) throws Exception {
        Job job = new Job(getConf());
        job.setJarByClass(ScoreProcess.class);
        job.setJobName("score_process");

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        job.setMapperClass(ScoreMap.class);
        // Note: an averaging reducer is only safe to reuse as a combiner when all of a
        // name's records reach a single combine call; otherwise partial averages get averaged again.
        job.setCombinerClass(ScoreReduce.class);
        job.setReducerClass(ScoreReduce.class);

        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(TextOutputFormat.class);

        FileInputFormat.setInputPaths(job, new Path(args[0]));   // input directory
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // output directory (must not already exist)

        boolean success = job.waitForCompletion(true);
        return success ? 0 : 1;
    }
}
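
The main() above hard-codes the input and output paths as {"input", "output"}, which is why the jar can be launched without arguments in the tests below. A hedged variation (not what the original uses) is to forward the command-line arguments so the paths can be chosen at launch time:

    public static void main(String[] args) throws Exception {
        // Hypothetical variant: take the paths from the command line instead, e.g.
        //   hadoop jar scoreProcess.jar /user/root/input /user/root/output
        int ret = ToolRunner.run(new ScoreProcess(), args);
        System.exit(ret);
    }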

3. Package into a jar

III. Debugging

1. Running locally with java

root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ls
input part scoreProcess.jar
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# java -jar scoreProcess.jar
Jun , :: AM org.apache.hadoop.util.NativeCodeLoader <clinit>
WARNING: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Jun , :: AM org.apache.hadoop.mapreduce.lib.input.FileInputFormat listStatus
INFO: Total input paths to process :
Jun , :: AM org.apache.hadoop.io.compress.snappy.LoadSnappy <clinit>
WARNING: Snappy native library not loaded
Jun , :: AM org.apache.hadoop.mapred.JobClient monitorAndPrintJob
INFO: Running job: job_local1903623691_0001
Jun , :: AM org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable run
INFO: Starting task: attempt_local1903623691_0001_m_000000_0
Jun , :: AM org.apache.hadoop.mapred.LocalJobRunner$Job run
INFO: Waiting for map tasks
Jun , :: AM org.apache.hadoop.util.ProcessTree isSetsidSupported
INFO: setsid exited with exit code
Jun , :: AM org.apache.hadoop.mapred.Task initialize
INFO: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@5ddf714a
Jun , :: AM org.apache.hadoop.mapred.MapTask runNewMapper
INFO: Processing split: file:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess/input/data:+
Jun , :: AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: io.sort.mb =
Jun , :: AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: data buffer = /
Jun , :: AM org.apache.hadoop.mapred.MapTask$MapOutputBuffer <init>
INFO: record buffer = /
key:
value-line: tom
count:
token: tom
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
key:
value-line: 小明
count:
token: 小明
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
key:
value-line: jerry
count:
token: jerry
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
key:
value-line: 哈2
count:
token: 哈2
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
key:
value-line: tom
count:
token: tom
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
key:
value-line: tom
count:
token: tom
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
key:
value-line: 小明
count:
token: 小明
context: org.apache.hadoop.mapreduce.Mapper$Context@41b9bff9
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ls
input output part scoreProcess.jar
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# cd output/
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess/output# ls
part-r-00000  _SUCCESS
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess/output# cat part-r-00000
jerry
tom
哈2
小明

2. Running on Hadoop with HDFS

2.1 Upload the data file to HDFS

root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -mkdir /user
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -mkdir /user/root
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -mkdir /user/root/input
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -put input/data /user/root/input
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -ls /user/root/input
Found items
-rw-r--r-- root supergroup -- : /user/root/input/data

2.2 Run the job

root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop jar scoreProcess.jar
// :: INFO input.FileInputFormat: Total input paths to process :
// :: INFO mapred.JobClient: Running job: job_201806060358_0002
// :: INFO mapred.JobClient: map % reduce %
// :: INFO mapred.JobClient: map % reduce %
// :: INFO mapred.JobClient: map % reduce %
// :: INFO mapred.JobClient: Job complete: job_201806060358_0002
// :: INFO mapred.JobClient: Counters:
// :: INFO mapred.JobClient: Map-Reduce Framework
// :: INFO mapred.JobClient: Combine output records=
// :: INFO mapred.JobClient: Spilled Records=
// :: INFO mapred.JobClient: Reduce input records=
// :: INFO mapred.JobClient: Reduce output records=
// :: INFO mapred.JobClient: Map input records=
// :: INFO mapred.JobClient: Map output records=
// :: INFO mapred.JobClient: Map output bytes=
// :: INFO mapred.JobClient: Reduce shuffle bytes=
// :: INFO mapred.JobClient: Combine input records=
// :: INFO mapred.JobClient: Reduce input groups=
// :: INFO mapred.JobClient: FileSystemCounters
// :: INFO mapred.JobClient: HDFS_BYTES_READ=
// :: INFO mapred.JobClient: FILE_BYTES_WRITTEN=
// :: INFO mapred.JobClient: FILE_BYTES_READ=
// :: INFO mapred.JobClient: HDFS_BYTES_WRITTEN=
// :: INFO mapred.JobClient: Job Counters
// :: INFO mapred.JobClient: Launched map tasks=
// :: INFO mapred.JobClient: Launched reduce tasks=
// :: INFO mapred.JobClient: Data-local map tasks=

2.3 View the results

root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -ls /user/root/output/
Found items
drwxr-xr-x - root supergroup -- : /user/root/output/_logs
-rw-r--r-- root supergroup -- : /user/root/output/part-r-00000
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ../../bin/hadoop fs -get /user/root/output/part-r-00000 part
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# ls
input output part scoreProcess.jar
root@master:/home/wzy/software/hadoop-0.20./testfile/ScoreProcess# cat part
jerry
tom
2哈
小明
