MapReduce job explained
In the last post we saw how to run a MapReduce job on Hadoop. Now we're going to analyze how a MapReduce program works. And, if you don't know what MapReduce is, the short answer is "MapReduce is a programming model for processing large data sets with a parallel, distributed algorithm on a cluster" (from Wikipedia).
Let's take a look at the source code: we can find a Java main method that is called from Hadoop, and two inner static classes, the mapper and the reducer. The code for the mapper is:
```java
public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context) throws IOException, InterruptedException {
        StringTokenizer itr = new StringTokenizer(value.toString());
        while (itr.hasMoreTokens()) {
            word.set(itr.nextToken());
            context.write(word, one);
        }
    }
}
```
As we can see, this class extends Mapper, which - as its JavaDoc says - maps input key/value pairs to a set of intermediate key/value pairs. When the job starts, the Hadoop framework passes to the mapper a chunk of data (a subset of the whole dataset) to process, and the output of the mapper will become the input of the reducers (that's not the complete story, but we'll get there in another post). The Mapper uses Java generics to specify what kind of data it will process: in this example, our class extends Mapper and specifies Object and Text as the classes of the input key/value pairs, and Text and IntWritable as the classes of the key/value pairs output to the reducers (we'll see the details of those classes in a moment).
Let's examine the code: there's only one overridden method, map(), which takes the key/value pair and the Hadoop context as arguments; every time Hadoop calls this method, it receives as the key the offset within the file at which the current line starts, and as the value a line of the text file we're reading.
Hadoop has some basic types that are optimized for network serialization; here is a table with a few of them:
| Java type | Hadoop type |
|---|---|
| Integer | IntWritable |
| Long | LongWritable |
| Double | DoubleWritable |
| String | Text |
| Map | MapWritable |
| Array | ArrayWritable |
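These wrapper types expose the underlying Java value through simple getters and setters. Here's a minimal sketch (a standalone snippet, not part of the WordCount job) of how they are used:

```java
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;

public class WritableDemo {
    public static void main(String[] args) {
        // Writables wrap plain Java values so Hadoop can serialize them efficiently
        IntWritable count = new IntWritable(1);
        LongWritable offset = new LongWritable(1024L);
        Text word = new Text("hadoop");

        System.out.println(count.get());      // 1
        System.out.println(offset.get());     // 1024
        System.out.println(word.toString());  // hadoop

        // Writables are mutable, so the same instance can be reused,
        // just like the mapper above reuses its 'word' and 'one' fields
        word.set("mapreduce");
        count.set(count.get() + 1);
    }
}
```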
Now it's easy to understand what this method does: for every line of the book it receives, it uses a StringTokenizer to split the line into single words; it then sets each word into the Text object, maps it to the value 1, and writes the pair to the Hadoop context, which will route it towards the reducers.
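For example, if the mapper receives the line "to be or not to be", it emits the intermediate pairs (to, 1), (be, 1), (or, 1), (not, 1), (to, 1), (be, 1).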
Let's now look at the reducer:
```java
public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    private IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context) throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        result.set(sum);
        context.write(key, result);
    }
}
```
This time the first two type parameters of the Reducer (and, correspondingly, the first two arguments of the overridden reduce method) are the same types as the last two of the TokenizerMapper class; that's because - as we said - the mapper outputs the data that the reducer will use as its input. The Hadoop framework takes care of calling this method once for every key that comes from the mappers; as we saw before, the keys are the words of the file we're counting the words of.
The reduce method now has to sum all the occurrences of every single word, so it initializes a sum variable to 0 and then loops over all the values it receives from the mappers for that specific key, adding each one to the sum. At the end of the loop, when all the occurrences of that word have been counted, the method sets the resulting value into an IntWritable object and hands it to the Hadoop context to be written as output.
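Continuing the example above, the reducer would be called with the key "to" and the values [1, 1], and would emit (to, 2); likewise it would emit (be, 2), (or, 1) and (not, 1).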
We're now at the main method of the class, which is the one that is called by Hadoop when it's executed as a JAR file.
```java
public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
    if (otherArgs.length != 2) {
        System.err.println("Usage: wordcount <in> <out>");
        System.exit(2);
    }
    Job job = new Job(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
    FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
}
```
In this method, we first set up a Configuration object and then check the number of arguments passed in; if the number of arguments is correct, we create a Job object and set a few values to make it work. Let's dive into the details:
- setJarByClass: sets the JAR by finding where a given class came from; this needs an explanation: Hadoop distributes the code to execute across the cluster as a JAR file; instead of specifying the name of the JAR, we tell Hadoop the name of a class that every instance on the cluster has to look for inside its classpath
- setMapperClass: sets the class that will be executed as the mapper
- setCombinerClass: sets the class that will be executed as the combiner (we'll explain what is a combiner in a future post)
- setReducerClass: sets the class that will be executed as the reducer
- setOutputKeyClass: sets the class that will be used as the key for outputting data to the user
- setOutputValueClass: sets the class that will be used as the value for outputting data to the user
Then we tell Hadoop where it can find the input with the FileInputFormat.addInputPath() method and where it has to write the output with the FileOutputFormat.setOutputPath() method. The last call, waitForCompletion(), submits the job to the cluster and waits for it to finish.
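Once the class is packaged into a JAR (as we did in the last post), the job can be launched with something like `hadoop jar wordcount.jar WordCount <in> <out>`; the JAR name and the input/output paths here are just placeholders for whatever you used.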
Now that the mechanism of a MapReduce job is more clear, we can start playing with it.
from: http://andreaiacono.blogspot.com/2014/02/mapreduce-job-explained.html