Hadoop Learning Notes 3: Developing MapReduce
Quick notes:
Maven is a project management tool; project information is configured through an XML file.
Maven POM (Project Object Model).
Steps:
1. Set up and configure the development environment.
2. Write your map and reduce functions and run them in local (standalone) mode, from the command line or within your IDE.
3. Unit test --> test on a small dataset --> unleash it on the full dataset on a cluster
--> tuning
1. Configuration API
- Components in Hadoop are configured using Hadoop’s own configuration API.
- org.apache.hadoop.conf package
- Configurations read their properties from resources — XML files with a simple structure for defining name-value pairs.
For example, a resource file configuration-1.xml might look like:
<?xml version="1.0"?>
<configuration>
  <property>
    <name>color</name>
    <value>yellow</value>
    <description>Color</description>
  </property>
  <property>
    <name>size</name>
    <value>10</value>
    <description>Size</description>
  </property>
  <property>
    <name>weight</name>
    <value>heavy</value>
    <final>true</final>
    <description>Weight</description>
  </property>
  <property>
    <name>size-weight</name>
    <value>${size},${weight}</value>
    <description>Size and weight</description>
  </property>
</configuration>
Then access it from code:
Configuration conf = new Configuration();
conf.addResource("configuration-1.xml");
conf.addResource("configuration-2.xml"); // resources are added in order
assertThat(conf.get("color"), is("yellow"));
assertThat(conf.getInt("size", 0), is(10));
assertThat(conf.get("breadth", "wide"), is("wide"));
Note:
- Type information is not stored in the XML file; instead, properties can be interpreted as a given type when they are read.
- The get() methods allow you to specify a default value, which is used if the property is not defined in the XML file, as in the case of breadth here.
- When more than one resource is added, the resources are processed in order, and properties defined in later resources override those defined earlier.
- However, properties marked as final cannot be overridden in later definitions.
- System properties take priority when expanding variables such as ${size}:
System.setProperty("size", "14");
Note, however, that system properties are not picked up by the get() methods unless the property is also defined in a configuration file.
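A small sketch of this behavior, in the style of the asserts above (the expected values assume only configuration-1.xml has been added):
System.setProperty("size", "14");
assertThat(conf.get("size-weight"), is("14,heavy")); // system property wins during ${...} expansion
System.setProperty("length", "2");
assertThat(conf.get("length"), is((String) null)); // a bare system property is invisible to get()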
- Options specified with -D take priority over properties from the configuration files. For example, -D mapreduce.job.reduces=n overrides the number of reducers set on the cluster or in any client-side configuration file.
% hadoop ConfigurationPrinter -D color=yellow | grep color
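ConfigurationPrinter is not a stock Hadoop command but a small utility; a minimal sketch of such a tool (using the Tool interface from section 5, so ToolRunner handles the -D and -conf options) might look like:

import java.util.Map.Entry;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class ConfigurationPrinter extends Configured implements Tool {

    static {
        // make sure cluster-level resources are loaded too
        Configuration.addDefaultResource("hdfs-default.xml");
        Configuration.addDefaultResource("hdfs-site.xml");
        Configuration.addDefaultResource("yarn-default.xml");
        Configuration.addDefaultResource("yarn-site.xml");
        Configuration.addDefaultResource("mapred-default.xml");
        Configuration.addDefaultResource("mapred-site.xml");
    }

    @Override
    public int run(String[] args) throws Exception {
        Configuration conf = getConf();
        // Configuration is Iterable over its name-value pairs
        for (Entry<String, String> entry : conf) {
            System.out.printf("%s=%s%n", entry.getKey(), entry.getValue());
        }
        return 0;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new ConfigurationPrinter(), args));
    }
}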
2. Set up the development environment
The Maven POM (Project Object Model), which is just an XML file (pom.xml), declares the dependencies needed for building and testing MapReduce programs.

- hadoop-client: contains all the Hadoop client-side classes needed to interact with HDFS and MapReduce.
- junit: for running unit tests.
- mrunit: for writing MapReduce tests.
- hadoop-minicluster: contains the "mini-" clusters that are useful for testing with Hadoop clusters running in a single JVM.
A POM fragment declaring these dependencies is sketched below.
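A sketch of the corresponding <dependencies> section of pom.xml (the version numbers here are illustrative assumptions; use the versions matching your cluster):

<dependencies>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.5.1</version>
  </dependency>
  <dependency>
    <groupId>junit</groupId>
    <artifactId>junit</artifactId>
    <version>4.11</version>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.mrunit</groupId>
    <artifactId>mrunit</artifactId>
    <version>1.1.0</version>
    <classifier>hadoop2</classifier>
    <scope>test</scope>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-minicluster</artifactId>
    <version>2.5.1</version>
    <scope>test</scope>
  </dependency>
</dependencies>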
Many IDEs can read Maven POMs directly, so you can just point them at the directory containing the pom.xml file and start writing code.
Alternatively, you can use Maven to generate configuration files for your IDE. For example, the following creates Eclipse configuration files so you can import the project into Eclipse:
% mvn eclipse:eclipse -DdownloadSources=true -DdownloadJavadocs=true
3. Managing configuration switching
It is common to switch between running the application locally and running it on a cluster.
- Keep Hadoop configuration files containing the connection settings for each cluster you run against.
- We assume the existence of a directory called conf that contains three configuration files: hadoop-local.xml, hadoop-localhost.xml, and hadoop-cluster.xml. An example of the first is sketched below.
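A sketch of what hadoop-local.xml might contain (Hadoop 2 property names; hadoop-localhost.xml and hadoop-cluster.xml would instead point fs.defaultFS and the YARN resource manager at localhost or at the cluster):

<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>file:///</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>local</value>
  </property>
</configuration>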
For example, the following command, using the -conf option to pick a configuration file, shows a directory listing on the HDFS server running in pseudo-distributed mode on localhost:
% hadoop fs -conf conf/hadoop-localhost.xml -ls .
Found 2 items
drwxr-xr-x - tom supergroup 0 2014-09-08 10:19 input
drwxr-xr-x - tom supergroup 0 2014-09-08 10:19 output
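(Instead of passing -conf every time, you can also point the HADOOP_CONF_DIR environment variable, or the hadoop command's --config option, at a directory of configuration files.)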
4. MapReduce example
Mapper: extracts the year and the temperature from an input line
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class MaxTemperatureMapper
        extends Mapper<LongWritable, Text, Text, IntWritable> {

    @Override
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String line = value.toString();
        String year = line.substring(15, 19);  // year is at a fixed offset in the NCDC record
        int airTemperature = Integer.parseInt(line.substring(87, 92));  // signed temperature field
        context.write(new Text(year), new IntWritable(airTemperature));
    }
}
Unit test for the Mapper (using MRUnit's MapDriver):
import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mrunit.mapreduce.MapDriver;
import org.junit.*;

public class MaxTemperatureMapperTest {

    @Test
    public void processesValidRecord() throws IOException, InterruptedException {
        Text value = new Text("0043011990999991950051518004+68750+023550FM-12+0382" +
                // Year ^^^^
                "99999V0203201N00261220001CN9999999N9-00111+99999999999");
                // Temperature ^^^^^
        new MapDriver<LongWritable, Text, Text, IntWritable>()
                .withMapper(new MaxTemperatureMapper())
                .withInput(new LongWritable(0), value)
                .withOutput(new Text("1950"), new IntWritable(-11))
                .runTest();
    }
}
Reducer: finds the maximum temperature for each key
import java.io.IOException;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapreduce.Reducer;

public class MaxTemperatureReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int maxValue = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            maxValue = Math.max(maxValue, value.get());
        }
        context.write(key, new IntWritable(maxValue));
    }
}
Unit test for the Reducer (imports as above, plus org.apache.hadoop.mrunit.mapreduce.ReduceDriver and java.util.Arrays):
@Test
public void returnsMaximumIntegerInValues() throws IOException, InterruptedException {
    new ReduceDriver<Text, IntWritable, Text, IntWritable>()
            .withReducer(new MaxTemperatureReducer())
            .withInput(new Text("1950"),
                    Arrays.asList(new IntWritable(10), new IntWritable(5)))
            .withOutput(new Text("1950"), new IntWritable(10))
            .runTest();
}
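Both unit tests can then be run in the usual Maven way:
% mvn test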
5. Write a job driver
Using the Tool interface, it is easy to write a driver to run a MapReduce job. A sketch follows, closely modeled on the classic MaxTemperature example (the package declaration is omitted; the commands below assume the class lives in package v2):
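import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MaxTemperatureDriver extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        if (args.length != 2) {
            System.err.printf("Usage: %s [generic options] <input> <output>%n",
                    getClass().getSimpleName());
            ToolRunner.printGenericCommandUsage(System.err);
            return -1;
        }
        // getConf() returns the Configuration that ToolRunner has already
        // populated from the -conf / -D / -fs / -jt generic options
        Job job = Job.getInstance(getConf(), "Max temperature");
        job.setJarByClass(getClass());
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        job.setMapperClass(MaxTemperatureMapper.class);
        job.setCombinerClass(MaxTemperatureReducer.class);
        job.setReducerClass(MaxTemperatureReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        System.exit(ToolRunner.run(new MaxTemperatureDriver(), args));
    }
}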
Then run the driver locally:
% mvn compile
% export HADOOP_CLASSPATH=target/classes/
% hadoop v2.MaxTemperatureDriver -conf conf/hadoop-local.xml \
input/ncdc/micro output
or
% hadoop v2.MaxTemperatureDriver -fs file:/// -jt local input/ncdc/micro output
The local job runner uses a single JVM to run a job, so as long as all the classes that your job needs are on its classpath, things will just work.
6. Running on a cluster
A job's classes must be packaged into a job JAR file to send to the cluster.
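A sketch of the packaging and submission steps (the JAR name depends on your POM's artifactId, and the input/output paths follow the earlier examples, so treat both as assumptions):
% mvn package -DskipTests
% unset HADOOP_CLASSPATH
% hadoop jar target/hadoop-examples.jar v2.MaxTemperatureDriver \
    -conf conf/hadoop-cluster.xml input/ncdc/all max-temp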