MapReduce Programming Series — 3: Data Deduplication
1. Project name:
Dedup — data deduplication (package com.dedup, job name "Data Deduplication")
2. Program code:
package com.dedup;

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

public class Dedup {

    // The mapper copies each input value (a whole line) to the output key and
    // emits it directly; note the parameter types and counts.
    public static class Map extends Mapper<Object, Text, Text, Text> {
        public static Text line = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            System.out.println("mapper.......");
            System.out.println("key:" + key + " value:" + value);
            // Repoint the field at the incoming line; it is written out
            // immediately, so reusing the reference is safe here.
            line = value;
            context.write(line, new Text(" "));
            System.out.println("line:" + line + " value" + value + " context:" + context);
        }
    }

    // The reducer copies each input key to the output key and emits it once,
    // discarding the values; the shuffle has already grouped duplicate lines
    // under a single key, so this is where the deduplication happens.
    public static class Reduce extends Reducer<Text, Text, Text, Text> {
        @Override
        public void reduce(Text key, Iterable<Text> values, Context context)
                throws IOException, InterruptedException {
            System.out.println("reducer.......");
            System.out.println("key:" + key + " values:" + values);
            context.write(key, new Text(" "));
            System.out.println("key:" + key + " values" + values + " context:" + context);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
        if (otherArgs.length != 2) {
            System.out.println("Usage: dedup <in> <out>");
            System.exit(2);
        }
        Job job = new Job(conf, "Data Deduplication");
        job.setJarByClass(Dedup.class);
        job.setMapperClass(Map.class);
        job.setReducerClass(Reduce.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
        FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
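With this setup the deduplication is done entirely by the framework: every map output key is a full input line, the shuffle sorts and groups identical keys, and the reducer writes each distinct key exactly once. A typical way to build and launch the job — the jar name and the HDFS paths dedup_input/dedup_output here are placeholders, and the output path must not already exist:

javac -classpath $HADOOP_HOME/hadoop-core-*.jar -d classes src/com/dedup/Dedup.java
jar cvf dedup.jar -C classes .
hadoop jar dedup.jar com.dedup.Dedup dedup_input dedup_output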
3. Test data (two input files; per the run log below, the job processes 2 input paths with 8 records each):
File 1:
2006-6-9 a
2006-6-10 b
2006-6-11 c
2006-6-12 d
2006-6-13 a
2006-6-14 b
2006-6-15 c
2006-6-11 c
File 2:
2006-6-9 b
2006-6-10 a
2006-6-11 b
2006-6-12 d
2006-6-13 a
2006-6-14 c
2006-6-15 d
2006-6-11 c
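As a quick sanity check, the expected result can be reproduced without Hadoop: a TreeSet keeps one copy of each line in sorted string order, which mirrors the grouping and key sorting the shuffle performs. A minimal sketch — the file names are hypothetical:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.TreeSet;

public class DedupCheck {
    public static void main(String[] args) throws IOException {
        // Collect every line from both input files; TreeSet drops duplicates
        // and keeps the survivors in lexicographic order.
        TreeSet<String> unique = new TreeSet<>();
        for (String name : new String[] {"file1.txt", "file2.txt"}) {
            unique.addAll(Files.readAllLines(Paths.get(name)));
        }
        unique.forEach(System.out::println); // 12 distinct lines for the data above
    }
}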
4. Run log:
14/09/21 16:51:16 WARN mapred.JobClient: No job jar file set. User classes may not be found. See JobConf(Class) or JobConf#setJar(String).
14/09/21 16:51:16 INFO input.FileInputFormat: Total input paths to process : 2
14/09/21 16:51:16 WARN snappy.LoadSnappy: Snappy native library not loaded
14/09/21 16:51:16 INFO mapred.JobClient: Running job: job_local_0001
14/09/21 16:51:16 INFO util.ProcessTree: setsid exited with exit code 0
14/09/21 16:51:16 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@2e9aa770
14/09/21 16:51:16 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 16:51:16 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 16:51:16 INFO mapred.MapTask: record buffer = 262144/327680
mapper.......
key:0 value:2006-6-9 a
line:2006-6-9 a value2006-6-9 a context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:11 value:2006-6-10 b
line:2006-6-10 b value2006-6-10 b context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:23 value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:35 value:2006-6-12 d
line:2006-6-12 d value2006-6-12 d context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:47 value:2006-6-13 a
line:2006-6-13 a value2006-6-13 a context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:59 value:2006-6-14 b
line:2006-6-14 b value2006-6-14 b context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:71 value:2006-6-15 c
line:2006-6-15 c value2006-6-15 c context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
mapper.......
key:83 value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c context:org.apache.hadoop.mapreduce.Mapper$Context@2d3b0087
14/09/21 16:51:16 INFO mapred.MapTask: Starting flush of map output
14/09/21 16:51:16 INFO mapred.MapTask: Finished spill 0
14/09/21 16:51:16 INFO mapred.Task: Task:attempt_local_0001_m_000000_0 is done. And is in the process of commiting
14/09/21 16:51:17 INFO mapred.JobClient: map 0% reduce 0%
14/09/21 16:51:19 INFO mapred.LocalJobRunner:
14/09/21 16:51:19 INFO mapred.Task: Task 'attempt_local_0001_m_000000_0' done.
14/09/21 16:51:19 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3697e580
14/09/21 16:51:19 INFO mapred.MapTask: io.sort.mb = 100
14/09/21 16:51:19 INFO mapred.MapTask: data buffer = 79691776/99614720
14/09/21 16:51:19 INFO mapred.MapTask: record buffer = 262144/327680
mapper.......
key:0 value:2006-6-9 b
line:2006-6-9 b value2006-6-9 b context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:11 value:2006-6-10 a
line:2006-6-10 a value2006-6-10 a context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:23 value:2006-6-11 b
line:2006-6-11 b value2006-6-11 b context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:35 value:2006-6-12 d
line:2006-6-12 d value2006-6-12 d context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:47 value:2006-6-13 a
line:2006-6-13 a value2006-6-13 a context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:59 value:2006-6-14 c
line:2006-6-14 c value2006-6-14 c context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:71 value:2006-6-15 d
line:2006-6-15 d value2006-6-15 d context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
mapper.......
key:83 value:2006-6-11 c
line:2006-6-11 c value2006-6-11 c context:org.apache.hadoop.mapreduce.Mapper$Context@319af5dd
14/09/21 16:51:19 INFO mapred.MapTask: Starting flush of map output
14/09/21 16:51:19 INFO mapred.MapTask: Finished spill 0
14/09/21 16:51:19 INFO mapred.Task: Task:attempt_local_0001_m_000001_0 is done. And is in the process of commiting
14/09/21 16:51:20 INFO mapred.JobClient: map 100% reduce 0%
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Task: Task 'attempt_local_0001_m_000001_0' done.
14/09/21 16:51:22 INFO mapred.Task: Using ResourceCalculatorPlugin : org.apache.hadoop.util.LinuxResourceCalculatorPlugin@3c844c07
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Merger: Merging 2 sorted segments
14/09/21 16:51:22 INFO mapred.Merger: Down to the last merge-pass, with 2 segments left of total size: 258 bytes
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
reducer.......
key:2006-6-10 a values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-10 a valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-10 b values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-10 b valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-11 b values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-11 b valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-11 c values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-11 c valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-12 d values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-12 d valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-13 a values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-13 a valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-14 b values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-14 b valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-14 c values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-14 c valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-15 c values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-15 c valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-15 d values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-15 d valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-9 a values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-9 a valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
reducer.......
key:2006-6-9 b values:org.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78
key:2006-6-9 b valuesorg.apache.hadoop.mapreduce.ReduceContext$ValueIterable@9c8fd78 context:org.apache.hadoop.mapreduce.Reducer$Context@52767ce8
14/09/21 16:51:22 INFO mapred.Task: Task:attempt_local_0001_r_000000_0 is done. And is in the process of commiting
14/09/21 16:51:22 INFO mapred.LocalJobRunner:
14/09/21 16:51:22 INFO mapred.Task: Task attempt_local_0001_r_000000_0 is allowed to commit now
14/09/21 16:51:22 INFO output.FileOutputCommitter: Saved output of task 'attempt_local_0001_r_000000_0' to hdfs://localhost:9000/user/hadoop/dedup_output
14/09/21 16:51:25 INFO mapred.LocalJobRunner: reduce > reduce
14/09/21 16:51:25 INFO mapred.Task: Task 'attempt_local_0001_r_000000_0' done.
14/09/21 16:51:26 INFO mapred.JobClient: map 100% reduce 100%
14/09/21 16:51:26 INFO mapred.JobClient: Job complete: job_local_0001
14/09/21 16:51:26 INFO mapred.JobClient: Counters: 22
14/09/21 16:51:26 INFO mapred.JobClient: Map-Reduce Framework
14/09/21 16:51:26 INFO mapred.JobClient: Spilled Records=32
14/09/21 16:51:26 INFO mapred.JobClient: Map output materialized bytes=266
14/09/21 16:51:26 INFO mapred.JobClient: Reduce input records=16
14/09/21 16:51:26 INFO mapred.JobClient: Virtual memory (bytes) snapshot=0
14/09/21 16:51:26 INFO mapred.JobClient: Map input records=16
14/09/21 16:51:26 INFO mapred.JobClient: SPLIT_RAW_BYTES=232
14/09/21 16:51:26 INFO mapred.JobClient: Map output bytes=222
14/09/21 16:51:26 INFO mapred.JobClient: Reduce shuffle bytes=0
14/09/21 16:51:26 INFO mapred.JobClient: Physical memory (bytes) snapshot=0
14/09/21 16:51:26 INFO mapred.JobClient: Reduce input groups=12
14/09/21 16:51:26 INFO mapred.JobClient: Combine output records=0
14/09/21 16:51:26 INFO mapred.JobClient: Reduce output records=12
14/09/21 16:51:26 INFO mapred.JobClient: Map output records=16
14/09/21 16:51:26 INFO mapred.JobClient: Combine input records=0
14/09/21 16:51:26 INFO mapred.JobClient: CPU time spent (ms)=0
14/09/21 16:51:26 INFO mapred.JobClient: Total committed heap usage (bytes)=813170688
14/09/21 16:51:26 INFO mapred.JobClient: File Input Format Counters
14/09/21 16:51:26 INFO mapred.JobClient: Bytes Read=190
14/09/21 16:51:26 INFO mapred.JobClient: FileSystemCounters
14/09/21 16:51:26 INFO mapred.JobClient: HDFS_BYTES_READ=475
14/09/21 16:51:26 INFO mapred.JobClient: FILE_BYTES_WRITTEN=122061
14/09/21 16:51:26 INFO mapred.JobClient: FILE_BYTES_READ=1665
14/09/21 16:51:26 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=166
14/09/21 16:51:26 INFO mapred.JobClient: File Output Format Counters
14/09/21 16:51:26 INFO mapred.JobClient: Bytes Written=166
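The counters confirm the arithmetic: Map input records=16 (8 lines from each of the 2 input files), Reduce input groups=12 (2006-6-11 c occurs three times, 2006-6-12 d and 2006-6-13 a twice each, and the remaining 9 lines once, so the 16 records collapse into 9 + 3 = 12 distinct keys), and Reduce output records=12, one line per distinct key — 16 − 12 = 4 duplicate records were dropped.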
5. Output (hdfs://localhost:9000/user/hadoop/dedup_output):
2006-6-10 a
2006-6-10 b
2006-6-11 b
2006-6-11 c
2006-6-12 d
2006-6-13 a
2006-6-14 b
2006-6-14 c
2006-6-15 c
2006-6-15 d
2006-6-9 a
2006-6-9 b
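One quirk in the result: 2006-6-9 sorts after 2006-6-15 because Text keys compare as byte strings, not as dates — at the first differing character, '9' > '1'. A one-line check in plain Java:

System.out.println("2006-6-9 a".compareTo("2006-6-15 d") > 0); // prints true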