We are going to explain how joins work in MapReduce (MR), focusing on the reduce-side join and the map-side join.

Reduce Side Join

Assume we have two datasets: one holds user information (id, name, ...), the other holds comments made by users (user id, content, date, ...). We want to join the two datasets to select each username together with the comments that user posted, a typical join example. You can implement all types of join this way, including inner join, left/right outer join, and full outer join. As the name indicates, the join itself is done in the reducer.

  • We use one mapper class per dataset (table in RDBMS), so two here and n in general. We wire them up in the driver, giving each dataset its own input path:

    MultipleInputs.addInputPath(job, userPath, TextInputFormat.class, UserMapper.class);
    MultipleInputs.addInputPath(job, commentsPath, TextInputFormat.class, CommentsMapper.class);
    ...
    MultipleInputs.addInputPath(job, otherPath, TextInputFormat.class, OtherMapper.class);
    ...
  • In each mapper, we just output the key/value pairs, since most of the work is done in the reducer. When the reduce function iterates over the values for a given key, it needs to know which dataset each value came from in order to perform the join, but the reducer by itself cannot tell which mapper (UserMapper or CommentsMapper) produced a given value. So in the map function we mark each value, for example by prefixing it with the mapper's name:
    outkey.set(userId);
    // tag the value so the reduce function knows its source dataset
    outvalue.set("UserMapper" + value.toString());
    context.write(outkey, outvalue);
  • In the reducer, we read the join type from the configuration (set in the driver; see the sketch after this list) and perform the join. There can be multiple reducer tasks running in parallel, each handling a different subset of keys.
    private String joinType;
    private final List<Text> listUser = new ArrayList<>();
    private final List<Text> listComments = new ArrayList<>();

    @Override
    public void setup(Context context) {
        joinType = context.getConfiguration().get("joinType");
    }

    @Override
    public void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        listUser.clear();
        listComments.clear();
        // Separate the tagged values by their source dataset.
        for (Text t : values) {
            if (isFromUserMapper(t)) {
                listUser.add(realContent(t));
            } else if (isFromCommentsMapper(t)) {
                listComments.add(realContent(t));
            }
        }
        doJoin(context);
    }

    private void doJoin(Context context) throws IOException, InterruptedException {
        if (joinType.equals("inner")) {
            // inner join: emit the cross product only when both sides matched
            if (!listUser.isEmpty() && !listComments.isEmpty()) {
                for (Text user : listUser) {
                    for (Text comm : listComments) {
                        context.write(user, comm);
                    }
                }
            }
        } else if (joinType.equals("leftouter")) {
            // ... handle the other join types similarly
        }
    }
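
For completeness, here is a minimal driver sketch tying the pieces together. It is an illustration, not the original post's code: the argument order, class names such as JoinReducer, and the output types are assumptions.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.mapreduce.lib.input.MultipleInputs;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

    public class ReduceSideJoinDriver {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.set("joinType", args[2]); // "inner", "leftouter", ... read by the reducers

            Job job = Job.getInstance(conf, "reduce-side join");
            job.setJarByClass(ReduceSideJoinDriver.class);

            // One mapper class per dataset, each with its own input path.
            MultipleInputs.addInputPath(job, new Path(args[0]),
                    TextInputFormat.class, UserMapper.class);
            MultipleInputs.addInputPath(job, new Path(args[1]),
                    TextInputFormat.class, CommentsMapper.class);

            job.setReducerClass(JoinReducer.class);
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(Text.class);
            FileOutputFormat.setOutputPath(job, new Path(args[3]));

            System.exit(job.waitForCompletion(true) ? 0 : 1);
        }
    }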

In a reduce-side join, all of the data is shuffled to the reducers, so it demands substantial network bandwidth.

Map Side Join/Replicated Join

As the name indicates, the join operation is done on the map side, so there is no reducer. It is well suited to joins where one dataset is large and the others are small enough to be read into the memory of a single machine. It is faster than a reduce-side join (no reduce phase, no intermediate output, no network transfer).

We still use the same example: joining the user (small) and comments (large) datasets. How do we implement it?

  • Set the number of reduce tasks to 0.
job.setNumReduceTasks(0);
  • Add the small dataset(s) to the Hadoop distributed cache. Of the two calls below, the first is deprecated.
DistributedCache.addCacheFile(new Path(args[0]).toUri(), job.getConfiguration()); // deprecated
job.addCacheFile(new Path(filename).toUri());
  • In the mapper's setup function, get the cache files with the code below (again, the first call is deprecated). Read the file and put the key/value pairs into an instance variable such as a HashMap. setup runs in a single thread, so this is safe.
Path[] localPaths = context.getLocalCacheFiles(); // deprecated
URI[] uris = context.getCacheFiles();
  • In the map function, since you have the entire user dataset in the HashMap, you look up the key (which comes from a split of the comments dataset) in the HashMap. If it exists, you have a match. Because only one split of the comments dataset goes into each mapper task, you can only perform an inner join or a left outer join. A full mapper sketch follows this list.
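
Putting the map-side steps together, below is a minimal mapper sketch. It assumes tab-delimited records (userId, then userName for users; userId, then comment for comments) and that the cached file is symlinked by its file name in the task's working directory; all of those details are illustrative assumptions.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ReplicatedJoinMapper extends Mapper<LongWritable, Text, Text, Text> {

        private final Map<String, String> users = new HashMap<>();
        private final Text outKey = new Text();
        private final Text outValue = new Text();

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // Load the small (user) dataset from the distributed cache into memory.
            URI[] uris = context.getCacheFiles();
            if (uris == null) {
                return;
            }
            for (URI uri : uris) {
                String localName = new Path(uri.getPath()).getName();
                try (BufferedReader reader = new BufferedReader(new FileReader(localName))) {
                    String line;
                    while ((line = reader.readLine()) != null) {
                        String[] fields = line.split("\t"); // assumed: userId \t userName
                        users.put(fields[0], fields[1]);
                    }
                }
            }
        }

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split("\t"); // assumed: userId \t comment
            String userName = users.get(fields[0]);
            if (userName != null) { // inner join: emit only when the user exists
                outKey.set(userName);
                outValue.set(fields[1]);
                context.write(outKey, outValue);
            }
        }
    }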

What is Hadoop Distributed Cache?

"DistributedCache is a facility provided by the Map-Reduce framework to cache files needed by applications. Once you cache a file for your job, hadoop framework will make it available on(or broadcast to) each and every data nodes (in file system, not in memory) where you map/reduce tasks are running. Then you can access the cache file as local file in your Mapper Or Reducer job. Now you can easily read the cache file and populate some collection (e.g Array, Hashmap etc.) in your code"  The cache will be removed once the job is done as they are temporary files.

The size of the cache can be configured in mapred-site.xml.
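
As a sketch, the property involved is local.cache.size in older releases (renamed mapreduce.tasktracker.cache.local.size later), measured in bytes; the value below is the commonly cited 10 GB default, shown only for illustration:

    <!-- mapred-site.xml: upper limit on the local distributed-cache size -->
    <property>
      <name>local.cache.size</name>
      <value>10737418240</value> <!-- 10 GB, in bytes -->
    </property>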

How to use the Distributed Cache (the API has changed)?

  • Add the cache files in the driver.
    Note the # sign in each URI. Before it, you specify the absolute path of the data in HDFS. After it, you give a name (a symlink) that becomes the local file path seen by your mapper/reducer.
     job.addCacheFile(new URI("/user/ricky/user.txt#user"));
job.addCacheFile(new URI("/user/ricky/org.txt#org")); return job.waitForCompletion(true) ? 0 : 1;
  • Read the cache in your task (mapper/reducer), typically in the setup function.
@Override
protected void setup(
        Mapper<LongWritable, Text, Text, Text>.Context context)
        throws IOException, InterruptedException {
    if (context.getCacheFiles() != null
            && context.getCacheFiles().length > 0) {
        File some_file = new File("user");
        File other_file = new File("org");
    }
    super.setup(context);
}
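
With the #user symlink created in the driver, the setup body can then read the cache as an ordinary local file and populate a collection. A minimal sketch, assuming a tab-separated userId/userName layout:

    // inside setup(), after checking that the cache files exist
    Map<String, String> users = new HashMap<>();
    try (BufferedReader reader = new BufferedReader(new FileReader("user"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            String[] parts = line.split("\t"); // assumed: userId \t userName
            users.put(parts[0], parts[1]);
        }
    }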

