We are going to explain how joins work in MapReduce, focusing on the reduce side join and the map side join.

Reduce Side Join

Assume we have 2 datasets: one is user information (id, name, ...), the other is comments made by users (user id, content, date, ...). We want to join the two datasets to select each username and the comments that user posted, a typical join example. You can implement all types of join this way, including inner join, left/right outer join, and full outer join. As the name indicates, the join is done in the reducer.
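For concreteness, here is a hypothetical slice of the two inputs and the corresponding inner join output (the field layout is illustrative only):

    users (id, name):          comments (user id, content, date):
    1,Alice                    1,Nice post,2019-01-02
    2,Bob                      1,Thanks for sharing,2019-01-05
                               3,First!,2019-01-07

    inner join on user id -> (name, comment):
    Alice   Nice post
    Alice   Thanks for sharing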

  • We use one mapper class per dataset (a dataset plays the role of a table in an RDBMS): two here, n in general. We configure this with the code below.

     MultipleInputs.addInputPath(job, userPath, TextInputFormat.class, UserMapper.class);
     MultipleInputs.addInputPath(job, commentsPath, TextInputFormat.class, CommentsMapper.class);
     // ... one call per additional dataset
     MultipleInputs.addInputPath(job, otherPath, TextInputFormat.class, OtherMapper.class);
  • In each mapper, we just output key/value pairs, since most of the work is done in the reducer. When the reduce function iterates over the values for a given key, it needs to know which dataset each value came from in order to perform the join, but the reducer by itself cannot tell whether a value was emitted by UserMapper or CommentsMapper. So in the map function we tag each value, for example by prefixing it with the mapper name (a complete mapper sketch follows this list).
     outkey.set(userId);
     // tag this value so the reduce function knows its source
     outvalue.set("UserMapper" + value.toString());
     context.write(outkey, outvalue);
  • In the reducer, we read the join type from the configuration and perform the join. There can be multiple reducer tasks running in parallel, each handling its own partition of keys.
     private String joinType;
     private final List<Text> listUser = new ArrayList<>();
     private final List<Text> listComments = new ArrayList<>();

     @Override
     public void setup(Context context) {
         joinType = context.getConfiguration().get("joinType");
     }

     @Override
     public void reduce(Text key, Iterable<Text> values, Context context)
             throws IOException, InterruptedException {
         listUser.clear();
         listComments.clear();
         for (Text t : values) {
             // copy each value: Hadoop reuses the same Text instance while iterating
             if (isFromUserMapper(t)) {
                 listUser.add(new Text(realContent(t)));
             } else if (isFromCommentsMapper(t)) {
                 listComments.add(new Text(realContent(t)));
             }
         }
         doJoin(context);
     }

     private void doJoin(Context context) throws IOException, InterruptedException {
         if (joinType.equals("inner")) {
             // inner join: output the cross product only when both sides are non-empty
             if (!listUser.isEmpty() && !listComments.isEmpty()) {
                 for (Text user : listUser) {
                     for (Text comm : listComments) {
                         context.write(user, comm);
                     }
                 }
             }
         } else if (joinType.equals("leftouter")) {
             // left outer, right outer, and full outer joins follow the same pattern
         }
     }
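As referenced above, here is a minimal sketch of the two tagging mappers and the helper methods used by the reducer. The CSV layouts, class names, and tag prefixes are illustrative assumptions, not a fixed contract.

    import java.io.IOException;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class UserMapper extends Mapper<Object, Text, Text, Text> {
        private final Text outkey = new Text();
        private final Text outvalue = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(",");  // assumed CSV: id,name,...
            outkey.set(fields[0]);                          // join key: user id
            outvalue.set("UserMapper" + value.toString());  // tag with the source
            context.write(outkey, outvalue);
        }
    }

    public class CommentsMapper extends Mapper<Object, Text, Text, Text> {
        private final Text outkey = new Text();
        private final Text outvalue = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(","); // assumed CSV: userId,content,date
            outkey.set(fields[0]);
            outvalue.set("CommentsMapper" + value.toString());
            context.write(outkey, outvalue);
        }
    }

    // helper methods inside the reducer class
    private boolean isFromUserMapper(Text t) {
        return t.toString().startsWith("UserMapper");
    }
    private boolean isFromCommentsMapper(Text t) {
        return t.toString().startsWith("CommentsMapper");
    }
    private String realContent(Text t) {
        // strip the tag prefix added by the mappers
        String s = t.toString();
        return s.startsWith("UserMapper")
                ? s.substring("UserMapper".length())
                : s.substring("CommentsMapper".length());
    }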

In a reduce side join, all of the data is shuffled to the reducers, so it demands significant network bandwidth overall.
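To wrap up the reduce side join, a minimal driver sketch is below; the class names (ReduceSideJoin, JoinReducer) and argument positions are assumptions.

    // inside the driver's main(String[] args)
    Configuration conf = new Configuration();
    conf.set("joinType", "inner");               // read by the reducer's setup()
    Job job = Job.getInstance(conf, "reduce side join");
    job.setJarByClass(ReduceSideJoin.class);

    MultipleInputs.addInputPath(job, new Path(args[0]),
            TextInputFormat.class, UserMapper.class);
    MultipleInputs.addInputPath(job, new Path(args[1]),
            TextInputFormat.class, CommentsMapper.class);

    job.setReducerClass(JoinReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileOutputFormat.setOutputPath(job, new Path(args[2]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);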

Map Side Join/Replicated Join

As the name indicates, the join operation is done on the map side, so there is no reducer. It is well suited to joins with one large dataset where the other datasets are small enough to be read into memory on a single machine. It is faster than a reduce side join (no reduce phase, no intermediate output, no network transfer).

We still use the same example, joining the user (small) and comments (large) datasets. How do we implement it?

  • Set the number of reduce tasks to 0.
job.setNumReduceTasks(0);
  • Add the small dataset to the Hadoop distributed cache. The first call below is deprecated; prefer the second.
 DistributedCache.addCacheFile(new Path(args[0]).toUri(), job.getConfiguration()); // deprecated
job.addCacheFile(new Path(filename).toUri());
  • In the mapper's setup function, get the cached files with the code below (the first call is deprecated). Read the file and put the key/value pairs into an instance variable such as a HashMap. Each map task runs setup in a single thread, so this is safe.
Path[] localPaths = context.getLocalCacheFiles(); // deprecated
URI[] uris = context.getCacheFiles();
  • In the map function, since you have the entire user dataset in the HashMap, you try to look up the key (which comes from the current split of the comments dataset) in the HashMap. If it exists, you have a match. Because only one split of the comments dataset goes into each mapper task, you can only perform an inner join or a left outer join this way; a sketch follows this list.
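Below is a minimal sketch of such a replicated join mapper, combining the steps above. The file layouts (comma-separated id,name and userId,content,date) and the class name are assumptions.

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;
    import java.net.URI;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class ReplicatedJoinMapper extends Mapper<Object, Text, Text, Text> {
        private final Map<String, String> userIdToName = new HashMap<>();
        private final Text outkey = new Text();
        private final Text outvalue = new Text();

        @Override
        protected void setup(Context context) throws IOException, InterruptedException {
            // the user file was added in the driver via job.addCacheFile(...)
            // and is localized into the task working directory under its base name
            URI[] cacheFiles = context.getCacheFiles();
            String localName = new Path(cacheFiles[0].getPath()).getName();
            try (BufferedReader reader = new BufferedReader(new FileReader(localName))) {
                String line;
                while ((line = reader.readLine()) != null) {
                    String[] fields = line.split(",");      // assumed CSV: id,name
                    userIdToName.put(fields[0], fields[1]);
                }
            }
        }

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            String[] fields = value.toString().split(","); // assumed CSV: userId,content,date
            String name = userIdToName.get(fields[0]);
            if (name != null) {                            // inner join: emit only on a match
                outkey.set(name);
                outvalue.set(fields[1]);
                context.write(outkey, outvalue);
            }
            // for a left outer join, also emit the comment when name == null
        }
    }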

What is Hadoop Distributed Cache?

"DistributedCache is a facility provided by the Map-Reduce framework to cache files needed by applications. Once you cache a file for your job, hadoop framework will make it available on(or broadcast to) each and every data nodes (in file system, not in memory) where you map/reduce tasks are running. Then you can access the cache file as local file in your Mapper Or Reducer job. Now you can easily read the cache file and populate some collection (e.g Array, Hashmap etc.) in your code"  The cache will be removed once the job is done as they are temporary files.

The size of the cache can be configured in mapred-site.xml.
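For instance, in classic Hadoop 1.x the property is local.cache.size, a per-node limit in bytes (10 GB by default); YARN-based releases moved this into the NodeManager settings, so check the documentation for your version. A sketch:

    <!-- mapred-site.xml (Hadoop 1.x) -->
    <property>
      <name>local.cache.size</name>
      <value>10737418240</value> <!-- 10 GB, in bytes -->
    </property>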

How to use the Distributed Cache (the API has changed between versions)?

  • Add the cache files in the driver.
    Note the # sign in the URI. Before it you specify the absolute path of the data in HDFS; after it you set a name (a symlink) that becomes the local file path seen by your mapper/reducer.
     job.addCacheFile(new URI("/user/ricky/user.txt#user"));
     job.addCacheFile(new URI("/user/ricky/org.txt#org"));

     return job.waitForCompletion(true) ? 0 : 1;
  • Read the cache in your task (mapper/reducer), typically in the setup function.
 @Override
 protected void setup(Mapper<LongWritable, Text, Text, Text>.Context context)
         throws IOException, InterruptedException {
     if (context.getCacheFiles() != null
             && context.getCacheFiles().length > 0) {
         // open the symlinked files by the names given after '#'
         File some_file = new File("user");
         File other_file = new File("org");
     }
     super.setup(context);
 }
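Continuing that sketch, populating a HashMap from the symlinked file might look like this (the comma-separated id,name layout is an assumption):

    // inside setup(), after the cache-file check above
    Map<String, String> users = new HashMap<>();
    try (BufferedReader reader = new BufferedReader(new FileReader("user"))) {
        String line;
        while ((line = reader.readLine()) != null) {
            String[] fields = line.split(",");  // assumed layout: id,name
            users.put(fields[0], fields[1]);
        }
    }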

