MRJobConfig
     public static final String COMBINE_CLASS_ATTR
     The constant COMBINE_CLASS_ATTR = "mapreduce.job.combine.class"
     ————sub-interface JobContext
           method getCombinerClass
              ————implementing class JobContextImpl
                  implements the getCombinerClass method:
                 public Class<? extends Reducer<?,?,?,?>> getCombinerClass()
                          throws ClassNotFoundException {
                      return (Class<? extends Reducer<?,?,?,?>>)
                        conf.getClass(COMBINE_CLASS_ATTR, null);
                 }
                  Since JobContextImpl implements JobContext, which extends the
                  MRJobConfig interface, it inherits the COMBINE_CLASS_ATTR constant.
                  ————subclass Job
                     public void setCombinerClass(Class<? extends Reducer> cls
                               ) throws IllegalStateException {
                     ensureState(JobState.DEFINE);
                     conf.setClass(COMBINE_CLASS_ATTR, cls, Reducer.class);
                     }
                 Since JobContextImpl implements the MRJobConfig interface,
                 and Job extends JobContextImpl,
                 Job also has access to COMBINE_CLASS_ATTR,
                 and setCombinerClass stores the combiner class under that property key.
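The setClass/getClass round trip can be simulated without Hadoop. The sketch below is not Hadoop's real Configuration class; it is a minimal stand-in (all names illustrative) that stores the class name under the property key and resolves it back with Class.forName, which is essentially what happens between setCombinerClass and getCombinerClass:

```java
import java.util.HashMap;
import java.util.Map;

public class CombinerConfigDemo {
    public static final String COMBINE_CLASS_ATTR = "mapreduce.job.combine.class";
    private final Map<String, String> props = new HashMap<>();

    // Mirrors conf.setClass(name, cls, xface): validate and store the class name.
    public void setClass(String name, Class<?> cls, Class<?> xface) {
        if (!xface.isAssignableFrom(cls)) {
            throw new IllegalArgumentException(cls + " is not a " + xface);
        }
        props.put(name, cls.getName());
    }

    // Mirrors conf.getClass(name, defaultValue): resolve the stored name,
    // or return the default when the property was never set.
    public Class<?> getClass(String name, Class<?> defaultValue) throws ClassNotFoundException {
        String className = props.get(name);
        return className == null ? defaultValue : Class.forName(className);
    }

    // Stand-ins for the Reducer hierarchy.
    public static class Reducer {}
    public static class WCReduce extends Reducer {}

    public static void main(String[] args) throws ClassNotFoundException {
        CombinerConfigDemo conf = new CombinerConfigDemo();
        // Before setCombinerClass: getCombinerClass returns null,
        // so CombinerRunner.create would also return null (no combine phase).
        System.out.println(conf.getClass(COMBINE_CLASS_ATTR, null)); // null
        conf.setClass(COMBINE_CLASS_ATTR, WCReduce.class, Reducer.class);
        System.out.println(conf.getClass(COMBINE_CLASS_ATTR, null).getSimpleName()); // WCReduce
    }
}
```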
 
 
MRJobConfig
    ————sub-interface JobContext
        method getCombinerClass
        ————implementing class JobContextImpl
            ————subclass Job
        ————sub-interface TaskAttemptContext
            inherits the getCombinerClass method
 
Task
   $CombinerRunner (inner class of Task; $ denotes an inner class)
             This inner class has a static factory method create:
            public static <K,V> CombinerRunner<K,V> create(JobConf job,
                               TaskAttemptID taskId,
                               Counters.Counter inputCounter,
                               TaskReporter reporter,
                               org.apache.hadoop.mapreduce.OutputCommitter committer
                              ) throws ClassNotFoundException
            {
                  Class<? extends Reducer<K,V,K,V>> cls =
                    (Class<? extends Reducer<K,V,K,V>>) job.getCombinerClass();
                  if (cls != null) {
                    return new OldCombinerRunner(cls, job, inputCounter, reporter);
                  }
                  // make a task context so we can get the classes
                  org.apache.hadoop.mapreduce.TaskAttemptContext taskContext =
                    new org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl(job, taskId,
                        reporter);
                  Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>> newcls =
                    (Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>>)
                       taskContext.getCombinerClass();
                  if (newcls != null) {
                    return new NewCombinerRunner<K,V>(newcls, job, taskId, taskContext,
                                                      inputCounter, reporter, committer);
                  }
                  return null;
            }
                  This first block handles the old (mapred) API:
                  Class<? extends Reducer<K,V,K,V>> cls =
                          (Class<? extends Reducer<K,V,K,V>>) job.getCombinerClass();
                  if (cls != null) {
                          return new OldCombinerRunner(cls, job, inputCounter, reporter);
                  }
                  and this block handles the new (mapreduce) API:
                  org.apache.hadoop.mapreduce.TaskAttemptContext taskContext =
                    new org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl(job, taskId,
                        reporter);
                  Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>> newcls =
                    (Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>>)
                       taskContext.getCombinerClass();
                  if (newcls != null) {
                    return new NewCombinerRunner<K,V>(newcls, job, taskId, taskContext,
                                                      inputCounter, reporter, committer);
                  }
                  return null;
                  (The fully qualified names are needed because this file uses both the old
                  org.apache.hadoop.mapred.Reducer and the new org.apache.hadoop.mapreduce.Reducer;
                  without the package prefixes the two names would collide. Mentally stripping
                  the package names, casts, and generics does make the logic much easier to follow.)
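Stripped of those details, the control flow of create reduces to a two-way fallback: old-API combiner first, then new-API, then null. A toy sketch (String stand-ins instead of real Class objects; all names invented here):

```java
public class CreateFallbackDemo {
    // Condenses CombinerRunner.create's lookup order: the old-API combiner
    // wins if set; otherwise the new-API one is tried; if neither is
    // configured the method returns null and the spill skips combining.
    public static String create(String oldApiCls, String newApiCls) {
        if (oldApiCls != null) {
            return "OldCombinerRunner(" + oldApiCls + ")";
        }
        if (newApiCls != null) {
            return "NewCombinerRunner(" + newApiCls + ")";
        }
        return null; // no combiner: records are spilled directly
    }

    public static void main(String[] args) {
        System.out.println(create(null, "WCReduce"));     // NewCombinerRunner(WCReduce)
        System.out.println(create("OldRed", "WCReduce")); // OldCombinerRunner(OldRed)
        System.out.println(create(null, null));           // null
    }
}
```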
                  TaskAttemptContext extends JobContext, so it inherits the getCombinerClass
                  method. The call is polymorphic: what actually runs is TaskAttemptContextImpl's
                  getCombinerClass (TaskAttemptContextImpl extends JobContextImpl, which
                  implements the method), so we read back COMBINE_CLASS_ATTR, i.e. the class
                  we registered via job.setCombinerClass (call it xxxC).
                    xxxC lands in newcls, which is passed as the reducerClass parameter
                    of the NewCombinerRunner constructor:
                      NewCombinerRunner(Class reducerClass,
                          JobConf job,
                          org.apache.hadoop.mapreduce.TaskAttemptID taskId,
                          org.apache.hadoop.mapreduce.TaskAttemptContext context,
                          Counters.Counter inputCounter,
                          TaskReporter reporter,
                          org.apache.hadoop.mapreduce.OutputCommitter committer)
                      {
                          super(inputCounter, job, reporter);
                          this.reducerClass = reducerClass;
                          this.taskId = taskId;
                          keyClass = (Class<K>) context.getMapOutputKeyClass();
                          valueClass = (Class<V>) context.getMapOutputValueClass();
                          comparator = (RawComparator<K>) context.getCombinerKeyGroupingComparator();
                          this.committer = committer;
                      }
Task
  MapTask
        $MapOutputBuffer
            private CombinerRunner<K,V> combinerRunner;
            $SpillThread ($ denotes an inner class)
                combinerRunner = CombinerRunner.create(job, getTaskID(),
                                             combineInputCounter,
                                             reporter, null);
                // at this point combinerRunner holds the configured combiner
                if (combinerRunner == null) {
                      // spill directly
                      DataInputBuffer key = new DataInputBuffer();
                      while (spindex < mend &&
                          kvmeta.get(offsetFor(spindex % maxRec) + PARTITION) == i) {
                        final int kvoff = offsetFor(spindex % maxRec);
                        int keystart = kvmeta.get(kvoff + KEYSTART);
                        int valstart = kvmeta.get(kvoff + VALSTART);
                        key.reset(kvbuffer, keystart, valstart - keystart);
                        getVBytesForOffset(kvoff, value);
                        writer.append(key, value);
                        ++spindex;
                      }
                } else {
                      int spstart = spindex;
                      while (spindex < mend &&
                          kvmeta.get(offsetFor(spindex % maxRec)
                                    + PARTITION) == i) {
                        ++spindex;
                      }
                      // Note: we would like to avoid the combiner if we've fewer
                      // than some threshold of records for a partition
                      if (spstart != spindex) {
                        combineCollector.setWriter(writer);
                        RawKeyValueIterator kvIter =
                          new MRResultIterator(spstart, spindex);
                        combinerRunner.combine(kvIter, combineCollector);
                      }
                }
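The branch above can be condensed: without a combiner, every record in the partition is appended to the spill file as-is; with one, records sharing a key (adjacent, since the buffer is sorted before spilling) are merged first. A simplified simulation of the two paths, not the real buffer walk (the kvmeta offsets and circular buffer are elided, and the keys/values are plain lists):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SpillDemo {
    // Direct spill: every record goes to the writer unchanged,
    // like the writer.append(key, value) loop in the combinerRunner == null branch.
    public static List<String> directSpill(List<String> keys, List<Integer> vals) {
        List<String> written = new ArrayList<>();
        for (int i = 0; i < keys.size(); i++) {
            written.add(keys.get(i) + ":" + vals.get(i));
        }
        return written;
    }

    // Combine spill: records in the partition's slice are grouped by key and
    // reduced (summed here, as a word-count combiner would) before writing.
    public static List<String> combineSpill(List<String> keys, List<Integer> vals) {
        Map<String, Integer> totals = new LinkedHashMap<>();
        for (int i = 0; i < keys.size(); i++) {
            totals.merge(keys.get(i), vals.get(i), Integer::sum);
        }
        List<String> written = new ArrayList<>();
        totals.forEach((k, v) -> written.add(k + ":" + v));
        return written;
    }

    public static void main(String[] args) {
        // Sorted map output for one partition: ("a",1), ("a",1), ("b",1)
        List<String> keys = List.of("a", "a", "b");
        List<Integer> vals = List.of(1, 1, 1);
        System.out.println(directSpill(keys, vals));  // [a:1, a:1, b:1]
        System.out.println(combineSpill(keys, vals)); // [a:2, b:1]
    }
}
```

The payoff is visible in the output: the combined spill writes fewer records, which is exactly why a combiner reduces shuffle traffic.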
            
            Now look at the combine method itself,
            defined in Task's inner class NewCombinerRunner:
            public void combine(RawKeyValueIterator iterator,
                                OutputCollector<K,V> collector)
                throws IOException, InterruptedException,ClassNotFoundException
            {
              // make a reducer
              org.apache.hadoop.mapreduce.Reducer<K,V,K,V> reducer =
                (org.apache.hadoop.mapreduce.Reducer<K,V,K,V>)
                  ReflectionUtils.newInstance(reducerClass, job);
              org.apache.hadoop.mapreduce.Reducer.Context
                   reducerContext = createReduceContext(reducer, job, taskId,
                                                        iterator, null, inputCounter,
                                                        new OutputConverter(collector),
                                                        committer,
                                                        reporter, comparator, keyClass,
                                                        valueClass);
              reducer.run(reducerContext);
            }
            The reducerClass above is the xxxC we passed in.
            It is instantiated via reflection and referenced through the Reducer
            base type, and run is called on that reference (our xxxC does not
            override run, so the inherited Reducer.run executes).
            In the Reducer class, run looks like this:
            /**
           * Advanced application writers can use the
           * {@link #run(org.apache.hadoop.mapreduce.Reducer.Context)} method to
           * control how the reduce task works.
           */
          public void run(Context context) throws IOException, InterruptedException {
            setup(context);
            try {
              while (context.nextKey()) {
                reduce(context.getCurrentKey(), context.getValues(), context);
                // If a back up store is used, reset it
                Iterator<VALUEIN> iter = context.getValues().iterator();
                if(iter instanceof ReduceContext.ValueIterator) {
                  ((ReduceContext.ValueIterator<VALUEIN>)iter).resetBackupStore();       
                }
              }
            } finally {
              cleanup(context);
            }
          }
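The reflective instantiation in combine plus the virtual dispatch inside run can be sketched in miniature. This is not Hadoop's Reducer, just an illustrative stand-in showing the same template-method pattern: the base class owns run, the subclass overrides the hook it calls:

```java
public class DispatchDemo {
    public abstract static class MiniReducer {
        // run() lives in the base class, like Reducer.run; reduce() is the hook.
        public String run(String key, int[] values) {
            return reduce(key, values);
        }
        protected abstract String reduce(String key, int[] values);
    }

    // The user's combine class: extends the base and overrides reduce,
    // summing the values as a word-count combiner would.
    public static class SumReducer extends MiniReducer {
        @Override
        protected String reduce(String key, int[] values) {
            int total = 0;
            for (int v : values) total += v;
            return key + "=" + total;
        }
    }

    // Like ReflectionUtils.newInstance: reflective no-arg construction.
    public static MiniReducer newInstance(Class<? extends MiniReducer> cls) throws Exception {
        return cls.getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        Class<? extends MiniReducer> cls = SumReducer.class; // our "reducerClass"
        MiniReducer reducer = newInstance(cls);              // referenced via the base type
        // run is inherited from the base class, but dispatch lands in SumReducer.reduce:
        System.out.println(reducer.run("hello", new int[]{1, 1, 1})); // hello=3
    }
}
```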
          Because of dynamic dispatch, the reduce invoked inside run is the
          reduce of our subclass xxxC
          (when a subclass overrides a method, the override is what executes).
          So a custom combine class should extend Reducer and override reduce,
          and its input has this shape (using word count as the example):
       reduce(Text key, Iterable<IntWritable> values, Context context)
       where key is the word and values is the list of its counts: value1, value2, ...
       Note that by this point the input is already grouped: <key, list<v1, v2, v3, ...>>.
       (I reached this conclusion because my combine class was WCReduce, i.e. the
        same class served as both reducer and combiner. Reading the code, if the
        combiner received raw <key, value> pairs it could not do its job: merging
        records with equal keys and accumulating their counts are really two steps.
        Pairs with the same key are first grouped on the map side into
        <key, value list>, and only then does the combiner receive input in that
        form, process it (summing, in our case), and write the result to the
        context, from where it enters the reduce-side shuffle.
        I then iterated over values a second time in reduce, printing with
        System.out.println, and got nothing: after one traversal the iterator's
        cursor is already at the end, so a second pass sees no elements.)
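That empty second pass comes from the one-pass nature of the values iterable: Hadoop hands reduce a cursor over serialized data, and once consumed, iterating again yields nothing unless the backing store is reset (which Reducer.run does only between keys, via resetBackupStore). A minimal simulation of that behavior with a plain iterator:

```java
import java.util.Iterator;
import java.util.List;

public class OnePassDemo {
    // Wraps a single iterator, like the value iterable inside reduce():
    // every call to iterator() hands back the same underlying cursor.
    public static <T> Iterable<T> onePass(List<T> data) {
        Iterator<T> single = data.iterator();
        return () -> single;
    }

    public static int sum(Iterable<Integer> values) {
        int total = 0;
        for (int v : values) total += v;
        return total;
    }

    public static void main(String[] args) {
        Iterable<Integer> values = onePass(List.of(1, 2, 3));
        System.out.println(sum(values)); // 6: the first pass consumes everything
        System.out.println(sum(values)); // 0: the second pass sees an exhausted iterator
    }
}
```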
Try the code below yourself: combine, comment out, and rerun the pieces, and you will arrive at the same conclusions.
  // reduce
  public static class WCReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
      private final IntWritable valueOut = new IntWritable();

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values,
              Context context) throws IOException, InterruptedException {
          for (IntWritable value : values) {
              System.out.println(value.get() + "--");
          }

//          int total = 0;
//          for (IntWritable value : values) {
//              total += value.get();
//          }
//          valueOut.set(total);
//          context.write(key, valueOut);
      }
  }

  job.setCombinerClass(WCReduce.class);
 
 
