MRJobConfig
     public static final String COMBINE_CLASS_ATTR = "mapreduce.job.combine.class"
     ———— sub-interface JobContext
           method getCombinerClass
             ———— implementing class JobContextImpl
                 implements getCombinerClass:
                 public Class<? extends Reducer<?,?,?,?>> getCombinerClass()
                          throws ClassNotFoundException {
                      return (Class<? extends Reducer<?,?,?,?>>)
                        conf.getClass(COMBINE_CLASS_ATTR, null);
                 }
                  Because JobContextImpl implements JobContext, which in turn extends MRJobConfig,
                  it inherits the COMBINE_CLASS_ATTR constant defined in MRJobConfig.
                  ———— subclass Job
                     public void setCombinerClass(Class<? extends Reducer> cls
                               ) throws IllegalStateException {
                     ensureState(JobState.DEFINE);
                     conf.setClass(COMBINE_CLASS_ATTR, cls, Reducer.class);
                     }
                 Since Job extends JobContextImpl, it too sees the COMBINE_CLASS_ATTR constant,
                 and setCombinerClass() stores the combiner class into the job configuration
                 under that key.
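                 A quick, hedged driver-side fragment of what this looks like from user code
                 (WCReduce is the user-defined combiner class shown at the end of this post):

                 Job job = Job.getInstance(new Configuration(), "word count");
                 job.setCombinerClass(WCReduce.class);
                 // effectively stores WCReduce under "mapreduce.job.combine.class"
                 // in the job's Configuration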
 
 
MRJobConfig
    ———— sub-interface JobContext
        method getCombinerClass
        ———— implementing class JobContextImpl
            ———— subclass Job
        ———— sub-interface TaskAttemptContext
            inherits the getCombinerClass declaration
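            The same relationships, written out as an abridged, self-consistent Java sketch
            (the real Hadoop types carry many more members; this keeps only what matters here,
            assuming imports of org.apache.hadoop.conf.Configuration and
            org.apache.hadoop.mapreduce.Reducer):

            interface MRJobConfig {
              String COMBINE_CLASS_ATTR = "mapreduce.job.combine.class";
            }
            interface JobContext extends MRJobConfig {
              Class<? extends Reducer<?,?,?,?>> getCombinerClass() throws ClassNotFoundException;
            }
            interface TaskAttemptContext extends JobContext { }

            class JobContextImpl implements JobContext {
              protected final Configuration conf;
              JobContextImpl(Configuration conf) { this.conf = conf; }
              @SuppressWarnings("unchecked")
              public Class<? extends Reducer<?,?,?,?>> getCombinerClass() {
                // reads back whatever was stored under COMBINE_CLASS_ATTR
                return (Class<? extends Reducer<?,?,?,?>>) conf.getClass(COMBINE_CLASS_ATTR, null);
              }
            }
            class Job extends JobContextImpl {
              Job(Configuration conf) { super(conf); }
              public void setCombinerClass(Class<? extends Reducer> cls) {
                conf.setClass(COMBINE_CLASS_ATTR, cls, Reducer.class);
              }
            }
            class TaskAttemptContextImpl extends JobContextImpl implements TaskAttemptContext {
              TaskAttemptContextImpl(Configuration conf) { super(conf); }
            }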
 
Task   
   $CombinerRunner (an inner class of Task)
             this inner class has a static create method:
            public static <K,V> CombinerRunner<K,V> create(JobConf job,
                               TaskAttemptID taskId,
                               Counters.Counter inputCounter,
                               TaskReporter reporter,
                               org.apache.hadoop.mapreduce.OutputCommitter committer
                              ) throws ClassNotFoundException
            {
                  Class<? extends Reducer<K,V,K,V>> cls =
                    (Class<? extends Reducer<K,V,K,V>>) job.getCombinerClass();
                  if (cls != null) {
                    return new OldCombinerRunner(cls, job, inputCounter, reporter);
                  }
                  // make a task context so we can get the classes
                  org.apache.hadoop.mapreduce.TaskAttemptContext taskContext =
                    new org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl(job, taskId,
                        reporter);
                  Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>> newcls =
                    (Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>>)
                       taskContext.getCombinerClass();
                  if (newcls != null) {
                    return new NewCombinerRunner<K,V>(newcls, job, taskId, taskContext,
                                                      inputCounter, reporter, committer);
                  }
                  return null;
            }
                  This first block is presumably the old-API path:
                  Class<? extends Reducer<K,V,K,V>> cls =
                          (Class<? extends Reducer<K,V,K,V>>) job.getCombinerClass();
                  if (cls != null) {
                          return new OldCombinerRunner(cls, job, inputCounter, reporter);
                  }
                  and this one is the new-API path:
                  org.apache.hadoop.mapreduce.TaskAttemptContext taskContext =
                    new org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl(job, taskId,
                        reporter);
                  Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>> newcls =
                    (Class<? extends org.apache.hadoop.mapreduce.Reducer<K,V,K,V>>)
                       taskContext.getCombinerClass();
                  if (newcls != null) {
                    return new NewCombinerRunner<K,V>(newcls, job, taskId, taskContext,
                                                      inputCounter, reporter, committer);
                  }
                  return null;
                  (Not sure why everything is written with fully qualified names; stripped of
                  the package prefixes, the casts and all the generics, this would read much
                  more clearly.)
                  TaskAttemptContext is a sub-interface of JobContext, so it inherits the
                  getCombinerClass declaration.
                  Polymorphism is at work here as well: the call actually lands in the
                  implementing class TaskAttemptContextImpl's getCombinerClass
                  (TaskAttemptContextImpl extends JobContextImpl, and JobContextImpl is where
                  the method is implemented).
                  So in the end it reads the COMBINE_CLASS_ATTR property, i.e. it gets back the
                  class xxxC that we registered through job.setCombinerClass.
                    That xxxC is assigned to newcls, and newcls is passed as the reducerClass
                    parameter of the NewCombinerRunner constructor:
                      NewCombinerRunner(Class reducerClass,
                          JobConf job,
                          org.apache.hadoop.mapreduce.TaskAttemptID taskId,
                          org.apache.hadoop.mapreduce.TaskAttemptContext context,
                          Counters.Counter inputCounter,
                          TaskReporter reporter,
                          org.apache.hadoop.mapreduce.OutputCommitter committer)
                      {
                          super(inputCounter, job, reporter);
                          this.reducerClass = reducerClass;
                          this.taskId = taskId;
                          keyClass = (Class<K>) context.getMapOutputKeyClass();
                          valueClass = (Class<V>) context.getMapOutputValueClass();
                          comparator = (RawComparator<K>) context.getCombinerKeyGroupingComparator();
                          this.committer = committer;
                      }
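                      To make the old-path / new-path split that create() probes more concrete,
                      here is a hedged fragment of the two ways a combiner can be registered
                      (OldWCCombiner and WCReduce are placeholder user classes):

                      // old (org.apache.hadoop.mapred) API: the class implements mapred.Reducer,
                      // so create() finds it via job.getCombinerClass() -> OldCombinerRunner
                      JobConf oldJob = new JobConf();
                      oldJob.setCombinerClass(OldWCCombiner.class);

                      // new (org.apache.hadoop.mapreduce) API: the class extends mapreduce.Reducer,
                      // so create() finds it via taskContext.getCombinerClass() -> NewCombinerRunner
                      Job newJob = Job.getInstance(new Configuration());
                      newJob.setCombinerClass(WCReduce.class);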
Task          
  MapTask
        $MapOutputBuffer
            private CombinerRunner<K,V> combinerRunner;
            $SpillThread class ($ marks an inner class; the assignment below is made when
            MapOutputBuffer is initialized, and the spill loop further down lives in
            sortAndSpill(), which the SpillThread drives)
                combinerRunner = CombinerRunner.create(job, getTaskID(),
                                             combineInputCounter,
                                             reporter, null);
                //at this point we have obtained the configured combiner runner
                if (combinerRunner == null) {
                      // spill directly
                      DataInputBuffer key = new DataInputBuffer();
                      while (spindex < mend &&
                          kvmeta.get(offsetFor(spindex % maxRec) + PARTITION) == i) {
                        final int kvoff = offsetFor(spindex % maxRec);
                        int keystart = kvmeta.get(kvoff + KEYSTART);
                        int valstart = kvmeta.get(kvoff + VALSTART);
                        key.reset(kvbuffer, keystart, valstart - keystart);
                        getVBytesForOffset(kvoff, value);
                        writer.append(key, value);
                        ++spindex;
                      }
                } else {
                      int spstart = spindex;
                      while (spindex < mend &&
                          kvmeta.get(offsetFor(spindex % maxRec)
                                    + PARTITION) == i) {
                        ++spindex;
                      }
                      // Note: we would like to avoid the combiner if we've fewer
                      // than some threshold of records for a partition
                      if (spstart != spindex) {
                        combineCollector.setWriter(writer);
                        RawKeyValueIterator kvIter =
                          new MRResultIterator(spstart, spindex);
                        combinerRunner.combine(kvIter, combineCollector);
                      }
                }
            
            Next, look at the combine function,
            defined in Task's inner class NewCombinerRunner:
            public void combine(RawKeyValueIterator iterator,
                                OutputCollector<K,V> collector)
                throws IOException, InterruptedException,ClassNotFoundException
            {
              // make a reducer
              org.apache.hadoop.mapreduce.Reducer<K,V,K,V> reducer =
                (org.apache.hadoop.mapreduce.Reducer<K,V,K,V>)
                  ReflectionUtils.newInstance(reducerClass, job);
              org.apache.hadoop.mapreduce.Reducer.Context
                   reducerContext = createReduceContext(reducer, job, taskId,
                                                        iterator, null, inputCounter,
                                                        new OutputConverter(collector),
                                                        committer,
                                                        reporter, comparator, keyClass,
                                                        valueClass);
              reducer.run(reducerContext);
            }
            The reducerClass above is exactly the xxxC we passed in.
            In the end an xxxC instance is created via reflection and referenced through the
            Reducer type (an upcast), and run is then invoked on it (our xxxC does not override
            run, so what executes is the parent class Reducer's run).
            In the Reducer class, run looks like this:
            /**
           * Advanced application writers can use the
           * {@link #run(org.apache.hadoop.mapreduce.Reducer.Context)} method to
           * control how the reduce task works.
           */
          public void run(Context context) throws IOException, InterruptedException {
            setup(context);
            try {
              while (context.nextKey()) {
                reduce(context.getCurrentKey(), context.getValues(), context);
                // If a back up store is used, reset it
                Iterator<VALUEIN> iter = context.getValues().iterator();
                if(iter instanceof ReduceContext.ValueIterator) {
                  ((ReduceContext.ValueIterator<VALUEIN>)iter).resetBackupStore();       
                }
              }
            } finally {
              cleanup(context);
            }
          }
           Because of polymorphism, the reduce called here is the one in the subclass xxxC
           (that is the essence of polymorphism: if the subclass overrides the method, the
           subclass's version is what actually runs).
           So when we write our own class to use as the combiner, it should extend Reducer and
           override reduce, and its input takes this form (using wordcount as the example):
        reduce(Text key, Iterable<IntWritable> values, Context context)
        where key is the word and values is the list of counts for it, i.e. value1, value2, ...
        Note that at this point the input is already grouped into a list,
        i.e. <key, list<value1, value2, value3, ...>>.
        (I reached this conclusion because the combine class I was using was WCReduce,
         i.e. the same class served both as the reducer and as the combiner. Analyzing the code,
         if the combiner's input were plain <key, value> pairs it simply could not do its job;
         merging key-equal pairs and accumulating their counts are two separate steps. Pairs with
         the same key are already grouped once on the map side into <key, value list>, and only
         then does the combiner receive input in that grouped form and process it; our processing
         is the summation, whose result is written to the context and then goes through the
         reduce-side shuffle.
         Then, in reduce, I iterated over the values and printed them with System.out; the result
         came out as 0, which is actually because after one full pass the iterator was no longer
         positioned where I expected.)
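         A hedged sketch of that pitfall: the Iterable handed to reduce() is backed by a single
         streaming iterator, so unless the backup store is reset (see Reducer.run above), a second
         for-each over the same values typically sees nothing. Inside a
         Reducer<Text, IntWritable, Text, IntWritable> subclass:

         @Override
         protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                 throws IOException, InterruptedException {
             for (IntWritable value : values) {        // first pass consumes the iterator
                 System.out.println(value.get() + "--");
             }
             int total = 0;
             for (IntWritable value : values) {        // second pass: usually empty
                 total += value.get();
             }
             context.write(key, new IntWritable(total));  // total comes out as 0
         }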
Try it yourself with the WCReduce code below: keep recombining and commenting out pieces and
testing, and you will arrive at the same conclusions.
  // reduce
  public static class WCReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
      private final IntWritable ValueOut = new IntWritable();

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values,
              Context context) throws IOException, InterruptedException {
          for (IntWritable value : values) {
              System.out.println(value.get() + "--");
          }

//            int total = 0;
//            for (IntWritable value : values) {
//                total += value.get();
//            }
//            ValueOut.set(total);
//            context.write(key, ValueOut);
      }
  }

  job.setCombinerClass(WCReduce.class);
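
  For reference, a hedged, cleaned-up version with the summation active, wired in as both the
  reducer and the combiner (the usual wordcount setup; setReducerClass is the standard Job API):

  public static class WCReduce extends Reducer<Text, IntWritable, Text, IntWritable> {
      private final IntWritable valueOut = new IntWritable();

      @Override
      protected void reduce(Text key, Iterable<IntWritable> values, Context context)
              throws IOException, InterruptedException {
          int total = 0;
          for (IntWritable value : values) {   // single pass over the grouped counts
              total += value.get();
          }
          valueOut.set(total);
          context.write(key, valueOut);        // emit <word, partial or final sum>
      }
  }

  job.setReducerClass(WCReduce.class);
  job.setCombinerClass(WCReduce.class);

  Because the combiner's output feeds the reduce-side shuffle, its output key/value types must
  match its input types, which is why the same class can safely serve both roles here.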
 
 
