Reposted from: http://www.cnblogs.com/dlutxm/archive/2011/09/30/2196653.html

At the start of a MapReduce job, Hadoop needs to split the input files to be processed and read them in a predefined format; these operations are handled by the InputFormat. Its main responsibilities are the following three:

1. Validate the input-specification of the job.

2. Split-up the input file(s) into logical InputSplits, each of which is then assigned to an individual Mapper.

3. Provide the RecordReader implementation to be used to glean input records from the logical InputSplit for processing by the Mapper.

InputFormat is an abstract class with two abstract methods, getSplits() and createRecordReader(), which every subclass must implement; these are its two fundamental operations. getSplits() decides how the input is divided into splits, while createRecordReader() reads the data of a split in a given format. In practice you usually do not implement InputFormat directly but extend FileInputFormat instead, which already provides many file-handling utilities. One commonly overridden method is isSplitable(), which decides whether a file may be divided into multiple splits. The other is createRecordReader(), which supplies a custom RecordReader for each split; override it to read records in whatever format you need.

Below, following the example at http://developer.yahoo.com/hadoop/tutorial/module5.html#types, I implement my own MyInputFormat, tailoring it to my needs.

Suppose the data looks like this:

ball  3.5,12.7,9.0
car 15,23.76,42.23
device 0.0,12.4,-67.1

We want to read the data as follows: split each line, use the leading word (e.g. ball) as the key, and read the three floating-point numbers that follow into a Point3D object. How do we implement that? Below is what I came up with while learning.
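
Before touching any Hadoop code, here is a plain-Java sketch of the per-line parsing we want the RecordReader to perform (the whitespace/comma handling here is an illustration, not part of the original code):

// Standalone sketch: how one input line should become a (key, Point3D) pair.
String line = "ball  3.5,12.7,9.0";
String[] parts = line.trim().split("\\s+", 2);   // ["ball", "3.5,12.7,9.0"]
String[] coords = parts[1].split(",");           // ["3.5", "12.7", "9.0"]
String key = parts[0];                           // key = "ball"
float x = Float.parseFloat(coords[0]);           // 3.5
float y = Float.parseFloat(coords[1]);           // 12.7
float z = Float.parseFloat(coords[2]);           // 9.0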

First, the custom Point3D type. For a custom type to be transmitted across the network as a stream it must implement the Writable interface, and because MapReduce sorts and partitions records by key it must also implement Comparable. The WritableComparable interface combines the two, so implementing it satisfies both requirements at once.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class Point3D implements WritableComparable<Point3D> {

    public float x;
    public float y;
    public float z;

    public Point3D(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public Point3D() {
        this(0.0f, 0.0f, 0.0f);
    }

    public void set(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Deserialize the three coordinates in the same order they were written.
        x = in.readFloat();
        y = in.readFloat();
        z = in.readFloat();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the three coordinates as raw floats.
        out.writeFloat(x);
        out.writeFloat(y);
        out.writeFloat(z);
    }

    public float distanceFromOrigin() {
        return (float) Math.sqrt(x * x + y * y + z * z);
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Point3D))
            return false;
        Point3D other = (Point3D) obj;
        return this.x == other.x && this.y == other.y && this.z == other.z;
    }

    @Override
    public int hashCode() {
        return Float.floatToIntBits(x)
                ^ Float.floatToIntBits(y)
                ^ Float.floatToIntBits(z);
    }

    @Override
    public String toString() {
        return Float.toString(x) + "," + Float.toString(y) + "," + Float.toString(z);
    }

    @Override
    public int compareTo(Point3D other) {
        // Order points by their distance from the origin.
        return Float.compare(this.distanceFromOrigin(), other.distanceFromOrigin());
    }
}
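
As a quick sanity check (a standalone sketch, not part of the job), the type can be exercised like this; compareTo() orders points by their distance from the origin, and toString() reproduces the comma-separated form:

// Standalone sketch exercising Point3D outside of MapReduce.
Point3D a = new Point3D(3.5f, 12.7f, 9.0f);
Point3D b = new Point3D(15f, 23.76f, 42.23f);

System.out.println(a);                        // 3.5,12.7,9.0
System.out.println(a.distanceFromOrigin());   // about 15.95
System.out.println(a.compareTo(b));           // negative: a is closer to the origin than b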

Next comes the custom InputFormat itself.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.LineReader;

public class MyInputFormat extends FileInputFormat<Text, Point3D> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Do not split the file: each file is read as a single split by one mapper.
        return false;
    }

    @Override
    public RecordReader<Text, Point3D> createRecordReader(InputSplit inputsplit,
            TaskAttemptContext context) throws IOException, InterruptedException {
        return new objPosRecordReader();
    }

    public static class objPosRecordReader extends RecordReader<Text, Point3D> {

        public LineReader in;
        public Text lineKey;
        public Point3D lineValue;
        public StringTokenizer token = null;
        public Text line;

        @Override
        public void initialize(InputSplit input, TaskAttemptContext context)
                throws IOException, InterruptedException {
            // Open the file backing this split and wrap it in a LineReader.
            FileSplit split = (FileSplit) input;
            Configuration job = context.getConfiguration();
            Path file = split.getPath();
            FileSystem fs = file.getFileSystem(job);
            FSDataInputStream filein = fs.open(file);
            in = new LineReader(filein, job);
            line = new Text();
            lineKey = new Text();
            lineValue = new Point3D();
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            // Read one line; a size of 0 means end of input.
            int linesize = in.readLine(line);
            if (linesize == 0)
                return false;
            // Split the line into the key and the comma-separated coordinates.
            token = new StringTokenizer(line.toString());
            if (token.countTokens() < 2)
                return false; // blank or malformed line: stop reading
            String name = token.nextToken();
            String[] points = token.nextToken().split(",");
            lineKey.set(name);
            lineValue.set(Float.parseFloat(points[0]),
                    Float.parseFloat(points[1]),
                    Float.parseFloat(points[2]));
            return true;
        }

        @Override
        public Text getCurrentKey() throws IOException, InterruptedException {
            return lineKey;
        }

        @Override
        public Point3D getCurrentValue() throws IOException, InterruptedException {
            return lineValue;
        }

        @Override
        public float getProgress() throws IOException, InterruptedException {
            // Progress reporting is not implemented here.
            return 0;
        }

        @Override
        public void close() throws IOException {
            // Release the underlying stream.
            if (in != null)
                in.close();
        }
    }
}
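
To make the read loop explicit, this is roughly how the framework drives the RecordReader for one split (a simplified sketch; the split, the task context, and the call into the mapper are all supplied by the framework, not by our code):

// Sketch only: the framework's use of objPosRecordReader for a single split.
void sketchReadLoop(InputSplit split, TaskAttemptContext context) throws Exception {
    MyInputFormat.objPosRecordReader reader = new MyInputFormat.objPosRecordReader();
    reader.initialize(split, context);             // opens the file behind the split
    while (reader.nextKeyValue()) {                // one iteration per input line
        Text key = reader.getCurrentKey();         // e.g. "ball"
        Point3D value = reader.getCurrentValue();  // e.g. 3.5,12.7,9.0
        // the framework passes (key, value) to the mapper here
    }
    reader.close();
}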

Below is the map function I wrote for testing. There is no reducer, so when configuring the job the number of reduce tasks must be set to 0 with job.setNumReduceTasks(0).

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TestMapper extends Mapper<Text, Point3D, Text, Point3D> {

    @Override
    protected void map(Text key, Point3D value, Context context)
            throws IOException, InterruptedException {
        // Identity mapper: pass each (key, Point3D) pair straight through.
        context.write(key, value);
    }
}
Finally, the driver program (note that Text must be org.apache.hadoop.io.Text, not the javax.xml.soap.Text that an IDE may auto-import):

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TestMyInputFormat {

    /**
     * @param args args[0] is the input path, args[1] is the output path
     * @throws IOException
     * @throws ClassNotFoundException
     * @throws InterruptedException
     */
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();
        Job job = new Job(conf);
        job.setJobName("Test MyInputFormat");
        job.setJarByClass(TestMyInputFormat.class); // needed when running from a jar on a cluster

        // Remove the output directory if it already exists.
        FileSystem fs = FileSystem.get(URI.create(args[1]), conf);
        fs.delete(new Path(args[1]), true);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setInputFormatClass(MyInputFormat.class);
        job.setMapperClass(TestMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Point3D.class);

        // No reducer: the mapper output is written directly to the output directory.
        job.setNumReduceTasks(0);

        job.waitForCompletion(true);
    }
}
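
With zero reduce tasks and the default TextOutputFormat, each mapper writes its records directly to a part-m-xxxxx file as the key, a tab, and value.toString(), so for the sample input the output should look roughly like:

ball	3.5,12.7,9.0
car	15.0,23.76,42.23
device	0.0,12.4,-67.1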
