Reposted from: http://www.cnblogs.com/dlutxm/archive/2011/09/30/2196653.html

At the start of a MapReduce job, Hadoop needs to split the input files and read them according to a predefined format; all of this is handled by the InputFormat. Its main responsibilities are the following three:

1. Validate the input-specification of the job.

2. Split-up the input file(s) into logical InputSplits, each of which is then assigned to an individual Mapper.

3. Provide the RecordReader implementation to be used to glean input records from the logical InputSplit for processing by the Mapper.

InputFormat is an abstract class with two abstract methods, getSplits() and createRecordReader(), which must be implemented by subclasses; these are its two fundamental operations. getSplits() decides how the input is divided into splits, while createRecordReader() reads records from a split in a given format. In practice you usually do not implement InputFormat directly but extend FileInputFormat instead, which already provides many file-handling methods. The most commonly overridden one is isSplitable(), which decides whether a file may be split into multiple pieces. The other is createRecordReader(), which supplies a RecordReader tailored to the split; to customize how records are read, you only need to override this method.
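For reference, the contract of the new-API org.apache.hadoop.mapreduce.InputFormat looks roughly like the sketch below (simplified; see the Hadoop javadoc for the authoritative declaration):

// Simplified sketch of the new-API InputFormat contract.
public abstract class InputFormat<K, V> {

    // Logically split the job's input into a list of InputSplits.
    public abstract List<InputSplit> getSplits(JobContext context)
            throws IOException, InterruptedException;

    // Create a RecordReader that turns one split into (key, value) records.
    public abstract RecordReader<K, V> createRecordReader(InputSplit split,
            TaskAttemptContext context) throws IOException, InterruptedException;
}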

Below I follow the example at http://developer.yahoo.com/hadoop/tutorial/module5.html#types and implement my own MyInputFormat, tailoring an InputFormat to my own needs.

Suppose the input data looks like this:

ball  3.5,12.7,9.0
car 15,23.76,42.23
device 0.0,12.4,-67.1

We want to read the data by splitting each line: the leading word (e.g. ball) becomes the key, and the three floating-point numbers that follow are read into a Point3D object. How do we implement that? Below is the implementation I wrote while learning.

First comes the Point3D type itself. To be sent over the network as a stream, a custom type must implement the Writable interface; and because MapReduce sorts and partitions records by key, it must also implement Comparable. Implementing the combined interface WritableComparable satisfies both requirements at once.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.io.WritableComparable;

public class Point3D implements WritableComparable<Point3D> {

    public float x;
    public float y;
    public float z;

    public Point3D(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    public Point3D() {
        this(0.0f, 0.0f, 0.0f);
    }

    public void set(float x, float y, float z) {
        this.x = x;
        this.y = y;
        this.z = z;
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // Deserialize the three coordinates in the same order they were written.
        x = in.readFloat();
        y = in.readFloat();
        z = in.readFloat();
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // Serialize the three coordinates so the object can be shipped between tasks.
        out.writeFloat(x);
        out.writeFloat(y);
        out.writeFloat(z);
    }

    public float distanceFromOrigin() {
        return (float) Math.sqrt(x * x + y * y + z * z);
    }

    @Override
    public boolean equals(Object obj) {
        if (!(obj instanceof Point3D)) {
            return false;
        }
        Point3D other = (Point3D) obj;
        return this.x == other.x && this.y == other.y && this.z == other.z;
    }

    @Override
    public int hashCode() {
        return Float.floatToIntBits(x)
                ^ Float.floatToIntBits(y)
                ^ Float.floatToIntBits(z);
    }

    @Override
    public String toString() {
        return Float.toString(x) + "," + Float.toString(y) + "," + Float.toString(z);
    }

    @Override
    public int compareTo(Point3D other) {
        // Order points by their distance from the origin.
        return Float.compare(this.distanceFromOrigin(), other.distanceFromOrigin());
    }
}
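As a quick sanity check (my own addition, not from the original post), the Writable methods can be exercised locally by writing a Point3D to a byte array and reading it back; the hypothetical Point3DTest class below sketches that, assuming the Point3D class above is on the classpath:

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;

public class Point3DTest {
    public static void main(String[] args) throws IOException {
        Point3D p = new Point3D(3.5f, 12.7f, 9.0f);

        // Serialize with write() ...
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        p.write(new DataOutputStream(bytes));

        // ... and deserialize with readFields().
        Point3D q = new Point3D();
        q.readFields(new DataInputStream(new ByteArrayInputStream(bytes.toByteArray())));

        System.out.println(q);                   // expected: 3.5,12.7,9.0
        System.out.println(p.equals(q));         // expected: true
        System.out.println(p.compareTo(q) == 0); // expected: true
    }
}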

Next comes the custom InputFormat itself:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.InputSplit;
import org.apache.hadoop.mapreduce.JobContext;
import org.apache.hadoop.mapreduce.RecordReader;
import org.apache.hadoop.mapreduce.TaskAttemptContext;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.apache.hadoop.util.LineReader;

public class MyInputFormat extends FileInputFormat<Text, Point3D> {

    @Override
    protected boolean isSplitable(JobContext context, Path filename) {
        // Process each file as a whole; do not split it into multiple InputSplits.
        return false;
    }

    @Override
    public RecordReader<Text, Point3D> createRecordReader(InputSplit inputsplit,
            TaskAttemptContext context) throws IOException, InterruptedException {
        return new objPosRecordReader();
    }

    public static class objPosRecordReader extends RecordReader<Text, Point3D> {

        private LineReader in;
        private Text line;
        private Text lineKey;
        private Point3D lineValue;

        @Override
        public void initialize(InputSplit input, TaskAttemptContext context)
                throws IOException, InterruptedException {
            // Open the file backing this split and wrap it in a LineReader.
            FileSplit split = (FileSplit) input;
            Configuration job = context.getConfiguration();
            Path file = split.getPath();
            FileSystem fs = file.getFileSystem(job);
            FSDataInputStream filein = fs.open(file);
            in = new LineReader(filein, job);
            line = new Text();
            lineKey = new Text();
            lineValue = new Point3D();
        }

        @Override
        public boolean nextKeyValue() throws IOException, InterruptedException {
            // Read lines until a well-formed record is found or end of file is reached.
            while (in.readLine(line) > 0) {
                // A line looks like "ball 3.5,12.7,9.0": the first token is the key,
                // the second token holds the three comma-separated coordinates.
                StringTokenizer token = new StringTokenizer(line.toString());
                if (token.countTokens() < 2) {
                    continue; // skip blank or malformed lines
                }
                lineKey.set(token.nextToken());
                String[] points = token.nextToken().split(",");
                lineValue.set(Float.parseFloat(points[0]),
                              Float.parseFloat(points[1]),
                              Float.parseFloat(points[2]));
                return true;
            }
            return false;
        }

        @Override
        public Text getCurrentKey() throws IOException, InterruptedException {
            return lineKey;
        }

        @Override
        public Point3D getCurrentValue() throws IOException, InterruptedException {
            return lineValue;
        }

        @Override
        public float getProgress() throws IOException, InterruptedException {
            // Progress reporting is not implemented in this example.
            return 0;
        }

        @Override
        public void close() throws IOException {
            if (in != null) {
                in.close();
            }
        }
    }
}

Below is the map function I wrote for testing. There is no reducer, so when configuring the job the number of reduce tasks must be set to 0 with job.setNumReduceTasks(0).

import java.io.IOException;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class TestMapper extends Mapper<Text, Point3D, Text, Point3D> {

    @Override
    protected void map(Text key, Point3D value, Context context)
            throws IOException, InterruptedException {
        // Identity map: pass each (name, Point3D) record straight through to the output.
        context.write(key, value);
    }
}
Finally, the driver program that configures and submits the job:

import java.io.IOException;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class TestMyInputFormat {

    /**
     * @param args args[0] is the input path, args[1] is the output path
     */
    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration conf = new Configuration();

        // Delete the output directory if it already exists, otherwise the job will fail.
        FileSystem fs = FileSystem.get(URI.create(args[1]), conf);
        fs.delete(new Path(args[1]), true);

        Job job = new Job(conf);
        job.setJobName("TestMyInputFormat");
        job.setJarByClass(TestMyInputFormat.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        job.setInputFormatClass(MyInputFormat.class);
        job.setMapperClass(TestMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(Point3D.class);

        // Map-only job: no reduce phase.
        job.setNumReduceTasks(0);

        job.waitForCompletion(true);
    }
}
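Assuming the default TextOutputFormat (the job does not set one explicitly), each map output record is written as the key, a tab, and value.toString(). Because the file is marked non-splittable it is handled by a single mapper, so for the sample input the single output file (part-m-00000) should look roughly like:

ball	3.5,12.7,9.0
car	15.0,23.76,42.23
device	0.0,12.4,-67.1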
