18 Nov 2014 by Fabian Hüske (@fhueske)

Apache Hadoop is an industry standard for scalable analytical data processing. Many data analysis applications have been implemented as Hadoop MapReduce jobs and run in clusters around the world. Apache Flink can be an alternative to MapReduce that improves on it in many dimensions. Among other features, Flink provides much better performance and offers APIs in Java and Scala that are easy to use. Similar to Hadoop, Flink’s APIs provide interfaces for Mapper and Reducer functions, as well as Input- and OutputFormats, along with many more operators. While conceptually equivalent, Hadoop’s MapReduce and Flink’s interfaces for these functions are unfortunately not source compatible.

Flink’s Hadoop Compatibility Package

To close this gap, Flink provides a Hadoop Compatibility package to wrap functions implemented against Hadoop’s MapReduce interfaces and embed them in Flink programs. This package was developed as part of a Google Summer of Code 2014 project.

With the Hadoop Compatibility package, you can reuse all your Hadoop

  • InputFormats (mapred and mapreduce APIs)
  • OutputFormats (mapred and mapreduce APIs)
  • Mappers (mapred API)
  • Reducers (mapred API)

in Flink programs without changing a line of code. Moreover, Flink natively supports all Hadoop data types (Writable and WritableComparable).
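
Because Hadoop’s data types are handled natively, you can also use them inside regular Flink functions. Below is a minimal sketch, assuming Flink’s Java MapFunction interface; the UpperCaser class is illustrative and not part of the compatibility package:

// A native Flink function operating directly on Hadoop data types
public static class UpperCaser implements MapFunction<Tuple2<LongWritable, Text>, Tuple2<LongWritable, Text>> {
  @Override
  public Tuple2<LongWritable, Text> map(Tuple2<LongWritable, Text> value) {
    // Text and LongWritable are used as ordinary Flink tuple fields
    return new Tuple2<LongWritable, Text>(value.f0, new Text(value.f1.toString().toUpperCase()));
  }
}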

The following code snippet shows a simple Flink WordCount program that solely uses Hadoop data types, InputFormat, OutputFormat, Mapper, and Reducer functions.

// Definition of Hadoop Mapper function
public class Tokenizer implements Mapper<LongWritable, Text, Text, LongWritable> { ... }

// Definition of Hadoop Reducer function
public class Counter implements Reducer<Text, LongWritable, Text, LongWritable> { ... }

public static void main(String[] args) {
  final String inputPath = args[0];
  final String outputPath = args[1];

  final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

  // Setup Hadoop’s TextInputFormat
  HadoopInputFormat<LongWritable, Text> hadoopInputFormat =
      new HadoopInputFormat<LongWritable, Text>(
          new TextInputFormat(), LongWritable.class, Text.class, new JobConf());
  TextInputFormat.addInputPath(hadoopInputFormat.getJobConf(), new Path(inputPath));

  // Read a DataSet with the Hadoop InputFormat
  DataSet<Tuple2<LongWritable, Text>> text = env.createInput(hadoopInputFormat);

  DataSet<Tuple2<Text, LongWritable>> words = text
      // Wrap Tokenizer Mapper function
      .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(new Tokenizer()))
      .groupBy(0)
      // Wrap Counter Reducer function (used as Reducer and Combiner)
      .reduceGroup(new HadoopReduceCombineFunction<Text, LongWritable, Text, LongWritable>(
          new Counter(), new Counter()));

  // Setup Hadoop’s TextOutputFormat
  HadoopOutputFormat<Text, LongWritable> hadoopOutputFormat =
      new HadoopOutputFormat<Text, LongWritable>(
          new TextOutputFormat<Text, LongWritable>(), new JobConf());
  hadoopOutputFormat.getJobConf().set("mapred.textoutputformat.separator", " ");
  TextOutputFormat.setOutputPath(hadoopOutputFormat.getJobConf(), new Path(outputPath));

  // Output & Execute
  words.output(hadoopOutputFormat);
  env.execute("Hadoop Compat WordCount");
}

As you can see, Flink represents Hadoop key-value pairs as Tuple2<key, value> tuples. Note that the program uses Flink’s groupBy() transformation to group the data on the key field (field 0 of the Tuple2<key, value>) before it is given to the Reducer function. At the moment, the compatibility package does not evaluate custom Hadoop partitioners, sorting comparators, or grouping comparators.
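
If a Hadoop job relies on a sorting comparator for secondary sort, one workaround is Flink’s own group sorting. The following is a sketch, assuming Flink’s sortGroup() transformation and the compatibility package’s HadoopReduceFunction wrapper (a Reducer without a Combiner); it is not a feature of the compatibility package itself:

DataSet<Tuple2<Text, LongWritable>> sorted = text
    // Wrap Tokenizer Mapper function
    .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(new Tokenizer()))
    .groupBy(0)
    // Flink sorts each group on the value field, replacing a Hadoop sorting comparator
    .sortGroup(1, Order.ASCENDING)
    .reduceGroup(new HadoopReduceFunction<Text, LongWritable, Text, LongWritable>(new Counter()));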

Hadoop functions can be used at any position within a Flink program and of course also be mixed with native Flink functions. This means that instead of assembling a workflow of Hadoop jobs in an external driver method or using a workflow scheduler such as Apache Oozie, you can implement an arbitrarily complex Flink program consisting of multiple Hadoop Input- and OutputFormats and Mapper and Reducer functions. When executing such a Flink program, data will be pipelined between your Hadoop functions and will not be written to HDFS just for the purpose of data exchange.
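
For example, the output of the wrapped Counter Reducer from the WordCount program above can be post-processed by a native Flink function before it is written out. A minimal sketch; the frequency threshold is purely illustrative:

// Filter the Hadoop Reducer's output with a native Flink function
DataSet<Tuple2<Text, LongWritable>> frequentWords = words
    .filter(new FilterFunction<Tuple2<Text, LongWritable>>() {
      @Override
      public boolean filter(Tuple2<Text, LongWritable> wordCount) {
        // Keep only words that occur more than once
        return wordCount.f1.get() > 1;
      }
    });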

What comes next?

While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and combine them with other operations such as Spargel operations (Pregel/Giraph-style jobs).

Summary

Flink lets you reuse a lot of the code you wrote for Hadoop MapReduce, including all data types, all Input- and OutputFormats, and the Mappers and Reducers of the mapred API. Hadoop functions can be used within Flink programs and mixed with all other Flink functions. Due to Flink’s pipelined execution, Hadoop functions can be assembled arbitrarily without data exchange via HDFS. Moreover, the Flink community is currently working on a dedicated Hadoop Job operation to support the execution of Hadoop jobs as a whole.

If you want to use Flink’s Hadoop compatibility package, check out our documentation.
