Source: http://hadoopi.wordpress.com/2014/06/05/hadoop-add-third-party-libraries-to-mapreduce-job/

Anybody working with Hadoop has probably already faced the same common issue: how to add third-party libraries to a MapReduce job.

Add libjars option

The first solution, probably the most common one, consists of adding libraries using the -libjars parameter on the command line. To make it work, your class MyClass must use the GenericOptionsParser class. The easiest way is to implement the Hadoop Tool interface, as described in the post Hadoop: Implementing the Tool interface for MapReduce driver.

$ export LIBJARS=/path/jar1,/path/jar2
$ hadoop jar /path/to/my.jar com.wordpress.hadoopi.MyClass -libjars ${LIBJARS} value
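
For reference, here is a minimal sketch of such a Tool-based driver; the job name and setup details are illustrative, not taken from the original post:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.util.Tool;
import org.apache.hadoop.util.ToolRunner;

public class MyClass extends Configured implements Tool {

    @Override
    public int run(String[] args) throws Exception {
        // getConf() already contains whatever -libjars injected
        Job job = new Job(getConf(), "my-job");
        job.setJarByClass(MyClass.class);
        // ... mapper / reducer / input / output setup ...
        return job.waitForCompletion(true) ? 0 : 1;
    }

    public static void main(String[] args) throws Exception {
        // ToolRunner applies GenericOptionsParser, which consumes
        // -libjars before the remaining args reach run()
        System.exit(ToolRunner.run(new Configuration(), new MyClass(), args));
    }
}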

This -libjars approach obviously works only from the command line, so how can we add such external jar files when not using the CLI?

Add jar files to Hadoop classpath

You could certainly upload external jar files to each tasktracker and update HADOOP_CLASSPATH accordingly, but are you really willing to bother the Ops team each time you need to add a new jar? This works well on a single node, but are you going to upload such jars across all of your 10, 100 or even more Hadoop nodes? This approach does not scale at all!
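
For the record, a sketch of what this manual approach would look like on each node (paths are illustrative):

$ # copy the jar onto the node, then expose it to Hadoop
$ cp /path/jar1.jar /opt/hadoop/lib/
$ echo 'export HADOOP_CLASSPATH=${HADOOP_CLASSPATH}:/opt/hadoop/lib/jar1.jar' >> /opt/hadoop/conf/hadoop-env.sh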

Create a fat jar

Another approach is to create a fat jar: a JAR that contains your classes as well as your third-party classes (see this Cloudera blog post for more details). Be aware that this jar will not only contain your classes, but may also include all your project dependencies (such as the Hadoop libraries) unless you explicitly exclude them (using the provided scope).
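For instance, marking the Hadoop dependency as provided keeps it out of the fat jar; the artifact and version shown here are illustrative:

<dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>2.4.0</version>
    <scope>provided</scope>
</dependency>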
Here is an example of the Maven plugin configuration you will need to set up:

<plugin>
    <artifactId>maven-assembly-plugin</artifactId>
    <configuration>
        <archive>
            <manifest>
                <mainClass></mainClass>
            </manifest>
        </archive>
        <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
    </configuration>
    <executions>
        <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
                <goal>single</goal>
            </goals>
        </execution>
    </executions>
</plugin>

Following a “mvn clean package” command, your fat jar will be located in the Maven project’s target directory, as follows:

drwxr-xr-x  2 antoine  staff        68 Jun 10 09:30 archive-tmp
drwxr-xr-x  3 antoine  staff       102 Jun 10 09:29 classes
drwxr-xr-x  3 antoine  staff       102 Jun 10 09:29 generated-sources
drwxr-xr-x  3 antoine  staff       102 Jun 10 09:29 generated-test-sources
drwxr-xr-x  3 antoine  staff       102 Jun 10 09:29 maven-archiver
drwxr-xr-x  4 antoine  staff       136 Jun 10 09:29 myproject-1.0-SNAPSHOT
-rw-r--r--  1 antoine  staff  63880020 Jun 10 09:30 myproject-1.0-SNAPSHOT-jar-with-dependencies.jar
drwxr-xr-x  4 antoine  staff       136 Jun 10 09:29 surefire-reports
drwxr-xr-x  4 antoine  staff       136 Jun 10 09:29 test-classes

In the above example, note the actual size of the JAR file (61MB). Quite fat, isn’t it?
You can check that all dependencies have been added by running the command below:

$ jar -tf myproject-1.0-SNAPSHOT-jar-with-dependencies.jar

META-INF/
META-INF/MANIFEST.MF
com/aamend/hadoop/allMyClasses.class
...
com/others/allMyDependencies.class
...

Use the distributed cache

This is the approach I always follow when using third-party libraries in my MapReduce jobs. One could say this approach is not elegant, but it lets me work without annoying anyone from the Ops team :). I first create a “lib” directory in my HDFS home directory (“/user/hadoopi/”). You could even use “/tmp”, it does not matter. I then create a static method that:

  1. Locates the jar file that includes the class I need
  2. Uploads this jar to HDFS
  3. Adds the uploaded jar file to the Hadoop distributed cache

Simply add the following lines to some Utils class.

import java.io.File;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

private static void addJarToDistributedCache(
        Class classToAdd, Configuration conf) throws IOException {

    // Retrieve the jar file that contains classToAdd
    String jar = classToAdd.getProtectionDomain()
            .getCodeSource().getLocation().getPath();
    File jarFile = new File(jar);

    // Declare the new HDFS location
    Path hdfsJar = new Path("/user/hadoopi/lib/" + jarFile.getName());

    // Mount HDFS
    FileSystem hdfs = FileSystem.get(conf);

    // Copy (overwrite) the jar file to HDFS
    hdfs.copyFromLocalFile(false, true, new Path(jar), hdfsJar);

    // Add the jar to the distributed classpath
    DistributedCache.addFileToClassPath(hdfsJar, conf);
}

The only thing you need to remember is to call this method prior to job submission…

public static void main(String[] args) throws Exception {

    // Create Hadoop configuration
    Configuration conf = new Configuration();

    // Add 3rd-party libraries
    addJarToDistributedCache(MyFirstClass.class, conf);
    addJarToDistributedCache(MySecondClass.class, conf);

    // Create my job
    Job job = new Job(conf, "Hadoop-classpath");
    .../...
}

Here you are: your MapReduce job is now able to use any external JAR file.
