The Hadoop 2 binary packages released by Apache are compiled on 32-bit machines, while production environments usually run 64-bit Linux, so the sources need to be recompiled on a 64-bit machine.
The build instructions can be found in BUILDING.txt inside hadoop-2.2.0-src:
Build instructions for Hadoop

----------------------------------------------------------------------------------
Requirements:

* Unix System
* JDK 1.6+
* Maven 3.0 or later
* Findbugs 1.3.9 (if running findbugs)
* ProtocolBuffer 2.5.0
* CMake 2.6 or newer (if compiling native code)
* Internet connection for first build (to fetch all Maven and Hadoop dependencies)

----------------------------------------------------------------------------------
...
Steps:
1. Install JDK 1.6+ (verify with: java -version)
2. Install Maven 3.0 or later (verify with: mvn -version)
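For steps 1 and 2, a minimal sketch of the environment setup; the install paths and versions below are only examples, not taken from the original post:
# Example only: adjust JAVA_HOME/MAVEN_HOME to the actual install locations
export JAVA_HOME=/usr/local/jdk1.7.0_45
export MAVEN_HOME=/usr/local/apache-maven-3.0.5
export PATH=$JAVA_HOME/bin:$MAVEN_HOME/bin:$PATH
java -version   # should print the JDK version
mvn -version    # should print Maven 3.0 or later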
3. Install ProtocolBuffer 2.5.0 (verify with: protoc --version); download: https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz
# Building protobuf requires gcc, g++ and make. With internet access they can be installed via YUM; without it, each dependency package has to be downloaded and installed by hand, which is much more tedious.
sudo yum install gcc
sudo yum install gcc-c++
sudo yum install make
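With internet access the source tarball can be fetched directly; note that the googlecode URL above may no longer resolve, in which case any mirror of protobuf-2.5.0.tar.gz works:
wget https://protobuf.googlecode.com/files/protobuf-2.5.0.tar.gz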

# Extract protobuf
sudo tar -zxvf protobuf-2.5.0.tar.gz
# Enter protobuf-2.5.0
cd protobuf-2.5.0
# Configure, build and install
sudo ./configure
sudo make
sudo make install
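After make install, the new shared library is sometimes not picked up until the linker cache is refreshed; refreshing it and re-checking the version is a common extra step (not part of the original post):
sudo ldconfig
protoc --version   # should print: libprotoc 2.5.0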
4. Install CMake 2.6 or newer, plus the development headers needed by the native build
sudo yum install cmake
sudo yum install openssl-devel
sudo yum install ncurses-devel
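A quick sanity check that the toolchain is in place before starting the Hadoop build (added here for convenience):
cmake --version                        # should report 2.6 or newer
rpm -q openssl-devel ncurses-devel     # both packages should be listed as installed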
5. Compile hadoop-2.2.0
# Extract hadoop-2.2.0-src.tar.gz
tar -zxvf hadoop-2.2.0-src.tar.gz
# Enter hadoop-2.2.0-src
cd hadoop-2.2.0-src

# hadoop-2.2.0-src/hadoop-common-project/hadoop-auth/pom.xml has a bug: a test-scoped jetty-util dependency is missing and the hadoop-auth module fails to compile, so the file needs a small edit
vim hadoop-common-project/hadoop-auth/pom.xml
# Add the following inside the <dependencies> tag
<dependency>
  <groupId>org.mortbay.jetty</groupId>
  <artifactId>jetty-util</artifactId>
  <scope>test</scope>
</dependency>

# Compile
mvn package -DskipTests -Pdist,native
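Optionally, BUILDING.txt also describes a -Dtar switch that packs the distribution into a tarball, and raising Maven's memory limits can help if the build runs out of heap; both lines below are optional variants, with memory values given only as examples:
export MAVEN_OPTS="-Xmx1024m -XX:MaxPermSize=256m"
mvn package -DskipTests -Pdist,native -Dtar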

# The compiled hadoop-2.2.0 distribution is placed under hadoop-2.2.0-src/hadoop-dist/target

[INFO] ------------------------------------------------------------------------
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main ................................ SUCCESS [ 1.228 s]
[INFO] Apache Hadoop Project POM ......................... SUCCESS [ 0.894 s]
[INFO] Apache Hadoop Annotations ......................... SUCCESS [ 1.809 s]
[INFO] Apache Hadoop Assemblies .......................... SUCCESS [ 0.222 s]
[INFO] Apache Hadoop Project Dist POM .................... SUCCESS [ 1.198 s]
[INFO] Apache Hadoop Maven Plugins ....................... SUCCESS [ 2.205 s]
[INFO] Apache Hadoop Auth ................................ SUCCESS [ 2.169 s]
[INFO] Apache Hadoop Auth Examples ....................... SUCCESS [ 1.583 s]
[INFO] Apache Hadoop Common .............................. SUCCESS [01:02 min]
[INFO] Apache Hadoop NFS ................................. SUCCESS [ 5.132 s]
[INFO] Apache Hadoop Common Project ...................... SUCCESS [ 0.038 s]
[INFO] Apache Hadoop HDFS ................................ SUCCESS [01:02 min]
[INFO] Apache Hadoop HttpFS .............................. SUCCESS [ 9.002 s]
[INFO] Apache Hadoop HDFS BookKeeper Journal ............. SUCCESS [ 4.995 s]
[INFO] Apache Hadoop HDFS-NFS ............................ SUCCESS [ 2.647 s]
[INFO] Apache Hadoop HDFS Project ........................ SUCCESS [ 0.058 s]
[INFO] hadoop-yarn ....................................... SUCCESS [ 0.138 s]
[INFO] hadoop-yarn-api ................................... SUCCESS [ 31.854 s]
[INFO] hadoop-yarn-common ................................ SUCCESS [ 20.121 s]
[INFO] hadoop-yarn-server ................................ SUCCESS [ 0.105 s]
[INFO] hadoop-yarn-server-common ......................... SUCCESS [ 5.776 s]
[INFO] hadoop-yarn-server-nodemanager .................... SUCCESS [ 10.490 s]
[INFO] hadoop-yarn-server-web-proxy ...................... SUCCESS [ 3.321 s]
[INFO] hadoop-yarn-server-resourcemanager ................ SUCCESS [ 8.311 s]
[INFO] hadoop-yarn-server-tests .......................... SUCCESS [ 0.510 s]
[INFO] hadoop-yarn-client ................................ SUCCESS [ 3.929 s]
[INFO] hadoop-yarn-applications .......................... SUCCESS [ 0.060 s]
[INFO] hadoop-yarn-applications-distributedshell ......... SUCCESS [ 1.720 s]
[INFO] hadoop-mapreduce-client ........................... SUCCESS [ 0.062 s]
[INFO] hadoop-mapreduce-client-core ...................... SUCCESS [ 16.204 s]
[INFO] hadoop-yarn-applications-unmanaged-am-launcher .... SUCCESS [ 1.779 s]
[INFO] hadoop-yarn-site .................................. SUCCESS [ 0.111 s]
[INFO] hadoop-yarn-project ............................... SUCCESS [ 2.287 s]
[INFO] hadoop-mapreduce-client-common .................... SUCCESS [ 11.855 s]
[INFO] hadoop-mapreduce-client-shuffle ................... SUCCESS [ 2.560 s]
[INFO] hadoop-mapreduce-client-app ....................... SUCCESS [ 6.985 s]
[INFO] hadoop-mapreduce-client-hs ........................ SUCCESS [ 3.319 s]
[INFO] hadoop-mapreduce-client-jobclient ................. SUCCESS [ 4.021 s]
[INFO] hadoop-mapreduce-client-hs-plugins ................ SUCCESS [ 1.508 s]
[INFO] Apache Hadoop MapReduce Examples .................. SUCCESS [ 4.176 s]
[INFO] hadoop-mapreduce .................................. SUCCESS [ 2.367 s]
[INFO] Apache Hadoop MapReduce Streaming ................. SUCCESS [ 2.902 s]
[INFO] Apache Hadoop Distributed Copy .................... SUCCESS [ 5.365 s]
[INFO] Apache Hadoop Archives ............................ SUCCESS [ 1.673 s]
[INFO] Apache Hadoop Rumen ............................... SUCCESS [ 4.095 s]
[INFO] Apache Hadoop Gridmix ............................. SUCCESS [ 2.962 s]
[INFO] Apache Hadoop Data Join ........................... SUCCESS [ 2.089 s]
[INFO] Apache Hadoop Extras .............................. SUCCESS [ 2.190 s]
[INFO] Apache Hadoop Pipes ............................... SUCCESS [ 5.887 s]
[INFO] Apache Hadoop Tools Dist .......................... SUCCESS [ 1.149 s]
[INFO] Apache Hadoop Tools ............................... SUCCESS [ 0.028 s]
[INFO] Apache Hadoop Distribution ........................ SUCCESS [ 6.514 s]
[INFO] Apache Hadoop Client .............................. SUCCESS [ 2.199 s]
[INFO] Apache Hadoop Mini-Cluster ........................ SUCCESS [ 0.121 s]
[INFO] ------------------------------------------------------------------------
[INFO] BUILD SUCCESS (#### seeing BUILD SUCCESS means the compilation succeeded ####)
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 05:39 min
[INFO] Finished at: 2014-05-23T13:37:06+08:00
[INFO] Final Memory: 135M/300M
[INFO] ------------------------------------------------------------------------
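To confirm the result is really a 64-bit build, checking the native library with file is a simple test; the path below assumes the default layout of the generated distribution:
cd hadoop-dist/target/hadoop-2.2.0
file lib/native/libhadoop.so.1.0.0   # should report an ELF 64-bit LSB shared object, x86-64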
