Prerequisites:

1. JDK installed

Add Hadoop Group and User

$ sudo addgroup hadoop
$ sudo adduser --ingroup hadoop hduser
$ sudo adduser hduser sudo
 
 
Switch to the hduser account for the remaining steps.
 
 
Install the SSH server
$ sudo apt-get install openssh-server
 

Setup SSH Certificate

 
$ ssh-keygen -t rsa -P ''
...
Your identification has been saved in /home/hduser/.ssh/id_rsa.
Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.
...
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ ssh localhost
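If `ssh localhost` still asks for a password, the usual culprit is file permissions: sshd rejects keys whose directory or files are group- or world-writable. The fix below is a common convention, not part of the original guide; it is demonstrated on a scratch directory (`demo_ssh`, a made-up name) so it is safe to try, then run the same chmods on `~/.ssh`:

```shell
# sshd refuses keys whose containing dir/files are group- or world-writable.
# Demonstrated on a scratch copy; apply the same chmods to ~/.ssh for real.
mkdir -p demo_ssh && touch demo_ssh/authorized_keys
chmod 700 demo_ssh
chmod 600 demo_ssh/authorized_keys
stat -c '%a %n' demo_ssh demo_ssh/authorized_keys
```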
 
 
 
Install Hadoop
 

Download Hadoop 2.2.0

 
$ cd ~
$ wget http://www.trieuvan.com/apache/hadoop/common/hadoop-2.2.0/hadoop-2.2.0.tar.gz
$ sudo tar vxzf hadoop-2.2.0.tar.gz -C /usr/local
$ cd /usr/local
$ sudo mv hadoop-2.2.0 hadoop
$ sudo chown -R hduser:hadoop hadoop
 

Setup Hadoop Environment Variables

$ cd ~
$ sudo gedit .bashrc
 
Paste the following at the end of the file:
 
#Hadoop variables
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
export HADOOP_INSTALL=/usr/local/hadoop
export PATH=$PATH:$HADOOP_INSTALL/bin
export PATH=$PATH:$HADOOP_INSTALL/sbin
export HADOOP_MAPRED_HOME=$HADOOP_INSTALL
export HADOOP_COMMON_HOME=$HADOOP_INSTALL
export HADOOP_HDFS_HOME=$HADOOP_INSTALL
export YARN_HOME=$HADOOP_INSTALL
###end of paste
 
$ cd /usr/local/hadoop/etc/hadoop
$ vi hadoop-env.sh
 
#modify JAVA_HOME
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45
 
Log back in to Ubuntu as hduser and check the Hadoop version:
$ hadoop version
Hadoop 2.2.0
Subversion https://svn.apache.org/repos/asf/hadoop/common -r 1529768
Compiled by hortonmu on 2013-10-07T06:28Z
Compiled with protoc 2.5.0
From source with checksum 79e53ce7994d1628b240f09af91e1af4
This command was run using /usr/local/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar
 
At this point, hadoop is installed.
 
 
Based on the steps above, the whole installation can be scripted as follows:

#!/bin/bash

# This script installs Hadoop 2.2.0 on a single node

cd ~
sudo apt-get update

#### HADOOP INSTALLATION ###

# Download java jdk
#if [ ! -f jdk-7u45-linux-i586.tar.gz ]; then
#    wget http://uni-smr.ac.ru/archive/dev/java/SDKs/sun/j2se/7/jdk-7u45-linux-i586.tar.gz
#fi
#sudo mkdir /usr/lib/jvm
#sudo tar zxvf jdk-7u45-linux-i586.tar.gz  -C /usr/lib/jvm

# Then set the Java environment variables
#sudo sh -c 'echo export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45 >> ~/.bashrc'
#sudo sh -c 'echo export JRE_HOME=${JAVA_HOME}/jre >> ~/.bashrc'
#sudo sh -c 'echo export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib >> ~/.bashrc'
#sudo sh -c 'echo export PATH=${JAVA_HOME}/bin:$PATH >> ~/.bashrc'

# Apply immediately
#source ~/.bashrc

# Configure the default JDK version
#sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.7.0_45/bin/java 300
#sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.7.0_45/bin/javac 300
#sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/jdk1.7.0_45/bin/jar 300
#sudo update-alternatives --install /usr/bin/javah javah /usr/lib/jvm/jdk1.7.0_45/bin/javah 300
#sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/jdk1.7.0_45/bin/javap 300
# Verify the alternatives configuration
#sudo update-alternatives --config java

# Check the JDK version to confirm the installation succeeded
#java -version

# Add a dedicated hadoop group and user for running Hadoop jobs
#sudo addgroup hadoop
#sudo adduser --ingroup hadoop hduser
#sudo adduser hduser sudo

# Install openssh-server
sudo apt-get install openssh-server

# Generate an ssh key and append it to the authorized_keys file
sudo -u hduser ssh-keygen -t rsa -P ''
sudo sh -c 'cat /home/hduser/.ssh/id_rsa.pub >> /home/hduser/.ssh/authorized_keys'
#ssh localhost

# Download and unpack Hadoop, then fix directory ownership
cd ~
if [ ! -f hadoop-2.2.0.tar.gz ]; then
    #wget http://apache.osuosl.org/hadoop/common/hadoop-2.2.0.tar.gz
    wget http://xx.xx.xx.xx/downloads/hadoop-2.2.0.tar.gz
fi
sudo tar vxzf hadoop-2.2.0.tar.gz -C /usr/local
cd /usr/local
sudo mv hadoop-2.2.0 hadoop
sudo chown -R hduser:hadoop hadoop

# Append the Hadoop environment variables to .bashrc
cd ~
sudo sh -c "echo export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_45 >> /home/hduser/.bashrc"
sudo sh -c 'echo export HADOOP_INSTALL=/usr/local/hadoop >> /home/hduser/.bashrc'
sudo sh -c 'echo export PATH=\$PATH:\$JAVA_HOME/bin >> /home/hduser/.bashrc'
sudo sh -c 'echo export PATH=\$PATH:\$HADOOP_INSTALL/bin >> /home/hduser/.bashrc'
sudo sh -c 'echo export PATH=\$PATH:\$HADOOP_INSTALL/sbin >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_MAPRED_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_COMMON_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_HDFS_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export YARN_HOME=\$HADOOP_INSTALL >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_COMMON_LIB_NATIVE_DIR=\$\{HADOOP_INSTALL\}/lib/native >> /home/hduser/.bashrc'
sudo sh -c 'echo export HADOOP_OPTS=\"-Djava.library.path=\$HADOOP_INSTALL/lib\" >> /home/hduser/.bashrc'

# Set Hadoop's JAVA_HOME in hadoop-env.sh
cd /usr/local/hadoop/etc/hadoop
sudo -u hduser sed -i.bak s=\${JAVA_HOME}=/usr/lib/jvm/jdk1.7.0_45/=g hadoop-env.sh
pwd

# Check that Hadoop is installed
/usr/local/hadoop/bin/hadoop version

# Edit configuration files
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>fs\.default\.name\</name>\<value>hdfs://localhost:9000\</value>\</property>=g' core-site.xml
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>yarn\.nodemanager\.aux-services</name>\<value>mapreduce_shuffle</value>\</property>\<property>\<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>\<value>org\.apache\.hadoop\.mapred\.ShuffleHandler</value>\</property>=g' yarn-site.xml
 
sudo -u hduser cp mapred-site.xml.template mapred-site.xml
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>mapreduce\.framework\.name</name>\<value>yarn</value>\</property>=g' mapred-site.xml
 
cd ~
sudo mkdir -p mydata/hdfs/namenode
sudo mkdir -p mydata/hdfs/datanode
sudo chown -R hduser:hadoop mydata

cd /usr/local/hadoop/etc/hadoop
sudo -u hduser sed -i.bak 's=<configuration>=<configuration>\<property>\<name>dfs\.replication</name>\<value>1\</value>\</property>\<property>\<name>dfs\.namenode\.name\.dir</name>\<value>file:/home/hduser/mydata/hdfs/namenode</value>\</property>\<property>\<name>dfs\.datanode\.data\.dir</name>\<value>file:/home/hduser/mydata/hdfs/datanode</value>\</property>=g' hdfs-site.xml

### Testing Hadoop

## Run the following commands as hduser to start and test hadoop
#sudo su hduser
# Format Namenode
#hdfs namenode -format
# Start Hadoop Service
#start-dfs.sh
#start-yarn.sh
# Check status
#jps
# Example
#cd /usr/local/hadoop
#hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5

Cluster Setup:

Disable IPv6

Edit /etc/sysctl.conf and append the following lines:

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

Reboot for the change to take effect, then verify:

cat /proc/sys/net/ipv6/conf/all/disable_ipv6

A return value of 0 means IPv6 is still enabled; 1 means it has been disabled.
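The edit above can also be scripted. The sketch below writes the three settings to a local scratch file (`sysctl-ipv6.conf`, a made-up name) so it is safe to dry-run; on the real host, append to /etc/sysctl.conf instead and run `sudo sysctl -p` to apply without a reboot:

```shell
# Write the IPv6-disable settings to a local scratch file;
# on the real host, append to /etc/sysctl.conf and run: sudo sysctl -p
cat > sysctl-ipv6.conf <<'EOF'
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
EOF
grep -c 'disable_ipv6 = 1' sysctl-ipv6.conf   # prints 3
```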

Configure the hosts file

Add the following entries to /etc/hosts on the master and on every slave:

xxx.xxx.xxx.xx0 master
xxx.xxx.xxx.xx1 b-w01
xxx.xxx.xxx.xx2 b-w02
xxx.xxx.xxx.xx3 b-w03
xxx.xxx.xxx.xx4 b-w04

Configure ssh so the master can connect to the slaves without a password

ssh-copy-id -i $HOME/.ssh/id_rsa.pub hduser@slaveXXXX

Enter the slave's password when prompted.

Then run ssh to the slave again; it should log in without a password prompt.
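Rather than running ssh-copy-id by hand for each slave, the four invocations can be looped. This is a dry-run sketch (it only prints the commands, using the host names from this guide); remove the `echo` to actually copy the keys:

```shell
# Dry run: print one ssh-copy-id per slave (remove "echo" to execute)
for h in b-w01 b-w02 b-w03 b-w04; do
  echo ssh-copy-id -i "$HOME/.ssh/id_rsa.pub" "hduser@$h"
done
```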

Configuration file: slaves

b-w01
b-w02
b-w03
b-w04

Configuration file: core-site.xml

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<property>
<name>io.file.buffer.size</name>
<value>131072</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/home/hduser/mydata/hadoop_temp</value>
</property>

</configuration>

Configuration file: hdfs-site.xml

<configuration>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:/home/hduser/mydata/hdfs/namenode</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:/home/hduser/mydata/hdfs/datanode</value>
</property>
<property>
<name>dfs.webhdfs.enabled</name>
<value>true</value>
</property>
</configuration>

Configuration file: mapred-site.xml

<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>

Configuration file: yarn-site.xml

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>

</configuration>

Finally, push the configuration files above to every slave with a script:

#!/bin/bash
scp /usr/local/hadoop/etc/hadoop/slaves hduser@b-w01:/usr/local/hadoop/etc/hadoop/slaves
scp /usr/local/hadoop/etc/hadoop/slaves hduser@b-w02:/usr/local/hadoop/etc/hadoop/slaves
scp /usr/local/hadoop/etc/hadoop/slaves hduser@b-w03:/usr/local/hadoop/etc/hadoop/slaves
scp /usr/local/hadoop/etc/hadoop/slaves hduser@b-w04:/usr/local/hadoop/etc/hadoop/slaves

scp /usr/local/hadoop/etc/hadoop/core-site.xml hduser@b-w01:/usr/local/hadoop/etc/hadoop/core-site.xml
scp /usr/local/hadoop/etc/hadoop/core-site.xml hduser@b-w02:/usr/local/hadoop/etc/hadoop/core-site.xml
scp /usr/local/hadoop/etc/hadoop/core-site.xml hduser@b-w03:/usr/local/hadoop/etc/hadoop/core-site.xml
scp /usr/local/hadoop/etc/hadoop/core-site.xml hduser@b-w04:/usr/local/hadoop/etc/hadoop/core-site.xml

scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml hduser@b-w01:/usr/local/hadoop/etc/hadoop/hdfs-site.xml
scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml hduser@b-w02:/usr/local/hadoop/etc/hadoop/hdfs-site.xml
scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml hduser@b-w03:/usr/local/hadoop/etc/hadoop/hdfs-site.xml
scp /usr/local/hadoop/etc/hadoop/hdfs-site.xml hduser@b-w04:/usr/local/hadoop/etc/hadoop/hdfs-site.xml

scp /usr/local/hadoop/etc/hadoop/mapred-site.xml hduser@b-w01:/usr/local/hadoop/etc/hadoop/mapred-site.xml
scp /usr/local/hadoop/etc/hadoop/mapred-site.xml hduser@b-w02:/usr/local/hadoop/etc/hadoop/mapred-site.xml
scp /usr/local/hadoop/etc/hadoop/mapred-site.xml hduser@b-w03:/usr/local/hadoop/etc/hadoop/mapred-site.xml
scp /usr/local/hadoop/etc/hadoop/mapred-site.xml hduser@b-w04:/usr/local/hadoop/etc/hadoop/mapred-site.xml

scp /usr/local/hadoop/etc/hadoop/yarn-site.xml hduser@b-w01:/usr/local/hadoop/etc/hadoop/yarn-site.xml
scp /usr/local/hadoop/etc/hadoop/yarn-site.xml hduser@b-w02:/usr/local/hadoop/etc/hadoop/yarn-site.xml
scp /usr/local/hadoop/etc/hadoop/yarn-site.xml hduser@b-w03:/usr/local/hadoop/etc/hadoop/yarn-site.xml
scp /usr/local/hadoop/etc/hadoop/yarn-site.xml hduser@b-w04:/usr/local/hadoop/etc/hadoop/yarn-site.xml
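The twenty scp lines above can be collapsed into two nested loops over the hosts and file names used in this guide. The sketch below is a dry run that only prints each command; remove the `echo` to actually copy:

```shell
# Dry run: print every scp command (remove "echo" to actually copy)
CONF=/usr/local/hadoop/etc/hadoop
for h in b-w01 b-w02 b-w03 b-w04; do
  for f in slaves core-site.xml hdfs-site.xml mapred-site.xml yarn-site.xml; do
    echo scp "$CONF/$f" "hduser@$h:$CONF/$f"
  done
done
```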

View the YARN ResourceManager web UI:

http://master:8088

View the NameNode web UI:

http://master:50070/

Test:

cd /usr/local/hadoop

hadoop fs -mkdir /output

hadoop fs -mkdir /input

hadoop fs -put ~/wordCounterTest.txt /input/wordCounterTest.txt

hadoop jar ./share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input/wordCounterTest.txt /output/wordcountresult
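As a sanity check, what the wordcount example computes can be reproduced locally with plain shell on a small file before submitting the job (the file contents here are made up for illustration; on the cluster, view the real result with `hadoop fs -cat /output/wordcountresult/part-r-00000`):

```shell
# Tiny local equivalent of the wordcount example (illustration only):
# split on spaces, sort, count duplicates, sort by count descending.
printf 'hello world\nhello hadoop\n' > wordCounterTest.txt
tr -s ' ' '\n' < wordCounterTest.txt | sort | uniq -c | sort -rn
# top line shows "hello" with count 2; "world" and "hadoop" appear once
```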
