Linux_hadoop_install
1、Build the Linux environment
My environment is a VM running RedHat Linux 6.5, 64-bit.
Set a fixed IP:
vim /etc/sysconfig/network-scripts/ifcfg-eth0
Set the IP to: 192.168.38.128
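A minimal sketch of ifcfg-eth0 for a static address (the netmask, gateway, and DNS values are assumptions; adjust them to your own network):
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.38.128
# assumed values below
NETMASK=255.255.255.0
GATEWAY=192.168.38.2
DNS1=192.168.38.2
Restart the network afterwards: service network restart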
Map the hostname to the IP: vim /etc/hosts
Set the hostname to: itbuilder1
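For example, the /etc/hosts entry mapping the fixed IP to the hostname:
192.168.38.128   itbuilder1
On RHEL 6.x the hostname itself is set via HOSTNAME=itbuilder1 in /etc/sysconfig/network (an assumed extra step, not shown above); a reboot or running "hostname itbuilder1" applies it.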
2、Install the JDK
Install the JDK and configure the JDK environment variables (a sketch follows).
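A minimal sketch of a tarball-based JDK install (the archive name and the /usr/java path are assumptions chosen to match the JAVA_HOME used later; an RPM install works just as well):
# extract the JDK archive into /usr/java (assumed location)
mkdir -p /usr/java
tar -zxvf jdk-8u20-linux-x64.tar.gz -C /usr/java
# these exports go into /etc/profile in step 4
export JAVA_HOME=/usr/java/jdk1.8.0_20
export PATH=$PATH:$JAVA_HOME/bin
# verify
java -version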
3、Install Hadoop
Download the Apache Hadoop package.
Address: http://archive.apache.org/dist/hadoop/core/stable2/hadoop-2.7.1.tar.gz
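For example, it can be fetched from the command line (assuming wget is available):
wget http://archive.apache.org/dist/hadoop/core/stable2/hadoop-2.7.1.tar.gz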
3.1 Extract the package to the specified directory
Create a directory: mkdir /usr/local/hadoop
Extract the archive into /usr/local/hadoop: tar -zxvf hadoop-2.7.1.tar.gz -C /usr/local/hadoop
3.2 Modify the configuration files
Hadoop 2.7.1 requires modifying 5 configuration files:
1、hadoop-env.sh
2、core-site.xml
3、hdfs-site.xml
4、mapred-site.xml (copied from mapred-site.xml.template)
5、yarn-site.xml
These files are all under Hadoop's etc directory; the full path is: /usr/local/hadoop/hadoop-2.7.1/etc/hadoop/
3.2.1 Modify the environment variables (hadoop-env.sh)
vim hadoop-env.sh
Set the JDK root directory, as shown below:

export JAVA_HOME=/usr/java/jdk1.8.0_20
3.2.2 core-site.xml: set the HDFS NameNode address and the Hadoop temporary file directory.
<configuration>
    <!-- set the HDFS (NameNode) address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://itbuilder1:9000</value>
    </property>
    <!-- set the directory where Hadoop stores its runtime files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/hadoop-2.7.1/tmp</value>
    </property>
</configuration>
3.2.3 hdfs-site.xml (set the replication factor)
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
3.2.4 mapred-site.xml (tell Hadoop that MapReduce runs on YARN; copy mapred-site.xml.template to mapred-site.xml first)
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
3.2.5 yarn-site.xml
<configuration>
    <!-- tell the NodeManager to use mapreduce_shuffle as the auxiliary service for fetching data -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- set the YARN (ResourceManager) address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>itbuilder1</value>
    </property>
</configuration>
4、Add Hadoop to the environment variables
vim /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_20
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin
#refresh /etc/profile
source /etc/profile
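A quick check that the variables are in effect (these commands only print version information):
java -version
hadoop version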
5、Initialize (format) the HDFS file system
# the old form "hadoop namenode -format" is deprecated in Hadoop 2.x; use:
hdfs namenode -format
6、Start Hadoop (HDFS and YARN)
The start scripts are under $HADOOP_HOME/sbin:
./start-all.sh (you will be prompted for the Linux password unless passwordless SSH is set up; see the sketch below)
Or start HDFS and YARN separately:
./start-dfs.sh
./start-yarn.sh
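An optional sketch for passwordless SSH to localhost so the start scripts do not prompt for a password (default key type and paths are assumed):
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys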
View the running processes with the jps command:
[root@linuxidc ~]# jps
3461 ResourceManager
3142 DataNode
3751 NodeManager
3016 NameNode
5034 Jps
3307 SecondaryNameNode
Access the web management interfaces:
http://192.168.38.128:50070 (HDFS management interface)
http://192.168.38.128:8088 (YARN ResourceManager / MapReduce management interface)
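A simple way to verify HDFS end to end (the directory and file used here are only examples):
hdfs dfs -mkdir /test
hdfs dfs -put /etc/profile /test
hdfs dfs -ls /test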