Preliminary planning

192.168.100.231                  db01

192.168.100.232                  db02

192.168.100.233                  db03

1. Install Java

[root@master ~]# vim /etc/profile

Append the following environment variables at the end of the file:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera

export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH

export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib:$CLASSPATH
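After saving the file, reload it so the variables take effect in the current shell:

[root@master ~]# source /etc/profile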

Check that Java was installed successfully:

[root@master ~]# java -version

2. Create the hadoop user that will own the installation

groupadd hadoop

useradd -g hadoop hadoop

echo "dbking588" | passwd --stdin hadoop

Configure the environment variables:

export HADOOP_HOME=/opt/cdh-5.3.6/hadoop-2.5.0

export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HOME/bin
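The notes do not say which file these two lines go into; one reasonable choice (an assumption, not from the original) is the hadoop user's ~/.bash_profile on every node:

[hadoop@db01 ~]$ cat >> ~/.bash_profile <<'EOF'
export HADOOP_HOME=/opt/cdh-5.3.6/hadoop-2.5.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH:$HOME/bin
EOF

[hadoop@db01 ~]$ source ~/.bash_profile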

3. Install Hadoop

# cd /opt/software

# tar -zxvf hadoop-2.5.0.tar.gz -C /opt/cdh-5.3.6/

# chown -R hadoop:hadoop /opt/cdh-5.3.6/hadoop-2.5.0

4. Configure SSH

--Configuration steps:

$ ssh-keygen -t rsa

$ ssh-copy-id db02

(ssh-copy-id is used here with RSA keys; in the author's testing it did not work with a DSA key setup.)
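For this cluster the keys need to be generated and distributed as the hadoop user so that every node can reach db01, db02 and db03 without a password. A sketch, to be run on each of the three nodes:

[hadoop@db01 ~]$ ssh-keygen -t rsa
[hadoop@db01 ~]$ for host in db01 db02 db03; do ssh-copy-id $host; done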

--Verification:

[hadoop@db01 ~]$ ssh db02 date

Wed Apr 19 09:57:34 CST 2017

5. Edit the Hadoop configuration files

The files that need to be edited are:

HDFS configuration files:

etc/hadoop/hadoop-env.sh

etc/hadoop/core-site.xml

etc/hadoop/hdfs-site.xml

etc/hadoop/slaves

YARN configuration files:

etc/hadoop/yarn-env.sh

etc/hadoop/yarn-site.xml

etc/hadoop/slaves

MapReduce configuration files:

etc/hadoop/mapred-env.sh

etc/hadoop/mapred-site.xml

The file contents are as follows:

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/core-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://db01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/cdh-5.3.6/hadoop-2.5.0/data/tmp</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>7000</value>
    </property>
</configuration>
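Here fs.defaultFS points clients at the NameNode on db01, hadoop.tmp.dir moves the default storage location under the install directory, and fs.trash.interval is measured in minutes, so 7000 keeps deleted files in the trash for roughly five days.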

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>db03:50090</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/yarn-site.xml

<?xml version="1.0"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>db02</value>
    </property>
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>600000</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/mapred-site.xml

<?xml version="1.0"?>

<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!--

Licensed under the Apache License, Version 2.0 (the "License");

you may not use this file except in compliance with the License.

You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software

distributed under the License is distributed on an "AS IS" BASIS,

WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.

See the License for the specific language governing permissions and

limitations under the License. See accompanying LICENSE file.

-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>db01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>db01:19888</value>
    </property>
</configuration>

[hadoop@db01 hadoop-2.5.0]$ cat etc/hadoop/slaves

db01

db02

db03

Set the Java environment variable in the following files:

etc/hadoop/hadoop-env.sh

etc/hadoop/yarn-env.sh

etc/hadoop/mapred-env.sh
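In all three scripts the change is the same: set JAVA_HOME explicitly near the top so the daemons do not depend on the login environment. Using the JDK path from section 1:

export JAVA_HOME=/usr/java/jdk1.7.0_67-cloudera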

Create the data directory:

/opt/cdh-5.3.6/hadoop-2.5.0/data/tmp
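The directory has to exist before the first start, and the unpacked, configured hadoop-2.5.0 tree must also be present on db02 and db03 (the notes above only unpack it on one node). A minimal sketch, run as the hadoop user and assuming /opt/cdh-5.3.6 exists and is writable by hadoop on all three hosts:

[hadoop@db01 ~]$ mkdir -p /opt/cdh-5.3.6/hadoop-2.5.0/data/tmp

[hadoop@db01 ~]$ for host in db02 db03; do scp -r /opt/cdh-5.3.6/hadoop-2.5.0 $host:/opt/cdh-5.3.6/; done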

6. Format HDFS

[hadoop@db01 hadoop-2.5.0]$ hdfs namenode -format
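Run the format exactly once, as the hadoop user on db01 (the NameNode); re-formatting an existing cluster discards the HDFS metadata.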

7. Start Hadoop

*Startup option 1: start or stop each daemon individually on each server (the most common approach; easy to wrap in a shell script)
        hdfs:
            sbin/hadoop-daemon.sh start|stop namenode
            sbin/hadoop-daemon.sh start|stop datanode
            sbin/hadoop-daemon.sh start|stop secondarynamenode
        yarn:
            sbin/yarn-daemon.sh start|stop resourcemanager
            sbin/yarn-daemon.sh start|stop nodemanager
        mapreduce:
            sbin/mr-jobhistory-daemon.sh start|stop historyserver
    *Startup option 2: start or stop each module as a whole; requires passwordless SSH, with the HDFS scripts run on the NameNode and the YARN scripts on the ResourceManager host (see the sketch after this list)
        hdfs:
            sbin/start-dfs.sh
            sbin/stop-dfs.sh
        yarn:
            sbin/start-yarn.sh
            sbin/stop-yarn.sh
    *Startup option 3: start or stop everything at once; not recommended. The command has to be run on one node (the NameNode), and because start-yarn.sh starts the ResourceManager locally, it would come up on that node instead of db02.
            sbin/start-all.sh
            sbin/stop-all.sh
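With the roles configured above, a sketch of option 2 for this cluster: start HDFS from db01 (the NameNode), YARN from db02 (the ResourceManager), and the JobHistory server on db01:

[hadoop@db01 hadoop-2.5.0]$ sbin/start-dfs.sh

[hadoop@db02 hadoop-2.5.0]$ sbin/start-yarn.sh

[hadoop@db01 hadoop-2.5.0]$ sbin/mr-jobhistory-daemon.sh start historyserver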

8. Test the cluster

[hadoop@db01 logs]$ cd /opt/cdh-5.3.6/hadoop-2.5.0/share/hadoop/mapreduce/

[hadoop@db01 mapreduce]$ hadoop jar hadoop-mapreduce-examples-2.5.0.jar pi 10 10
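If the job completes, jps on each node should show the expected daemons (NameNode, JobHistoryServer, DataNode and NodeManager on db01; ResourceManager, DataNode and NodeManager on db02; SecondaryNameNode, DataNode and NodeManager on db03), and the standard Hadoop 2.x web UIs should be reachable at:

http://db01:50070    (HDFS NameNode)

http://db02:8088     (YARN ResourceManager)

http://db01:19888    (MapReduce JobHistory, as set in mapred-site.xml)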
