Three virtual machines running CentOS 6.5:

192.168.59.130 m1
192.168.59.131 s1
192.168.59.132 s2

Modify the hostname

[root@m1 hadoop]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=m1
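
Do the same on the other two nodes, setting HOSTNAME=s1 and HOSTNAME=s2 respectively. On CentOS 6 this file only takes effect after a reboot; to apply the name immediately, also run:

hostname m1    # on m1; use s1 / s2 on the other nodes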

Modify the host mappings

[root@m1 hadoop]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.59.130 m1
192.168.59.131 s1
192.168.59.132 s2

Passwordless SSH login (note: every machine must be able to SSH to every other machine, including itself)

Run the following on each of the three machines:

ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub m1
ssh-copy-id -i ~/.ssh/id_rsa.pub s1
ssh-copy-id -i ~/.ssh/id_rsa.pub s2
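
A quick check from m1 that passwordless login works to every host (hostnames as configured above); each ssh should print the remote hostname without prompting for a password:

for h in m1 s1 s2; do ssh $h hostname; done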

Install the JDK

http://www.cnblogs.com/xiaojf/p/6568426.html

Install Hadoop 2.7.3

Unpack and rename
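
A minimal sketch of this step, assuming the tarball is named hadoop-2.7.3.tar.gz and sits in /usr/local/soft (the exact filename is an assumption):

tar -zxf hadoop-2.7.3.tar.gz -C /usr/local/soft/
mv /usr/local/soft/hadoop-2.7.3 /usr/local/soft/hadoop

After renaming, the directory layout looks like this: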

[root@m1 soft]# ls
hadoop  jar  jdk  kafka  scala-2.11.  tmp  zookeeper-3.4.

Create the directories that will hold Hadoop's temporary files, the NameNode metadata, and the DataNode blocks

mkdir -p /usr/local/soft/tmp/hadoop/tmp
mkdir -p /usr/local/soft/tmp/hadoop/dfs/name
mkdir -p /usr/local/soft/tmp/hadoop/dfs/data
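
These directories are needed on every node, not just m1 (the DataNodes on s1 and s2 write into dfs/data). One way to create them remotely, assuming the passwordless SSH set up above:

for h in s1 s2; do
  ssh $h 'mkdir -p /usr/local/soft/tmp/hadoop/tmp /usr/local/soft/tmp/hadoop/dfs/name /usr/local/soft/tmp/hadoop/dfs/data'
done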

Modify the configuration files

[root@m1 soft]# cd /usr/local/soft/hadoop/etc/hadoop/
[root@m1 hadoop]# ls
capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
container-executor.cfg     httpfs-signature.secret  mapred-site.xml.template
core-site.xml              httpfs-site.xml          slaves
hadoop-env.cmd             kms-acls.xml             ssl-client.xml.example
hadoop-env.sh              kms-env.sh               ssl-server.xml.example
hadoop-metrics2.properties  kms-log4j.properties     yarn-env.cmd
hadoop-metrics.properties   kms-site.xml             yarn-env.sh
hadoop-policy.xml          log4j.properties         yarn-site.xml
hdfs-site.xml              mapred-env.cmd

yarn-env.sh

[root@m1 hadoop]# vi yarn-env.sh
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# ... (standard Apache license header unchanged) ...

# The only required environment variable is JAVA_HOME. All others are
# optional.  When running a distributed configuration it is best to
# set JAVA_HOME in this file, so that it is correctly defined on
# remote nodes.

# The java implementation to use.
export JAVA_HOME=/usr/local/soft/jdk
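
hadoop-env.sh reads JAVA_HOME the same way; if the HDFS daemons later fail to start because Java cannot be found, add the same export there:

# in hadoop-env.sh, same JDK path as above
export JAVA_HOME=/usr/local/soft/jdk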

slaves

[root@m1 hadoop]# vi slaves 
s1
s2

core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://m1:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <!-- 131072 is the stock value used in most Hadoop 2.x setups -->
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/local/soft/tmp/hadoop/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
</configuration>

hdfs-site.xml

<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>m1:9001</value>
    </property>
    <!-- these paths must match the directories created above -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/local/soft/tmp/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/local/soft/tmp/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
</configuration>

mapred-site.xml
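
Only mapred-site.xml.template ships with Hadoop 2.7.3 (see the listing above), so create the real file from the template first:

[root@m1 hadoop]# cp mapred-site.xml.template mapred-site.xml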

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- 10020 / 19888 are the stock JobHistory server ports -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>m1:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>m1:19888</value>
    </property>
</configuration>

yarn-site.xml

<configuration>
    <!-- Site specific YARN configuration properties -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <!-- the addresses below use the stock ResourceManager ports -->
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>m1:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>m1:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>m1:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>m1:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>m1:8088</value>
    </property>
</configuration>

Set the Hadoop environment variables

export HADOOP_HOME=/usr/local/soft/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
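
Append these two lines to /etc/profile (or the shell profile you prefer) on all three machines, then reload so the hadoop commands below resolve:

source /etc/profile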

Distribute the installation to the slaves

[root@m1 soft]# scp -r hadoop root@s1:/usr/local/soft/
[root@m1 soft]# scp -r hadoop root@s2:/usr/local/soft/

Format the NameNode

[root@m1 soft]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = m1/192.168.59.130
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
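
The output above is truncated; on success the log ends with a line like "Storage directory /usr/local/soft/tmp/hadoop/dfs/name has been successfully formatted." If the format aborts instead, check the dfs.namenode.name.dir path and its permissions.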

Start the cluster

[root@m1 soft]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [m1]
m1: starting namenode, logging to /usr/local/soft/hadoop/logs/hadoop-root-namenode-m1.out
s1: starting datanode, logging to /usr/local/soft/hadoop/logs/hadoop-root-datanode-s1.out
s2: starting datanode, logging to /usr/local/soft/hadoop/logs/hadoop-root-datanode-s2.out
Starting secondary namenodes [master]
master: ssh: Could not resolve hostname master: Name or service not known
starting yarn daemons
starting resourcemanager, logging to /usr/local/soft/hadoop/logs/yarn-root-resourcemanager-m1.out
s1: starting nodemanager, logging to /usr/local/soft/hadoop/logs/yarn-root-nodemanager-s1.out
s2: starting nodemanager, logging to /usr/local/soft/hadoop/logs/yarn-root-nodemanager-s2.out
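
Note the failure on the secondary NameNode line: start-dfs.sh resolves that host from dfs.namenode.secondary.http-address, so [master] suggests the configuration in effect at startup still pointed at a host named master; with the hdfs-site.xml above it should print [m1]. If you hit this, re-sync etc/hadoop to every node and restart. Once the scripts finish, jps on each machine should show the expected daemons (NameNode, SecondaryNameNode, ResourceManager on m1; DataNode, NodeManager on s1 and s2):

[root@m1 soft]# jps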

Verify

[root@m1 soft]# hadoop dfs -ls /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

[root@m1 soft]# hadoop dfs -mkdir /xiaojf
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

[root@m1 soft]# hadoop dfs -ls /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Found 1 items
drwxr-xr-x   - root supergroup          0 /xiaojf
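
The web UIs are another quick check: the NameNode UI listens on http://m1:50070 (the Hadoop 2.x default) and the ResourceManager UI on http://m1:8088, per yarn.resourcemanager.webapp.address above.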

Done.
