Hadoop 2.7.3 Cluster Installation
Three virtual machines running CentOS 6.5:
192.168.59.130 m1
192.168.59.131 s1
192.168.59.132 s2
Modify the hostname
[root@m1 hadoop]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=m1
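The same change is needed on s1 and s2 with their respective names. A minimal sketch for s1 (run as root; the new name persists across reboots, or takes effect immediately via the hostname command):

[root@s1 ~]# sed -i 's/^HOSTNAME=.*/HOSTNAME=s1/' /etc/sysconfig/network
[root@s1 ~]# hostname s1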
Modify the host mappings
[root@m1 hadoop]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.59.130 m1
192.168.59.131 s1
192.168.59.132 s2
Passwordless SSH login (note: every machine must be able to SSH into every other machine, including itself)
ssh-keygen -t rsa
ssh-copy-id -i ~/.ssh/id_rsa.pub m1
ssh-copy-id -i ~/.ssh/id_rsa.pub s1
ssh-copy-id -i ~/.ssh/id_rsa.pub s2
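To verify, run a quick loop from each of the three nodes; if all three hostnames print without a password prompt, passwordless SSH is working:

for h in m1 s1 s2; do ssh $h hostname; done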
Install the JDK
http://www.cnblogs.com/xiaojf/p/6568426.html
Install Hadoop 2.7.3
Extract and rename
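A minimal sketch of this step, assuming hadoop-2.7.3.tar.gz was downloaded into /usr/local/soft:

[root@m1 soft]# tar -zxf hadoop-2.7.3.tar.gz
[root@m1 soft]# mv hadoop-2.7.3 hadoop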
[root@m1 soft]# ls
hadoop  jar  jdk  kafka  scala-2.11.  tmp  zookeeper-3.4.
Create directories for temporary files and for the HDFS name and data files
mkdir -p /usr/local/soft/tmp/hadoop/tmp
mkdir -p /usr/local/soft/tmp/hadoop/dfs/name
mkdir -p /usr/local/soft/tmp/hadoop/dfs/data
Modify the configuration files
[root@m1 soft]# cd /usr/local/soft/hadoop/etc/hadoop/
[root@m1 hadoop]# ls
capacity-scheduler.xml      httpfs-env.sh            mapred-env.sh
configuration.xsl           httpfs-log4j.properties  mapred-queues.xml.template
container-executor.cfg      httpfs-signature.secret  mapred-site.xml.template
core-site.xml               httpfs-site.xml          slaves
hadoop-env.cmd              kms-acls.xml             ssl-client.xml.example
hadoop-env.sh               kms-env.sh               ssl-server.xml.example
hadoop-metrics2.properties  kms-log4j.properties     yarn-env.cmd
hadoop-metrics.properties   kms-site.xml             yarn-env.sh
hadoop-policy.xml           log4j.properties         yarn-site.xml
hdfs-site.xml               mapred-env.cmd
hadoop-env.sh
[root@m1 hadoop]# vi hadoop-env.sh
# Set Hadoop-specific environment variables here.
# The only required environment variable is JAVA_HOME.
# The java implementation to use.
export JAVA_HOME=/usr/local/soft/jdk
The same JAVA_HOME export can also be set in yarn-env.sh.
slaves
[root@m1 hadoop]# vi slaves
s1
s2
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://m1:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/soft/tmp/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>m1:9001</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/soft/tmp/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/soft/tmp/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
mapred-site.xml
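Only mapred-site.xml.template ships with the distribution (see the listing above), so presumably the file is first copied from the template:

[root@m1 hadoop]# cp mapred-site.xml.template mapred-site.xml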
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>m1:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>m1:19888</value>
  </property>
</configuration>
yarn-site.xml
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>m1:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>m1:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>m1:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>m1:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>m1:8088</value>
  </property>
</configuration>
Set the Hadoop environment variables
export HADOOP_HOME=/usr/local/soft/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
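A sketch of making these variables persistent, assuming they go into /etc/profile (adjust if you prefer ~/.bashrc):

[root@m1 soft]# cat >> /etc/profile <<'EOF'
export HADOOP_HOME=/usr/local/soft/hadoop
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
EOF
[root@m1 soft]# source /etc/profile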
Distribute the code to the slave nodes
[root@m1 soft]# scp -r hadoop root@s2:/usr/local/soft/
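The same copy is needed for s1; every node must receive an identical Hadoop directory:

[root@m1 soft]# scp -r hadoop root@s1:/usr/local/soft/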
Format the NameNode
[root@m1 soft]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = m1/192.168.59.130
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.3
Start the cluster
[root@m1 soft]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [m1]
m1: starting namenode, logging to /usr/local/soft/hadoop/logs/hadoop-root-namenode-m1.out
s1: starting datanode, logging to /usr/local/soft/hadoop/logs/hadoop-root-datanode-s1.out
s2: starting datanode, logging to /usr/local/soft/hadoop/logs/hadoop-root-datanode-s2.out
Starting secondary namenodes [master]
master: ssh: Could not resolve hostname master: Name or service not known
Note: this failure means the secondary namenode address still resolved to a hostname "master" that is absent from /etc/hosts; with dfs.namenode.secondary.http-address set to m1:9001 as above, the secondary namenode starts on m1.
starting yarn daemons
starting resourcemanager, logging to /usr/local/soft/hadoop/logs/yarn-root-resourcemanager-m1.out
s1: starting nodemanager, logging to /usr/local/soft/hadoop/logs/yarn-root-nodemanager-s1.out
s2: starting nodemanager, logging to /usr/local/soft/hadoop/logs/yarn-root-nodemanager-s2.out
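A quick sanity check is jps on each node; with the configuration above, roughly these daemons should appear (process IDs omitted):

[root@m1 soft]# jps    # expect NameNode, SecondaryNameNode, ResourceManager
[root@s1 soft]# jps    # expect DataNode, NodeManager

The NameNode web UI at http://m1:50070 and the ResourceManager web UI at http://m1:8088 should also respond.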
Verification
[root@m1 soft]# hadoop dfs -ls /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
[root@m1 soft]# hadoop dfs -mkdir /xiaojf
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
[root@m1 soft]# hadoop dfs -ls /
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.
Found 1 items
drwxr-xr-x   - root supergroup          0 ... /xiaojf
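As the deprecation notice says, the same checks can be run with the hdfs command directly:

[root@m1 soft]# hdfs dfs -ls /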
Done.