This chapter walks through setting up a Hadoop 2.5.0 pseudo-distributed environment on Linux. Before the Hadoop environment itself can be built, a few prerequisites need to be taken care of, including creating a user, installing the JDK, and disabling the firewall.

I. Create the hadoop user

As the root user, create a hadoop user. To make the lab environment easier to work in, grant the hadoop user sudo privileges. The commands are as follows:

useradd hadoop # add the hadoop user
passwd hadoop # set its password
visudo # then add the following line to /etc/sudoers
hadoop ALL=(root) NOPASSWD:ALL
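
To confirm that the new account and its sudo rights work, a quick check such as the following helps (a minimal sanity test; expected results are noted in the comments):

su - hadoop # switch to the hadoop user
sudo whoami # should print: root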

II. Hadoop pseudo-distributed environment setup

1. Disable the firewall and SELinux in Linux

Disable SELinux as follows:

sudo vi /etc/sysconfig/selinux # open the SELinux configuration file
SELINUX=disabled # set the SELINUX property to disabled
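
Editing the file only takes effect after a reboot. To turn SELinux off for the running session as well, the following is a common companion step (not strictly part of the file edit above):

sudo setenforce 0 # put SELinux into permissive mode immediately
getenforce # verify: prints Permissive now, Disabled after a reboot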

Disable the firewall as follows:

sudo service iptables status # check the firewall status
sudo service iptables stop # stop the firewall
sudo chkconfig iptables off # keep the firewall from starting at boot

2. Install the JDK

First, check whether the system has a bundled JDK installed; if it does, uninstall it first:

rpm -qa | grep java # list installed Java packages, e.g. java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64 and tzdata-java-2012j-1.el6.noarch
sudo rpm -e --nodeps java-1.7.0-openjdk-1.7.0.9-2.3.4.1.el6_3.x86_64 # uninstall the bundled JDK (repeat for each package listed)

Next, install the JDK:

step1. Extract the archive:

tar -zxf jdk-7u67-linux-x64.tar.gz -C /usr/local/

step2. Configure the environment variables and check that the installation succeeded:

sudo vi /etc/profile # open the profile file
##JAVA_HOME
export JAVA_HOME=/usr/local/jdk1.7.0_67
export PATH=$PATH:$JAVA_HOME/bin

# reload the file
source /etc/profile # run as the root user

# check the configuration
java -version
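
If the configuration took effect, the freshly installed JDK should report its version, along these lines (exact build strings may differ):

java version "1.7.0_67"
Java(TM) SE Runtime Environment (build 1.7.0_67-b01)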

3. Install Hadoop

step1: Extract the Hadoop archive

tar -zxf hadoop-2.5.0.tar.gz -C /opt/software/

Tip: delete the doc directory under /opt/software/hadoop-2.5.0/share; the bundled documentation is large and is not needed here.
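
For example (assuming the paths above):

rm -rf /opt/software/hadoop-2.5.0/share/doc # frees a sizable amount of disk space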

step2: In the etc/hadoop directory, set JAVA_HOME in the three configuration files hadoop-env.sh, mapred-env.sh, and yarn-env.sh:

export JAVA_HOME=/usr/local/jdk1.7.0_67
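
The same edit can be applied to all three files at once; the sketch below assumes each file already contains an export JAVA_HOME line, possibly commented out (GNU sed):

cd /opt/software/hadoop-2.5.0
sed -i 's|^#\? *export JAVA_HOME=.*|export JAVA_HOME=/usr/local/jdk1.7.0_67|' etc/hadoop/hadoop-env.sh etc/hadoop/mapred-env.sh etc/hadoop/yarn-env.sh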

step3: Edit core-site.xml

 <?xml version="1.0" encoding="UTF-8"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 <!-- Put site-specific property overrides in this file. -->

 <configuration>
     <property>
         <name>name</name>
         <value>my-study-cluster</value>
     </property>
     <property>
         <name>fs.defaultFS</name>
         <value>hdfs://bigdata01:8020</value>
     </property>
     <!-- Base directory for temporary files generated by Hadoop -->
     <property>
         <name>hadoop.tmp.dir</name>
         <value>/opt/software/hadoop-2.5.0/data/tmp</value>
     </property>
     <property>
         <name>fs.trash.interval</name>
         <value>1440</value>
     </property>
     <property>
         <name>hadoop.http.staticuser.user</name>
         <value>hadoop</value>
     </property>
     <property>
         <name>hadoop.proxyuser.hadoop.hosts</name>
         <value>bigdata01</value>
     </property>
     <property>
         <name>hadoop.proxyuser.hadoop.groups</name>
         <value>*</value>
     </property>
 </configuration>
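
After saving the file, it is easy to check that Hadoop actually picks up the value (run from the Hadoop install directory):

bin/hdfs getconf -confKey fs.defaultFS # should print hdfs://bigdata01:8020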

step4: Edit hdfs-site.xml

 <?xml version="1.0" encoding="UTF-8"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 <!-- Put site-specific property overrides in this file. -->

 <configuration>
     <property>
         <name>dfs.replication</name>
         <value>1</value>
     </property>
     <property>
         <name>dfs.permissions.enabled</name>
         <value>false</value>
     </property>
     <property>
         <name>dfs.namenode.name.dir</name>
         <value>/opt/software/hadoop-2.5.0/data/name</value>
     </property>
     <property>
         <name>dfs.datanode.data.dir</name>
         <value>/opt/software/hadoop-2.5.0/data/data</value>
     </property>
 </configuration>
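
The name, data, and tmp directories referenced in these files can be created up front so the daemons and the format step find them in place (a small convenience, assuming the same layout as above):

mkdir -p /opt/software/hadoop-2.5.0/data/{name,data,tmp}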

step5: Edit mapred-site.xml
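
Note that the Hadoop 2.5.0 release ships this file only as a template, so copy it first (run from the Hadoop install directory):

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml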

 <?xml version="1.0"?>
 <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

 <!-- Put site-specific property overrides in this file. -->

 <configuration>
     <property>
         <name>mapreduce.framework.name</name>
         <value>yarn</value>
     </property>
     <property>
         <name>mapreduce.jobhistory.address</name>
         <value>bigdata01:10020</value>
     </property>
     <property>
         <name>mapreduce.jobhistory.webapp.address</name>
         <value>bigdata01:19888</value>
     </property>
 </configuration>

step6: Edit yarn-site.xml

 <?xml version="1.0"?>
 <configuration>

 <!-- Site specific YARN configuration properties -->

     <property>
         <name>yarn.nodemanager.aux-services</name>
         <value>mapreduce_shuffle</value>
     </property>
     <property>
         <name>yarn.resourcemanager.hostname</name>
         <value>bigdata01</value>
     </property>
     <property>
         <name>yarn.log-aggregation-enable</name>
         <value>true</value>
     </property>
     <property>
         <name>yarn.log-aggregation.retain-seconds</name>
         <value>106800</value>
     </property>
     <property>
         <name>yarn.log.server.url</name>
         <value>http://bigdata01:19888/jobhistory/job/</value>
     </property>
 </configuration>

step7: Edit the slaves file

bigdata01
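
One way to write it (this overwrites any existing content and assumes you are in the Hadoop install directory):

echo bigdata01 > etc/hadoop/slaves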

step8: Format the namenode (do this only once; reformatting an existing cluster generates a new cluster ID, after which the datanode will refuse to start)

bin/hdfs namenode -format

step9: Start the daemons

 ## Option 1: start each daemon individually
 # start the namenode
 sbin/hadoop-daemon.sh start namenode
 # start the datanode
 sbin/hadoop-daemon.sh start datanode
 # start the resourcemanager
 sbin/yarn-daemon.sh start resourcemanager
 # start the nodemanager
 sbin/yarn-daemon.sh start nodemanager
 # start the secondarynamenode
 sbin/hadoop-daemon.sh start secondarynamenode
 # start the job history server
 sbin/mr-jobhistory-daemon.sh start historyserver

 ## Option 2: use the combined scripts
 sbin/start-dfs.sh # starts the namenode, datanode, and secondarynamenode
 sbin/start-yarn.sh # starts the resourcemanager and nodemanager
 sbin/mr-jobhistory-daemon.sh start historyserver # starts the job history server
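
Option 2 logs in to each node (here, just bigdata01 itself) over SSH, so setting up passwordless SSH for the hadoop user avoids repeated password prompts (a standard companion step not spelled out above):

 ssh-keygen -t rsa # accept the defaults
 ssh-copy-id hadoop@bigdata01 # authorize the key for logins to this host

Whichever option is used, jps should afterwards show all six daemons:

 jps # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager, JobHistoryServer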

step10: Verify

1. Open the HDFS web UI in a browser, using the external port 50070:

  http://bigdata01:50070

2. Open the YARN web UI in a browser, using the external port 8088:

  http://bigdata01:8088

3. Run the WordCount example

  bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output

  Note: the input and output directories are up to you.
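
A complete run might look like this (an illustrative sequence; note that the output directory must not exist before the job runs):

  bin/hdfs dfs -mkdir -p input # create the input directory in HDFS
  bin/hdfs dfs -put etc/hadoop/*.xml input # upload some sample text files
  bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.5.0.jar wordcount input output
  bin/hdfs dfs -cat output/part-r-00000 # inspect the word counts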

Done!

That completes the Hadoop 2.5.0 pseudo-distributed setup. If you run into any problems, please point them out. Thanks!
