Three nodes in total. Spark is installed right after Hadoop; the Spark download used here is the "without Hadoop" build, so pay attention to the per-node configuration.

Hadoop Multi-Node Installation

Environment:

Hadoop 2.7.2

Ubuntu 14.04 LTS

ssh-keygen

Java version 1.8.0

Scala 2.11.7

Servers:

Master: 192.168.199.80 (hadoopmaster)

Slave: 192.168.199.81 (hadoopslave1)

Slave: 192.168.199.82 (hadoopslave2)

Install Java 8:

sudo add-apt-repository ppa:openjdk-r/ppa

sudo apt-get update

sudo apt-get install openjdk-8-jdk

sudo update-alternatives --config java

sudo update-alternatives --config javac

Add JAVA_HOME to ~/.bashrc

$ sudo vi ~/.bashrc

# add two lines at the end of .bashrc

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64

export PATH=$PATH:$JAVA_HOME/bin

Then source it

$ source ~/.bashrc
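To confirm that the JDK is active:

$ java -version
$ javac -version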

Tips:

Don't forget that .bashrc is a hidden file in your home directory (you would not be the first to run ls -l and conclude it isn't there).

ls -la ~/ | more

Add Hosts

# vi /etc/hosts
Enter the following lines in the /etc/hosts file:
192.168.199.80 hadoopmaster 
192.168.199.81 hadoopslave1 
192.168.199.82 hadoopslave2
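To check that the names resolve, from each node:

$ ping -c 1 hadoopmaster
$ ping -c 1 hadoopslave1
$ ping -c 1 hadoopslave2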

Set up SSH on every node

So the nodes can communicate without passwords (do the same on all three nodes):

$ ssh-keygen -t rsa 
$ ssh-copy-id -i ~/.ssh/id_rsa.pub cmtadmin@hadoopmaster 
$ ssh-copy-id -i ~/.ssh/id_rsa.pub cmtadmin@hadoopslave1 
$ ssh-copy-id -i ~/.ssh/id_rsa.pub cmtadmin@hadoopslave2 
$ chmod 0600 ~/.ssh/authorized_keys 
$ exit
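To verify passwordless login, each of the following should print the remote hostname without prompting for a password:

$ ssh hadoopslave1 hostname
$ ssh hadoopslave2 hostname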

Install Hadoop 2.7.2 (to /opt/hadoop)

Download hadoop-2.7.2.tar.gz (the prebuilt binary release) from the Apache mirrors.

hadoop-2.7.2-src.tar.gz is the source release, which you would have to build yourself.

$ tar xvf hadoop-2.7.2.tar.gz -C /opt
$ mv /opt/hadoop-2.7.2 /opt/hadoop
$ cd /opt/hadoop

Configuring Hadoop

core-site.xml

Open the core-site.xml file and edit it as shown below.

<configuration>
   <property> 
      <name>fs.default.name</name> 
      <value>hdfs://hadoopmaster:9000/</value> 
   </property> 
   <property> 
      <name>dfs.permissions</name> 
      <value>false</value> 
   </property> 
</configuration>

hdfs-site.xml

Open the hdfs-site.xml file and edit it as shown below.

<configuration>
   <property> 
      <name>dfs.data.dir</name> 
      <value>/media/hdfs/name/data</value> 
      <final>true</final> 
   </property> 
   <property> 
      <name>dfs.name.dir</name> 
      <value>/media/hdfs/name</value> 
      <final>true</final> 
   </property> 
   <property> 
      <name>dfs.replication</name> 
      <value>1</value> 
   </property> 
</configuration>
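The dfs.name.dir and dfs.data.dir paths above must exist and be writable by the user that runs Hadoop, on every node; assuming that is the current user:

$ sudo mkdir -p /media/hdfs/name/data
$ sudo chown -R $USER:$USER /media/hdfs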

mapred-site.xml

Open the mapred-site.xml file and edit it as shown below.

<configuration>
   <property> 
      <name>mapred.job.tracker</name> 
      <value>hadoopmaster:9001</value> 
   </property> 
</configuration>

hadoop-env.sh

Open the hadoop-env.sh file and edit JAVA_HOME
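With the OpenJDK 8 package installed earlier, the line would typically be:

export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64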

Installing Hadoop on Slave Servers

$ cd /opt
$ scp -r hadoop hadoopslave1:/opt/
$ scp -r hadoop hadoopslave2:/opt/

Configuring Hadoop on Master Server

$ cd /opt/hadoop
$ vi etc/hadoop/masters
hadoopmaster
$ vi etc/hadoop/slaves
hadoopslave1 
hadoopslave2

Add HADOOP_HOME and PATH to ~/.bashrc

export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin 

Format the NameNode on the Hadoop master

$ cd /opt/hadoop
$ bin/hadoop namenode -format

Start Hadoop services

$ cd /opt/hadoop/sbin
$ ./start-all.sh
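If the daemons came up cleanly, jps (from the JDK installed above) should show roughly the following; the exact set depends on the configuration. On hadoopmaster:

$ jps
NameNode
SecondaryNameNode
ResourceManager

On each slave, expect DataNode and NodeManager.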

Stop all the services

$ cd /opt/hadoop/sbin
$ ./stop-all.sh

Installing Spark 1.6 with user-provided Hadoop

Step 1: Install Scala

Download Scala 2.11.7 from the website and install it:

$ tar xvf scala-2.11.7.tgz
$ sudo mkdir -p /usr/opt
$ sudo mv scala-2.11.7 /usr/opt/scala

Set PATH for Scala in ~/.bashrc

$ sudo vi ~/.bashrc
 export SCALA_HOME=/usr/opt/scala
 export PATH=$PATH:$SCALA_HOME/bin
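After sourcing ~/.bashrc again, Scala should report its version:

$ source ~/.bashrc
$ scala -version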

 

Download Spark 1.6 from the Apache server

Install Spark

$ tar xvf spark-1.6.0-bin-without-hadoop.tgz 
$ mv spark-1.6.0-bin-without-hadoop/  /opt/spark

Set up the environment for Spark

$ sudo vi ~/.bashrc
 export SPARK_HOME=/opt/spark
 export PATH=$PATH:$SPARK_HOME/bin

Add entries to the configuration

$ cd /opt/spark/conf
$ cp spark-env.sh.template spark-env.sh
$ vi spark-env.sh
export HADOOP_CONF_DIR=/opt/hadoop/etc/hadoop
export SPARK_DIST_CLASSPATH=$(hadoop classpath)
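The "without hadoop" Spark build ships no Hadoop jars, so SPARK_DIST_CLASSPATH pulls them in from the /opt/hadoop installation; the hadoop command must therefore be on the PATH of the shell that starts Spark. A quick sanity check that it resolves:

$ hadoop classpath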

 

Add slaves to the configuration

$ cd /opt/spark/conf
$ cp slaves.template slaves
$ vi slaves
hadoopslave1
hadoopslave2

Run Spark

$ cd /opt/spark/bin
$ ./spark-shell
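Note that spark-shell by itself starts a local shell. To use the two slave workers, first start the standalone cluster from the master (the start scripts live in /opt/spark/sbin, not bin), then attach the shell to the master; 7077 is the default standalone port:

$ /opt/spark/sbin/start-all.sh
$ ./spark-shell --master spark://hadoopmaster:7077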

If you repost this article, please include a link to the original: http://www.cnblogs.com/tonylp/
