Precondition: the JDK and Scala are already installed. The relevant part of /etc/profile looks like this:

JAVA_HOME=/home/Spark/husor/jdk
CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME
export CLASSPATH

HADOOP_HOME=/home/Spark/husor/hadoop
HBASE_HOME=/home/Spark/husor/hbase
SCALA_HOME=/home/Spark/husor/scala
SPARK_HOME=/home/Spark/husor/spark
PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HBASE_HOME/bin:$SCALA_HOME/bin:$SPARK_HOME/bin:$PATH
export HADOOP_HOME
export HBASE_HOME
export SCALA_HOME
export SPARK_HOME
"/etc/profile" 99L, 2415C written
[root@Master husor]# source /etc/profile
[root@Master husor]# echo $SPARK_HOME
/home/Spark/husor/spark
[root@Master husor]# echo $SCALA_HOME
/home/Spark/husor/scala
[root@Master husor]# scala -version
Scala code runner version 2.10.4 -- Copyright 2002-2013, LAMP/EPFL

  

1. Installing expect

Expect is a scripting language built on Tcl. It is useful in both interactive and non-interactive scenarios, but for automating interactive sessions it is the tool of choice.

Step 1: Log in as root
    
Step 2: Download the installation archives expect-5.43.0.tar.gz and tcl8.4.11-src.tar.gz
       
Step 3: Unpack the archives
       Unpack tcl8.4.11-src.tar.gz:
       tar -xvf tcl8.4.11-src.tar.gz
       This creates the tcl8.4.11 directory.

       Unpack expect-5.43.0.tar.gz:
       tar -xvf expect-5.43.0.tar.gz
       This creates the expect-5.43 directory.
       
Step 4: Build and install tcl
       Enter the tcl8.4.11/unix directory:
        a. Run sed -i "s/relid'/relid/" configure
        b. Run ./configure --prefix=/expect
        c. Run make
        d. Run make install
        e. Run mkdir -p /tools/lib
        f. Run cp tclConfig.sh /tools/lib/
        g. Export the /tools/bin directory into the environment:
           tclpath=/tools/bin
           export tclpath
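The sed command in step (a) removes a stray single quote after `relid` in the tcl 8.4 configure script, which breaks the script under newer shells. A minimal demonstration of the substitution on a hypothetical excerpt (the real target is tcl8.4.11/unix/configure):

```shell
# Demonstrate the substitution from step (a) on a hypothetical configure line.
echo "if test \$relid' = ...; then" | sed "s/relid'/relid/"
# → if test $relid = ...; then
```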

Step 5: Build and install Expect
        Enter the /soft/expect-5.43 directory and run:
        ./configure --prefix=/tools --with-tcl=/tools/lib --with-x=no
        If the last line of output reports:
        configure: error: Can't find Tcl private headers
        add a header-directory option,
        --with-tclinclude=../tcl8.4.11/generic, i.e.:
        ./configure --prefix=/tools --with-tcl=/tools/lib --with-x=no --with-tclinclude=../tcl8.4.11/generic
        ../tcl8.4.11/generic is the path where tcl was unpacked; make sure it exists.
        Run make
        Run make install
        After compilation the expect binary is generated in /tools/bin.
        Run /tools/bin/expect; an expect1.1> prompt means expect was installed successfully.

Step 6: Create a symbolic link
        ln -s /tools/bin/expect /usr/bin/expect
        Check the link:
        ls -l /usr/bin/expect
        lrwxrwxrwx 1 root root 17 06-09 11:38 /usr/bin/expect -> /tools/bin/expect

This symlink is used when writing expect scripts: the header of an expect script names the interpreter that executes it, e.g.
        #!/usr/bin/expect

2. Passwordless SSH login

On the Master host:

[Spark@Master ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /home/Spark/.ssh/id_rsa.
Your public key has been saved in /home/Spark/.ssh/id_rsa.pub.
The key fingerprint is:
c9:d0:1f:92:43:42:85:f1:c5:23:76:f8:df:80:e5:66 Spark@Master
The key's randomart image is:
+--[ RSA 2048]----+
| .++oo. |
| .=+o+ . |
| ..*+.= |
| o =o.E |
| S .+ o |
| . . |
| |
| |
| |
+-----------------+
[Spark@Master ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
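One thing worth checking before testing the login: sshd silently ignores authorized_keys unless the permissions on ~/.ssh and the file itself are strict. A sketch of the expected layout, demonstrated here on a scratch directory rather than the real ~/.ssh:

```shell
# sshd requires ~/.ssh to be mode 700 and authorized_keys to be mode 600;
# a scratch directory stands in for the real home here.
SSH_DIR="$(mktemp -d)/.ssh"
mkdir -p "$SSH_DIR"
chmod 700 "$SSH_DIR"
touch "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
stat -c '%a' "$SSH_DIR"                  # → 700
stat -c '%a' "$SSH_DIR/authorized_keys"  # → 600
```

If passwordless login fails even though the key was copied, these permissions are the first thing to verify on each slave.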

Then run the following automated public-key distribution script, SSH.sh, to push the Master host's public key to each slave node (Slave1, Slave2, ...).

Note: make the SSH.sh and NoPwdAccessSSH.exp script files executable first:

[Spark@Master test]$ chmod +x SSH.sh

[Spark@Master test]$ chmod +x NoPwdAccessSSH.exp

Run the automated passwordless-access script SSH.sh:

[Spark@Master test]$ ./SSH.sh
spawn ssh-copy-id -i /home/Spark/.ssh/id_rsa.pub Spark@Master
The authenticity of host 'master (192.168.8.29)' can't be established.
RSA key fingerprint is f0:3f:04:51:36:b5:91:c7:fa:47:5a:49:bc:fd:fe:40.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'master,192.168.8.29' (RSA) to the list of known hosts.
Now try logging into the machine, with "ssh 'Spark@Master'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

No Password Access Master is Succeed!!!
spawn ssh-copy-id -i /home/Spark/.ssh/id_rsa.pub Spark@Slave1
Spark@slave1's password:
Now try logging into the machine, with "ssh 'Spark@Slave1'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

No Password Access Slave1 is Succeed!!!
spawn ssh-copy-id -i /home/Spark/.ssh/id_rsa.pub Spark@Slave2
Spark@slave2's password:
Now try logging into the machine, with "ssh 'Spark@Slave2'", and check in:

.ssh/authorized_keys

to make sure we haven't added extra keys that you weren't expecting.

No Password Access Slave2 is Succeed!!!
[Spark@Master test]$ ssh Slave1
Last login: Wed Nov 19 02:35:28 2014 from 192.168.8.29
Welcome to your pre-built HUSOR STANDARD WEB DEVELOP VM.

PHP5.3 (/usr/local/php-cgi) service:php-fpm
PHP5.4 (/usr/local/php-54) service:php54-fpm
Tengine1.4.6, mysql-5.5.29, memcached 1.4.15, tokyocabinet-1.4.48, tokyotyrant-1.1.41, httpsqs-1.7, coreseek-4.1

WEBROOT: /data/webroot/www/

[Spark@Slave1 ~]$ exit
logout
Connection to Slave1 closed.
[Spark@Master test]$ ssh Slave2
Last login: Wed Nov 19 01:48:01 2014 from 192.168.8.1
Welcome to your pre-built HUSOR STANDARD WEB DEVELOP VM.

PHP5.3 (/usr/local/php-cgi) service:php-fpm
PHP5.4 (/usr/local/php-54) service:php54-fpm
Tengine1.4.6, mysql-5.5.29, memcached 1.4.15, tokyocabinet-1.4.48, tokyotyrant-1.1.41, httpsqs-1.7, coreseek-4.1

WEBROOT: /data/webroot/www/

[Spark@Slave2 ~]$

The scripts used above are listed below:

SSH.sh

#!/bin/bash

bin=`which $0`
bin=`dirname ${bin}`
bin=`cd "$bin"; pwd`

if [ ! -x "$bin/NoPwdAccessSSH.exp" ]; then
    echo "Sorry, $bin/NoPwdAccessSSH.exp is not executable, please chmod +x $bin/NoPwdAccessSSH.exp."
    exit 1
fi

for hostInfo in $(cat $bin/SparkCluster); do
    host_name=$(echo "$hostInfo" | cut -f1 -d":")
    user_name=$(echo "$hostInfo" | cut -f2 -d":")
    user_pwd=$(echo "$hostInfo" | cut -f3 -d":")
    local_host=`ifconfig eth0 | grep "Mask" | cut -d: -f2 | awk '{print $1}'`
    if [ $host_name = $local_host ]; then
        continue
    else
        # call the expect dialogue script NoPwdAccessSSH.exp
        expect $bin/NoPwdAccessSSH.exp $host_name $user_name $user_pwd
    fi
    if [ $? -eq 0 ]; then
        echo "No Password Access $host_name is Succeed!!!"
    else
        echo "No Password Access $host_name is failed!!!"
    fi
done
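A note on the local_host line: it extracts this machine's IP address from the old net-tools ifconfig output layout. Fed a captured sample line (the address is the one from the transcript above), the pipeline behaves like this:

```shell
# How SSH.sh extracts the local IP; the sample mimics net-tools ifconfig output.
# The result is an IP address, so comparing it against a hostname such as
# "Master" never matches -- which is why ssh-copy-id also ran for Master above.
sample='          inet addr:192.168.8.29  Bcast:192.168.8.255  Mask:255.255.255.0'
echo "$sample" | grep "Mask" | cut -d: -f2 | awk '{print $1}'
# → 192.168.8.29
```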

NoPwdAccessSSH.exp

#!/usr/bin/expect -f

# auto ssh login
if { $argc < 3 } {
    puts stderr "Usage: $argv0 hostname username userpwd.\n"
    exit 1
}

set hostname [lindex $argv 0]
set username [lindex $argv 1]
set userpwd [lindex $argv 2]

spawn ssh-copy-id -i /home/Spark/.ssh/id_rsa.pub $username@$hostname
expect {
    "*yes/no*" { send "yes\r"; exp_continue }
    "*password*" { send "$userpwd\r"; exp_continue }
}

The SparkCluster file referenced by the script contains:

Master:Spark:111111
Slave1:Spark:111111
Slave2:Spark:111111
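Each line has the form hostname:username:password, and SSH.sh splits it with cut. A standalone sketch of that parsing:

```shell
# Split one SparkCluster entry the way SSH.sh does.
hostInfo="Slave1:Spark:111111"
host_name=$(echo "$hostInfo" | cut -f1 -d":")
user_name=$(echo "$hostInfo" | cut -f2 -d":")
user_pwd=$(echo "$hostInfo" | cut -f3 -d":")
echo "$host_name $user_name $user_pwd"
# → Slave1 Spark 111111
```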

3. Install Hadoop 2.4.1 (covered in a separate post on this blog)

Note:

1> Install hadoop and the jdk under the dedicated Spark user's home directory (/home/Spark); installing elsewhere causes a series of permission problems.

2> Add execute permission to everything under the hadoop bin and sbin directories (chmod 777 *).

3> scp the configured hadoop directory from Master to the same path, under the same Spark user, on every slave (/home/Spark): scp -r /home/Spark/* Spark@SlaveX:/home/Spark

4> As root, edit /etc/hosts on every machine and add the hostname mappings (192.168.8.29 Master, 192.168.8.30 Slave1, 192.168.8.31 Slave2).
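Note 4> can be scripted. A sketch that appends the three mappings, written to a scratch file here (on the real machines the target is /etc/hosts and the command runs as root):

```shell
# Append the cluster hostname mappings; a scratch file stands in for /etc/hosts.
HOSTS_FILE=$(mktemp)
cat >> "$HOSTS_FILE" <<'EOF'
192.168.8.29 Master
192.168.8.30 Slave1
192.168.8.31 Slave2
EOF
grep Slave1 "$HOSTS_FILE"
# → 192.168.8.30 Slave1
```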

Issue 1:

Hadoop 2.2.0 - warning: You have loaded library /home/hadoop/2.2.0/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard.

Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /home/Spark/hadoop2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
localhost]
sed: -e expression #1, char 6: unknown option to `s'
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
Java: ssh: Could not resolve hostname Java: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
 
Reason:
The native libraries shipped in the prebuilt hadoop from the official site (e.g. lib/native/libhadoop.so.1.0.0) were compiled for 32-bit; running them on a 64-bit system produces the errors above.
 
Fix 1:
Recompile hadoop on the 64-bit system.
 
Fix 2:
As root, add the following to /etc/profile:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
Then apply the change:
source /etc/profile
 
Fix 3:
Add the following lines to hadoop-env.sh and yarn-env.sh:
export HADOOP_HOME=/home/Spark/hadoop
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
 
Format the namenode:
bin/hdfs namenode -format
Start or stop the namenode and datanodes:
sbin/start-dfs.sh    -> sbin/stop-dfs.sh
Start or stop the resourcemanager and nodemanagers:
sbin/start-yarn.sh  -> sbin/stop-yarn.sh
 
Issue 2:
When browsing the Hadoop web UIs from Windows 7 (e.g. http://Master:50070), Windows cannot resolve the hostname Master; using Master's IP address instead works.
Reason:
Windows 7 has no mapping for the hostname Master.
Fix:
Enter %systemroot%\system32\drivers\etc in the Windows Explorer address bar, open the hosts file shown there, and add the corresponding mappings (192.168.8.29 Master, 192.168.8.30 Slave1, 192.168.8.31 Slave2).
 
4. Verify the web UIs
 
 
5. Installing the Spark cluster
 
Edit the spark-env.sh file and add the following:

export JAVA_HOME=/home/Spark/husor/jdk
export HADOOP_HOME=/home/Spark/husor/hadoop
export HADOOP_CONF_DIR=/home/Spark/husor/hadoop/etc/hadoop
export SCALA_HOME=/home/Spark/husor/scala
export SPARK_MASTER_IP=Master
export SPARK_WORKER_MEMORY=512m

Edit the slaves file:

Delete localhost and add:

Slave1

Slave2
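The slaves list duplicates information already in the SparkCluster file used earlier; if you prefer, it can be derived from that file by dropping the Master entry. A sketch using scratch files (the real target is the conf/slaves file under the spark installation directory):

```shell
# Derive a slaves list from the SparkCluster file, skipping Master.
cd "$(mktemp -d)"
cat > SparkCluster <<'EOF'
Master:Spark:111111
Slave1:Spark:111111
Slave2:Spark:111111
EOF
cut -d: -f1 SparkCluster | grep -v '^Master$'
# → Slave1
#   Slave2
```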

Verify that Spark starts
 
Launch the Spark shell:
[Spark@Master spark]$ bin/spark-shell
Spark assembly has been built with Hive, including Datanucleus jars on classpath
// :: INFO spark.SecurityManager: Changing view acls to: Spark,
// :: INFO spark.SecurityManager: Changing modify acls to: Spark,
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Spark, ); users with modify permissions: Set(Spark, )
// :: INFO spark.HttpServer: Starting HTTP Server
// :: INFO server.Server: jetty-.y.z-SNAPSHOT
// :: INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:
// :: INFO util.Utils: Successfully started service 'HTTP class server' on port .
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.1.
      /_/

Using Scala version 2.10. (Java HotSpot(TM) -Bit Server VM, Java 1.7.0_71)
Type in expressions to have them evaluated.
Type :help for more information.
// :: INFO spark.SecurityManager: Changing view acls to: Spark,
// :: INFO spark.SecurityManager: Changing modify acls to: Spark,
// :: INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Spark, ); users with modify permissions: Set(Spark, )
// :: INFO slf4j.Slf4jLogger: Slf4jLogger started
// :: INFO Remoting: Starting remoting
// :: INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@Master:38507]
// :: INFO Remoting: Remoting now listens on addresses: [akka.tcp://sparkDriver@Master:38507]
// :: INFO util.Utils: Successfully started service 'sparkDriver' on port .
// :: INFO spark.SparkEnv: Registering MapOutputTracker
// :: INFO spark.SparkEnv: Registering BlockManagerMaster
// :: INFO storage.DiskBlockManager: Created local directory at /tmp/spark-local--651a
// :: INFO util.Utils: Successfully started service 'Connection manager for block manager' on port .
// :: INFO network.ConnectionManager: Bound socket to port with id = ConnectionManagerId(Master,)
// :: INFO storage.MemoryStore: MemoryStore started with capacity 267.3 MB
// :: INFO storage.BlockManagerMaster: Trying to register BlockManager
// :: INFO storage.BlockManagerMasterActor: Registering block manager Master: with 267.3 MB RAM
// :: INFO storage.BlockManagerMaster: Registered BlockManager
// :: INFO spark.HttpFileServer: HTTP File server directory is /tmp/spark-7decc3d6-acce--98c3-172c680de719
// :: INFO spark.HttpServer: Starting HTTP Server
// :: INFO server.Server: jetty-.y.z-SNAPSHOT
// :: INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:
// :: INFO util.Utils: Successfully started service 'HTTP file server' on port .
// :: INFO server.Server: jetty-.y.z-SNAPSHOT
// :: INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:
// :: INFO util.Utils: Successfully started service 'SparkUI' on port .
// :: INFO ui.SparkUI: Started SparkUI at http://Master:4040
// :: WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
// :: INFO executor.Executor: Using REPL class URI: http://192.168.8.29:34246
// :: INFO util.AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@Master:38507/user/HeartbeatReceiver
// :: INFO repl.SparkILoop: Created spark context..
Spark context available as sc.

scala>
