(4) Spark Cluster Setup - Spark for Java & Python
Spark Cluster Setup
Video tutorials:
1. Youku
2. YouTube
Installing the Scala Environment
Download: http://www.scala-lang.org/download/
Upload scala-2.10.5.tgz to the hadoop user's installer directory on both the master and slave machines; the steps in this section must be performed on both machines.
(Note: the prebuilt Spark 2.0.0 package used later bundles its own Scala 2.11.8, as the spark-shell banner shows, so this standalone Scala 2.10.5 install mainly provides the scala command itself; a 2.11.x release would match Spark 2.0.0 more closely.)
[hadoop@master installer]$ ls
hadoop2 hadoop-2.6.0.tar.gz scala-2.10.5.tgz
Unpack:
[hadoop@master installer]$ tar -zxvf scala-2.10.5.tgz
[hadoop@master installer]$ mv scala-2.10.5 scala
[hadoop@master installer]$ cd scala
[hadoop@master scala]$ pwd
/home/hadoop/installer/scala
Configure the environment variables:
[hadoop@master ~]$ vim .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_HOME=/home/hadoop/installer/hadoop2
export SCALA_HOME=/home/hadoop/installer/scala
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib:$JAVA_HOME/lib:$SCALA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin
[hadoop@master ~]$ . .bashrc
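After sourcing .bashrc, a quick check confirms Scala is on the PATH (the output shown assumes the 2.10.5 tarball above):
[hadoop@master ~]$ scala -version
Scala code runner version 2.10.5 -- Copyright 2002-2013, LAMP/EPFL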
Installing Python
Install gcc
[root@master ~]# mkdir /RHEL5U4
[root@master ~]# mount /dev/cdrom /media/
[root@master ~]# cd /media
[root@master media]# cp -r * /RHEL5U4/
[root@master ~]# vim /etc/yum.repos.d/iso.repo
[rhel-Server]
name=5u4_Server
baseurl=file:///RHEL5U4/Server
enabled=1
gpgcheck=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release
[root@master ~]# yum clean all
[root@master ~]# yum install gcc
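gcc can be verified before building Python (the version line here is illustrative, matching the GCC 4.1.2 shown later in the Python banner):
[root@master ~]# gcc --version
gcc (GCC) 4.1.2 20080704 (Red Hat 4.1.2-46)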
Install Python
[root@master installer]# tar -zxvf Python-2.7.12.tgz
Upload zlib-1.2.8.tar.gz and use its sources to replace the zlib code under /root/installer/Python-2.7.12/Modules, so the zlib module gets built in.
[root@master installer]# cd Python-2.7.12
[root@master Python-2.7.12]# ./configure --prefix=/usr/local/python27
[root@master Python-2.7.12]# make
[root@master Python-2.7.12]# make install
[root@master Python-2.7.12]# mv /usr/bin/python /usr/bin/python_old
[root@master Python-2.7.12]# ln -s /usr/local/python27/bin/python /usr/bin/
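A caution here: on RHEL 5, yum itself runs on the system Python 2.4, so repointing /usr/bin/python can break yum. If that happens, a common workaround is to point yum's interpreter back at the backup made above, e.g. changing the first line of /usr/bin/yum to:
#!/usr/bin/python_old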
[root@master Python-2.7.12]# python
Python 2.7.12 (default, Nov 7 2016, 21:42:16)
[GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>>
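A minimal sanity test that the replaced zlib sources were built in (the reported version assumes the zlib-1.2.8 drop-in above):
>>> import zlib
>>> zlib.ZLIB_VERSION
'1.2.8'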
Installing the Spark Environment
Download: http://spark.apache.org/downloads.html
Upload spark-2.0.0-bin-hadoop2.6.tgz to the hadoop user's installer directory on master.
Unpack:
[hadoop@master installer]$ tar -zxvf spark-2.0.0-bin-hadoop2.6.tgz
[hadoop@master installer]$ mv spark-2.0.0-bin-hadoop2.6 spark2
[hadoop@master installer]$ cd spark2/
[hadoop@master spark2]$ ls
bin conf data examples jars LICENSE licenses NOTICE python R README.md RELEASE sbin yarn
[hadoop@master spark2]$ pwd
/home/hadoop/installer/spark2
[hadoop@master ~]$ vim .bashrc
# .bashrc
# Source global definitions
if [ -f /etc/bashrc ]; then
. /etc/bashrc
fi
# User specific aliases and functions
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HADOOP_HOME=/home/hadoop/installer/hadoop2
export SCALA_HOME=/home/hadoop/installer/scala
export SPARK_HOME=/home/hadoop/installer/spark2
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export CLASSPATH=$CLASSPATH:$HADOOP_HOME/lib:$JAVA_HOME/lib:$SCALA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$SCALA_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin
[hadoop@master ~]$ . .bashrc
[hadoop@master ~]$ scp .bashrc slave:~
.bashrc 100% 621 0.6KB/s 00:00
On the slave machine, run:
[hadoop@slave ~]$ . .bashrc
Configure Spark
[hadoop@master spark2]$ cd conf
[hadoop@master conf]$ cp spark-env.sh.template spark-env.sh
[hadoop@master conf]$ vim spark-env.sh
#!/usr/bin/env bash
# (standard Apache license header omitted)
export JAVA_HOME=/usr/java/jdk1.7.0_79
export SCALA_HOME=/home/hadoop/installer/scala
export SPARK_MASTER_HOST=master
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_EXECUTOR_MEMORY=600M
export SPARK_DRIVER_MEMORY=600M
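For reference: SPARK_MASTER_HOST tells the standalone master which host to bind to, HADOOP_CONF_DIR lets Spark pick up the HDFS configuration (so hdfs:// paths resolve), and the 600M driver/executor memory settings are deliberately small to fit these virtual machines.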
[hadoop@master conf]$ vim slaves
master
slave
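The slaves file lists every host that should run a Worker; including master here starts a Worker on the master node as well, which matches the jps output below. With the configuration done, copy the whole Spark directory to the slave: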
[hadoop@master installer]$ scp -r spark2 slave:~/installer/
Starting the Spark Cluster
[hadoop@master ~]$ start-master.sh
[hadoop@master ~]$ start-slaves.sh
[hadoop@master ~]$ jps
17769 ResourceManager
20192 Master
20275 Worker
17443 NameNode
20521 Jps
17631 SecondaryNameNode
[hadoop@slave ~]$ jps
13297 DataNode
15367 Worker
13408 NodeManager
16245 Jps
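Beyond jps, the standalone master's web UI (by default at http://master:8080) should list both Workers as ALIVE.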
Spark wordcount
[hadoop@master ~]$ spark-shell
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel).
16/11/04 11:05:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/11/04 11:05:09 WARN spark.SparkContext: Use an existing SparkContext, some configuration may not take effect.
Spark context Web UI available at http://192.168.3.100:4040
Spark context available as 'sc' (master = local[*], app id = local-1478228709028).
Spark session available as 'spark'.
Welcome to
____ __
/ __/__ ___ _____/ /__
_\ \/ _ \/ _ `/ __/ '_/
/___/ .__/\_,_/_/ /_/\_\ version 2.0.0
/_/
Using Scala version 2.11.8 (Java HotSpot(TM) Client VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
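Note that, as the banner shows (master = local[*]), spark-shell launched with no arguments runs in local mode rather than on the cluster; to run against the standalone cluster, start it as spark-shell --master spark://master:7077 (7077 is the default standalone master port).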
scala> val file = sc.textFile("hdfs://master:9000/data/wordcount")
16/11/04 11:05:14 WARN util.SizeEstimator: Failed to check whether UseCompressedOops is set; assuming yes
file: org.apache.spark.rdd.RDD[String] = hdfs://master:9000/data/wordcount MapPartitionsRDD[1] at textFile at <console>:24
scala> val count=file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
count: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[4] at reduceByKey at <console>:26
scala> count.collect()
res0: Array[(String, Int)] = Array((package,1), (this,1), (Version"](http://spark.apache.org/docs/latest/building-spark.html#specifying-the-hadoop-version),1), (Because,1), (Python,2), (cluster.,1), (its,1), ([run,1), (general,2), (have,1), (pre-built,1), (YARN,,1), (locally,2), (changed,1), (locally.,1), (sc.parallelize(1,1), (only,1), (Configuration,1), (This,2), (basic,1), (first,1), (learning,,1), ([Eclipse](https://cwiki.apache.org/confluence/display/SPARK/Useful+Developer+Tools#UsefulDeveloperTools-Eclipse),1), (documentation,3), (graph,1), (Hive,2), (several,1), (["Specifying,1), ("yarn",1), (page](http://spark.apache.org/documentation.html),1), ([params]`.,1), ([project,2), (prefer,1), (SparkPi,2), (<http://spark.apache.org/>,1), (engine,1), (version,1), (file,1), (documentation...
scala>
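Since this series targets Python as well, the same word count can be run from the bundled pyspark shell; the following is a sketch assuming the same HDFS path as above:
[hadoop@master ~]$ pyspark
>>> lines = sc.textFile("hdfs://master:9000/data/wordcount")
>>> counts = lines.flatMap(lambda line: line.split(" ")) \
...               .map(lambda word: (word, 1)) \
...               .reduceByKey(lambda a, b: a + b)
>>> counts.collect()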