Installing Apache Spark on Ubuntu 16.04
Santosh Srinivas
on 07 Nov 2016, tagged: Apache Spark, Analytics, Data Mining
I've finally got around to a long-pending to-do item: playing with Apache Spark.
The following installation steps worked for me on Ubuntu 16.04.
- Download the latest pre-built version from http://spark.apache.org/downloads.html
The following options worked for me: Spark release 2.0.1, package type pre-built for Hadoop 2.7.
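If you prefer the command line, the same archive can be fetched directly. A minimal sketch, assuming the Apache release archive URL for Spark 2.0.1 (adjust the version or mirror as needed):
cd ~/Downloads/
# Assumed URL: Apache's release archive for the 2.0.1 / Hadoop 2.7 build
wget https://archive.apache.org/dist/spark/spark-2.0.1/spark-2.0.1-bin-hadoop2.7.tgz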
- Unzip and move Spark
cd ~/Downloads/
tar xzvf spark-2.0.1-bin-hadoop2.7.tgz
mv spark-2.0.1-bin-hadoop2.7/ spark
sudo mv spark/ /usr/lib/
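To confirm the move, the Spark scripts should now be under /usr/lib/spark/bin (a quick check, not part of the original steps):
ls /usr/lib/spark/bin    # expect pyspark, spark-shell, spark-submit, among others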
- Install SBT
As described on the sbt download page:
echo "deb https://dl.bintray.com/sbt/debian /" | sudo tee -a /etc/apt/sources.list.d/sbt.list
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv 2EE0EA64E40A89B84B2DF73499E82A75642AC823
sudo apt-get update
sudo apt-get install sbt
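A quick sanity check once the package is installed (the first run downloads the sbt launcher, so it can take a while):
sbt sbtVersion    # prints the installed sbt version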
- Make sure Java is installed
If not, install Java 8 (here the Oracle JDK from the WebUpd8 PPA):
sudo apt-add-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
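To check which Java version is active:
java -version    # should report something like: java version "1.8.0_xxx"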
- Configure Spark
cd /usr/lib/spark/conf/
cp spark-env.sh.template spark-env.sh
vi spark-env.sh
Add the following lines
JAVA_HOME=/usr/lib/jvm/java-8-oracle
SPARK_WORKER_MEMORY=4g
- Configure IPv6
Disable IPv6: open the file with sudo vi /etc/sysctl.conf and add the lines below
net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1
net.ipv6.conf.lo.disable_ipv6 = 1
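The sysctl settings take effect on reboot; to apply them immediately and verify:
sudo sysctl -p
cat /proc/sys/net/ipv6/conf/all/disable_ipv6    # should print 1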
- Configure .bashrc
I modified .bashrc in Sublime Text using subl ~/.bashrc and added the following lines
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
# Point SBT_HOME at the sbt install directory so that $SBT_HOME/bin below is a valid path
export SBT_HOME=/usr/share/sbt-launcher-packaging
export SPARK_HOME=/usr/lib/spark
export PATH=$PATH:$JAVA_HOME/bin
export PATH=$PATH:$SBT_HOME/bin:$SPARK_HOME/bin:$SPARK_HOME/sbin
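Reload the configuration and check that the new variables are picked up:
source ~/.bashrc
echo $SPARK_HOME      # should print /usr/lib/spark
which spark-shell     # should resolve to /usr/lib/spark/bin/spark-shell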
- Configure fish (optional, but I love the fish shell)
Modify config.fish using subl ~/.config/fish/config.fish and add the following lines
#Credit: http://fishshell.com/docs/current/tutorial.html#tut_startup
set -x PATH $PATH /usr/lib/spark
set -x PATH $PATH /usr/lib/spark/bin
set -x PATH $PATH /usr/lib/spark/sbin
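Then open a new fish shell (or source ~/.config/fish/config.fish) and confirm the Spark scripts are on the PATH:
which pyspark    # should resolve to /usr/lib/spark/bin/pyspark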
- Test Spark (should work in both fish and bash)
Run pyspark (available in /usr/lib/spark/bin/) and try it out.
For example:
>>> a = 5
>>> b = 3
>>> a+b
8
>>> print("Welcome to Spark")
Welcome to Spark
## type Ctrl-d to exit
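The snippet above only exercises plain Python. To confirm Spark itself is wired up, run a small job on the SparkContext that the pyspark shell exposes as sc; a minimal sketch:
>>> sc.parallelize(range(100)).sum()   # distributes 0..99 and sums them (local mode)
4950
If sc is not defined, the shell did not start against a working Spark installation, so re-check SPARK_HOME and PATH.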
Also try the built-in example runner: run-example org.apache.spark.examples.SparkPi
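A successful run prints a line like "Pi is roughly 3.14..." in the output. The same example can also be submitted explicitly with spark-submit; a sketch, assuming the examples jar name that ships with the Spark 2.0.1 / Hadoop 2.7 build (adjust the path for other versions):
spark-submit --class org.apache.spark.examples.SparkPi \
  /usr/lib/spark/examples/jars/spark-examples_2.11-2.0.1.jar 10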
That's it! You are ready to rock on using Apache Spark!
Next, I plan to check out analysis using R, as described in http://www.milanor.net/blog/wp-content/uploads/2016/11/interactiveDataAnalysiswithSparkR_v5.pdf