Setting Up a Hadoop 2.6.0 + Spark 1.4.0 Single-Node Environment in a Win7 Virtual Machine
The installation and configuration of Hadoop is covered in my earlier post: Setting Up a Hadoop 2.6.0 Pseudo-Distributed Environment in a Win7 Virtual Machine.
This post describes how to set up a single-node Spark 1.4.0 environment on top of Hadoop 2.6.0.
1. Software Preparation
scala-2.11.7.tgz
spark-1.4.0-bin-hadoop2.6.tgz
Both can be downloaded from their official websites.
2. Installing and Configuring Scala
Installing Scala only requires extracting scala-2.11.7.tgz. I extracted it to /home/vm/tools/scala and then added the environment variables to ~/.bash_profile:
#scala
export SCALA_HOME=/home/vm/tools/scala
export PATH=$SCALA_HOME/bin:$PATH
Run source ~/.bash_profile to make the changes take effect.
Verify that Scala was installed successfully:
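For example, check the version from the command line (a sketch; the exact banner text may differ slightly between builds):

vm@ubuntu:~$ scala -version
Scala code runner version 2.11.7 -- Copyright 2002-2013, LAMP/EPFL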
Use Scala interactively:
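A short interactive session might look like this (a minimal sketch):

vm@ubuntu:~$ scala
Welcome to Scala version 2.11.7 ...
Type in expressions to have them evaluated.
Type :help for more information.

scala> 1 + 1
res0: Int = 2

scala> "spark".toUpperCase
res1: String = SPARK

scala> :quit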
3. Installing and Configuring Spark
Extract spark-1.4.0-bin-hadoop2.6.tgz to /home/vm/tools/spark, then add the environment variables to ~/.bash_profile:
#spark
export SPARK_HOME=/home/vm/tools/spark
export PATH=$SPARK_HOME/bin:$PATH
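Note that a freshly extracted Spark distribution ships only *.template files under conf/; if the files edited below do not exist yet, copy them from the templates first. A minimal sketch, assuming the install path above:

cd /home/vm/tools/spark/conf
cp spark-env.sh.template spark-env.sh
cp spark-defaults.conf.template spark-defaults.conf
cp slaves.template slaves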
Edit $SPARK_HOME/conf/spark-env.sh:
export SPARK_HOME=/home/vm/tools/spark
export SCALA_HOME=/home/vm/tools/scala
export JAVA_HOME=/home/vm/tools/jdk
export SPARK_MASTER_IP=192.168.62.129
export SPARK_WORKER_MEMORY=512m
Edit $SPARK_HOME/conf/spark-defaults.conf:
spark.master       spark://192.168.62.129:7077
spark.serializer   org.apache.spark.serializer.KryoSerializer
Edit $SPARK_HOME/conf/slaves and list the worker hosts:
192.168.62.129
(192.168.62.129 is the IP address of my test machine.)
Start Spark:
cd /home/vm/tools/spark/sbin
sh start-all.sh
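If start-all.sh succeeds, the standalone Master and Worker daemons should be running. A quick way to confirm (assuming the JDK's jps tool is on the PATH):

jps
# the output should include, among other JVM processes, lines ending in Master and Worker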
Test whether Spark was installed successfully:
cd $SPARK_HOME/bin/
./run-example SparkPi
An abridged execution log of SparkPi (timestamps omitted):
vm@ubuntu:~/tools/spark/bin$ ./run-example SparkPi
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
INFO SparkContext: Running Spark version 1.4.0
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
INFO Utils: Successfully started service 'sparkDriver'
INFO MemoryStore: MemoryStore started with capacity 267.3 MB
INFO SparkUI: Started SparkUI at http://192.168.62.129:4040
INFO SparkContext: Added JAR file:/home/vm/tools/spark/lib/spark-examples-1.4.0-hadoop2.6.0.jar at http://192.168.62.129:56880/jars/spark-examples-1.4.0-hadoop2.6.0.jar with timestamp 1438099360726
INFO Executor: Starting executor ID driver on host localhost
INFO BlockManagerMaster: Registered BlockManager
INFO SparkContext: Starting job: reduce at SparkPi.scala
INFO DAGScheduler: Submitting ResultStage (MapPartitionsRDD at map at SparkPi.scala), which has no missing parents
INFO TaskSetManager: Starting task 0.0 in stage 0.0 (localhost, PROCESS_LOCAL)
INFO TaskSetManager: Starting task 1.0 in stage 0.0 (localhost, PROCESS_LOCAL)
INFO Executor: Finished task 1.0 in stage 0.0
INFO Executor: Finished task 0.0 in stage 0.0
INFO DAGScheduler: ResultStage (reduce at SparkPi.scala) finished in 2.817 s
INFO DAGScheduler: Job finished: reduce at SparkPi.scala, took 4.244145 s
Pi is roughly 3.14622
INFO SparkUI: Stopped Spark web UI at http://192.168.62.129:4040
INFO SparkContext: Successfully stopped SparkContext
INFO Utils: Shutdown hook called
INFO Utils: Deleting directory /tmp/spark-78277899-e4c4-4dcc-8c16-f46fce5e657d
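run-example is a convenience wrapper; roughly the same job can be submitted with spark-submit directly. A sketch, assuming the example jar path shown in the log and the master address configured earlier (the trailing 10 is the number of partitions SparkPi samples over):

cd $SPARK_HOME/bin
./spark-submit --class org.apache.spark.examples.SparkPi \
  --master spark://192.168.62.129:7077 \
  ../lib/spark-examples-1.4.0-hadoop2.6.0.jar 10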
Open http://192.168.62.129:8080 in a browser to see the basic status of the Spark cluster and its jobs:
4. The spark-shell Tool
Run ./spark-shell under /home/vm/tools/spark/bin to enter the interactive spark-shell, which is handy for debugging and quick experiments. An abridged startup log (timestamps omitted):
vm@ubuntu:~/tools/spark/bin$ ./spark-shell
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
INFO Utils: Successfully started service 'HTTP class server'
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.4.0
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) Server VM, Java 1.7.0_80)
Type in expressions to have them evaluated.
Type :help for more information.
INFO SparkContext: Running Spark version 1.4.0
INFO Utils: Successfully started service 'sparkDriver'
INFO MemoryStore: MemoryStore started with capacity 267.3 MB
INFO SparkUI: Started SparkUI at http://192.168.62.129:4040
INFO Executor: Starting executor ID driver on host localhost
INFO SparkILoop: Created spark context..
Spark context available as sc.
INFO HiveContext: Initializing execution hive, version 0.13.1
INFO ObjectStore: Initialized ObjectStore
INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala>
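Note that the banner reports Scala 2.10.x: the pre-built spark-1.4.0-bin-hadoop2.6 package bundles its own Scala runtime, independent of the Scala 2.11.7 installed above. For a quick sanity check inside the shell, run a trivial job against the SparkContext sc that the startup log says is available (a minimal sketch; the res numbering depends on your session):

scala> val rdd = sc.parallelize(1 to 1000)
scala> rdd.filter(_ % 2 == 0).count()
res0: Long = 500
scala> rdd.map(_ * 2).reduce(_ + _)
res1: Int = 1001000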
The next post will cover setting up Spark development environments with Eclipse and with IDEA.