1. Install the VMware virtual machine
2. Install the Ubuntu 12.04 operating system in the virtual machine
3. Install jdk1.8.0_25
Download: http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Note: download the JDK build that matches your operating system (here the 32-bit Linux build, jdk-8u25-linux-i586.tar.gz).
Extract the archive:
tar -xzvf jdk-8u25-linux-i586.tar.gz
Configure the environment variables:
sudo gedit /etc/profile
export JAVA_HOME=/home/yuanqin/Downloads/jdk1.8.0_25   (this is the JDK installation path; change it to wherever you extracted your own JDK)
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
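/etc/profile is normally only read at login, so reload it in the current shell for the new variables to take effect:
source /etc/profile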
Verify the installation: java -version
Manually set the system default JDK:
sudo update-alternatives --install /usr/bin/java java /home/yuanqin/Downloads/jdk1.8.0_25/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /home/yuanqin/Downloads/jdk1.8.0_25/bin/javac 300
sudo update-alternatives --config java
4. Install SSH and configure passwordless login
sudo apt-get install ssh
To enable passwordless login to the local machine, first check whether the yuanqin user already has a .ssh directory; if not, create one yourself.
Check with: ls -a /home/yuanqin
Generate a key pair for passwordless login: ssh-keygen -t dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
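If ssh localhost still prompts for a password after this, overly permissive file modes are a common cause (a check added here, not in the original steps): sshd ignores a key directory or authorized_keys file that is group- or world-writable.
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys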
Verify the installation: ssh -V ; ssh localhost
5. Install hadoop-1.2.1
Download: http://mirrors.cnnic.cn/apache/hadoop/common/hadoop-1.2.1/
Extract the archive:
tar -xzvf hadoop-1.2.1.tar.gz
Point Hadoop at the JDK installation:
sudo gedit /home/yuanqin/Downloads/hadoop-1.2.1/conf/hadoop-env.sh
Set: export JAVA_HOME=/home/yuanqin/Downloads/jdk1.8.0_25
Configure core-site.xml:
sudo gedit /home/yuanqin/Downloads/hadoop-1.2.1/conf/core-site.xml
Set:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
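Optionally (this is not part of the original configuration), many Hadoop 1.x setups also add hadoop.tmp.dir inside the same <configuration> block, because the default location under /tmp is wiped on reboot and takes the HDFS data with it. A sketch of such an extra property, with a hypothetical path:
<property>
  <name>hadoop.tmp.dir</name>
  <!-- hypothetical path; any directory that survives reboots works -->
  <value>/home/yuanqin/hadoop_tmp</value>
</property>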
Configure hdfs-site.xml:
sudo gedit /home/yuanqin/Downloads/hadoop-1.2.1/conf/hdfs-site.xml
Set:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
</configuration>
(Note: this pseudo-distributed setup has only one DataNode, so a replication factor of 1 is more appropriate; with 3, every block will be reported as under-replicated.)
Configure mapred-site.xml:
sudo gedit /home/yuanqin/Downloads/hadoop-1.2.1/conf/mapred-site.xml
Set:
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
Next, format the HDFS filesystem. Go into the hadoop directory and run: bin/hadoop namenode -format
Start Hadoop: bin/start-all.sh (or bin/start-dfs.sh to start only HDFS, and bin/start-mapred.sh to start only MapReduce)
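To confirm the daemons are actually running, you can also check with jps (a check added here, not in the original steps); in pseudo-distributed mode it should list NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker alongside Jps:
jps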
Verify that Hadoop is installed correctly:
Open the following addresses in a browser: http://localhost:50030 (the MapReduce / JobTracker page)
http://localhost:50070 (the HDFS / NameNode page)
6. Install scala-2.10.3
Reference: http://shiyanjun.cn/archives/696.html
Extract the archive:
tar xvzf scala-2.10.3.tgz
Add the SCALA_HOME environment variable to ~/.bashrc and make it take effect (run source ~/.bashrc after editing):
export SCALA_HOME=/usr/scala/scala-2.10.3
export PATH=$PATH:$SCALA_HOME/bin
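A quick sanity check that Scala is now on the PATH (the output should report version 2.10.3):
scala -version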
7. Install and configure spark-0.9.0-incubating
We first configure Spark on the master node m1 and then copy the configured files to each slave node in the cluster. Download and extract:
tar xvzf spark-0.9.0-incubating-bin-hadoop1.tgz
Add the SPARK_HOME environment variable to ~/.bashrc and make it take effect:
export SPARK_HOME=/home/shirdrn/cloud/programs/spark-0.9.0-incubating-bin-hadoop1
export PATH=$PATH:$SPARK_HOME/bin
Configure Spark on m1 by editing the spark-env.sh configuration file:
cd /home/shirdrn/cloud/programs/spark-0.9.0-incubating-bin-hadoop1/conf
cp spark-env.sh.template spark-env.sh
In this script, set SCALA_HOME to the path where Scala is actually installed on the machine, for example:
export SCALA_HOME=/usr/scala/scala-2.10.3
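The original only sets SCALA_HOME here. For a standalone cluster it is also common to pin the master address and worker resources in spark-env.sh; a sketch of such optional additions (the values are assumptions, adjust them to your nodes):
export SPARK_MASTER_IP=m1
export SPARK_WORKER_MEMORY=1g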
Edit the conf/slaves file and add the hostname of each worker node, one per line, for example:
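Matching the slave hostnames used in the scp commands below, the file would contain:
s1
s2
s3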
Finally, copy the Spark program and configuration files out to the slave machines:
scp -r ~/cloud/programs/spark-0.9.0-incubating-bin-hadoop1 shirdrn@s1:~/cloud/programs/
scp -r ~/cloud/programs/spark-0.9.0-incubating-bin-hadoop1 shirdrn@s2:~/cloud/programs/
scp -r ~/cloud/programs/spark-0.9.0-incubating-bin-hadoop1 shirdrn@s3:~/cloud/programs/
Start the Spark cluster
We will use data stored on the HDFS cluster as the input for the computation, so Hadoop (version 1.2.1 here) must already be installed, configured, and running. Starting the Spark cluster itself is very simple; just run:
cd /home/shirdrn/cloud/programs/spark-0.9.0-incubating-bin-hadoop1/
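The start command itself is missing from these notes; in Spark 0.9.0 the cluster-management scripts live under sbin/, so it would be:
sbin/start-all.sh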
You should see a process named Master started on m1 and a process named Worker on each slave such as s1; the Hadoop cluster is also running here, so its daemons appear as well. Checking the Java processes (jps output, truncated):
On the master node m1:
54968 SecondaryNameNode
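Another quick check (an addition to the original notes): the standalone Master serves a web UI on port 8080 of the master node, e.g. http://m1:8080, which lists the registered Workers.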
Whether each process started successfully can also be diagnosed from the logs, for example:
tail -100f $SPARK_HOME/logs/spark-shirdrn-org.apache.spark.deploy.master.Master-1-m1.out
tail -100f $SPARK_HOME/logs/spark-shirdrn-org.apache.spark.deploy.worker.Worker-1-s1.out
Verifying computation on the Spark cluster
We use the access log of my website for the demonstration; sample lines:
27.159.254.192 - - [21/Feb/2014:11:40:46 +0800] "GET /archives/526.html HTTP/1.1" 200 12080 "http://shiyanjun.cn/archives/526.html" "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0"
120.43.4.206 - - [21/Feb/2014:10:37:37 +0800] "GET /archives/417.html HTTP/1.1" 200 11464 "http://shiyanjun.cn/archives/417.html/" "Mozilla/5.0 (Windows NT 5.1; rv:11.0) Gecko/20100101 Firefox/11.0"
We count how often each IP address appears in this file to verify that the Spark cluster can compute correctly. The job reads the log file from HDFS, counts the IP address frequencies, and finally writes the result back to a specified directory in HDFS.
First, start the Spark Shell that is used to submit the computation job:
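The launch command is missing from the original; for Spark 0.9.0 a shell attached to the standalone cluster is typically started from $SPARK_HOME on the master by pointing the MASTER variable at the master URL:
MASTER=spark://m1:7077 bin/spark-shell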
In the Spark Shell, code can only be written and run in Scala.
Then run the IP-frequency count by executing the following code in the Spark Shell (the split on "\\s+.*" keeps only the first whitespace-delimited field of each line, i.e. the client IP):
val file = sc.textFile("hdfs://m1:9000/user/shirdrn/wwwlog20140222.log")
val result = file.flatMap(line => line.split("\\s+.*")).map(word => (word, 1)).reduceByKey((a, b) => a + b)
result.collect()
The file hdfs://m1:9000/user/shirdrn/wwwlog20140222.log above is the input log file. The log output during processing looks like this:
14/03/06 21:59:22 INFO MemoryStore: ensureFreeSpace(784) called with curMem=43296, maxMem=311387750
14/03/06 21:59:22 INFO MemoryStore: Block broadcast_11 stored as values to memory (estimated size 784.0 B, free 296.9 MB)
14/03/06 21:59:22 INFO FileInputFormat: Total input paths to process : 1
14/03/06 21:59:22 INFO SparkContext: Starting job: collect at <console>:13
14/03/06 21:59:22 INFO DAGScheduler: Registering RDD 84 (reduceByKey at <console>:13)
14/03/06 21:59:22 INFO DAGScheduler: Got job 10 (collect at <console>:13) with 1 output partitions (allowLocal=false)
14/03/06 21:59:22 INFO DAGScheduler: Final stage: Stage 20 (collect at <console>:13)
14/03/06 21:59:22 INFO DAGScheduler: Parents of final stage: List(Stage 21)
14/03/06 21:59:22 INFO DAGScheduler: Missing parents: List(Stage 21)
14/03/06 21:59:22 INFO DAGScheduler: Submitting Stage 21 (MapPartitionsRDD[84] at reduceByKey at <console>:13), which has no missing parents
14/03/06 21:59:22 INFO DAGScheduler: Submitting 1 missing tasks from Stage 21 (MapPartitionsRDD[84] at reduceByKey at <console>:13)
14/03/06 21:59:22 INFO TaskSchedulerImpl: Adding task set 21.0 with 1 tasks
14/03/06 21:59:22 INFO TaskSetManager: Starting task 21.0:0 as TID 19 on executor localhost: localhost (PROCESS_LOCAL)
14/03/06 21:59:22 INFO TaskSetManager: Serialized task 21.0:0 as 1941 bytes in 0 ms
14/03/06 21:59:22 INFO Executor: Running task ID 19
14/03/06 21:59:22 INFO BlockManager: Found block broadcast_11 locally
14/03/06 21:59:23 INFO Executor: Serialized size of result for 19 is 738
14/03/06 21:59:23 INFO Executor: Sending result for 19 directly to driver
14/03/06 21:59:23 INFO TaskSetManager: Finished TID 19 in 211 ms on localhost (progress: 0/1)
14/03/06 21:59:23 INFO TaskSchedulerImpl: Remove TaskSet 21.0 from pool
14/03/06 21:59:23 INFO DAGScheduler: Completed ShuffleMapTask(21, 0)
14/03/06 21:59:23 INFO DAGScheduler: Stage 21 (reduceByKey at <console>:13) finished in 0.211 s
14/03/06 21:59:23 INFO DAGScheduler: looking for newly runnable stages
14/03/06 21:59:23 INFO DAGScheduler: running: Set()
14/03/06 21:59:23 INFO DAGScheduler: waiting: Set(Stage 20)
14/03/06 21:59:23 INFO DAGScheduler: failed: Set()
14/03/06 21:59:23 INFO DAGScheduler: Missing parents for Stage 20: List()
14/03/06 21:59:23 INFO DAGScheduler: Submitting Stage 20 (MapPartitionsRDD[86] at reduceByKey at <console>:13), which is now runnable
14/03/06 21:59:23 INFO DAGScheduler: Submitting 1 missing tasks from Stage 20 (MapPartitionsRDD[86] at reduceByKey at <console>:13)
14/03/06 21:59:23 INFO TaskSchedulerImpl: Adding task set 20.0 with 1 tasks
14/03/06 21:59:23 INFO Executor: Finished task ID 19
14/03/06 21:59:23 INFO TaskSetManager: Starting task 20.0:0 as TID 20 on executor localhost: localhost (PROCESS_LOCAL)
14/03/06 21:59:23 INFO TaskSetManager: Serialized task 20.0:0 as 1803 bytes in 0 ms
14/03/06 21:59:23 INFO Executor: Running task ID 20
14/03/06 21:59:23 INFO BlockManager: Found block broadcast_11 locally
14/03/06 21:59:23 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Getting 1 non-zero-bytes blocks out of 1 blocks
14/03/06 21:59:23 INFO BlockFetcherIterator$BasicBlockFetcherIterator: Started 0 remote gets in 1 ms
14/03/06 21:59:23 INFO Executor: Serialized size of result for 20 is 19423
14/03/06 21:59:23 INFO Executor: Sending result for 20 directly to driver
14/03/06 21:59:23 INFO TaskSetManager: Finished TID 20 in 17 ms on localhost (progress: 0/1)
14/03/06 21:59:23 INFO TaskSchedulerImpl: Remove TaskSet 20.0 from pool
14/03/06 21:59:23 INFO DAGScheduler: Completed ResultTask(20, 0)
14/03/06 21:59:23 INFO DAGScheduler: Stage 20 (collect at <console>:13) finished in 0.016 s
14/03/06 21:59:23 INFO SparkContext: Job finished: collect at <console>:13, took 0.242136929 s
14/03/06 21:59:23 INFO Executor: Finished task ID 20
res14: Array[(String, Int)] = Array((27.159.254.192,28), (120.43.9.81,40), (120.43.4.206,16), (120.37.242.176,56), (64.31.25.60,2), (27.153.161.9,32), (202.43.145.163,24), (61.187.102.6,1), (117.26.195.116,12), (27.153.186.194,64), (123.125.71.91,1), (110.85.106.105,64), (110.86.184.182,36), (27.150.247.36,52), (110.86.166.52,60), (175.98.162.2,20), (61.136.166.16,1), (46.105.105.217,1), (27.150.223.49,52), (112.5.252.6,20), (121.205.242.4,76), (183.61.174.211,3), (27.153.230.35,36), (112.111.172.96,40), (112.5.234.157,3), (144.76.95.232,7), (31.204.154.144,28), (123.125.71.22,1), (80.82.64.118,3), (27.153.248.188,160), (112.5.252.187,40), (221.219.105.71,4), (74.82.169.79,19), (117.26.253.195,32), (120.33.244.205,152), (110.86.165.8,84), (117.26.86.172,136), (27.153.233.101,8), (123.12...
As you can see, part of the result of the map and reduce computation is printed.
Finally, to save the result to HDFS, enter the following code:
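The snippet itself is missing from the original notes; given the result path that is read back below, it would be a saveAsTextFile call on the result RDD:
result.saveAsTextFile("hdfs://m1:9000/user/shirdrn/wwwlog20140222.log.result")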
View the result data on HDFS:
[shirdrn@m1 ~]$ hadoop fs -cat /user/shirdrn/wwwlog20140222.log.result/part-00000 | head -5