Spark Week 1 Homework
package wikipedia
import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.rdd.RDD
import org.apache.log4j.{Level, Logger}

case class WikipediaArticle(title: String, text: String) {
  /**
   * @return Whether the text of this article mentions `lang` or not
   * @param lang Language to look for (e.g. "Scala")
   */
  def mentionsLanguage(lang: String): Boolean = text.split(' ').contains(lang)
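  // Note: splitting on single spaces means `lang` must appear as a whole, unpunctuated
  // token: "Java" does not match inside "JavaScript", but occurrences such as "Java," or
  // "Java." are also missed.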
}

object WikipediaRanking {

  // Show only error-level messages from Spark's internal (org.*) loggers
  Logger.getLogger("org").setLevel(Level.ERROR)
  val langs = List(
    "JavaScript", "Java", "PHP", "Python", "C#", "C++", "Ruby", "CSS",
    "Objective-C", "Perl", "Scala", "Haskell", "MATLAB", "Clojure", "Groovy")

  val conf: SparkConf = new SparkConf().setMaster("local[*]").setAppName("Wikipedia")
  val sc: SparkContext = new SparkContext(conf)

  // Hint: use a combination of `sc.textFile`, `WikipediaData.filePath` and `WikipediaData.parse`
  val wikiRdd: RDD[WikipediaArticle] = sc.textFile(WikipediaData.filePath).map(WikipediaData.parse)

  /** Returns the number of articles in which the language `lang` occurs.
   * Hint1: consider using method `aggregate` on RDD[T].
   * Hint2: consider using method `mentionsLanguage` on `WikipediaArticle`
   */
  def occurrencesOfLang(lang: String, rdd: RDD[WikipediaArticle]): Int =
    rdd.filter(_.mentionsLanguage(lang)).count().toInt
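
  // A minimal alternative sketch following Hint1 (not part of the original submission;
  // the name `occurrencesOfLangAggregate` is ours): `aggregate` folds each partition with
  // the first function and merges the per-partition counts with the second, so the count
  // is computed in a single pass without a separate `filter` step.
  def occurrencesOfLangAggregate(lang: String, rdd: RDD[WikipediaArticle]): Int =
    rdd.aggregate(0)(
      (count, article) => if (article.mentionsLanguage(lang)) count + 1 else count,
      _ + _)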

  /* (1) Use `occurrencesOfLang` to compute the ranking of the languages
   * (`val langs`) by determining the number of Wikipedia articles that
   * mention each language at least once. Don't forget to sort the
   * languages by their occurrence, in decreasing order!
   *
   * Note: this operation is long-running. It can potentially run for
   * several seconds.
   */
  def rankLangs(langs: List[String], rdd: RDD[WikipediaArticle]): List[(String, Int)] = {
    rdd.cache() // keep the RDD in memory, since it is scanned once per language
    langs.map(lang => (lang, occurrencesOfLang(lang, rdd))).sortBy(_._2).reverse
    /*
     * For each element of `langs`, count the articles that mention it.
     * `sortBy(_._2)` sorts by the count returned by `occurrencesOfLang(lang, rdd)`,
     * whereas `sortBy(_._1)` would sort by the language name.
     * The default order is ascending, hence the trailing `.reverse`.
     */
  }

  /* Compute an inverted index of the set of articles, mapping each language
   * to the Wikipedia pages in which it occurs.
   */
  def makeIndex(langs: List[String], rdd: RDD[WikipediaArticle]): RDD[(String, Iterable[WikipediaArticle])] = {
    val articlesByLanguage = rdd.flatMap(article => {
      langs.filter(lang => article.mentionsLanguage(lang))
        .map(lang => (lang, article))
    })
    articlesByLanguage.groupByKey()
  }
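
  // Note: `groupByKey` shuffles every (language, article) pair across the cluster and
  // materialises a full Iterable of articles per language; that shuffle is the expensive
  // part of this approach.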

  /* (2) Compute the language ranking again, but now using the inverted index. Can you notice
   * a performance improvement?
   *
   * Note: this operation is long-running. It can potentially run for
   * several seconds.
   */
  def rankLangsUsingIndex(index: RDD[(String, Iterable[WikipediaArticle])]): List[(String, Int)] =
    index.mapValues(_.size).sortBy(-_._2).collect().toList
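
  // Compared with part (1), the article text is scanned only once (in `makeIndex`'s flatMap)
  // rather than once per language; the ranking itself merely counts the grouped articles
  // with `_.size`.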

  /* (3) Use `reduceByKey` so that the computation of the index and the ranking are combined.
   * Can you notice an improvement in performance compared to measuring *both* the computation of the index
   * and the computation of the ranking? If so, can you think of a reason?
   *
   * Note: this operation is long-running. It can potentially run for
   * several seconds.
   */
  def rankLangsReduceByKey(langs: List[String], rdd: RDD[WikipediaArticle]): List[(String, Int)] = {
    rdd.flatMap(article => {
      // `langs.filter(article.mentionsLanguage)` is equivalent to
      // `langs.filter(lang => article.mentionsLanguage(lang))` or `langs.filter(article.mentionsLanguage(_))`
      langs.filter(article.mentionsLanguage)
        .map((_, 1))
    }).reduceByKey(_ + _)
      .sortBy(_._2)
      .collect()
      .toList
      .reverse
  }
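
  // Why this tends to beat makeIndex + rankLangsUsingIndex: `reduceByKey` sums the
  // per-language counts on each partition before the shuffle (map-side combining), so only
  // small (language, count) pairs cross the network and no Iterable of articles is ever
  // materialised.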

  def main(args: Array[String]): Unit = {

    /* Languages ranked according to (1) */
    val langsRanked: List[(String, Int)] = timed("Part 1: naive ranking", rankLangs(langs, wikiRdd))

    /* An inverted index mapping languages to wikipedia pages on which they appear */
    def index: RDD[(String, Iterable[WikipediaArticle])] = makeIndex(langs, wikiRdd)
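    // RDD transformations are lazy and `index` is a `def`, so the work of building the
    // inverted index actually runs (and is timed) inside Part 2 below.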

    /* Languages ranked according to (2), using the inverted index */
    val langsRanked2: List[(String, Int)] = timed("Part 2: ranking using inverted index", rankLangsUsingIndex(index))

    /* Languages ranked according to (3) */
    val langsRanked3: List[(String, Int)] = timed("Part 3: ranking using reduceByKey", rankLangsReduceByKey(langs, wikiRdd))

    /* Output the speed of each ranking */
    println(timing)
    sc.stop()
  }

  val timing = new StringBuffer

  def timed[T](label: String, code: => T): T = {
    val start = System.currentTimeMillis()
    val result = code
    val stop = System.currentTimeMillis()
    timing.append(s"Processing $label took ${stop - start} ms.\n")
    result
  }
}