Spark 2.x Study Notes: Spark SQL Programming
1 Limitations of RDDs
- An RDD only represents a data set; it carries no metadata, i.e. no field-level semantic definitions (no schema).
- RDD programs must be optimized by the user, which places a high demand on the programmer.
- Reading data from different data sources is relatively hard.
- Merging data from multiple data sources is also hard.
2 DataFrame and Dataset
(1) DataFrame
Because of these RDD limitations, Spark introduced the DataFrame.
DataFrame = RDD + Schema
Here the Schema is the metadata, i.e. the semantic description of the data.
Before Spark 1.3 the DataFrame was called SchemaRDD: a distributed collection of data organized into rows, with each column given a name, and with abstractions for operators such as select, filter, aggregation and sort.
- Internally the data is untyped; every record is a Row.
- A DataFrame is a special kind of Dataset.
- A DataFrame comes with the Catalyst optimizer, which optimizes programs automatically.
- A DataFrame provides a complete Data Source API (a brief sketch follows this list).
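As a minimal sketch of the "DataFrame = RDD + Schema" idea (the Employee case class, values and app name here are hypothetical, not from the original notes):

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical record type: its field names and types become the DataFrame's schema
case class Employee(name: String, salary: Double)

object DataFrameSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("df-sketch").getOrCreate()
    import spark.implicits._

    // An RDD is only a distributed collection of objects; it carries no metadata
    val rdd = spark.sparkContext.parallelize(Seq(Employee("alice", 1200.0), Employee("bob", 800.0)))

    // toDF attaches the schema inferred from the case class: DataFrame = RDD + Schema
    val df = rdd.toDF()
    df.printSchema()   // name: string, salary: double
    df.show()

    spark.stop()
  }
}
```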
(2) Dataset
Because the element type of a DataFrame is always Row, the DataFrame also has drawbacks:
- Row expressions are only type-checked at run time.
For example, even if salary is a string column, the following statement is only checked when it runs:
dataframe.filter("salary>1000").show()
- Row cannot operate on domain objects directly.
- The API is expression/function style, with no object-oriented style.
So Spark SQL introduced the Dataset, which extends the DataFrame API with compile-time type checking and an object-oriented style API.
Datasets can be converted to and from DataFrames and RDDs.
DataFrame = Dataset[Row]
So a DataFrame is simply a special kind of Dataset.
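A small sketch of the difference (the Employee case class and the salary values are made up for illustration): with a Dataset the predicate is written against the typed object and checked by the compiler, while the string-based DataFrame predicate only fails at run time.

```scala
import org.apache.spark.sql.SparkSession

case class Employee(name: String, salary: Double)   // hypothetical domain type

object DatasetVsDataFrame {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("ds-vs-df").getOrCreate()
    import spark.implicits._

    val ds = Seq(Employee("alice", 1200.0), Employee("bob", 800.0)).toDS()

    // Dataset: the filter works on the domain object, so _.salary is checked by the compiler
    ds.filter(_.salary > 1000).show()

    // DataFrame (Dataset[Row]): the predicate is a string, parsed and checked only at run time
    ds.toDF().filter("salary > 1000").show()

    spark.stop()
  }
}
```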
3 Why do we need DataFrame and Dataset?
Spark SQL offers two ways to work with data:
- SQL queries
- The DataFrame and Dataset API
Since Spark SQL already provides SQL access, why do we also need the DataFrame and Dataset API?
Because although SQL is simple, its expressive power is limited (which is why Oracle added PL/SQL on top of SQL). DataFrame and Dataset let you express queries in a general-purpose language (Scala or Python). In addition, a Dataset catches mistakes earlier: SQL errors only surface at run time, whereas Dataset errors are caught at compile time.
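For example (a hypothetical sketch reusing the Employee case class from above, not taken from the original notes), arbitrary Scala logic can be embedded in a Dataset pipeline via map, something a plain SQL string cannot express directly:

```scala
// Assumes the spark session, `import spark.implicits._`, and the hypothetical
// Employee case class from the previous sketch are in scope.
val ds = Seq(Employee("alice", 1200.0), Employee("bob", 800.0)).toDS()

// Arbitrary Scala code (string handling, rounding, calls into any library) inside the pipeline
val normalized = ds.map(e => e.copy(name = e.name.capitalize,
                                    salary = math.round(e.salary / 100.0) * 100.0))
normalized.show()
```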
4 Basic steps
- Create a SparkSession object
  The SparkSession encapsulates the Spark SQL execution environment and is the single entry point of every Spark SQL program.
- Create a DataFrame or Dataset
  Spark SQL supports many data sources.
- Apply transformations and actions to the DataFrame or Dataset
  Spark SQL provides many transformation and action functions.
- Return the result
  Save the result to HDFS, or simply print it.
Step 1: create a SparkSession object
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder()
  .master("local")
  .appName("spark session example")
  .getOrCreate()
Note: the SparkSession wraps both spark.sparkContext and spark.sqlContext.
In all of the programs and snippets that follow, the variable spark is a SparkSession object.
To convert RDDs to DataFrames implicitly, import:
import spark.implicits._
Step 2: create a DataFrame or Dataset
Spark SQL provides APIs for reading and writing many formats, including JSON, JDBC, Parquet and files on HDFS.
Step 3: apply operations to the DataFrame or Dataset (an end-to-end sketch follows).
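Putting the four steps together, a minimal standalone program might look like the following sketch (the input/output paths and column names are hypothetical):

```scala
import org.apache.spark.sql.SparkSession

object SparkSqlSteps {
  def main(args: Array[String]): Unit = {
    // Step 1: create the SparkSession
    val spark = SparkSession.builder()
      .master("local[*]")
      .appName("spark sql steps")
      .getOrCreate()

    // Step 2: create a DataFrame from a data source (hypothetical input path)
    val df = spark.read.json("/tmp/input.json")

    // Step 3: transformations and actions
    val result = df.filter("age > 30").groupBy("gender").count()

    // Step 4: return the result (print it, or write it out)
    result.show()
    result.write.mode("overwrite").json("/tmp/output.json")

    spark.stop()
  }
}
```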
5 Worked examples
(1) Start spark-shell
[root@node1 ~]# spark-shell
WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Spark context Web UI available at http://192.168.80.131:4040
Spark context available as 'sc' (master = local[*], app id = local-).
Spark session available as 'spark'.
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/ '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.2.x
      /_/

Using Scala version 2.11.x (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_112)
Type in expressions to have them evaluated.
Type :help for more information.

scala>
The Spark session object here is a further wrapper around the Spark context object. In other words, the SparkContext held by the Spark session object (spark) is exactly the Spark context object (sc), as the following output confirms.
scala> spark.sparkContext
res0: org.apache.spark.SparkContext = org.apache.spark.SparkContext@7bd7c4cf

scala> println(sc)
org.apache.spark.SparkContext@7bd7c4cf

scala>
(2) Import org.apache.spark.sql.Row
scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row
(3) Define a case class
scala> case class User(userID:Long,gender:String,age:Int,occupation:String,zipcode:String)
defined class User

scala> val usersRDD=sc.textFile("file:///root/data/ml-1m/users.dat")
usersRDD: org.apache.spark.rdd.RDD[String] = file:///root/data/ml-1m/users.dat MapPartitionsRDD[3] at textFile at <console>:25

scala> usersRDD.count
(4) Use the case class as the RDD's schema
scala> val userRDD =usersRDD.map(_.split("::")).map(p=>User(p(0).toLong,p(1).trim,p(2).toInt,p(3),p(4)))
userRDD: org.apache.spark.rdd.RDD[User] = MapPartitionsRDD[] at map at <console>:
(5) Convert the RDD to a DataFrame with RDD.toDF
scala> val userDF=userRDD.toDF
userDF: org.apache.spark.sql.DataFrame = [userID: bigint, gender: string ... more fields]
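toDF here relies on import spark.implicits._ and infers the column names from the case class. As a hedged aside (the variable names are mine), the columns can also be named explicitly, or the DataFrame built without the implicit conversion:

```scala
// Name the columns explicitly while converting
val userDF2 = userRDD.toDF("userID", "gender", "age", "occupation", "zipcode")

// Equivalent construction without relying on the implicit toDF conversion
val userDF3 = spark.createDataFrame(userRDD)
```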
(6) List all DataFrame methods
Type userDF. and press the Tab key to see all DataFrame methods.
scala> userDF.
agg cube hint randomSplitAsList take
alias describe inputFiles rdd takeAsList
apply distinct intersect reduce toDF
as drop isLocal registerTempTable toJSON
cache dropDuplicates isStreaming repartition toJavaRDD
checkpoint dtypes javaRDD rollup toLocalIterator
coalesce except join sample toString
col explain joinWith schema transform
collect explode limit select union
collectAsList filter map selectExpr unionAll
columns first mapPartitions show unpersist
count flatMap na sort where
createGlobalTempView foreach orderBy sortWithinPartitions withColumn
createOrReplaceGlobalTempView foreachPartition persist sparkSession withColumnRenamed
createOrReplaceTempView groupBy printSchema sqlContext withWatermark
createTempView groupByKey queryExecution stat write
crossJoin head randomSplit storageLevel writeStream

scala>
(7) Print the DataFrame schema
scala> userDF.printSchema
root
|-- userID: long (nullable = false)
|-- gender: string (nullable = true)
|-- age: integer (nullable = false)
|-- occupation: string (nullable = true)
|-- zipcode: string (nullable = true)
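The schema above was inferred from the User case class. As a hedged alternative sketch (the field names simply mirror the case class), a schema can also be declared explicitly with StructType and combined with an RDD of Row via createDataFrame:

```scala
import org.apache.spark.sql.Row
import org.apache.spark.sql.types._

val userSchema = StructType(Seq(
  StructField("userID", LongType, nullable = false),
  StructField("gender", StringType, nullable = true),
  StructField("age", IntegerType, nullable = false),
  StructField("occupation", StringType, nullable = true),
  StructField("zipcode", StringType, nullable = true)
))

// Build an RDD[Row] matching that schema, then create the DataFrame explicitly
val rowRDD = usersRDD.map(_.split("::")).map(p => Row(p(0).toLong, p(1), p(2).toInt, p(3), p(4)))
val userDFExplicit = spark.createDataFrame(rowRDD, userSchema)
```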
(8) Other DataFrame methods
scala> userDF.first
res5: org.apache.spark.sql.Row = [,F,,,]

scala> userDF.take()
res6: Array[org.apache.spark.sql.Row] = Array([,F,,,], [,M,,,], [,M,,,], [,M,,,], [,M,,,], [,F,,,], [,M,,,], [,M,,,], [,M,,,], [,F,,,])

scala>
(9) List the formats a DataFrame can be written to
Type userDF.write. and press the Tab key to see the formats a DataFrame can be written to.
scala> userDF.write.
bucketBy format jdbc mode options parquet save sortBy
csv insertInto json option orc partitionBy saveAsTable text

scala>
(10) Write the DataFrame to HDFS as JSON
scala> userDF.write.json("/tmp/json")
scala>
(11) Check HDFS
[root@node1 ~]# hdfs dfs -ls /tmp/json
Found items
-rw-r--r-- root supergroup -- : /tmp/json/_SUCCESS
-rw-r--r-- root supergroup -- : /tmp/json/part--6f19a241-2f72-4a06-a6bc-81706c89bf5b-c000.json
[root@node1 ~]#
(12) It can also be written to the local file system
scala> userDF.write.json("file:///tmp/json")
[root@node1 ~]# ls /tmp/json
part--66aa0658---a809-468e4fde23a5-c000.json _SUCCESS
[root@node1 ~]# tail - /tmp/json/part--66aa0658---a809-468e4fde23a5-c000.json
{"userID":,"gender":"F","age":,"occupation":"","zipcode":""}
{"userID":,"gender":"F","age":,"occupation":"","zipcode":""}
{"userID":,"gender":"F","age":,"occupation":"","zipcode":""}
{"userID":,"gender":"F","age":,"occupation":"","zipcode":""}
{"userID":,"gender":"M","age":,"occupation":"","zipcode":""}
[root@node1 ~]#
(13) List the formats Spark SQL can read
scala> val df=spark.read.
csv format jdbc json load option options orc parquet schema table text textFile

scala>
(14) Load the JSON files back into a DataFrame
scala> val df=spark.read.json("/tmp/json")
df: org.apache.spark.sql.DataFrame = [age: bigint, gender: string ... more fields]
scala> df.take()
res9: Array[org.apache.spark.sql.Row] = Array([,F,,,], [,M,,,])
scala>
(15) Write the DataFrame out again as ORC (a binary file format)
scala> df.write.orc("file:///tmp/orc")
[root@node1 ~]# ls /tmp/orc
part--09cf3025-cc71-4a76-a35d-a7cef4885be8-c000.snappy.orc _SUCCESS
[root@node1 ~]#
(16) Read all ORC files under /tmp/orc
scala> val orcDF=spark.read.orc("file:///tmp/orc")
orcDF: org.apache.spark.sql.DataFrame = [age: bigint, gender: string ... more fields]
scala> orcDF.first
res11: org.apache.spark.sql.Row = [,F,,,]
scala>
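The json and orc shortcuts used above are thin wrappers over the generic data source API. A hedged sketch of the long form (parquet is a standard built-in format name; the paths are hypothetical):

```scala
// Generic writer: choose a format, a save mode and a path
userDF.write.format("parquet").mode("overwrite").save("/tmp/parquet")

// Generic reader: choose a format and a load path
val parquetDF = spark.read.format("parquet").load("/tmp/parquet")
parquetDF.printSchema()
```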
6 select and filter
(1) select
scala> userDF.select("UserID","age").show
+------+---+
|UserID|age|
+------+---+
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
| | |
+------+---+
only showing top rows
scala> userDF.select("UserID","age").show()
+------+---+
|UserID|age|
+------+---+
| | |
| | |
+------+---+
only showing top rows
scala> userDF.selectExpr("UserID","ceil(age/10) as newAge").show()
+------+------+
|UserID|newAge|
+------+------+
| | |
| | |
+------+------+
only showing top rows
scala> userDF.select(max('age),min('age),avg('age)).show(2)
+--------+--------+------------------+
|max(age)|min(age)| avg(age)|
+--------+--------+------------------+
| | |30.639238410596025|
+--------+--------+------------------+
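If I remember correctly, spark-shell pre-imports org.apache.spark.sql.functions._ and spark.implicits._, which is why max('age) works directly above; a standalone program needs the imports explicitly. A sketch, with the equivalent $"age" column syntax:

```scala
import org.apache.spark.sql.functions._   // max, min, avg, count, ...
import spark.implicits._                  // enables the 'age and $"age" column syntax

userDF.select(max($"age"), min($"age"), avg($"age")).show()
```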
(2) filter
scala> userDF.filter(userDF("age")>).show()
+------+------+---+----------+-------+
|userID|gender|age|occupation|zipcode|
+------+------+---+----------+-------+
| | M| | | |
| | M| | | |
+------+------+---+----------+-------+
only showing top rows
scala> userDF.filter("age>30 and occupation=10").show()
+------+------+---+----------+-------+
|userID|gender|age|occupation|zipcode|
+------+------+---+----------+-------+
| | M| | | |
| | M| | | |
+------+------+---+----------+-------+
scala>
(3) Combining select and filter
scala> userDF.select("userID","age").filter("age>30").show()
+------+---+
|userID|age|
+------+---+
| | |
| | |
+------+---+
only showing top rows
scala> userDF.filter("age>30").select("userID","age").show()
+------+---+
|userID|age|
+------+---+
| | |
| | |
+------+---+
only showing top rows
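Whether filter comes before or after select, Catalyst should optimize both pipelines to essentially the same physical plan; you can check this with explain (a sketch; the printed plans will vary by environment):

```scala
userDF.select("userID", "age").filter("age > 30").explain()
userDF.filter("age > 30").select("userID", "age").explain()
```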
7 groupBy
scala> userDF.groupBy("age").count.show
+---+-----+
|age|count|
+---+-----+
| | |
| | |
| | |
| | |
| | |
| | |
| | |
+---+-----+
scala> userDF.groupBy("age").agg(count('gender),countDistinct('occupation)).show
+---+-------------+--------------------------+
|age|count(gender)|count(DISTINCT occupation)|
+---+-------------+--------------------------+
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
+---+-------------+--------------------------+
scala> userDF.groupBy("age").agg("gender"->"count","occupation"->"count").show
+---+-------------+-----------------+
|age|count(gender)|count(occupation)|
+---+-------------+-----------------+
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
| | | |
+---+-------------+-----------------+
scala>
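As a further hedged sketch (the alias names are chosen here for illustration), agg can combine several aggregate functions and the result can be sorted:

```scala
import org.apache.spark.sql.functions._

userDF.groupBy("gender")
  .agg(count("userID").as("users"), avg("age").as("avgAge"))
  .orderBy(desc("users"))
  .show()
```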
8 join
Problem: find the gender and age distribution of the viewers who watched the movie with movieID=2116.
(1) Users DataFrame
scala> userDF.printSchema
root
|-- userID: long (nullable = false)
|-- gender: string (nullable = true)
|-- age: integer (nullable = false)
|-- occupation: string (nullable = true)
|-- zipcode: string (nullable = true)

scala>
(2) Ratings DataFrame
scala> case class Rating(userID:Long,movieID:Long,Rating:Int,Timestamp:String)
defined class Rating

scala> val ratingsRDD=sc.textFile("file:///root/data/ml-1m/ratings.dat")
ratingsRDD: org.apache.spark.rdd.RDD[String] = file:///root/data/ml-1m/ratings.dat MapPartitionsRDD[65] at textFile at <console>:25

scala> val ratingRDD =ratingsRDD.map(_.split("::")).map(p=>Rating(p(0).toLong,p(1).toLong,p(2).toInt,p(3)))
ratingRDD: org.apache.spark.rdd.RDD[Rating] = MapPartitionsRDD[] at map at <console>:

scala> val ratingDF=ratingRDD.toDF
ratingDF: org.apache.spark.sql.DataFrame = [userID: bigint, movieID: bigint ... more fields]

scala> ratingDF.printSchema
root
|-- userID: long (nullable = false)
|-- movieID: long (nullable = false)
|-- Rating: integer (nullable = false)
|-- Timestamp: string (nullable = true)

scala>
(3) join
scala> val mergeredDF=ratingDF.filter("movieID=2116").join(userDF,"userID").select("gender","age").groupBy("gender","age").count
mergeredDF: org.apache.spark.sql.DataFrame = [gender: string, age: int ... more field]
scala> mergeredDF.show
+------+---+-----+
|gender|age|count|
+------+---+-----+
| M| | |
| F| | |
| M| | |
| M| | |
| F| | |
| M| | |
| F| | |
| M| | |
| F| | |
| F| | |
| M| | |
| F| | |
| F| | |
| M| | |
+------+---+-----+
scala>
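The same result can also be expressed through the SQL interface once both DataFrames are registered as temporary views; a hedged sketch (the view names are chosen here):

```scala
userDF.createOrReplaceTempView("users")
ratingDF.createOrReplaceTempView("ratings")

val mergedBySql = spark.sql(
  """SELECT u.gender, u.age, count(*) AS cnt
    |FROM ratings r JOIN users u ON r.userID = u.userID
    |WHERE r.movieID = 2116
    |GROUP BY u.gender, u.age""".stripMargin)
mergedBySql.show()
```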
9 Temporary views
scala> userDF.createOrReplaceTempView("users")
scala> val groupedUsers=spark.sql("select gender,age,count(*) as num from users group by gender, age")
groupedUsers: org.apache.spark.sql.DataFrame = [gender: string, age: int ... more field]
scala> groupedUsers.show
+------+---+----+
|gender|age| num|
+------+---+----+
| M| | |
| F| | |
| M| | |
| M| | |
| F| | |
| M| ||
| F| | |
| M| | |
| F| | |
| F| | |
| M| | |
| F| | |
| F| | |
| M| | |
+------+---+----+
scala>
Note: a temporary view exists only while the Spark application is running; when the application ends, the temporary view is destroyed.
10 Tables in Spark SQL
(1) Session-scoped temporary views
- df.createOrReplaceTempView("tableName")
- Valid only within the current Session; the view is destroyed automatically when the Session ends.
(2) Global temporary views
- df.createGlobalTempView("tableName")
- Shared by all Sessions (accessed through the global_temp database).
scala> userDF.createGlobalTempView("users")
scala> spark.sql("select * from global_temp.users").show
+------+------+---+----------+-------+
|userID|gender|age|occupation|zipcode|
+------+------+---+----------+-------+
| | F| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | F| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | F| | | |
| | F| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | F| | | |
| | M| | | |
| | F| | | |
| | M| | | |
| | M| | | |
+------+------+---+----------+-------+
only showing top rows
scala> spark.newSession().sql("select * from global_temp.users").show
+------+------+---+----------+-------+
|userID|gender|age|occupation|zipcode|
+------+------+---+----------+-------+
| | F| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | F| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | F| | | |
| | F| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | M| | | |
| | F| | | |
| | M| | | |
| | F| | | |
| | M| | | |
| | M| | | |
+------+------+---+----------+-------+
only showing top rows
scala>
(3) Persisting a DataFrame or Dataset to Hive
df.write.mode("overwrite").saveAsTable("database.tableName")
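A hedged sketch of this step, reusing the userDF from earlier; it assumes the SparkSession is built with enableHiveSupport() and that a database named testdb exists (both are assumptions, not from the original notes):

```scala
import org.apache.spark.sql.SparkSession

// The session must be Hive-enabled for saveAsTable to persist into the Hive metastore
val spark = SparkSession.builder()
  .appName("save to hive")
  .enableHiveSupport()
  .getOrCreate()

// "testdb" is a hypothetical database name
userDF.write.mode("overwrite").saveAsTable("testdb.users")
```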