Spark SQL: Writing a Custom External Data Source
1 The APIs involved
BaseRelation: represents a collection of tuples with a known schema; this is where the data source's schema is defined.
TableScan: provides a way to scan the data and produce an RDD[Row] from it.
RelationProvider: takes a map of parameters and returns a BaseRelation.
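Before the full implementation in section 2, here is a minimal sketch of how these three interfaces fit together. The package name and the SingleColumnRelation class are illustrative only; the real implementation below is more complete.

package example.minimal

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// RelationProvider: Spark instantiates this class (conventionally named DefaultSource)
// when read.format("example.minimal") is used, and asks it for a BaseRelation.
class DefaultSource extends RelationProvider {
  override def createRelation(sqlContext: SQLContext,
                              parameters: Map[String, String]): BaseRelation =
    new SingleColumnRelation(sqlContext, parameters("path"))
}

// BaseRelation declares the schema; TableScan produces the RDD[Row] holding the data.
class SingleColumnRelation(override val sqlContext: SQLContext, path: String)
  extends BaseRelation with TableScan {

  override def schema: StructType =
    StructType(StructField("line", StringType, true) :: Nil)

  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.textFile(path).map(line => Row(line))
}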
2 Code implementation
Defining the relation
package cn.zj.spark.sql.datasource

import org.apache.hadoop.fs.Path
import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}
import org.apache.spark.sql.sources.{BaseRelation, CreatableRelationProvider, RelationProvider, SchemaRelationProvider}
import org.apache.spark.sql.types.StructType

/**
 * Created by rana on 29/9/16.
 */
class DefaultSource extends RelationProvider with SchemaRelationProvider with CreatableRelationProvider {

  override def createRelation(sqlContext: SQLContext, parameters: Map[String, String]): BaseRelation = {
    createRelation(sqlContext, parameters, null)
  }

  override def createRelation(sqlContext: SQLContext, parameters: Map[String, String], schema: StructType): BaseRelation = {
    val path = parameters.get("path")
    path match {
      case Some(p) => new CustomDatasourceRelation(sqlContext, p, schema)
      case _ => throw new IllegalArgumentException("Path is required for custom-datasource format!!")
    }
  }

  override def createRelation(sqlContext: SQLContext, mode: SaveMode, parameters: Map[String, String],
                              data: DataFrame): BaseRelation = {
    val path = parameters.getOrElse("path", "./output/") // can throw an exception/error, it's just for this tutorial
    val fsPath = new Path(path)
    val fs = fsPath.getFileSystem(sqlContext.sparkContext.hadoopConfiguration)

    mode match {
      case SaveMode.Append => sys.error("Append mode is not supported by " + this.getClass.getCanonicalName)
      case SaveMode.Overwrite => fs.delete(fsPath, true)
      case SaveMode.ErrorIfExists => sys.error("Given path: " + path + " already exists!!")
      case SaveMode.Ignore => sys.exit()
    }

    val formatName = parameters.getOrElse("format", "customFormat")
    formatName match {
      case "customFormat" => saveAsCustomFormat(data, path, mode)
      case "json" => saveAsJson(data, path, mode)
      case _ => throw new IllegalArgumentException(formatName + " is not supported!!!")
    }
    createRelation(sqlContext, parameters, data.schema)
  }

  private def saveAsJson(data: DataFrame, path: String, mode: SaveMode): Unit = {
    /**
     * Here, I am using the DataFrame API for storing the data as JSON.
     * You can have your own APIs and ways of saving!!
     */
    data.write.mode(mode).json(path)
  }

  private def saveAsCustomFormat(data: DataFrame, path: String, mode: SaveMode): Unit = {
    /**
     * Here, I am going to save this as a simple text file whose values are separated by "|".
     * But you can store it any way you like, without restriction.
     */
    val customFormatRDD = data.rdd.map(row => {
      row.toSeq.map(value => value.toString).mkString("|")
    })
    customFormatRDD.saveAsTextFile(path)
  }
}
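For reference, the CreatableRelationProvider.createRelation overload above is what Spark calls on the write path. A usage sketch, assuming df is an existing DataFrame and the output path is just an example:

import org.apache.spark.sql.SaveMode

// Any .option(...) set here arrives in createRelation's `parameters` map,
// so "format" selects between the "customFormat" and "json" branches above.
df.write
  .format("cn.zj.spark.sql.datasource")
  .option("format", "json")
  .mode(SaveMode.Overwrite)
  .save("/tmp/custom-datasource-output")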
Defining the schema and reading the data
package cn.zj.spark.sql.datasource

import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources._
import org.apache.spark.sql.types._

/**
 * Created by rana on 29/9/16.
 */
class CustomDatasourceRelation(override val sqlContext: SQLContext, path: String, userSchema: StructType)
  extends BaseRelation with TableScan with PrunedScan with PrunedFilteredScan with Serializable {

  override def schema: StructType = {
    if (userSchema != null) {
      userSchema
    } else {
      StructType(
        StructField("id", IntegerType, false) ::
        StructField("name", StringType, true) ::
        StructField("gender", StringType, true) ::
        StructField("salary", LongType, true) ::
        StructField("expenses", LongType, true) :: Nil
      )
    }
  }

  override def buildScan(): RDD[Row] = {
    println("TableScan: buildScan called...")

    val schemaFields = schema.fields
    // Reading the file's content
    val rdd = sqlContext.sparkContext.wholeTextFiles(path).map(f => f._2)

    val rows = rdd.map(fileContent => {
      val lines = fileContent.split("\n")
      val data = lines.map(line => line.split(",").map(word => word.trim).toSeq)
      val tmp = data.map(words => words.zipWithIndex.map {
        case (value, index) =>
          val colName = schemaFields(index).name
          Util.castTo(
            if (colName.equalsIgnoreCase("gender")) {
              if (value.toInt == 1) "Male" else "Female"
            } else value,
            schemaFields(index).dataType)
      })
      tmp.map(s => Row.fromSeq(s))
    })

    rows.flatMap(e => e)
  }

  override def buildScan(requiredColumns: Array[String]): RDD[Row] = {
    println("PrunedScan: buildScan called...")

    val schemaFields = schema.fields
    // Reading the file's content
    val rdd = sqlContext.sparkContext.wholeTextFiles(path).map(f => f._2)

    val rows = rdd.map(fileContent => {
      val lines = fileContent.split("\n")
      val data = lines.map(line => line.split(",").map(word => word.trim).toSeq)
      val tmp = data.map(words => words.zipWithIndex.map {
        case (value, index) =>
          val colName = schemaFields(index).name
          val castedValue = Util.castTo(
            if (colName.equalsIgnoreCase("gender")) {
              if (value.toInt == 1) "Male" else "Female"
            } else value,
            schemaFields(index).dataType)
          if (requiredColumns.contains(colName)) Some(castedValue) else None
      })
      tmp.map(s => Row.fromSeq(s.filter(_.isDefined).map(value => value.get)))
    })

    rows.flatMap(e => e)
  }

  override def buildScan(requiredColumns: Array[String], filters: Array[Filter]): RDD[Row] = {
    println("PrunedFilterScan: buildScan called...")

    println("Filters: ")
    filters.foreach(f => println(f.toString))

    var customFilters: Map[String, List[CustomFilter]] = Map[String, List[CustomFilter]]()
    filters.foreach(f => f match {
      case EqualTo(attr, value) =>
        println("EqualTo filter is used!! " + "Attribute: " + attr + " Value: " + value)
        /**
         * Since only one filter is implemented for now, keeping a list per attribute may not seem to make
         * much sense, because an attribute can only be equal to one value at a time. But a list becomes
         * useful once there is more than one filter on the same attribute, for example:
         * attr > 5 && attr < 10.
         * You can add more filters to this code and try them out; here only the equalTo filter is
         * implemented to illustrate the concept.
         */
        customFilters = customFilters ++ Map(attr -> {
          customFilters.getOrElse(attr, List[CustomFilter]()) :+ new CustomFilter(attr, value, "equalTo")
        })
      case _ => println("filter: " + f.toString + " is not implemented by us!!")
    })

    val schemaFields = schema.fields
    // Reading the file's content
    val rdd = sqlContext.sparkContext.wholeTextFiles(path).map(f => f._2)

    val rows = rdd.map(file => {
      val lines = file.split("\n")
      val data = lines.map(line => line.split(",").map(word => word.trim).toSeq)

      val filteredData = data.map(s => if (customFilters.nonEmpty) {
        var includeInResultSet = true
        s.zipWithIndex.foreach {
          case (value, index) =>
            val attr = schemaFields(index).name
            val filtersList = customFilters.getOrElse(attr, List())
            if (filtersList.nonEmpty) {
              if (CustomFilter.applyFilters(filtersList, value, schema)) {
              } else {
                includeInResultSet = false
              }
            }
        }
        if (includeInResultSet) s else Seq()
      } else s)

      val tmp = filteredData.filter(_.nonEmpty).map(s => s.zipWithIndex.map {
        case (value, index) =>
          val colName = schemaFields(index).name
          val castedValue = Util.castTo(
            if (colName.equalsIgnoreCase("gender")) {
              if (value.toInt == 1) "Male" else "Female"
            } else value,
            schemaFields(index).dataType)
          if (requiredColumns.contains(colName)) Some(castedValue) else None
      })
      tmp.map(s => Row.fromSeq(s.filter(_.isDefined).map(value => value.get)))
    })

    rows.flatMap(e => e)
  }
}
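The relation above references a CustomFilter helper that is not shown in this post (the real version is in the repository linked at the end). A minimal sketch consistent with how it is used above could look like this:

package cn.zj.spark.sql.datasource

import org.apache.spark.sql.types.StructType

// Minimal stand-in for the helper used by PrunedFilteredScan above:
// it records one predicate (attribute, value, operator name).
class CustomFilter(val attr: String, val value: Any, val filterType: String) extends Serializable

object CustomFilter {
  // Returns true if the raw string value read from the file satisfies every filter
  // registered for the corresponding attribute. Only "equalTo" is handled here,
  // mirroring the single filter implemented in buildScan above.
  def applyFilters(filters: List[CustomFilter], value: String, schema: StructType): Boolean = {
    filters.forall { filter =>
      filter.filterType match {
        case "equalTo" => filter.value.toString.trim == value.trim
        case _         => true // unknown filter types are not applied here; Spark re-evaluates them anyway
      }
    }
  }
}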
The type-conversion helper
package cn.zj.spark.sql.datasource

import org.apache.spark.sql.types.{DataType, IntegerType, LongType, StringType}

/**
 * Created by rana on 30/9/16.
 */
object Util {
  // Casts the raw string read from the file to the Scala type matching the column's Spark SQL DataType.
  def castTo(value: String, dataType: DataType) = {
    dataType match {
      case _: IntegerType => value.toInt
      case _: LongType => value.toLong
      case _: StringType => value
    }
  }
}
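A quick illustration of what castTo produces (the values are made up). Note that the match is intentionally non-exhaustive, so a column type outside these three would throw a MatchError at runtime.

import org.apache.spark.sql.types.{IntegerType, LongType, StringType}

Util.castTo("10002", IntegerType) // returns 10002 as Int
Util.castTo("50000", LongType)    // returns 50000L as Long
Util.castTo("Male", StringType)   // returns "Male" unchanged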
3 Maven pom dependency configuration
<properties>
    <maven.compiler.source>1.8</maven.compiler.source>
    <maven.compiler.target>1.8</maven.compiler.target>
    <scala.version>2.11.8</scala.version>
    <spark.version>2.2.0</spark.version>
    <!--<hadoop.version>2.6.0-cdh5.7.0</hadoop.version>-->
    <!--<hbase.version>1.2.0-cdh5.7.0</hbase.version>-->
    <encoding>UTF-8</encoding>
</properties>

<dependencies>
    <!-- Spark Core dependency -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
    <!-- Spark SQL dependency -->
    <!-- https://mvnrepository.com/artifact/org.apache.spark/spark-sql -->
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>${spark.version}</version>
    </dependency>
</dependencies>
4 Test code and test data
package cn.zj.spark.sql.datasource

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

/**
 * Created by rana on 29/9/16.
 */
object app extends App {
  println("Application started...")
  val conf = new SparkConf().setAppName("spark-custom-datasource")
  val spark = SparkSession.builder().config(conf).master("local").getOrCreate()
  val df = spark.sqlContext.read.format("cn.zj.spark.sql.datasource").load("1229practice/data/")
  df.createOrReplaceTempView("test")
  spark.sql("select * from test where salary = 50000").show()
  println("Application Ended...")
}
Sample data
10002, Alice Heady, 0, 20000, 8000
10003, Jenny Brown, 0, 30000, 120000
10004, Bob Hayden, 1, 40000, 16000
10005, Cindy Heady, 0, 50000, 20000
10006, Doug Brown, 1, 60000, 24000
10007, Carolina Hayden, 0, 70000, 280000
Reference: http://sparkdatasourceapi.blogspot.com/2016/10/spark-data-source-api-write-custom.html
The complete code is available at git@github.com:ZhangJin1988/spark-extend-dataSource.git