Conversions:

RDD, DataFrame, and Dataset have a great deal in common, but each suits different scenarios, so you frequently need to convert among the three.

DataFrame/Dataset to RDD:

This conversion is trivial:

val rdd1 = testDF.rdd
val rdd2 = testDS.rdd
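
One difference worth knowing (a small sketch, assuming the testDF and testDS from above, with testDS typed as Dataset[Coltest] as defined further below): the RDD recovered from a DataFrame holds generic Row objects, while the RDD recovered from a Dataset keeps the element type.

rdd1.map(row => row.getAs[String]("col1")).collect() // rdd1 is an RDD[Row]
rdd2.map(c => c.col1).collect()                      // rdd2 is an RDD[Coltest]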
RDD to DataFrame:

import spark.implicits._
val testDF = rdd.map { line =>
  (line._1, line._2)
}.toDF("col1", "col2")

The usual pattern is to pack each row's values into a tuple and then name the columns in toDF.
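
For completeness, here is a hypothetical source for the rdd assumed above (any RDD of pairs would do):

val rdd = spark.sparkContext.parallelize(Seq(("apple", 1), ("pear", 2)))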

RDD to Dataset:

import spark.implicits._
case class Coltest(col1: String, col2: Int) extends Serializable // define the field names and types
val testDS = rdd.map { line =>
  Coltest(line._1, line._2)
}.toDS

Notice that the case class defining each row's type already carries the field names and types, so afterwards you only need to fill the case class with values.

Dataset to DataFrame:

This one is also simple, because it merely wraps the case class into Row:

import spark.implicits._
val testDF = testDS.toDF

DataFrame to Dataset:

import spark.implicits._
case class Coltest(col1: String, col2: Int) extends Serializable // define the field names and types
val testDS = testDF.as[Coltest]

This approach supplies each column's type and then converts to a Dataset with the as method; it is extremely convenient when you have a DataFrame but need to process individual fields.
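
For example (a sketch, assuming the testDS just created): with a typed Dataset you can access fields directly in lambdas, with compile-time checking:

val upperCols = testDS.map(c => c.col1.toUpperCase) // c is a Coltest; the result is a Dataset[String]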
Special note: when using these operations, be sure to add import spark.implicits._; otherwise toDF and toDS are unavailable.
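
A minimal self-contained sketch of that note (all names here are illustrative): the import must refer to a concrete SparkSession instance, so it can only appear after the session has been created.

import org.apache.spark.sql.SparkSession

object ImplicitsDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("implicits-demo")
      .master("local[2]")
      .getOrCreate()

    import spark.implicits._ // import from the instance, not from the SparkSession class

    val ds = Seq(("a", 1), ("b", 2)).toDS() // toDS compiles only with the import in scope
    ds.show()

    spark.stop()
  }
}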

package dataframe

import org.apache.spark.sql.{DataFrame, Dataset, SparkSession}

//
// Explore interoperability between DataFrame and Dataset. Note that Dataset
// is covered in much greater detail in the 'dataset' directory.
//
object DatasetConversion {

  case class Cust(id: Integer, name: String, sales: Double, discount: Double, state: String)

  case class StateSales(state: String, sales: Double)

  def main(args: Array[String]) {
    val spark =
      SparkSession.builder()
        .appName("DataFrame-DatasetConversion")
        .master("local[4]")
        .getOrCreate()

    import spark.implicits._

    // create a sequence of case class objects
    // (we defined the case class above)
    val custs = Seq(
      Cust(1, "Widget Co", 120000.00, 0.00, "AZ"),
      Cust(2, "Acme Widgets", 410500.00, 500.00, "CA"),
      Cust(3, "Widgetry", 410500.00, 200.00, "CA"),
      Cust(4, "Widgets R Us", 410500.00, 0.0, "CA"),
      Cust(5, "Ye Olde Widgete", 500.00, 0.0, "MA")
    )

    // Create the DataFrame without passing through an RDD
    val customerDF: DataFrame = spark.createDataFrame(custs)
    //
    // println("*** DataFrame schema")
    //
    // customerDF.printSchema()
    //
    // println("*** DataFrame contents")
    //
    // customerDF.show()

    // +---+---------------+--------+--------+-----+
    // | id|           name|   sales|discount|state|
    // +---+---------------+--------+--------+-----+
    // |  1|      Widget Co|120000.0|     0.0|   AZ|
    // |  2|   Acme Widgets|410500.0|   500.0|   CA|
    // |  3|       Widgetry|410500.0|   200.0|   CA|
    // |  4|   Widgets R Us|410500.0|     0.0|   CA|
    // |  5|Ye Olde Widgete|   500.0|     0.0|   MA|
    // +---+---------------+--------+--------+-----+

    //
    // println("*** Select and filter the DataFrame")
    //
    val smallerDF =
      customerDF.select("sales", "state").filter($"state".equalTo("CA"))
    //
    // smallerDF.show()

    // +--------+-----+
    // |   sales|state|
    // +--------+-----+
    // |410500.0|   CA|
    // |410500.0|   CA|
    // |410500.0|   CA|
    // +--------+-----+

    ///////////////////////////////////////////////////////////////////////////

    // Convert it to a Dataset by specifying the type of the rows -- use a case
    // class because we have one and it's most convenient to work with. Notice
    // you have to choose a case class that matches the remaining columns.
    // BUT also notice that the columns keep their order from the DataFrame --
    // later you'll see a Dataset[StateSales] of the same type where the
    // columns have the opposite order, because of the way it was created.

    val customerDS: Dataset[StateSales] = smallerDF.as[StateSales]
    //
    // println("*** Dataset schema")
    //
    // customerDS.printSchema()
    //
    // println("*** Dataset contents")
    //
    // customerDS.show()


    // Select and other operations can be performed directly on a Dataset too,
    // but be careful to read the documentation for Dataset -- there are
    // "typed transformations", which produce a Dataset, and
    // "untyped transformations", which produce a DataFrame. In particular,
    // you need to project using a TypedColumn to get a Dataset.

    // val verySmallDS: Dataset[Double] = customerDS.select($"sales".as[Double])
    //
    // println("*** Dataset after projecting one column")
    //
    // verySmallDS.show()

    // +--------+
    // |   sales|
    // +--------+
    // |410500.0|
    // |410500.0|
    // |410500.0|
    // +--------+
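
    // An added illustration of the typed/untyped contrast described above
    // (assuming the customerDS defined earlier): selecting with a plain
    // Column is an untyped transformation and yields a DataFrame, while a
    // TypedColumn keeps a typed Dataset.
    val untypedSelect: DataFrame = customerDS.select($"sales")
    val typedSelect: Dataset[Double] = customerDS.select($"sales".as[Double])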


    // If you select multiple columns on a Dataset you end up with a Dataset
    // of tuple type, but the columns keep their names.
    val tupleDS: Dataset[(String, Double)] =
      customerDS.select($"state".as[String], $"sales".as[Double])
    //
    // println("*** Dataset after projecting two columns -- tuple version")
    //
    // tupleDS.show()

    // +-----+--------+
    // |state|   sales|
    // +-----+--------+
    // |   CA|410500.0|
    // |   CA|410500.0|
    // |   CA|410500.0|
    // +-----+--------+
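
    // Because the tuple Dataset is still typed, it can be processed with
    // ordinary Scala lambdas (an added sketch, not part of the original):
    val salesOnly: Dataset[Double] = tupleDS.map { case (_, sales) => sales }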


    // You can also cast back to a Dataset of a case class. Notice this time
    // the columns are in the opposite order from the last Dataset[StateSales].
    // val betterDS: Dataset[StateSales] = tupleDS.as[StateSales]
    //
    // println("*** Dataset after projecting two columns -- case class version")
    //
    // betterDS.show()

    // +-----+--------+
    // |state|   sales|
    // +-----+--------+
    // |   CA|410500.0|
    // |   CA|410500.0|
    // |   CA|410500.0|
    // +-----+--------+
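
    // A hedged note on why the betterDS cast above works at all: as[] matches
    // columns to case class fields by name rather than by position, so the
    // (state, sales) columns of tupleDS line up with StateSales regardless of
    // column order.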


    // Converting back to a DataFrame without making other changes is really easy
    // val backToDataFrame: DataFrame = tupleDS.toDF()
    //
    // println("*** This time as a DataFrame")
    //
    // backToDataFrame.show()

    // +-----+--------+
    // |state|   sales|
    // +-----+--------+
    // |   CA|410500.0|
    // |   CA|410500.0|
    // |   CA|410500.0|
    // +-----+--------+


    // While converting back to a DataFrame you can rename the columns
    val renamedDataFrame: DataFrame = tupleDS.toDF("MyState", "MySales")

    println("*** Again as a DataFrame but with renamed columns")

    renamedDataFrame.show()

    // +-------+--------+
    // |MyState| MySales|
    // +-------+--------+
    // |     CA|410500.0|
    // |     CA|410500.0|
    // |     CA|410500.0|
    // +-------+--------+
  }
}

