APACHE SPARK 2.0 API IMPROVEMENTS: RDD, DATAFRAME, DATASET AND SQL
What’s New, What’s Changed and How to Get Started.
Are you ready for Apache Spark 2.0?
If you are just getting started with Apache Spark, the 2.0 release is the one to start with as the APIs have just gone through a major overhaul to improve ease-of-use.
If you are using an older version and want to know what has changed, this article will give you the lowdown on why you should upgrade and what the impact on your code will be.
What’s new with Apache Spark 2.0?
Let’s start with the good news, and there’s plenty.
- There are really only two programmatic APIs now: RDD and Dataset. For backwards compatibility, DataFrame still exists, but it is just a synonym for a Dataset.
- Spark SQL has been improved to support a wider range of queries, including correlated subqueries (see the sketch after this list). This was largely driven by an effort to run TPC-DS benchmarks in Spark.
- Performance is once again significantly improved thanks to advanced “whole-stage code generation” when compiling query plans.
- CSV support is now built in, based on the Databricks spark-csv project, making it a breeze to create Datasets from CSV data with little coding.
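As a quick illustration of the new subquery support, here is a minimal, self-contained sketch of a correlated EXISTS query of the kind that earlier releases could not plan (the table and column names are made up for the example):

import org.apache.spark.sql.SparkSession

object CorrelatedSubqueryDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("SubqueryDemo").getOrCreate()
    import spark.implicits._
    // hypothetical tables, registered as temp views for SQL access
    Seq((1, "engineering"), (2, "sales")).toDF("id", "name").createOrReplaceTempView("departments")
    Seq((100, 1), (101, 1)).toDF("emp_id", "dept_id").createOrReplaceTempView("employees")
    // correlated subquery: the inner query references the outer row (d.id)
    spark.sql(
      """SELECT d.name FROM departments d
        |WHERE EXISTS (SELECT 1 FROM employees e WHERE e.dept_id = d.id)""".stripMargin
    ).show()
    spark.stop()
  }
}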
Spark 2.0 is a major release, and there are some breaking changes that mean you may need to rewrite some of your code. Here are some things we ran into when updating our apache-spark-examples.
- For Scala users, SparkSession replaces SparkContext and SQLContext as the top-level context, but still provides access to SparkContext and SQLContext for backwards compatibility.
- DataFrame is now a synonym for Dataset[Row], and you can use these two types interchangeably, although we recommend using the latter (see the type-alias sketch after this list).
- Performing a map() operation on a Dataset now returns a Dataset rather than an RDD, reducing the need to keep switching between the two APIs, and improving performance.
- Some Java functional interfaces, such as FlatMapFunction, have been updated to return Iterator<T> rather than Iterable<T>.
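On the DataFrame point: in the Scala API, DataFrame is literally a type alias (the org.apache.spark.sql package object declares type DataFrame = Dataset[Row]), so the two assign to each other with no conversion. A minimal sketch:

import org.apache.spark.sql.{DataFrame, Dataset, Row, SparkSession}

object DataFrameAliasDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("AliasDemo").getOrCreate()
    val df: DataFrame = spark.range(3).toDF("n") // a DataFrame...
    val ds: Dataset[Row] = df                    // ...is already a Dataset[Row]
    ds.show()
    spark.stop()
  }
}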
Get help upgrading to Apache Spark 2.0 or making the transition from Java to Scala. Contact Us!
RDD vs. Dataset in Spark 2.0
Both the RDD API and the Dataset API represent data sets of a specific class. For instance, you can create an RDD[Person] as well as a Dataset[Person] so both can provide compile-time type-safety. Both can also be used with the generic Row structure provided in Spark for cases where classes might not exist that represent the data being manipulated, such as when reading CSV files.
RDDs can be used with any Java or Scala class and operate by manipulating those objects directly with all of the associated costs of object creation, serialization and garbage collection.
Datasets are limited to classes that implement the Scala Product trait, such as case classes. There is a very good reason for this limitation: Datasets store data in an optimized binary format, often in off-heap memory, to avoid the costs of deserialization and garbage collection. Even though it feels like you are coding against regular objects, Spark is really generating its own optimized byte-code for accessing the data directly. (A sketch of how case classes pick up their encoders follows the examples below.)
RDD
// raw object manipulation
val rdd: RDD[Person] = …
val rdd2: RDD[String] = rdd.map(person => person.lastName)
Dataset
// optimized direct access to off-heap memory without deserializing objects
val ds: Dataset[Person] = …
val ds2: Dataset[String] = ds.map(person => person.lastName)
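To make the Product-trait requirement concrete, here is a minimal sketch (the class and field names are made up): a case class picks up its Encoder automatically from import spark.implicits._, and that encoder is what lets Spark lay the data out in its optimized binary format.

import org.apache.spark.sql.{Dataset, SparkSession}

case class Person(id: Int, firstName: String, lastName: String) // a Product, so Spark can encode it

object EncoderDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.master("local[*]").appName("EncoderDemo").getOrCreate()
    import spark.implicits._ // supplies Encoder[Person] for the case class
    val ds: Dataset[Person] = spark.createDataset(Seq(Person(1, "Joe", "Bloggs")))
    ds.printSchema() // the schema is derived from the case class fields
    spark.stop()
  }
}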
Getting Started with Scala
Here are some code samples to help you get started fast with Apache Spark 2.0 and Scala.
Creating SparkSession
SparkSession is now the starting point for a Spark driver program, instead of creating a SparkContext and a SQLContext.
val spark = SparkSession.builder
.master("local[*]")
.appName("Example")
.getOrCreate()
// accessing legacy SparkContext and SQLContext
spark.sparkContext
spark.sqlContext
Creating a Dataset from a collection
SparkSession provides a createDataset method that accepts a collection (an implicit Encoder, supplied by import spark.implicits._, is needed for the element type).
val ds: Dataset[String] = spark.createDataset(List("one", "two", "three"))
Converting an RDD to a Dataset
SparkSession provides a createDataset method for converting an RDD to a Dataset. This only works if you import spark.implicits._ (where spark is the name of the SparkSession variable).
// always import implicits so that Spark can infer types when creating Datasets
import spark.implicits._
val rdd: RDD[Person] = ??? // assume this exists
val dataset: Dataset[Person] = spark.createDataset[Person](rdd)
Converting a DataFrame to a Dataset
A DataFrame (which is really a Dataset[Row]) can be converted to a Dataset of a specific class by performing a map() operation.
// read a text file into a DataFrame a.k.a. Dataset[Row]
val df: Dataset[Row] = spark.read.text("people.txt")
// use map() to convert to a Dataset of a specific class
val ds: Dataset[Person] = df.map(row => parsePerson(row))
def parsePerson(row: Row): Person = ??? // fill in parsing logic here
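The parsing logic is left as a stub above; one hypothetical implementation, assuming a case class Person(id: Int, firstName: String, lastName: String) and input lines of the form "1,Joe,Bloggs" (spark.read.text places each line of the file in a single string column):

import org.apache.spark.sql.Row

// hypothetical parser for comma-separated lines; not part of the Spark API
def parsePerson(row: Row): Person = {
  val fields = row.getString(0).split(",")
  Person(fields(0).trim.toInt, fields(1).trim, fields(2).trim)
}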
Reading a CSV directly as a Dataset
The built-in CSV support makes it easy to read a CSV and return a Dataset of a specific case class. This only works if the CSV contains a header row and the field names match the case class.
val ds: Dataset[Person] = spark.read
.option("header","true")
.csv("people.csv")
.as[Person]
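One caveat worth noting: without schema inference, every CSV column arrives as a string, so either declare the case class with String fields or enable inference so typed fields line up. A sketch with made-up column names:

// hypothetical people.csv header: id,firstName,lastName
case class Person(id: Int, firstName: String, lastName: String)

// inferSchema makes Spark read id as an integer so it matches the case class;
// without it, id would need to be declared as a String
val ds: Dataset[Person] = spark.read
  .option("header", "true")
  .option("inferSchema", "true")
  .csv("people.csv")
  .as[Person]

As with the other examples, import spark.implicits._ must be in scope for the Person encoder to resolve.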
Getting Started with Java
Here are some code samples to help you get started fast with Spark 2.0 and Java.
Creating SparkSession
SparkSession spark = SparkSession.builder()
.master("local[*]")
.appName("Example")
.getOrCreate();
// Java code still requires the JavaSparkContext in places
JavaSparkContext sc = new JavaSparkContext(spark.sparkContext());
Creating a Dataset from a collection
SparkSession provides a createDataset method that accepts a collection.
Dataset<Person> ds = spark.createDataset(
Collections.singletonList(new Person(1, "Joe", "Bloggs")),
Encoders.bean(Person.class)
);
Converting an RDD to a Dataset
SparkSession provides a createDataset method for converting an RDD to a Dataset.
Dataset<Person> ds = spark.createDataset(
javaRDD.rdd(), // convert a JavaRDD to an RDD
Encoders.bean(Person.class)
);
Converting a DataFrame to a Dataset
A DataFrame (which is really a Dataset&lt;Row&gt;) can be converted to a Dataset of a specific class by performing a map() operation.
Dataset<Person> ds = df.map(new MapFunction<Row, Person>() {
@Override
public Person call(Row value) throws Exception {
return new Person(Integer.parseInt(value.getString(0)),
value.getString(1),
value.getString(2));
}
}, Encoders.bean(Person.class));
Reading a CSV directly as a Dataset
The built-in CSV support makes it easy to read a CSV and return a Dataset of a specific Java bean. This only works if the CSV contains a header row and the field names match the bean properties.
Dataset<Person> ds = spark.read()
.option("header", "true")
.csv("testdata/people.csv")
.as(Encoders.bean(Person.class));
Spark+Scala beats Spark+Java
Using Apache Spark with Java is harder than using it with Scala. We spent significantly longer upgrading our Java examples than our Scala examples, and ran into some confusing runtime errors that were hard to track down (for example, we hit a runtime error in Spark’s code generation because one of our Java classes was not declared public).
Also, we weren’t always able to use concise lambda functions even though we are using Java 8, and had to revert to anonymous inner classes with verbose (and confusing) syntax.
Conclusion
Spark 2.0 represents a significant milestone in the evolution of this open source project and provides cleaner APIs and improved performance compared to the 1.6 release.
The Scala API is a joy to code with, but the Java API can often be frustrating. It’s worth biting the bullet and switching to Scala.
Full source code for a number of examples is available from our GitHub repo.
Get help upgrading to Spark 2.0 or making the transition from Java to Scala. Contact Us!