See Apache Spark 2.0 API Improvements: RDD, DataFrame, DataSet and SQL here.

Apache Spark is evolving at a rapid pace, including changes and additions to core APIs. One of the most disruptive areas of change is around the representation of data sets. Spark 1.0 used the RDD API, but in the past twelve months two new, alternative and incompatible APIs have been introduced. Spark 1.3 introduced the radically different DataFrame API, and the recently released Spark 1.6 introduces a preview of the new Dataset API.

Many existing Spark developers will be wondering whether to jump from RDDs directly to the Dataset API, or whether to first move to the DataFrame API. Newcomers to Spark will have to choose which API to start learning with.

This article provides an overview of each of these APIs, and outlines the strengths and weaknesses of each one. A companion GitHub repository provides working examples that are a good starting point for experimentation with the approaches outlined in this article.


RDD API

The RDD (Resilient Distributed Dataset) API has been in Spark since the 1.0 release. This interface and its Java equivalent, JavaRDD, will be familiar to any developers who have worked through the standard Spark tutorials. From a developer’s perspective, an RDD is simply a set of Java or Scala objects representing data.

The RDD API provides many transformation methods, such as map(), filter(), and reduce() for performing computations on the data. Each of these methods results in a new RDD representing the transformed data. However, these methods are just defining the operations to be performed and the transformations are not performed until an action method is called. Examples of action methods are collect() and saveAsObjectFile().

Example of RDD transformations and actions

Scala:

 
rdd.filter(_.age < 21)              // transformation
   .map(_.last)                     // transformation
   .saveAsObjectFile("under21.bin") // action

Java:

 
rdd.filter(p -> p.getAge() < 21)     // transformation
   .map(p -> p.getLast())            // transformation
   .saveAsObjectFile("under21.bin"); // action

The main advantage of RDDs is that they are simple and well understood because they deal with concrete classes, providing a familiar object-oriented programming style with compile-time type-safety. For example, given an RDD containing instances of Person we can filter by age by referencing the age attribute of each Person object:

Example: Filter by attribute with RDD

Scala:

 
rdd.filter(_.age > 21)

Java:

 
rdd.filter(person -> person.getAge() > 21)

The main disadvantage of RDDs is that they don’t perform particularly well. Whenever Spark needs to distribute the data within the cluster, or write the data to disk, it does so using Java serialization by default (although it is possible to use Kryo as a faster alternative in most cases). The overhead of serializing individual Java and Scala objects is expensive and requires sending both data and structure between nodes (each serialized object contains the class structure as well as the values). There is also the overhead of garbage collection that results from creating and destroying individual objects.
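
Kryo is configured through Spark’s configuration rather than through the RDD API itself. The following is a minimal sketch; the application name is illustrative and Person stands for whatever classes your RDDs actually contain:

import org.apache.spark.SparkConf

val conf = new SparkConf()
  .setAppName("rdd-serialization-example")                               // illustrative name
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .registerKryoClasses(Array(classOf[Person]))                           // classes Kryo will serialize

Registering the classes up front allows Kryo to avoid writing full class names with every object, which is where much of its space saving over Java serialization comes from.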

DataFrame API

Spark 1.3 introduced a new DataFrame API as part of the Project Tungsten initiative which seeks to improve the performance and scalability of Spark. The DataFrame API introduces the concept of a schema to describe the data, allowing Spark to manage the schema and only pass data between nodes, in a much more efficient way than using Java serialization. There are also advantages when performing computations in a single process as Spark can serialize the data into off-heap storage in a binary format and then perform many transformations directly on this off-heap memory, avoiding the garbage-collection costs associated with constructing individual objects for each row in the data set. Because Spark understands the schema, there is no need to use Java serialization to encode the data.
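
As a rough illustration of what such a schema looks like, the snippet below builds a DataFrame from an explicit StructType; the column names and sample rows are hypothetical, and sc is assumed to be an existing SparkContext:

import org.apache.spark.sql.types.{IntegerType, StringType, StructField, StructType}
import org.apache.spark.sql.{Row, SQLContext}

val sqlContext = new SQLContext(sc)

// Spark keeps this schema itself and only ships the column values between nodes.
val schema = StructType(Seq(
  StructField("last", StringType, nullable = true),
  StructField("age",  IntegerType, nullable = false)))

val rows = sc.parallelize(Seq(Row("Doe", 25), Row("Roe", 19)))
val df   = sqlContext.createDataFrame(rows, schema)

In most cases the schema is inferred for you, from a data source or from a case class as discussed below, but it is this schema, rather than Java serialization, that tells Spark how the data is laid out.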

The DataFrame API is radically different from the RDD API because it is an API for building a relational query plan that Spark’s Catalyst optimizer can then execute. The API is natural for developers who are familiar with building query plans, but not natural for the majority of developers. The query plan can be built from SQL expressions in strings or from a more functional approach using a fluent-style API.

Example: Filter by attribute with DataFrame

Note that these examples have the same syntax in both Java and Scala.

SQL style:

 
df.filter("age > 21");

Expression builder style:

 
df.filter(df.col("age").gt(21));

Because the code is referring to data attributes by name, it is not possible for the compiler to catch any errors. If attribute names are incorrect then the error will only be detected at runtime, when the query plan is created.
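
As a small illustration (assuming the df used above has an age column but no agee column), a mistyped attribute name compiles without complaint and only fails once Spark tries to resolve it against the schema:

df.filter("agee > 21")            // compiles, but throws an AnalysisException at runtime
df.filter(df.col("agee").gt(21))  // the expression builder style fails at runtime in the same way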

Another downside with the DataFrame API is that it is very Scala-centric and while it does support Java, the support is limited. For example, when creating a DataFrame from an existing RDD of Java objects, Spark’s Catalyst optimizer cannot infer the schema and assumes that any objects in the DataFrame implement the scala.Product interface. Scala case classes work out of the box because they implement this interface.
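
For example, a hypothetical Person case class needs no extra work because the schema can be inferred from its fields (sc is assumed to be an existing SparkContext):

import org.apache.spark.sql.SQLContext

case class Person(first: String, last: String, age: Int)   // a case class implements scala.Product

val sqlContext = new SQLContext(sc)
import sqlContext.implicits._                               // brings rdd.toDF() into scope

val peopleRDD = sc.parallelize(Seq(Person("Jane", "Doe", 25), Person("John", "Roe", 19)))
val peopleDF  = peopleRDD.toDF()                            // schema (first, last, age) inferred

The case class gives Spark both the attribute names and their types, which is exactly the schema information discussed above.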

Dataset API

The Dataset API, released as an API preview in Spark 1.6, aims to provide the best of both worlds: the familiar object-oriented programming style and compile-time type-safety of the RDD API, but with the performance benefits of the Catalyst query optimizer. Datasets also use the same efficient off-heap storage mechanism as the DataFrame API.

When it comes to serializing data, the Dataset API has the concept of encoders which translate between JVM representations (objects) and Spark’s internal binary format. Spark has built-in encoders which are very advanced in that they generate byte code to interact with off-heap data and provide on-demand access to individual attributes without having to de-serialize an entire object. Spark does not yet provide an API for implementing custom encoders, but that is planned for a future release.

Additionally, the Dataset API is designed to work equally well with both Java and Scala. When working with Java objects, it is important that they are fully bean-compliant. In writing the examples to accompany this article, we ran into errors when trying to create a Dataset in Java from a list of Java objects that were not fully bean-compliant.


Example: Creating Dataset from a list of objects

Scala

 
val sc = new SparkContext(conf)
val sqlContext = new SQLContext(sc)
import sqlContext.implicits._
val sampleData: Seq[ScalaPerson] = ScalaData.sampleData()
val dataset = sqlContext.createDataset(sampleData)

Java

 
JavaSparkContext sc = new JavaSparkContext(sparkConf);
SQLContext sqlContext = new SQLContext(sc);
List<JavaPerson> data = JavaData.sampleData();
Dataset<JavaPerson> dataset = sqlContext.createDataset(data, Encoders.bean(JavaPerson.class));

Transformations with the Dataset API look very much like the RDD API and deal with the Person class rather than an abstraction of a row.

Example: Filter by attribute with Dataset

Scala

 
dataset.filter(_.age < 21)

Java

 
dataset.filter(person -> person.getAge() < 21);

Despite the similarity with RDD code, this code is building a query plan, rather than dealing with individual objects, and if age is the only attribute accessed, then the rest of the object’s data will not be read from off-heap storage.
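
One way to see this (a sketch, assuming the Spark 1.6 preview API and the dataset created above) is to view the Dataset as a DataFrame and print the plan that Catalyst will execute:

dataset.filter(_.age < 21)
       .toDF()     // a Dataset can always be viewed as an untyped DataFrame
       .explain()  // prints the physical plan instead of running the query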


Conclusions

If you are developing primarily in Java then it is worth considering a move to Scala before adopting the DataFrame or Dataset APIs. Although there is an effort to support Java, Spark is written in Scala and the code often makes assumptions that make it hard (but not impossible) to deal with Java objects.

If you are developing in Scala and need your code to go into production with Spark 1.6.0 then the DataFrame API is clearly the most stable option available and currently offers the best performance.

However, the Dataset API preview looks very promising and provides a more natural way to code. Given the rapid evolution of Spark, it is likely that this API will mature very quickly through 2016 and become the de facto API for developing new applications.

See Apache Spark 2.0 API Improvements: RDD, DataFrame, DataSet and SQL here.
