Introducing Apache Spark Datasets (English-Chinese Bilingual)
Title
Introducing Apache Spark Datasets
Authors
Michael Armbrust, Wenchen Fan, Reynold Xin and Matei Zaharia
Body
Developers have always loved Apache Spark for providing APIs that are simple yet powerful, a combination of traits that makes complex analysis possible with minimal programmer effort. At Databricks, we have continued to push Spark’s usability and performance envelope through the introduction of DataFrames and Spark SQL. These are high-level APIs for working with structured data (e.g. database tables, JSON files), which let Spark automatically optimize both storage and computation. Behind these APIs, the Catalyst optimizer and Tungsten execution engine optimize applications in ways that were not possible with Spark’s object-oriented (RDD) API, such as operating on data in a raw binary form.
Today we’re excited to announce Spark Datasets, an extension of the DataFrame API that provides a type-safe, object-oriented programming interface. Spark 1.6 includes an API preview of Datasets, and they will be a development focus for the next several versions of Spark. Like DataFrames, Datasets take advantage of Spark’s Catalyst optimizer by exposing expressions and data fields to a query planner. Datasets also leverage Tungsten’s fast in-memory encoding. Datasets extend these benefits with compile-time type safety – meaning production applications can be checked for errors before they are run. They also allow direct operations over user-defined classes.
In the long run, we expect Datasets to become a powerful way to write more efficient Spark applications. We have designed them to work alongside the existing RDD API, but improve efficiency when data can be represented in a structured form. Spark 1.6 offers the first glimpse at Datasets, and we expect to improve them in future releases.
1. Working with Datasets
A Dataset is a strongly-typed, immutable collection of objects that are mapped to a relational schema. At the core of the Dataset API is a new concept called an encoder, which is responsible for converting between JVM objects and a tabular representation. The tabular representation is stored using Spark’s internal Tungsten binary format, allowing for operations on serialized data and improved memory utilization. Spark 1.6 comes with support for automatically generating encoders for a wide variety of types, including primitive types (e.g. String, Integer, Long), Scala case classes, and Java Beans.
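To make this concrete, here is a minimal sketch of creating a Dataset from case classes with an automatically generated encoder (the Person class and sample values are illustrative, not from the original post):

import org.apache.spark.sql.SQLContext

case class Person(name: String, age: Long)

val sqlContext = new SQLContext(sc) // assumes an existing SparkContext `sc`
import sqlContext.implicits._       // brings case-class encoders into scope

val people = Seq(Person("Ada", 36), Person("Grace", 45)).toDS()
people.filter(_.age > 40).show()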
Users of RDDs will find the Dataset API quite familiar, as it provides many of the same functional transformations (e.g. map, flatMap, filter). Consider the following code, which reads lines of a text file and splits them into words:
// RDDs
val lines = sc.textFile("/wikipedia")
val words = lines
  .flatMap(_.split(" "))
  .filter(_ != "")

// Datasets
val lines = sqlContext.read.text("/wikipedia").as[String]
val words = lines
  .flatMap(_.split(" "))
  .filter(_ != "")
Both APIs make it easy to express the transformation using lambda functions. The compiler and your IDE understand the types being used, and can provide helpful tips and error messages while you construct your data pipeline.
While this high-level code may look similar syntactically, with Datasets you also have access to all the power of a full relational execution engine. For example, if you now want to perform an aggregation (such as counting the number of occurrences of each word), that operation can be expressed simply and efficiently as follows:
// RDDs
val counts = words
  .groupBy(_.toLowerCase)
  .map(w => (w._1, w._2.size))

// Datasets
val counts = words
  .groupBy(_.toLowerCase)
  .count()
Since the Dataset version of word count can take advantage of the built-in aggregate count, this computation can not only be expressed with less code, but it will also execute significantly faster. In benchmarks, the Dataset implementation runs much faster than the naive RDD implementation. In contrast, getting the same performance using RDDs would require users to manually consider how to express the computation in a way that parallelizes optimally.
Another benefit of this new Dataset API is the reduction in memory usage. Since Spark understands the structure of data in Datasets, it can create a more optimal layout in memory when caching Datasets. In the following example, we compare caching several million strings in memory using Datasets as opposed to RDDs. In both cases, caching data can lead to significant performance improvements for subsequent queries. However, since Dataset encoders provide more information to Spark about the data being stored, the cached representation can be optimized to use 4.5x less space.
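As a hedged sketch of the caching comparison described above (the path reuses the earlier example; the variable names are illustrative):

// RDD: cached as deserialized Java objects on the JVM heap
val rddStrings = sc.textFile("/wikipedia").flatMap(_.split(" "))
rddStrings.cache()

// Dataset: the String encoder lets Spark cache rows in the compact Tungsten binary format
val dsStrings = sqlContext.read.text("/wikipedia").as[String].flatMap(_.split(" "))
dsStrings.cache()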
To help you get started, we’ve put together some example notebooks: Working with Classes, Word Count.
2. Lightning-fast Serialization with Encoders
Encoders are highly optimized and use runtime code generation to build custom bytecode for serialization and deserialization. As a result, they can operate significantly faster than Java or Kryo serialization.
In addition to speed, the resulting serialized size of encoded data can also be significantly smaller (up to 2x), reducing the cost of network transfers. Furthermore, the serialized data is already in the Tungsten binary format, which means that many operations can be done in-place, without needing to materialize an object at all. Spark has built-in support for automatically generating encoders for primitive types (e.g. String, Integer, Long), Scala case classes, and Java Beans. We plan to open up this functionality and allow efficient serialization of custom types in a future release.
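For comparison, Spark 1.6 also exposes a generic Kryo-based encoder. The sketch below (with a hypothetical Event class) contrasts it with the auto-generated case-class encoder provided by sqlContext.implicits._:

import org.apache.spark.sql.Encoders

case class Event(id: Long, payload: String)
import sqlContext.implicits._

// Auto-generated encoder: code-generated converters, Tungsten binary layout
val events = sqlContext.createDataset(Seq(Event(1L, "a"), Event(2L, "b")))

// Generic fallback: serializes whole objects with Kryo into a single binary field
val kryoEvents = sqlContext.createDataset(Seq(Event(1L, "a")))(Encoders.kryo[Event])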
3. Seamless Support for Semi-Structured Data
The power of encoders goes beyond performance. They also serve as a powerful bridge between semi-structured formats (e.g. JSON) and type-safe languages like Java and Scala.
For example, consider the following dataset about universities:
{"name": "UC Berkeley", "yearFounded": 1868, numStudents: 37581} {"name": "MIT", "yearFounded": 1860, numStudents: 11318} …
Instead of manually extracting fields and casting them to the desired type, you can simply define a class with the expected structure and map the input data to it. Columns are automatically lined up by name, and the types are preserved.
case class University(name: String, numStudents: Long, yearFounded: Long)

val schools = sqlContext.read.json("/schools.json").as[University]
schools.map(s => s"${s.name} is ${2015 - s.yearFounded} years old")
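Running this over the two sample records above would yield strings like "UC Berkeley is 147 years old" and "MIT is 155 years old".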
Encoders eagerly check that your data matches the expected schema, providing helpful error messages before you attempt to incorrectly process TBs of data. For example, if we try to use a datatype that is too small, such that conversion to an object would result in truncation (i.e. numStudents is larger than a Byte, which holds a maximum value of 127), the Analyzer will emit an AnalysisException.
case class University(numStudents: Byte)

val schools = sqlContext.read.json("/schools.json").as[University]

org.apache.spark.sql.AnalysisException: Cannot upcast `numStudents` from bigint to tinyint as it may truncate
When performing the mapping, encoders will automatically handle complex types, including nested classes, arrays, and maps.
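As a small hedged sketch of that behavior (the classes and file path are hypothetical), nested case classes and arrays map directly onto nested JSON structures:

case class Department(name: String, courses: Array[String])
case class School(name: String, dept: Department)

val nested = sqlContext.read.json("/schools_nested.json").as[School]
nested.map(s => s"${s.name}: ${s.dept.courses.length} courses")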
4. A Single API for Java and Scala
Another goal of the Dataset API is to provide a single interface that is usable in both Scala and Java. This unification is great news for Java users: it ensures that their APIs won’t lag behind the Scala interfaces, code examples can easily be used from either language, and libraries no longer have to deal with two slightly different types of input. The only difference for Java users is that they need to specify the encoder to use, since the compiler does not provide type information. For example, if you wanted to process JSON data using Java, you could do it as follows:
public class University implements Serializable {
  private String name;
  private long numStudents;
  private long yearFounded;

  public void setName(String name) {...}
  public String getName() {...}
  public void setNumStudents(long numStudents) {...}
  public long getNumStudents() {...}
  public void setYearFounded(long yearFounded) {...}
  public long getYearFounded() {...}
}

class BuildString implements MapFunction<University, String> {
  public String call(University u) throws Exception {
    return u.getName() + " is " + (2015 - u.getYearFounded()) + " years old.";
  }
}

Dataset<University> schools = context.read().json("/schools.json").as(Encoders.bean(University.class));
Dataset<String> strings = schools.map(new BuildString(), Encoders.STRING());
5. Looking Forward
While Datasets are a new API, we have made them interoperate easily with RDDs and existing Spark programs. Simply calling the rdd() method on a Dataset will give an RDD.
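A minimal sketch of that interop (assuming sqlContext.implicits._ is in scope for the reverse conversion):

import org.apache.spark.rdd.RDD

val lines = sqlContext.read.text("/wikipedia").as[String]
val asRdd: RDD[String] = lines.rdd // Dataset -> RDD
val backToDs = asRdd.toDS()        // RDD -> Dataset, for supported element types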
In the long run, we hope that Datasets can become a common way to work with structured data, and we may converge the APIs even further.
As we look forward to Spark 2.0, we plan some exciting improvements to Datasets, specifically:
- Performance optimizations – In many cases, the current implementation of the Dataset API does not yet leverage the additional information it has and can be slower than RDDs. Over the next several releases, we will be working on improving the performance of this new API.
- Custom encoders – While we currently autogenerate encoders for a wide variety of types, we’d like to open up an API for custom objects.
- Python Support.
- Unification of DataFrames with Datasets – due to compatibility guarantees, DataFrames and Datasets currently cannot share a common parent class. With Spark 2.0, we will be able to unify these abstractions with minor changes to the API, making it easy to build libraries that work with both.
If you’d like to try out Datasets yourself, they are already available in Databricks. We’ve put together a few example notebooks for you to try out: Working with Classes, Word Count.
Spark 1.6 is available on Databricks today; sign up for a free 14-day trial.
References
- https://databricks.com/blog/2016/01/04/introducing-apache-spark-datasets.html
- https://blog.csdn.net/sunbow0/article/details/50723233