Apache Spark 2.0: Faster, Easier, and Smarter

http://blog.madhukaraphatak.com/categories/spark-two/

https://amplab.cs.berkeley.edu/technical-preview-of-apache-spark-2-0-easier-faster-and-smarter/

 

 

Dataset - New Abstraction of Spark

For a long time, RDD was the standard abstraction of Spark.
But from Spark 2.0, Dataset becomes the new abstraction layer for Spark. Though the RDD API will still be available, it becomes a low-level API, used mostly for runtime and library development. All user-land code will be written against the Dataset abstraction and its subset, the DataFrame API.

 

In 2.0, the key change is that a high-level Dataset abstraction has been added on top of the low-level RDD abstraction, making development much more convenient for users.

 

From the definition: "A Dataset is a strongly typed collection of domain-specific objects that can be transformed in parallel using functional or relational operations. Each Dataset also has an untyped view called a DataFrame, which is a Dataset of Row."

This sounds similar to the RDD definition:

"RDD represents an immutable, partitioned collection of elements that can be operated on in parallel."

 

Dataset is a superset of the DataFrame API, which was released in Spark 1.3.

A DataFrame is a specialized Dataset: DataFrame = Dataset[Row].
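A minimal sketch of the typed/untyped relationship (the Person case class and the data are made up, and a SparkSession named spark is assumed to already exist; see the next section):

    import org.apache.spark.sql.{DataFrame, Dataset}
    import spark.implicits._

    case class Person(name: String, age: Long)

    // Strongly typed collection of domain-specific objects: Dataset[Person]
    val people: Dataset[Person] = Seq(Person("Ann", 30), Person("Bob", 25)).toDS()

    // The untyped view: a DataFrame is just a Dataset[Row]
    val peopleDf: DataFrame = people.toDF()

    // Functional (typed) and relational (untyped) operations on the same data
    people.filter(_.age > 26).show()
    peopleDf.filter($"age" > 26).show()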

 

SparkSession - New entry point of Spark

In earlier versions of Spark, SparkContext was the entry point for Spark. As RDD was the main API, it was created and manipulated using the context's APIs. For every other API we needed a different context: for streaming, StreamingContext; for SQL, SQLContext; and for Hive, HiveContext. But as the Dataset and DataFrame APIs are becoming the new standard APIs, we need an entry point built for them. So in Spark 2.0, we have a new entry point for the Dataset and DataFrame APIs called SparkSession.

Because there is a new abstraction layer, a new entry point, SparkSession, is needed, just as SparkContext was the entry point for the RDD.
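A minimal sketch of creating the new entry point (the app name and local master are just for illustration):

    import org.apache.spark.sql.SparkSession

    // One entry point covering the roles of SQLContext and HiveContext
    val spark = SparkSession.builder()
      .appName("Spark2Example")
      .master("local[*]")      // assumption: local mode, for illustration only
      .enableHiveSupport()     // optional; needs Hive classes on the classpath
      .getOrCreate()

    // The underlying SparkContext is still reachable for low-level RDD work
    val sc = spark.sparkContext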

 

Stronger SQL Support

On the SQL side, we have significantly expanded the SQL capabilities of Spark, with the introduction of a new ANSI SQL parser and support for subqueries.

Spark 2.0 can run all the 99 TPC-DS queries, which require many of the SQL:2003 features.
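As a small illustration, a scalar subquery in a WHERE clause now works through the session's SQL interface (the sales data below is made up, and spark is assumed to be an existing SparkSession):

    import spark.implicits._

    Seq(("US", 100), ("DE", 80), ("FR", 60))
      .toDF("country", "amount")
      .createOrReplaceTempView("sales")

    // Uncorrelated scalar subquery, handled by the new ANSI SQL parser
    spark.sql(
      """SELECT country, amount
        |FROM sales
        |WHERE amount > (SELECT avg(amount) FROM sales)""".stripMargin
    ).show()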

 

Tungsten 2.0

Spark 2.0 ships with the second generation Tungsten engine.

This engine builds upon ideas from modern compilers and MPP databases and applies them to data processing.

The main idea is to emit optimized bytecode at runtime that collapses the entire query into a single function, eliminating virtual function calls and leveraging CPU registers for intermediate data. We call this technique “whole-stage code generation.”

Faster through better memory management and CPU-level performance optimization.
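Whole-stage code generation can be seen in a query's physical plan: fused operators appear under a WholeStageCodegen node (marked with '*' in explain output). A quick sketch, assuming an existing SparkSession named spark:

    // Three logical operators collapse into one generated function
    spark.range(1000)
      .selectExpr("id * 2 AS doubled")
      .filter("doubled % 3 = 0")
      .explain()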

 

Structured Streaming

Spark 2.0's Structured Streaming API is a novel way to approach streaming.
It stems from the realization that the simplest way to compute answers on streams of data is not to have to reason about the fact that it is a stream.

 

In other words, you don't have to think in terms of a stream: a stream is just an unbounded DataFrame.

This is the typical Spark way of thinking, where batch is the foundation of everything, which is the exact opposite of Flink.
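A minimal streaming word count sketch in that spirit, using the same DataFrame operations as a batch job (the socket source and port are only for demonstration, and spark is an assumed existing SparkSession):

    import org.apache.spark.sql.functions._

    // The stream is exposed as an unbounded DataFrame of lines
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Ordinary DataFrame operations define the continuous query
    val counts = lines
      .select(explode(split(col("value"), " ")).as("word"))
      .groupBy("word")
      .count()

    val query = counts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()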
