Structured Streaming Programming Guide
https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
http://www.slideshare.net/databricks/a-deep-dive-into-structured-streaming
Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine.
You can express your streaming computation the same way you would express a batch computation on static data.
The Spark SQL engine will take care of running it incrementally and continuously and updating the final result as streaming data continues to arrive. You can use the Dataset/DataFrame API in Scala, Java or Python to express streaming aggregations, event-time windows, stream-to-batch joins, etc. The computation is executed on the same optimized Spark SQL engine.
Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write Ahead Logs.
In short, Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.
You can run DataFrame/SQL operations on a stream just as you would on a static data source, and these queries run on the same optimized Spark SQL engine as batch jobs.
Exactly-once fault tolerance is guaranteed through checkpointing and Write Ahead Logs.

The DStream abstraction is simply replaced with a DataFrame, i.e. an unbounded table.
This lets you perform structured operations on the stream,
and the code is essentially the same as for processing batch data.

As you can see, the difference from the batch version is small.
The overall process looks like this:
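A minimal sketch of that process in Scala, following the quick word-count example in the official guide (the app name, socket host and port are placeholder assumptions):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder
      .appName("StructuredNetworkWordCount")
      .getOrCreate()
    import spark.implicits._

    // Source: an unbounded table of lines arriving on a socket
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Exactly the same DataFrame/Dataset operations as on static data
    val words = lines.as[String].flatMap(_.split(" "))
    val wordCounts = words.groupBy("value").count()

    // Sink: print the result table to the console after every trigger
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    query.awaitTermination()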

Note that the output mode here is complete: because the query contains an aggregation, each trigger has to output the statistics accumulated up to that point.
The output modes are defined as follows:
The “Output” is defined as what gets written out to the external storage. The output can be defined in different modes
Complete Mode - The entire updated Result Table will be written to the external storage. It is up to the storage connector to decide how to handle writing of the entire table.
Append Mode - Only the new rows appended in the Result Table since the last trigger will be written to the external storage. This is applicable only on the queries where existing rows in the Result Table are not expected to change.
Update Mode - Only the rows that were updated in the Result Table since the last trigger will be written to the external storage (not available yet in Spark 2.0). Note that this is different from the Complete Mode in that this mode does not output the rows that are not changed.
Complete mode was illustrated in the example above.
Append mode outputs only the new rows added since the last trigger, which is suitable for queries without aggregation.
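As a rough illustration, reusing the lines and wordCounts DataFrames from the sketch above, the mode is chosen per query on the writeStream side:

    // Aggregated query: complete mode rewrites the entire result table each trigger
    wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()

    // Non-aggregated query (a simple filter): append mode emits only the newly added rows
    lines.as[String].filter(_.nonEmpty).writeStream
      .outputMode("append")
      .format("console")
      .start()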
Window Operations on Event Time

Spark supports event time naturally: the event time is just another column in the DataFrame, so windowing is simply a group-by on that column.
Late data can also be handled, because the result table is updated incrementally and an old window's aggregate can still be revised when a late record arrives.
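A sketch of such a windowed aggregation, assuming a streaming DataFrame named events with an event-time column timestamp and a word column (both names are illustrative):

    import org.apache.spark.sql.functions.{col, window}

    // Count words within 10-minute windows sliding every 5 minutes;
    // the event-time column is grouped on just like any ordinary column
    val windowedCounts = events.groupBy(
        window(col("timestamp"), "10 minutes", "5 minutes"),
        col("word"))
      .count()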
Fault Tolerance Semantics
Delivering end-to-end exactly-once semantics was one of key goals behind the design of Structured Streaming.
To achieve that, we have designed the Structured Streaming sources, the sinks and the execution engine to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing. Every streaming source is assumed to have offsets (similar to Kafka offsets, or Kinesis sequence numbers) to track the read position in the stream. The engine uses checkpointing and write ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotent sinks, Structured Streaming can ensure end-to-end exactly-once semantics under any failure.
The keys are that the source can be replayed from an offset and that the sink is idempotent; then it is enough to record offsets in the Write Ahead Log and state in the checkpoint to achieve exactly-once, because each trigger is essentially a small batch job.
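A sketch of enabling this, reusing wordCounts from above; the checkpoint path is a placeholder and would normally be an HDFS-compatible directory:

    // The checkpoint directory stores the write-ahead log of offsets and the aggregation state,
    // so a restarted query can resume and reprocess from exactly where it left off
    val query = wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .option("checkpointLocation", "/tmp/structured-streaming/wordcount-checkpoint")
      .start()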