Structured Streaming Programming Guide
https://spark.apache.org/docs/latest/structured-streaming-programming-guide.html
http://www.slideshare.net/databricks/a-deep-dive-into-structured-streaming
Structured Streaming is a scalable and fault-tolerant stream processing engine built on the Spark SQL engine.
You can express your streaming computation the same way you would express a batch computation on static data.
The Spark SQL engine will take care of running it incrementally and continuously and updating the final result as streaming data continues to arrive. You can use the Dataset/DataFrame API in Scala, Java or Python to express streaming aggregations, event-time windows, stream-to-batch joins, etc. The computation is executed on the same optimized Spark SQL engine.
Finally, the system ensures end-to-end exactly-once fault-tolerance guarantees through checkpointing and Write Ahead Logs.
In short, Structured Streaming provides fast, scalable, fault-tolerant, end-to-end exactly-once stream processing without the user having to reason about streaming.
You can execute queries on a stream through the DataFrame interface exactly as you would on a static data source; the queries run on the same optimized Spark SQL engine as batch jobs, and exactly-once fault tolerance is guaranteed through checkpointing and Write Ahead Logs.
Compared with Spark Streaming, the DStream abstraction is simply replaced by a DataFrame, i.e. the stream is modeled as an unbounded table.
That makes structured operations possible, and the code is essentially the same as for batch data; the difference is small.
The whole flow looks like the sketch below.
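Here is a minimal sketch based on the quick example in the guide (the socket source and console sink are for demonstration only; host and port are placeholders):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder
  .appName("StructuredNetworkWordCount")
  .getOrCreate()

import spark.implicits._

// The socket source exposes incoming lines as an unbounded table
// with a single string column named "value".
val lines = spark.readStream
  .format("socket")
  .option("host", "localhost")
  .option("port", 9999)
  .load()

// The computation is written exactly as it would be on a static DataFrame.
val words = lines.as[String].flatMap(_.split(" "))
val wordCounts = words.groupBy("value").count()

// Start the incremental execution; complete mode prints the full
// aggregated result table after every trigger.
val query = wordCounts.writeStream
  .outputMode("complete")
  .format("console")
  .start()

query.awaitTermination()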
Note that the output mode in this example is complete: because the query contains an aggregation, every trigger has to emit the statistics accumulated up to now.
The output mode can be one of the following:
The “Output” is defined as what gets written out to the external storage. The output can be defined in different modes
Complete Mode - The entire updated Result Table will be written to the external storage. It is up to the storage connector to decide how to handle writing of the entire table.
Append Mode - Only the new rows appended in the Result Table since the last trigger will be written to the external storage. This is applicable only on the queries where existing rows in the Result Table are not expected to change.
Update Mode - Only the rows that were updated in the Result Table since the last trigger will be written to the external storage (not available yet in Spark 2.0). Note that this is different from the Complete Mode in that this mode does not output the rows that are not changed.
Complete mode is what the example above uses.
Append mode emits only the rows added since the last trigger, which fits queries without aggregation, where existing result rows never change; see the sketch below.
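A minimal append-mode sketch, reusing the `lines` stream from the example above (the ERROR filter is just an illustrative query with no aggregation):

// No aggregation here, so append mode is valid: each trigger emits
// only the rows that arrived since the last one.
val errorLines = lines.as[String].filter(_.contains("ERROR"))

val appendQuery = errorLines.writeStream
  .outputMode("append")
  .format("console")
  .start()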
Window Operations on Event Time
Spark's position is that event time is supported naturally: event time is just another column in the DataFrame, so windowing is simply a groupBy on that column.
Late data can also be handled, because results are emitted incrementally: a late row just updates the aggregate of an older window.
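A sketch of the windowed aggregation from the guide; `wordsWithTime` is a hypothetical streaming Dataset with an event-time column `timestamp` and a `word` column (with `spark.implicits._` imported for the $ syntax):

import org.apache.spark.sql.functions.window

// Sliding windows of 10 minutes, advancing every 5 minutes,
// grouped over the event-time column rather than arrival time.
val windowedCounts = wordsWithTime
  .groupBy(
    window($"timestamp", "10 minutes", "5 minutes"),
    $"word")
  .count()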
Fault Tolerance Semantics
Delivering end-to-end exactly-once semantics was one of the key goals behind the design of Structured Streaming.
To achieve that, we have designed the Structured Streaming sources, the sinks and the execution engine to reliably track the exact progress of the processing so that it can handle any kind of failure by restarting and/or reprocessing. Every streaming source is assumed to have offsets (similar to Kafka offsets, or Kinesis sequence numbers) to track the read position in the stream. The engine uses checkpointing and write ahead logs to record the offset range of the data being processed in each trigger. The streaming sinks are designed to be idempotent for handling reprocessing. Together, using replayable sources and idempotent sinks, Structured Streaming can ensure end-to-end exactly-once semantics under any failure.
In short, this relies on sources being replayable from an offset and sinks being idempotent: the Write Ahead Log records the offsets and the checkpoint records the state, and that is enough for exactly-once, because under the hood each trigger is just a batch.
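A sketch of enabling recovery on the query from the earlier example; the checkpoint path is a placeholder:

// The checkpoint directory persists the offset WAL and the aggregation
// state, so a restarted query resumes exactly where it left off.
val recoverableQuery = wordCounts.writeStream
  .outputMode("complete")
  .option("checkpointLocation", "/path/to/checkpoint/dir")
  .format("console")
  .start()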