The StreamingQueryListener in Structured Streaming
Structured Streaming provides a class, StreamingQueryListener, for monitoring when a streaming query starts, terminates, or has a status update.
To use it, instantiate a StreamingQueryListener and implement its three callback methods:
abstract class StreamingQueryListener {

  import StreamingQueryListener._

  /**
   * Called when a query is started.
   * @note This is called synchronously with
   *       [[org.apache.spark.sql.streaming.DataStreamWriter `DataStreamWriter.start()`]],
   *       that is, `onQueryStart` will be called on all listeners before
   *       `DataStreamWriter.start()` returns the corresponding [[StreamingQuery]]. Please
   *       don't block this method as it will block your query.
   * @since 2.0.0
   */
  def onQueryStarted(event: QueryStartedEvent): Unit

  /**
   * Called when there is some status update (ingestion rate updated, etc.)
   *
   * @note This method is asynchronous. The status in [[StreamingQuery]] will always be
   *       latest no matter when this method is called. Therefore, the status of
   *       [[StreamingQuery]] may be changed before/when you process the event. E.g., you
   *       may find [[StreamingQuery]] is terminated when you are processing
   *       `QueryProgressEvent`.
   * @since 2.0.0
   */
  def onQueryProgress(event: QueryProgressEvent): Unit

  /**
   * Called when a query is stopped, with or without error.
   * @since 2.0.0
   */
  def onQueryTerminated(event: QueryTerminatedEvent): Unit
}
onQueryStarted: invoked synchronously when a query starts; it is called on all listeners before DataStreamWriter.start() returns, so it must not block
onQueryProgress: asynchronous callback invoked whenever the query's status is updated (e.g. the ingestion rate changes)
onQueryTerminated: asynchronous callback invoked when the query stops, with or without an error
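As a quick sketch of what implementing the three callbacks looks like (assuming an existing SparkSession named spark), the listener below logs a few of the metrics carried by QueryProgressEvent:

import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

val metricsListener = new StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit =
    println(s"Query started: id=${event.id} runId=${event.runId}")

  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    val p = event.progress // snapshot of the trigger that just completed
    println(s"batch=${p.batchId} rows=${p.numInputRows} " +
      s"input=${p.inputRowsPerSecond} rows/s processed=${p.processedRowsPerSecond} rows/s")
  }

  override def onQueryTerminated(event: QueryTerminatedEvent): Unit =
    println(s"Query terminated: id=${event.id} exception=${event.exception}")
}

spark.streams.addListener(metricsListener)    // receive events for every query on this session
// spark.streams.removeListener(metricsListener) // detach when no longer needed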
What are these callbacks good for?
They are typically used for job alerting in stream processing: for example, in onQueryTerminated you can check whether an alert condition is met and, if so, send an email alert, a DingTalk alert, and so on.
The alert message can then include the specific error details pulled from the event's exception field, since QueryTerminatedEvent carries it directly:
@InterfaceStability.Evolving
class QueryTerminatedEvent private[sql](
    val id: UUID,
    val runId: UUID,
    val exception: Option[String]) extends Event
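For instance, here is a minimal sketch of such an alerting listener. Assume spark is an existing SparkSession, and note that sendAlert is a hypothetical placeholder for a real mail or DingTalk client, not a Spark API:

import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

// Hypothetical helper: stand-in for your mail / DingTalk client
def sendAlert(subject: String, body: String): Unit =
  println(s"[ALERT] $subject\n$body")

val alertListener = new StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryProgress(event: QueryProgressEvent): Unit = ()

  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = {
    // exception is Some(errorDetails) when the query failed, None on a clean stop
    event.exception match {
      case Some(err) => sendAlert(s"Streaming query ${event.id} failed", err)
      case None      => () // clean shutdown, nothing to report
    }
  }
}

spark.streams.addListener(alertListener)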
Finally, here is a small end-to-end example:
import org.apache.spark.sql.SparkSession

/**
 * Created by angel
 */
object Test {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder
      .appName("IQL")
      .master("local[4]")
      .enableHiveSupport()
      .getOrCreate()
    spark.sparkContext.setLogLevel("WARN")

    // Alternatively, save the body of main as demo-StreamingQueryManager.scala
    // and start it using spark-shell:
    // $ ./bin/spark-shell -i demo-StreamingQueryManager.scala

    // Register a StreamingQueryListener to receive notifications about
    // state changes of streaming queries
    import org.apache.spark.sql.streaming.StreamingQueryListener
    val myQueryListener = new StreamingQueryListener {
      import org.apache.spark.sql.streaming.StreamingQueryListener._

      def onQueryStarted(event: QueryStartedEvent): Unit = {
        println(s"Query ${event.id} started")
      }
      def onQueryProgress(event: QueryProgressEvent): Unit = {
        println(s"Query ${event.progress.name} progressed")
      }
      def onQueryTerminated(event: QueryTerminatedEvent): Unit = {
        println(s"Query ${event.id} terminated")
      }
    }
    spark.streams.addListener(myQueryListener)

    import org.apache.spark.sql.streaming._
    import scala.concurrent.duration._

    // Start streaming queries

    // Start the first query, triggered every 4 seconds (hence the name q4s)
    val q4s = spark.readStream.
      format("rate").
      load.
      writeStream.
      format("console").
      trigger(Trigger.ProcessingTime(4.seconds)).
      option("truncate", false).
      start

    // Start another query that is slightly slower, triggered every 10 seconds
    val q10s = spark.readStream.
      format("rate").
      load.
      writeStream.
      format("console").
      trigger(Trigger.ProcessingTime(10.seconds)).
      option("truncate", false).
      start

    // Both queries run concurrently; you should see different outputs in the console.
    // q4s prints out 4 rows every batch and twice as often as q10s
    // q10s prints out 10 rows every batch
    /*
    -------------------------------------------
    Batch: 7
    -------------------------------------------
    +-----------------------+-----+
    |timestamp              |value|
    +-----------------------+-----+
    |2017-10-27 13:44:07.462|21   |
    |2017-10-27 13:44:08.462|22   |
    |2017-10-27 13:44:09.462|23   |
    |2017-10-27 13:44:10.462|24   |
    +-----------------------+-----+

    -------------------------------------------
    Batch: 8
    -------------------------------------------
    +-----------------------+-----+
    |timestamp              |value|
    +-----------------------+-----+
    |2017-10-27 13:44:11.462|25   |
    |2017-10-27 13:44:12.462|26   |
    |2017-10-27 13:44:13.462|27   |
    |2017-10-27 13:44:14.462|28   |
    +-----------------------+-----+

    -------------------------------------------
    Batch: 2
    -------------------------------------------
    +-----------------------+-----+
    |timestamp              |value|
    +-----------------------+-----+
    |2017-10-27 13:44:09.847|6    |
    |2017-10-27 13:44:10.847|7    |
    |2017-10-27 13:44:11.847|8    |
    |2017-10-27 13:44:12.847|9    |
    |2017-10-27 13:44:13.847|10   |
    |2017-10-27 13:44:14.847|11   |
    |2017-10-27 13:44:15.847|12   |
    |2017-10-27 13:44:16.847|13   |
    |2017-10-27 13:44:17.847|14   |
    |2017-10-27 13:44:18.847|15   |
    +-----------------------+-----+
    */

    // Stop q4s on a separate thread, as we're about to block the current
    // thread awaiting query termination
    import java.util.concurrent.Executors
    import java.util.concurrent.TimeUnit.SECONDS
    def queryTerminator(query: StreamingQuery) = new Runnable {
      def run(): Unit = {
        println(s"Stopping streaming query: ${query.id}")
        query.stop
      }
    }
    // Stop the first query after 10 seconds; the repeat period (here 60 * 5 seconds)
    // only needs to be long enough that each terminator effectively fires once
    Executors.newSingleThreadScheduledExecutor.
      scheduleWithFixedDelay(queryTerminator(q4s), 10, 60 * 5, SECONDS)
    // Stop the other query after 20 seconds
    Executors.newSingleThreadScheduledExecutor.
      scheduleWithFixedDelay(queryTerminator(q10s), 20, 60 * 5, SECONDS)

    // Use StreamingQueryManager to wait for any query termination (either q4s or q10s);
    // the current thread will block until either streaming query has finished
    spark.streams.awaitAnyTermination

    // You are here only after either streaming query has finished.
    // Executing spark.streams.awaitAnyTermination again would return immediately.
    // You should have received the QueryTerminatedEvent for the query termination.

    // Reset the last terminated streaming query...
    spark.streams.resetTerminated
    // ...and wait for the other query to terminate
    spark.streams.awaitAnyTermination

    assert(spark.streams.active.isEmpty)
    println("The demo went all fine. Exiting...")
    // leave spark-shell
    System.exit(0)
  }
}
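Two details of the demo are worth calling out: the queries are stopped from a scheduled executor because the main thread is blocked inside awaitAnyTermination, and resetTerminated has to be called between the two waits, since awaitAnyTermination returns immediately as long as a previously terminated query has not been cleared.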