Besides bounding how late data may arrive, can WaterMark implement a rolling last-hour statistic?

The purpose of a watermark is to bound which data participates in an aggregation. For example, if the max timestamp among the data seen so far is 12:00 and the watermark threshold is 60 minutes, then any data that arrives afterwards with an event time before 11:00 is discarded and excluded from the statistics, because its lateness exceeds the 60-minute limit.
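As a minimal sketch of that semantics (the `events` DataFrame name is a placeholder, not code from this post; it stands for any streaming DataFrame with an event-time column named `timestamp`):

    // events: a streaming DataFrame with an event-time column "timestamp"
    val withWm = events.withWatermark("timestamp", "60 minutes")
    // If the largest event time seen so far is 12:00, the watermark is 11:00:
    // any row that arrives afterwards carrying an event time before 11:00 is
    // treated as too late and dropped from stateful aggregations.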

So, can we use it to compute statistics over the most recent hour of data?

Code example:

package com.dx.streaming

import java.sql.Timestamp
import java.text.SimpleDateFormat

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.streaming.OutputMode
import org.apache.spark.sql.{Encoders, SparkSession}

case class MyEntity(id: String, timestamp: Timestamp, value: Integer)

object Main {
  Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
  Logger.getLogger("akka").setLevel(Level.ERROR)
  Logger.getLogger("kafka").setLevel(Level.ERROR)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("test").master("local[*]").getOrCreate()
    val lines = spark.readStream.format("socket").option("host", "192.168.0.141").option("port", 19999).load()
    val sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")

    import spark.implicits._
    lines.as(Encoders.STRING)
      .map(row => {
        val fields = row.split(",")
        MyEntity(fields(0), new Timestamp(sdf.parse(fields(1)).getTime), Integer.valueOf(fields(2)))
      })
      .createOrReplaceTempView("tv_entity")

    spark.sql("select id,timestamp,value from tv_entity")
      .withWatermark("timestamp", "60 minutes")
      .createOrReplaceTempView("tv_entity_watermark")

    val resultDf = spark.sql(
      s"""
         |select id,sum(value) as sum_value
         |from tv_entity_watermark
         |group by id
         |""".stripMargin)

    val query = resultDf.writeStream.format("console").outputMode(OutputMode.Update()).start()
    query.awaitTermination()
    query.stop()
  }
}

When the following data is fed in via nc -lk 19999, one group at a time with a few seconds between groups:

1,2018-12-01 12:00:01,100
2,2018-12-01 12:00:01,100
1,2018-12-01 12:05:01,100
2,2018-12-01 12:05:01,100
1,2018-12-01 12:15:01,100
2,2018-12-01 12:15:01,100
1,2018-12-01 12:25:01,100
2,2018-12-01 12:25:01,100
1,2018-12-01 12:35:01,100
2,2018-12-01 12:35:01,100
1,2018-12-01 12:45:01,100
2,2018-12-01 12:45:01,100
1,2018-12-01 12:55:01,100
2,2018-12-01 12:55:01,100
1,2018-12-01 13:05:02,100
2,2018-12-01 13:05:02,100
1,2018-12-01 13:15:01,100
2,2018-12-01 13:15:01,100

The final statistics turn out to be the accumulation over all of the input (each id has 9 rows of 100):

id , sum_value
1  , 900
2  , 900

rather than the expected rolling result over only the most recent hour (the 7 rows per id from 12:15:01 to 13:15:01):

id , sum_value
1  , 700
2  , 700

So a watermark alone cannot restrict the aggregation range to 60 minutes: with a plain group by id there is no event-time column in the grouping key, so the per-id state never expires and the sums simply keep accumulating. Can combining the watermark with the window function implement the last-hour statistic instead?

    spark.sql("select id,timestamp,value from tv_entity")
.withWatermark("timestamp", "60 minutes")
.createOrReplaceTempView("tv_entity_watermark") val resultDf = spark.sql(
s"""
|select id,sum(value) as sum_value
|from tv_entity_watermark
|group window(timestamp,'60 minutes','60 minutes'),id
|""".stripMargin) val query = resultDf.writeStream.format("console").outputMode(OutputMode.Update()).start()

Feeding in the same test data, you will find that once an hour boundary is crossed, a new statistics bucket is opened (counting restarts from zero) instead of a rolling one-hour aggregation.

In other words, the test data above is split into two groups and aggregated separately:

Data participating in the first group (the 12:00–13:00 hour window):

1,2018-12-01 12:00:01,100
2,2018-12-01 12:00:01,100
1,2018-12-01 12:05:01,100
2,2018-12-01 12:05:01,100
1,2018-12-01 12:15:01,100
2,2018-12-01 12:15:01,100
1,2018-12-01 12:25:01,100
2,2018-12-01 12:25:01,100
1,2018-12-01 12:35:01,100
2,2018-12-01 12:35:01,100
1,2018-12-01 12:45:01,100
2,2018-12-01 12:45:01,100
1,2018-12-01 12:55:01,100
2,2018-12-01 12:55:01,100

Data participating in the second group (the 13:00–14:00 hour window):

1,2018-12-01 13:05:02,100
2,2018-12-01 13:05:02,100
1,2018-12-01 13:15:01,100
2,2018-12-01 13:15:01,100
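This is exactly the defined behavior of window(timestamp, '60 minutes', '60 minutes'): when the slide interval equals the window duration, the windows are tumbling, i.e. non-overlapping and aligned to the hour, so every record lands in exactly one bucket. Getting overlap requires a slide smaller than the duration. A sketch (the 5-minute slide is an arbitrary choice for illustration, not from the original test):

    // tumbling: hour-aligned, non-overlapping buckets (what the test above used)
    spark.sql("select id, sum(value) as sum_value from tv_entity_watermark " +
      "group by window(timestamp, '60 minutes', '60 minutes'), id")

    // sliding: hour-long windows advancing every 5 minutes, so each record
    // belongs to 60/5 = 12 overlapping windows
    spark.sql("select id, sum(value) as sum_value from tv_entity_watermark " +
      "group by window(timestamp, '60 minutes', '5 minutes'), id")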

Conjectured summary:

From the test results above we can draw a conjecture:

Spark Structured Streaming does not store the raw rows that participate in an aggregation. It keeps only per-group aggregation state: the running results themselves, plus metadata such as maxTimestamp / avgTimestamp / minTimestamp. The next time a trigger fires, it simply folds the new data into the existing results. Since the participating raw rows are not retained, this kind of rolling last-hour requirement cannot be expressed that way; if they were stored, it should be implementable.
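Spark does, however, expose arbitrary stateful processing where you keep the raw events in state yourself and evict them on your own terms. A minimal sketch, not from the original post, assuming Spark 2.3's flatMapGroupsWithState API and a Dataset[MyEntity] named entities parsed exactly as in the code above:

    import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

    case class IdSum(id: String, sum_value: Long)

    // entities: Dataset[MyEntity] built from the socket source as shown earlier
    val rolling = entities
      .groupByKey(_.id)
      .flatMapGroupsWithState(OutputMode.Update(), GroupStateTimeout.NoTimeout) {
        (id: String, rows: Iterator[MyEntity], state: GroupState[Seq[MyEntity]]) =>
          // merge the new rows into the raw events retained for this id
          val all = state.getOption.getOrElse(Seq.empty[MyEntity]) ++ rows.toList
          // evict everything more than one hour older than the newest event
          val maxTs = all.map(_.timestamp.getTime).max
          val kept = all.filter(e => maxTs - e.timestamp.getTime <= 3600000L)
          state.update(kept)
          Iterator(IdSum(id, kept.map(_.value.longValue).sum))
      }

    rolling.writeStream.format("console").outputMode(OutputMode.Update()).start()

The trade-off is the same one the conjecture points at: the state store now holds every raw event from the last hour per id, instead of just an aggregate.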

Meanwhile, the following windowed attempt does actually work, at the cost of heavy resource usage:

package com.dx.streaming

import java.sql.Timestamp
import java.text.SimpleDateFormat

import org.apache.log4j.{Level, Logger}
import org.apache.spark.sql.streaming.OutputMode
import org.apache.spark.sql.{Encoders, SparkSession}

case class MyEntity(id: String, timestamp: Timestamp, value: Integer)

object Main {
  Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
  Logger.getLogger("akka").setLevel(Level.ERROR)
  Logger.getLogger("kafka").setLevel(Level.ERROR)

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("test").master("local[*]").getOrCreate()
    val lines = spark.readStream.format("socket").option("host", "192.168.0.141").option("port", 19999).load()
    val sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss")

    import spark.implicits._
    lines.as(Encoders.STRING)
      .map(row => {
        val fields = row.split(",")
        MyEntity(fields(0), new Timestamp(sdf.parse(fields(1)).getTime), Integer.valueOf(fields(2)))
      })
      .createOrReplaceTempView("tv_entity")

    spark.sql("select id,timestamp,value from tv_entity")
      .withWatermark("timestamp", "60 minutes")
      .createOrReplaceTempView("tv_entity_watermark")

    val resultDf = spark.sql(
      s"""
         |select id,min(timestamp) min_timestamp,max(timestamp) max_timestamp,sum(value) as sum_value
         |from tv_entity_watermark
         |group by window(timestamp,'3600 seconds','60 seconds'),id
         |""".stripMargin)

    val query = resultDf.writeStream.format("console").outputMode(OutputMode.Update()).start()
    query.awaitTermination()
    query.stop()
  }
}
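Why this is expensive: with a 3600-second window sliding every 60 seconds, every incoming record belongs to 3600 / 60 = 60 overlapping windows, so the state store must maintain and update roughly 60 window aggregates per id, and each record can trigger up to 60 updated output rows in Update mode. The finer the slide, the closer the result approximates a true rolling hour, and the larger the state grows.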

Implementing the last-hour statistic with Spark Streaming, keeping the history in memory:

pom.xml

        <!--Spark -->
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_2.11</artifactId>
<version>2.2.0</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-streaming_2.11</artifactId>
<version>2.2.0</version>
</dependency>

Java code:

package com.dx.streaming;

import java.io.Serializable;
import java.sql.Timestamp;
import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.log4j.Level;
import org.apache.log4j.LogManager;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.api.java.function.VoidFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class Main {
    // store lives in the driver JVM; this sharing works here because local mode
    // runs the tasks in the same process.
    private static List<MyEntity> store = new ArrayList<MyEntity>();
    private static JavaStreamingContext jssc;

    public static void main(String[] args) throws Exception {
        // set log4j programmatically
        LogManager.getLogger("org.apache.spark").setLevel(Level.WARN);
        LogManager.getLogger("akka").setLevel(Level.ERROR);
        LogManager.getLogger("kafka").setLevel(Level.ERROR);

        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
        // sanity check: one hour is 3600000 ms
        System.out.println(sdf.parse("2018-12-04 11:00:00").getTime() - sdf.parse("2018-12-04 10:00:00").getTime());

        SparkConf conf = new SparkConf().setMaster("local[*]").setAppName("NetworkWordCount");
        JavaSparkContext sc = new JavaSparkContext(conf);
        // jssc = new JavaStreamingContext(conf, Durations.seconds(10));
        jssc = new JavaStreamingContext(sc, Durations.seconds(10));

        JavaReceiverInputDStream<String> lines = jssc.socketTextStream("192.168.0.141", 19999);
        JavaDStream<MyEntity> dStream = lines.map(new Function<String, MyEntity>() {
            private static final long serialVersionUID = 1L;

            public MyEntity call(String line) throws Exception {
                String[] fields = line.split(",");
                MyEntity myEntity = new MyEntity();
                myEntity.setId(Integer.valueOf(fields[0]));
                myEntity.setTimestamp(Timestamp.valueOf(fields[1]));
                myEntity.setValue(Long.valueOf(fields[2]));
                return myEntity;
            }
        });

        // Not sure whether repartition(1) is strictly required; the intent is to
        // keep the records in a single partition so the foreach below runs as one loop.
        dStream.repartition(1).foreachRDD(new VoidFunction<JavaRDD<MyEntity>>() {
            public void call(JavaRDD<MyEntity> tItems) throws Exception {
                System.out.println("print...");
                tItems.foreach(new VoidFunction<MyEntity>() {
                    public void call(MyEntity t) throws Exception {
                        System.out.println(">>>>>>>>>>>>>" + t.toString());
                        store.add(t);
                        System.out.println(store.size());
                    }
                });

                System.out.println("@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@");
                for (MyEntity myEntity : store) {
                    System.out.println("++++++++++++++++++++++" + myEntity.toString());
                }
                if (store.isEmpty()) {
                    return; // nothing received yet; avoid get(0) on an empty list
                }

                // Step 1: evict records more than one hour older than the newest one.
                // This assumes data arrives in timestamp order; only rely on it in
                // production if the same holds there.
                MyEntity first = store.get(0);
                MyEntity last = store.get(store.size() - 1);
                while (last.getTimestamp().getTime() - first.getTimestamp().getTime() > 3600000) {
                    store.remove(0);
                    first = store.get(0);
                }

                // Step 2: run the business aggregation (sum of value per id).
                Map<Integer, Long> statistics = new HashMap<Integer, Long>();
                for (MyEntity myEntity : store) {
                    if (false == statistics.containsKey(myEntity.getId())) {
                        statistics.put(myEntity.getId(), myEntity.getValue());
                    } else {
                        statistics.put(myEntity.getId(), myEntity.getValue() + statistics.get(myEntity.getId()));
                    }
                }

                // Step 3: write the results to a relational database (printed here instead).
                System.out.println("#######################print result##########################");
                for (Map.Entry<Integer, Long> kv : statistics.entrySet()) {
                    System.out.println(kv.getKey() + "," + kv.getValue());
                }
            }
        });

        jssc.start();            // Start the computation
        jssc.awaitTermination(); // Wait for the computation to terminate
    }
}

class MyEntity implements Serializable {
    private final SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
    private int id;
    private Timestamp timestamp;
    private long value;

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public Timestamp getTimestamp() {
        return timestamp;
    }

    public void setTimestamp(Timestamp timestamp) {
        this.timestamp = timestamp;
    }

    public long getValue() {
        return value;
    }

    public void setValue(long value) {
        this.value = value;
    }

    @Override
    public String toString() {
        return getId() + "," + sdf.format(new Date(getTimestamp().getTime())) + "," + getValue();
    }
}
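A caveat on this design: it works only because in local[*] mode the tasks share the driver JVM, so the static store list mutated inside tItems.foreach is the same object the outer loop reads. On a real cluster each executor would mutate its own deserialized copy and the driver's list would stay empty. A cluster-safe variant would first bring the batch to the driver (for example via tItems.collect()) before appending to store, or keep the rolling state in an external store such as Redis. The in-memory list is also lost on restart and unbounded within any one hour of data.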

Output log:

Every 10-second batch prints the same pattern: the newly received records (the >>> lines), the accumulated contents of store (the +++ lines, dumped before the eviction step), and the per-id sums (after the "print result" banner). The first batch, for example:

print...
>>>>>>>>>>>>>1,2018-12-01 12:00:01,100
>>>>>>>>>>>>>2,2018-12-01 12:00:01,100
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
++++++++++++++++++++++1,2018-12-01 12:00:01,100
++++++++++++++++++++++2,2018-12-01 12:00:01,100
#######################print result##########################
1,100
2,100

store grows batch by batch until the 13:05:02 records arrive; at that point the 12:00:01 and 12:05:01 records are more than one hour older than the newest record and are evicted, so from then on the per-id sums cover only the most recent hour, ending at 700 per id after the 13:15:01 records: exactly the rolling last-hour statistic the watermark/window attempts could not produce.

