Notes on the Spark Streaming Official Documentation (Part 2)
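Accumulators and broadcast variables cannot be recovered from a checkpoint in Spark Streaming. An application that uses them and also needs to survive driver restarts must therefore create lazily instantiated singleton instances, so they can be re-instantiated after the driver restarts on failure. The guide's example wraps a broadcast blacklist and a dropped-words accumulator in getter functions: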
def getWordBlacklist(sparkContext):
    if 'wordBlacklist' not in globals():
        globals()['wordBlacklist'] = sparkContext.broadcast(["a", "b", "c"])
    return globals()['wordBlacklist']

def getDroppedWordsCounter(sparkContext):
    if 'droppedWordsCounter' not in globals():
        globals()['droppedWordsCounter'] = sparkContext.accumulator(0)
    return globals()['droppedWordsCounter']
def echo(time, rdd):
    # Get or register the blacklist Broadcast
    blacklist = getWordBlacklist(rdd.context)
    # Get or register the droppedWordsCounter Accumulator
    droppedWordsCounter = getDroppedWordsCounter(rdd.context)

    # Use blacklist to drop words and use droppedWordsCounter to count them
    def filterFunc(wordCount):
        if wordCount[0] in blacklist.value:
            droppedWordsCounter.add(wordCount[1])
            return False
        else:
            return True

    counts = "Counts at time %s %s" % (time, rdd.filter(filterFunc).collect())
    print(counts)

wordCounts.foreachRDD(echo)
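DataFrame and SQL operations can also be used on streaming data. To do this, create a SparkSession using the SparkContext that the StreamingContext is using, again as a lazily instantiated singleton so that it can be re-created after a driver restart from checkpoint: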
# Lazily instantiated global instance of SparkSession
from pyspark.sql import SparkSession

def getSparkSessionInstance(sparkConf):
    if 'sparkSessionSingletonInstance' not in globals():
        globals()['sparkSessionSingletonInstance'] = SparkSession \
            .builder \
            .config(conf=sparkConf) \
            .getOrCreate()
    return globals()['sparkSessionSingletonInstance']
...
# DataFrame operations inside your streaming program
from pyspark.sql import Row

words = ...  # DStream of strings

def process(time, rdd):
    print("========= %s =========" % str(time))
    try:
        # Get the singleton instance of SparkSession
        spark = getSparkSessionInstance(rdd.context.getConf())
        # Convert RDD[String] to RDD[Row] to DataFrame
        rowRdd = rdd.map(lambda w: Row(word=w))
        wordsDataFrame = spark.createDataFrame(rowRdd)
        # Create a temporary view using the DataFrame
        wordsDataFrame.createOrReplaceTempView("words")
        # Do word count on the table using SQL and print it
        wordCountsDataFrame = spark.sql("select word, count(*) as total from words group by word")
        wordCountsDataFrame.show()
    except:
        pass

words.foreachRDD(process)
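Because process obtains the SparkSession through the singleton getter on every batch, the session is transparently re-created when the driver restarts from a checkpoint, and each micro-batch can run ordinary SQL against the temporary view.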
- Metadata checkpointing - Saving of the information defining the streaming computation to fault-tolerant storage like HDFS. This is used to recover from failure of the node running the driver of the streaming application. Metadata includes:
  - Configuration - The configuration that was used to create the streaming application.
  - DStream operations - The set of DStream operations that define the streaming application.
  - Incomplete batches - Batches whose jobs are queued but have not completed yet.
- Data checkpointing - Saving of the generated RDDs to reliable storage.
- Usage of stateful transformations - If either updateStateByKey or reduceByKeyAndWindow (with inverse function) is used in the application, then the checkpoint directory must be provided to allow for periodic RDD checkpointing (a stateful sketch follows the recovery code below).
- Recovering from failures of the driver running the application - Metadata checkpoints are used to recover with progress information.
- When the program is being started for the first time, it will create a new StreamingContext, set up all the streams and then call start().
- When the program is being restarted after failure, it will re-create a StreamingContext from the checkpoint data in the checkpoint directory.
# Function to create and set up a new StreamingContext
def functionToCreateContext():
    sc = SparkContext(...)             # new context
    ssc = StreamingContext(...)        # note: no "new" keyword in Python
    lines = ssc.socketTextStream(...)  # create DStreams
    ...
    ssc.checkpoint(checkpointDirectory)  # set checkpoint directory
    return ssc

# Get StreamingContext from checkpoint data or create a new one
context = StreamingContext.getOrCreate(checkpointDirectory, functionToCreateContext)

# Do additional setup on context that needs to be done,
# irrespective of whether it is being started or restarted
context. ...

# Start the context
context.start()
context.awaitTermination()
A context can also be re-created purely from existing checkpoint data, without a setup function, via StreamingContext.getOrCreate(checkpointDirectory, None).
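As a minimal sketch of the stateful case mentioned earlier (the host, port, and checkpoint path are placeholders), a running word count with updateStateByKey fails at runtime unless a checkpoint directory has been set:

from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="StatefulNetworkWordCount")
ssc = StreamingContext(sc, 1)  # 1-second batch interval
ssc.checkpoint("hdfs://...")   # required for updateStateByKey (placeholder path)

def updateFunc(newValues, runningCount):
    # Merge this batch's counts into the running total for the key
    return sum(newValues) + (runningCount or 0)

runningCounts = ssc.socketTextStream("localhost", 9999) \
    .flatMap(lambda line: line.split(" ")) \
    .map(lambda word: (word, 1)) \
    .updateStateByKey(updateFunc)
runningCounts.pprint()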
To run a Spark Streaming application, the following are needed:
- Cluster with a cluster manager
- Package the application JAR - If you are using spark-submit to start the application, then you will not need to provide Spark and Spark Streaming in the JAR. However, if your application uses advanced sources (e.g. Kafka, Flume), then you will have to package the extra artifact they link to, along with their dependencies, in the JAR that is used to deploy the application.
- Configuring sufficient memory for the executors - Note that if you are doing 10 minute window operations, the system has to keep at least the last 10 minutes of data in memory. So the memory requirements for the application depend on the operations used in it.
- Configuring checkpointing
- Configuring automatic restart of the application driver
  - Spark Standalone - The Standalone cluster manager can be instructed to supervise the driver, and relaunch it if the driver fails either due to a non-zero exit code, or due to failure of the node running the driver.
  - YARN - YARN automatically restarts the application.
  - Mesos - Marathon has been used to achieve this with Mesos.
- Configuring write ahead logs - If enabled, all the data received from a receiver gets written into a write ahead log in the configured checkpoint directory.
- Setting the max receiving rate
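A minimal configuration sketch for the last two points (the rate values are illustrative, not recommendations); spark.streaming.receiver.writeAheadLog.enable, spark.streaming.receiver.maxRate, and spark.streaming.backpressure.enabled are the relevant properties:

from pyspark import SparkConf, SparkContext
from pyspark.streaming import StreamingContext

conf = SparkConf() \
    .setAppName("ReliableReceiverApp") \
    .set("spark.streaming.receiver.writeAheadLog.enable", "true") \
    .set("spark.streaming.receiver.maxRate", "10000") \
    .set("spark.streaming.backpressure.enabled", "true")  # adaptive rate limiting

sc = SparkContext(conf=conf)
ssc = StreamingContext(sc, 10)
ssc.checkpoint("hdfs://...")  # the write ahead log is stored here (placeholder path)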
To upgrade a running Spark Streaming application with new code, there are two possible mechanisms:
- Run the upgraded application in parallel with the existing one. Once the new one (receiving the same data as the old one) has been warmed up and is ready for prime time, the old one can be brought down. This requires that the data source can send data to two destinations (the old and the upgraded application).
- Graceful shutdown - stop only after the data already received has been fully processed; that is, ensure data that has been received is completely processed before shutdown. Then the upgraded application can be started, and it will start processing from the same point where the earlier application left off. For this to work, the data source must be able to buffer data while the upgraded application is not yet up.
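A sketch of a graceful stop in PySpark; stopGraceFully (note the API's spelling) waits for all received data to be processed before shutting down:

# Stop the StreamingContext only after all received data has been processed;
# the underlying SparkContext is stopped as well
ssc.stop(stopSparkContext=True, stopGraceFully=True)

# Alternatively, have the streaming context stop gracefully on JVM shutdown:
# conf.set("spark.streaming.stopGracefullyOnShutdown", "true")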

At a high level, performance tuning comes down to two things:
- Reducing the processing time of each batch of data by efficiently using cluster resources.
- Setting the right batch size such that the batches of data can be processed as fast as they are received (that is, data processing keeps up with the data ingestion).
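A common way to find the right batch size is to start with a conservative batch interval and a low data rate, then watch the batch processing times in the streaming web UI before scaling up (a sketch; the 10-second value is illustrative and sc is assumed to exist):

# Start with a conservative batch interval (say, 5-10 seconds)
ssc = StreamingContext(sc, 10)
# If the batch processing time (visible in the streaming web UI) stays
# comfortably below the interval as the data rate grows, the interval can
# be reduced; if it keeps increasing, the system is not keeping up.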