StreamingContext plays much the same role as SparkContext: it is the entry point for Spark Streaming, providing configuration and the methods for creating DStreams from the various input sources.
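For orientation, here is a minimal usage sketch (the master URL, host, port and batch interval are made-up values, not taken from the source being discussed):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object StreamingSketch {
  def main(args: Array[String]): Unit = {
    // Build the context from a SparkConf with a 5-second batch interval.
    val conf = new SparkConf().setMaster("local[2]").setAppName("StreamingSketch")
    val ssc = new StreamingContext(conf, Seconds(5))

    // Create a DStream from a TCP source and run a word count on each batch.
    val lines = ssc.socketTextStream("localhost", 9999)
    lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _).print()

    // Start the computation and block until stop() is called or an error occurs.
    ssc.start()
    ssc.awaitTermination()
  }
}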

At a high level, Spark Streaming is made up of the components walked through below, starting with StreamingContext itself:


/**
 * Main entry point for Spark Streaming functionality. It provides methods used to create
 * [[org.apache.spark.streaming.dstream.DStream]]s from various input sources. It can be either
 * created by providing a Spark master URL and an appName, or from a org.apache.spark.SparkConf
 * configuration (see core Spark documentation), or from an existing org.apache.spark.SparkContext.
 * The associated SparkContext can be accessed using `context.sparkContext`. After
 * creating and transforming DStreams, the streaming computation can be started and stopped
 * using `context.start()` and `context.stop()`, respectively.
 * `context.awaitTermination()` allows the current thread to wait for the termination
 * of the context by `stop()` or by an exception.
 */
class StreamingContext private[streaming] (
    sc_ : SparkContext,
    cp_ : Checkpoint,
    batchDur_ : Duration
  ) extends Logging {
An important field: DStreamGraph is a bit like a stripped-down DAGScheduler. It is responsible for generating a set of jobs (a JobSet) for every batch interval, and for serializing/checkpointing the DStreams according to their dependencies.
private[streaming] val graph: DStreamGraph = {
  if (isCheckpointPresent) {
    cp_.graph.setContext(this)
    cp_.graph.restoreCheckpointData()
    cp_.graph
  } else {
    assert(batchDur_ != null, "Batch duration for streaming context cannot be null")
    val newGraph = new DStreamGraph()
    newGraph.setBatchDuration(batchDur_)
    newGraph
  }
}
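As a rough illustration of that role (a simplified sketch, not Spark's actual DStreamGraph), the graph boils down to a list of registered output streams that is asked for one job per output stream at every batch time:

import scala.collection.mutable.ArrayBuffer

// Simplified stand-ins for Time, Job and an output stream, only to show the shape of the API.
case class Time(millis: Long)
case class Job(time: Time, run: () => Unit)

trait OutputStream {
  def generateJob(time: Time): Option[Job]
}

class SimpleDStreamGraph(batchDurationMs: Long) {
  private val outputStreams = new ArrayBuffer[OutputStream]()

  def addOutputStream(stream: OutputStream): Unit = synchronized {
    outputStreams += stream
  }

  // For one batch time, collect a job from every registered output stream;
  // the scheduler then wraps the result into a JobSet and submits it.
  def generateJobs(time: Time): Seq[Job] = synchronized {
    outputStreams.flatMap(_.generateJob(time)).toSeq
  }
}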

private val nextReceiverInputStreamId = new AtomicInteger(0)

The JobScheduler, which drives the generation and execution of the streaming jobs:
private[streaming] val scheduler = new JobScheduler(this)

private[streaming] val waiter = new ContextWaiter

private[streaming] val progressListener = new StreamingJobProgressListener(this)
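The waiter is what `awaitTermination()` blocks on until `stop()` is called or the scheduler reports an error; conceptually it is a small guarded wait/notify helper, along these lines (a sketch, not the real ContextWaiter):

// Callers block in waitForStopOrError() until another thread calls notifyStop()
// or notifyError(); an error is rethrown in the waiting thread.
class SimpleContextWaiter {
  private var stopped = false
  private var error: Throwable = null

  def notifyStop(): Unit = synchronized { stopped = true; notifyAll() }

  def notifyError(e: Throwable): Unit = synchronized { error = e; notifyAll() }

  def waitForStopOrError(): Unit = synchronized {
    while (!stopped && error == null) wait()
    if (error != null) throw error
  }
}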

Registering a StreamingListener. Note that these listeners do not process the input data; they receive system events about the streaming computation (receiver status, batch submission and completion, and so on):
/** Add a [[org.apache.spark.streaming.scheduler.StreamingListener]] object for
* receiving system events related to streaming.
*/
def addStreamingListener(streamingListener: StreamingListener) {
  scheduler.listenerBus.addListener(streamingListener)
}
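For instance, a listener that reports each batch's scheduling and processing delay could look roughly like this (a sketch using the standard StreamingListener callbacks):

import org.apache.spark.streaming.scheduler.{StreamingListener, StreamingListenerBatchCompleted}

// Prints timing information for every completed batch.
class BatchDelayListener extends StreamingListener {
  override def onBatchCompleted(batchCompleted: StreamingListenerBatchCompleted): Unit = {
    val info = batchCompleted.batchInfo
    println(s"Batch ${info.batchTime}: scheduling delay = ${info.schedulingDelay.getOrElse(-1L)} ms, " +
      s"processing time = ${info.processingDelay.getOrElse(-1L)} ms")
  }
}

// Registered via: ssc.addStreamingListener(new BatchDelayListener())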
Starting the scheduler:
/**
* Start the execution of the streams.
*
* @throws SparkException if the context has already been started or stopped.
*/
def start(): Unit = synchronized {
  if (state == Started) {
    throw new SparkException("StreamingContext has already been started")
  }
  if (state == Stopped) {
    throw new SparkException("StreamingContext has already been stopped")
  }
  validate()
  sparkContext.setCallSite(DStream.getCreationSite())
  scheduler.start()
  state = Started
}
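The counterpart is `stop()`, which can optionally keep the underlying SparkContext alive and let in-flight batches finish first; a usage sketch (`ssc` being a running StreamingContext):

// Stop the streaming computation gracefully (wait for received data to be processed),
// but keep the SparkContext so it can still be used afterwards.
ssc.stop(stopSparkContext = false, stopGracefully = true)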

The various input DStream implementations; we will look at these in more detail later.
class PluggableInputDStream[T: ClassTag](
    @transient ssc_ : StreamingContext,
    receiver: Receiver[T]) extends ReceiverInputDStream[T](ssc_) {

  def getReceiver(): Receiver[T] = {
    receiver
  }
}

class SocketInputDStream[T: ClassTag](
    @transient ssc_ : StreamingContext,
    host: String,
    port: Int,
    bytesToObjects: InputStream => Iterator[T],
    storageLevel: StorageLevel
  ) extends ReceiverInputDStream[T](ssc_) {

  def getReceiver(): Receiver[T] = {
    new SocketReceiver(host, port, bytesToObjects, storageLevel)
  }
}
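On the StreamingContext this is reached through `socketTextStream` (line-oriented text) or `socketStream`, whose `bytesToObjects` argument lets callers supply their own deserialization. A sketch of a line-based converter (names and endpoint are illustrative):

import java.io.{BufferedReader, InputStream, InputStreamReader}
import java.nio.charset.StandardCharsets

import org.apache.spark.storage.StorageLevel

// Turn the raw socket InputStream into an iterator of UTF-8 lines.
def linesFromStream(in: InputStream): Iterator[String] = {
  val reader = new BufferedReader(new InputStreamReader(in, StandardCharsets.UTF_8))
  Iterator.continually(reader.readLine()).takeWhile(_ != null)
}

// val lines = ssc.socketStream("localhost", 9999, linesFromStream, StorageLevel.MEMORY_AND_DISK_SER_2)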

/**
* An input stream that reads blocks of serialized objects from a given network address.
* The blocks will be inserted directly into the block store. This is the fastest way to get
* data into Spark Streaming, though it requires the sender to batch data and serialize it
* in the format that the system is configured with.
*/
private[streaming]
class RawInputDStream[T: ClassTag](
    @transient ssc_ : StreamingContext,
    host: String,
    port: Int,
    storageLevel: StorageLevel
  ) extends ReceiverInputDStream[T](ssc_) with Logging {

  def getReceiver(): Receiver[T] = {
    new RawNetworkReceiver(host, port, storageLevel).asInstanceOf[Receiver[T]]
  }
}
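On the StreamingContext this is exposed as `rawSocketStream`; a usage sketch (the endpoint and element type are placeholders, and the sender must already produce serialized blocks in the format Spark is configured with):

import org.apache.spark.storage.StorageLevel

// `ssc` is an existing StreamingContext.
val blocks = ssc.rawSocketStream[String]("localhost", 9999, StorageLevel.MEMORY_AND_DISK_SER_2)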


/**
 * Create an input stream that monitors a Hadoop-compatible filesystem
 * for new files and reads them using the given key-value types and input format.
 * Files must be written to the monitored directory by "moving" them from another
 * location within the same file system. File names starting with . are ignored.
 * @param directory HDFS directory to monitor for new files
 * @tparam K Key type for reading HDFS file
 * @tparam V Value type for reading HDFS file
 * @tparam F Input format for reading HDFS file
 */
def fileStream[
    K: ClassTag,
    V: ClassTag,
    F <: NewInputFormat[K, V]: ClassTag
  ] (directory: String): InputDStream[(K, V)] = {
  new FileInputDStream[K, V, F](this, directory)
}
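A typical call site (the path is a placeholder; the key/value/format types are the standard Hadoop text ones):

import org.apache.hadoop.io.{LongWritable, Text}
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat

// Watch a directory for newly moved-in files and read each one as (byte offset, line) records.
val files = ssc.fileStream[LongWritable, Text, TextInputFormat]("hdfs:///data/incoming")
val lines = files.map { case (_, text) => text.toString }

// For plain text files the shorthand ssc.textFileStream("hdfs:///data/incoming") is equivalent.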


private[streaming]
class TransformedDStream[U: ClassTag] (
    parents: Seq[DStream[_]],
    transformFunc: (Seq[RDD[_]], Time) => RDD[U]
  ) extends DStream[U](parents.head.ssc) {

  require(parents.length > 0, "List of DStreams to transform is empty")
  require(parents.map(_.ssc).distinct.size == 1, "Some of the DStreams have different contexts")
  require(parents.map(_.slideDuration).distinct.size == 1,
    "Some of the DStreams have different slide durations")

  override def dependencies = parents.toList

  override def slideDuration: Duration = parents.head.slideDuration

  override def compute(validTime: Time): Option[RDD[U]] = {
    val parentRDDs = parents.map(_.getOrCompute(validTime).orNull).toSeq
    Some(transformFunc(parentRDDs, validTime))
  }
}
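TransformedDStream is what `DStream.transform` and `transformWith` build under the hood; in user code it usually appears like this (a sketch that assumes `events: DStream[(String, String)]` and a static `blacklist: RDD[(String, Boolean)]` already exist):

// For every batch, join the incoming (userId, event) pairs against the blacklist RDD
// and keep only users that are not flagged.
val cleaned = events.transform { rdd =>
  rdd.leftOuterJoin(blacklist)
     .filter { case (_, (_, flagged)) => flagged.isEmpty }
     .map { case (userId, (event, _)) => (userId, event) }
}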







