1 Overview
Spark Streaming is an extension of the core Spark API that enables scalable, high-throughput, fault-tolerant stream processing of live data streams. Data can be ingested from many sources like Kafka, Flume, Twitter, ZeroMQ, Kinesis, or TCP sockets, and can be processed using complex algorithms expressed with high-level functions like map, reduce, join and window. Finally, processed data can be pushed out to filesystems, databases, and live dashboards.
Internally, it works as follows. Spark Streaming receives live input data streams and divides the data into batches, which are then processed by the Spark engine to generate the final stream of results in batches.
input data stream -> Spark Streaming -> batches of input data -> Spark engine -> batches of processed data
Spark Streaming provides a high-level abstraction called discretized stream or DStream, which represents a continuous stream of data. DStreams can be created either from input data streams from sources such as Kafka, Flume, and Kinesis, or by applying high-level operations on other DStreams. Internally, a DStream is represented as a sequence of RDDs.
 
2 A quick example
# Start the data server using Netcat in a separate terminal
$ nc -lk 9999
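The rest of the quick example is the standard network word count from the official guide, sketched below: it counts the words received from the Netcat server in each 1-second batch.

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    object NetworkWordCount {
      def main(args: Array[String]): Unit = {
        // Local mode needs at least two threads: one for the receiver, one for processing
        val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
        val ssc = new StreamingContext(conf, Seconds(1))

        // DStream connected to the data server started with `nc -lk 9999`
        val lines = ssc.socketTextStream("localhost", 9999)

        // Split each line into words and count them within each batch
        val words = lines.flatMap(_.split(" "))
        val pairs = words.map(word => (word, 1))
        val wordCounts = pairs.reduceByKey(_ + _)

        // Print the first ten counts of each batch to the console
        wordCounts.print()

        ssc.start()             // start the computation
        ssc.awaitTermination()  // wait for it to terminate
      }
    }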
   
3 Basic concepts
 
3.1 Linking
 
3.2 Initializing StreamingContext
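A StreamingContext object is the main entry point for all streaming functionality. A minimal sketch of creating one from a SparkConf with a 1-second batch interval (the app name and master URL are placeholders; an existing SparkContext can be passed instead of a SparkConf):

    import org.apache.spark._
    import org.apache.spark.streaming._

    val conf = new SparkConf().setMaster("local[2]").setAppName("NetworkWordCount")
    val ssc = new StreamingContext(conf, Seconds(1))
    // or, from an existing SparkContext sc: val ssc = new StreamingContext(sc, Seconds(1))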
After a context is defined, you have to do the following:
Step 1: Define the input sources by creating input DStreams.
Step 2: Define the streaming computations by applying transformation and output operations to DStreams.
Step 3: Start receiving data and processing it using streamingContext.start().
Step 4: Wait for the processing to be stopped (manually or due to any error) using streamingContext.awaitTermination().
Step 5: The processing can be manually stopped using streamingContext.stop().
 
Points to remember:
  • Once a context has been started, no new streaming computations can be set up or added to it.
  • Once a context has been stopped, it cannot be restarted.
  • Only one StreamingContext can be active in a JVM at the same time.
  • stop() on StreamingContext also stops the SparkContext. To stop only the StreamingContext, set the optional parameter of stop() called stopSparkContext to false.
  • A SparkContext can be re-used to create multiple StreamingContexts, as long as the previous StreamingContext is stopped (without stopping the SparkContext) before the next StreamingContext is created.
 
3.3 Discretized Streams (DStreams)
Discretized Stream or DStream is the basic abstraction provided by Spark Streaming. It represents a continuous stream of data, either the input data stream received from a source, or the processed data stream generated by transforming the input stream. Internally, a DStream is represented by a continuous series of RDDs.
 
3.4 Input DStreams and Receivers
Every input DStream (except file stream, discussed later in this section) is associated with a Receiver (Scala doc, Java doc) object which receives the data from a source and stores it in Spark’s memory for processing.
Points to remember
  • When running a Spark Streaming program locally, do not use “local” or “local[1]” as the master URL. Either of these means that only one thread will be used for running tasks locally. If you are using an input DStream based on a receiver (e.g. sockets, Kafka, Flume, etc.), then the single thread will be used to run the receiver, leaving no thread for processing the received data. Hence, when running locally, always use “local[n]” as the master URL, where n > number of receivers to run (see Spark Properties for information on how to set the master).

  • Extending the logic to running on a cluster, the number of cores allocated to the Spark Streaming application must be more than the number of receivers. Otherwise the system will receive data, but not be able to process it.

Basic Sources:
ssc.fileStream()
ssc.queueStream()
ssc.socketTextStream()
ssc.actorStream()
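A minimal sketch of creating input DStreams from two of these basic sources, assuming the ssc from section 3.2 (the HDFS path is a placeholder):

    // Text data from a TCP socket, one record per line
    val socketLines = ssc.socketTextStream("localhost", 9999)

    // Monitor an HDFS directory and stream any new text files written into it
    val fileLines = ssc.textFileStream("hdfs://namenode:8020/user/data/in")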
Advanced Sources:
Kafka
Flume
Kinesis
Twitter
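These sources require linking extra artifacts. As a sketch of one of them, the receiver-based Kafka API from the spark-streaming-kafka artifact in Spark 1.x looked roughly like this (the ZooKeeper address, consumer group, and topic name are placeholders):

    import org.apache.spark.streaming.kafka.KafkaUtils

    // Receive (key, message) pairs from the "wordcount" topic using one receiver thread
    val kafkaStream = KafkaUtils.createStream(ssc, "zkhost:2181", "my-consumer-group", Map("wordcount" -> 1))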
Custom Sources:
Input DStreams can also be created out of custom data sources. All you have to do is implement a user-defined receiver (see next section to understand what that is) that can receive data from the custom sources and push it into Spark. See the Custom Receiver Guide for details.
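A sketch of such a user-defined receiver, closely following the socket example in the Custom Receiver Guide:

    import java.io.{BufferedReader, InputStreamReader}
    import java.net.Socket
    import java.nio.charset.StandardCharsets

    import org.apache.spark.storage.StorageLevel
    import org.apache.spark.streaming.receiver.Receiver

    // A receiver that reads lines of text from a socket and pushes them into Spark
    class CustomReceiver(host: String, port: Int)
      extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {

      def onStart(): Unit = {
        // Start a background thread so onStart() returns immediately
        new Thread("Socket Receiver") {
          override def run(): Unit = { receive() }
        }.start()
      }

      def onStop(): Unit = {
        // Nothing to do: the receive thread stops itself when isStopped returns true
      }

      private def receive(): Unit = {
        try {
          val socket = new Socket(host, port)
          val reader = new BufferedReader(
            new InputStreamReader(socket.getInputStream, StandardCharsets.UTF_8))
          var line = reader.readLine()
          while (!isStopped && line != null) {
            store(line)            // hand the record to Spark Streaming
            line = reader.readLine()
          }
          reader.close()
          socket.close()
          restart("Trying to connect again")       // restart when the stream ends
        } catch {
          case t: Throwable => restart("Error receiving data", t)
        }
      }
    }

    // Usage: val customLines = ssc.receiverStream(new CustomReceiver("localhost", 9999))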
 
3.5 Transformations on DStreams
A few transformations are worth discussing in more detail (short sketches follow after the join operation below):
UpdateStateByKey Operation
Transform Operation
Window Operation
     Any window operation needs to specify two parameters:
         window length - the duration of the window.
         sliding interval - the interval at which the window operation is performed.
     These two parameters must be multiples of the batch interval of the source DStream.
     To extend the earlier example, the following generates word counts over the last 30 seconds of data, every 10 seconds.
         // Reduce the last 30 seconds of data, every 10 seconds
         val windowedWordCounts = pairs.reduceByKeyAndWindow((a: Int, b: Int) => a + b, Seconds(30), Seconds(10))
Join Operation: leftOuterJoin, rightOuterJoin, fullOuterJoin
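Minimal sketches of the first two operations and a stream-stream join, assuming the pairs and wordCounts DStreams from the quick example; spamInfoRDD, stream1, and stream2 are placeholder names:

    // updateStateByKey: keep a running count per word across batches (requires checkpointing, see 3.10)
    def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] = {
      Some(runningCount.getOrElse(0) + newValues.sum)
    }
    val runningCounts = pairs.updateStateByKey[Int](updateFunction _)

    // transform: apply an arbitrary RDD-to-RDD function to each batch, e.g. join with a static RDD
    val cleanedDStream = wordCounts.transform { rdd =>
      rdd.join(spamInfoRDD).filter { case (word, (count, spamScore)) => spamScore < 0.5 }
    }

    // join: pair up two DStreams of (key, value) pairs, batch by batch
    val joinedStream = stream1.join(stream2)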
 
3.6 Output Operations on DStreams
Output operations allow a DStream’s data to be pushed out to external systems like a database or a file system. Since the output operations actually allow the transformed data to be consumed by external systems, they trigger the actual execution of all the DStream transformations (similar to actions for RDDs).
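A small sketch of two common output operations, assuming the wordCounts DStream from the quick example (the output prefix is a placeholder, and println stands in for a real connection to an external system):

    // Save each batch as a set of text files named wordCounts-<batch time>
    wordCounts.saveAsTextFiles("hdfs://namenode:8020/user/out/wordCounts")

    // foreachRDD exposes each batch's RDD; create expensive resources per partition, not per record
    wordCounts.foreachRDD { (rdd, time) =>
      rdd.foreachPartition { records =>
        records.foreach { case (word, count) => println(s"$time $word $count") }
      }
    }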
 
3.7 DataFrame and SQL Operations
 
3.8 MLlib Operations
 
3.9 Caching / Persistence
 
3.10 Checkpointing
The default checkpoint interval is a multiple of the batch interval that is at least 10 seconds. It can be set using dstream.checkpoint(checkpointInterval). Typically, a checkpoint interval of 5 - 10 sliding intervals of a DStream is a good setting to try.
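A sketch of enabling checkpointing for the stateful stream from section 3.5 (the directory is a placeholder; use a fault-tolerant filesystem such as HDFS in practice):

    // Directory where metadata and RDD checkpoints are written
    ssc.checkpoint("hdfs://namenode:8020/spark/checkpoints")

    // Checkpoint the stateful DStream's RDDs every 10 seconds (10 batches of 1 second)
    runningCounts.checkpoint(Seconds(10))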
3.11 Deploying Applications
 
3.12 Monitoring Applications
 
4 Performance Tuning

4.1 Reducing the Batch Processing Times
 
4.2 Setting the Right Batch Interval
 
4.3 Memory Tuning

 
