Spark vs. Hadoop: Resilient Distributed Datasets
Hadoop makes iterative computation expensive: each iteration launches a complete MapReduce job.
Spark's primary goal is to avoid excessive network and disk I/O overhead during computation.
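To make the iteration cost concrete, here is a minimal Scala sketch (an illustration, not from the original post; it assumes an existing SparkContext named sc and uses a placeholder computation). The data is cached in memory once, so every pass reuses it instead of launching the equivalent of a fresh MapReduce job that re-reads HDFS:

// Sketch only: `sc` is an assumed SparkContext; the math is a placeholder.
val data = sc.parallelize(1 to 1000000).cache() // kept in memory across iterations

var estimate = 0.0
for (_ <- 1 to 10) {
  // reuses the in-memory partitions; a MapReduce loop would launch a full
  // job and re-read its input from disk on every pass
  estimate = data.map(_ * 0.5).sum()
}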
Resilient Distributed Datasets
http://www.cs.cmu.edu/~pavlo/courses/fall2013/static/slides/spark.pdf
Presented by Henggang Cui (15-799b talk)
Why not MapReduce?
• Provides fault tolerance, but:
• Hard to reuse intermediate results across multiple computations
  – sharing data across jobs requires stable storage
• Hard to support interactive ad-hoc queries
Why not Other In-Memory Storage?
• Example: Piccolo
  – applies fine-grained updates to shared state
• Efficient, but:
• Hard to provide fault tolerance
  – needs replication or checkpointing
Resilient Distributed Datasets (RDDs)
• Restricted form of distributed shared memory
  – read-only, partitioned collection of records
  – can only be built through coarse-grained deterministic transformations
    • from data in stable storage
    • from transformations on other RDDs
• Express computation by defining RDDs
Fault Recovery
• Efficient fault recovery using lineage
  – log the one coarse-grained operation applied to many elements (the lineage)
  – recompute lost partitions on failure
Example
lines = spark.textFile("hdfs://...")            // base RDD from stable storage
errors = lines.filter(_.startsWith("ERROR"))    // transformation (lazy)
hdfs_errors = errors.filter(_.contains("HDFS")) // derived RDD, tracked via lineage
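Nothing has executed at this point, because transformations are lazy. As a hedged continuation (cache() and count() are the real Spark API, but this exact continuation is an assumption, not on the slide), persisting the filtered RDD and invoking an action materializes it:

errors.cache()               // ask Spark to keep the filtered RDD in memory
val n = hdfs_errors.count()  // action: triggers evaluation of the whole lineage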
Advantages of the RDD Model
• Efficient fault recovery
  – fine-grained and low-overhead, using lineage
• Immutable nature allows backup tasks to mitigate stragglers
• Graceful degradation when RAM is not enough
Spark
• Implementation of the RDD abstraction
  – Scala interface
• Two components
  – Driver
  – Workers
Spark Runtime
• Driver
  – defines and invokes actions on RDDs
  – tracks the RDDs' lineage
• Workers
  – store RDD partitions
  – perform RDD transformations
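A minimal sketch of a complete driver program (SparkConf and SparkContext are the real entry points; the program itself is illustrative, not from the slides):

import org.apache.spark.{SparkConf, SparkContext}

// The driver builds the SparkContext, defines RDDs, and invokes actions;
// the workers (executors) store the partitions and run the transformations.
object DriverSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("rdd-demo").setMaster("local[*]"))
    val words  = sc.parallelize(Seq("spark", "rdd", "lineage", "spark"))
    val counts = words.map(w => (w, 1)).reduceByKey(_ + _) // defined on the driver
    counts.collect().foreach(println)                      // action: executed by workers
    sc.stop()
  }
}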
Supported RDD Operations
• Transformations
  – map(f: T -> U)
  – filter(f: T -> Bool)
  – join()
  – ... (and lots of others)
• Actions
  – count()
  – save()
  – ... (and lots of others)
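A small sketch of the lazy/eager split between these two operation families (assumes a SparkContext sc; the data is made up):

// Transformations are lazy: they only describe new RDDs.
val nums  = sc.parallelize(1 to 100)
val evens = nums.filter(_ % 2 == 0)    // filter(f: T -> Bool)
val pairs = evens.map(n => (n, n * n)) // map(f: T -> U)

// Actions force evaluation and return results to the driver (or storage).
val howMany = pairs.count()        // count()
pairs.saveAsTextFile("/tmp/pairs") // one concrete form of save()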
Representing RDDs
• A graph-based representation for RDDs
• Pieces of information for each RDD (see the interface sketch below)
  – a set of partitions
  – a set of dependencies on parent RDDs
  – a function for computing it from its parents
  – metadata about its partitioning scheme and data placement
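A hedged Scala sketch of that common interface, modeled on the paper's description (the trait and member names are illustrative, not Spark's actual internal classes):

// Illustrative only: the per-RDD information gathered into one interface.
trait Partition { def index: Int }
trait Dependency { def parentId: Int }

trait RDDRepr[T] {
  def partitions: Seq[Partition]                    // a set of partitions
  def dependencies: Seq[Dependency]                 // links to parent RDDs
  def compute(p: Partition): Iterator[T]            // derive a partition from parents
  def partitioner: Option[String]                   // partitioning-scheme metadata
  def preferredLocations(p: Partition): Seq[String] // data-placement hints
}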
RDD Dependencies
• Narrow dependencies
  – each partition of the parent RDD is used by at most one partition of the child RDD
• Wide dependencies
  – multiple child partitions may depend on one parent partition
[Figure: examples of narrow vs. wide dependencies]
RDD Dependencies
• Narrow dependencies
  – allow for pipelined execution on one cluster node
  – easy fault recovery
• Wide dependencies
  – require data from all parent partitions to be available and to be shuffled across the nodes
  – a single failed node might cause a complete re-execution
Job Scheduling
• To execute an action on an RDD
  – the scheduler derives stages from the RDD's lineage graph
  – each stage contains as many pipelined transformations with narrow dependencies as possible
[Figure: example of job stages in a lineage graph]
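To see these stage cuts on a concrete lineage, one can print an RDD's debug string (toDebugString is the real Spark API; the pipeline is made up and assumes a SparkContext sc):

val words  = sc.textFile("hdfs://...")               // narrow ops below pipeline
val pairs  = words.flatMap(_.split(" ")).map((_, 1)) // into one map-side stage
val counts = pairs.reduceByKey(_ + _)                // wide dependency: shuffle, new stage

// Indentation in the output marks the shuffle (stage) boundaries.
println(counts.toDebugString)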
Memory Management
• Three options for persistent RDDs
  – in-memory storage as deserialized Java objects
  – in-memory storage as serialized data
  – on-disk storage
• LRU eviction policy at the level of RDDs
  – when there is not enough memory, evict a partition from the least recently accessed RDD
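In Spark's API the three options correspond to storage levels (the StorageLevel constants are the real API; the RDD is illustrative):

import org.apache.spark.storage.StorageLevel

val rdd = sc.parallelize(1 to 1000)
rdd.persist(StorageLevel.MEMORY_ONLY) // deserialized Java objects in RAM
// or StorageLevel.MEMORY_ONLY_SER    // serialized bytes in RAM: compact, slower to read
// or StorageLevel.DISK_ONLY          // on-disk storage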
Checkpointing
• Checkpoint RDDs to keep lineage chains from growing so long that fault recovery becomes slow
• Simpler to checkpoint than shared memory
  – thanks to the read-only nature of RDDs
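A minimal sketch of the checkpoint API (setCheckpointDir and checkpoint are the real calls; the directory, data, and update rule are assumptions):

sc.setCheckpointDir("hdfs://.../checkpoints") // durable storage for checkpoints

var ranks = sc.parallelize(Seq(("a", 1.0), ("b", 1.0)))
for (i <- 1 to 30) {
  ranks = ranks.mapValues(_ * 0.85 + 0.15) // lineage grows with every iteration
  if (i % 10 == 0) {
    ranks.checkpoint() // truncate the lineage chain
    ranks.count()      // checkpointing happens when the next action runs
  }
}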
Discussions
Checkpointing or Versioning?
• Frequent checkpointing, or keep all versions of ranks?