Spark persistence: cache, persist, checkpoint

1. cache persistence

cache is simply a shorthand for persist. It is lazily executed: nothing is stored until an action operator triggers a job. The RDD returned by cache should be assigned to a variable, and subsequent jobs should operate on that variable so they can reuse the cached data.

cache example:

package SparkStreaming;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Persist_Cache {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local")
                .setAppName("Persist");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> stringJavaRDD = sc.textFile("E:/2018_cnic/learn/wordcount.txt");
        // cache() is lazy: assign the returned RDD and reuse it in later jobs
        JavaRDD<String> cache = stringJavaRDD.cache();

        long startTime = System.currentTimeMillis();
        long count = cache.count();              // first action: reads the file and fills the cache
        long endTime = System.currentTimeMillis();
        System.out.println("no cache duration:" + (endTime - startTime));

        long startTime1 = System.currentTimeMillis();
        long count1 = cache.count();             // second action: served from the cached partitions
        long endTime1 = System.currentTimeMillis();
        System.out.println("cache duration:" + (endTime1 - startTime1));
    }
}

Output (note the second job finds the cached block, "Found block rdd_1_0 locally", instead of reading the file again):

// :: INFO DAGScheduler: Job  finished: count at Persist_Cache.java:, took 0.202060 s
no cache duration:248
// :: INFO SparkContext: Starting job: count at Persist_Cache.java:
// :: INFO DAGScheduler: Got job (count at Persist_Cache.java:) with output partitions
// :: INFO DAGScheduler: Final stage: ResultStage (count at Persist_Cache.java:)
// :: INFO DAGScheduler: Parents of final stage: List()
// :: INFO DAGScheduler: Missing parents: List()
// :: INFO DAGScheduler: Submitting ResultStage (E:/2018_cnic/learn/wordcount.txt MapPartitionsRDD[] at textFile at Persist_Cache.java:), which has no missing parents
// :: INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.1 KB, free 413.7 MB)
// :: INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1925.0 B, free 413.7 MB)
// :: INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on hadoop: (size: 1925.0 B, free: 413.9 MB)
// :: INFO SparkContext: Created broadcast from broadcast at DAGScheduler.scala:
// :: INFO DAGScheduler: Submitting missing tasks from ResultStage (E:/2018_cnic/learn/wordcount.txt MapPartitionsRDD[] at textFile at Persist_Cache.java:) (first tasks are for partitions Vector())
// :: INFO TaskSchedulerImpl: Adding task set 1.0 with tasks
// :: INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID , localhost, executor driver, partition , PROCESS_LOCAL, bytes)
// :: INFO Executor: Running task 0.0 in stage 1.0 (TID )
// :: INFO BlockManager: Found block rdd_1_0 locally
// :: INFO Executor: Finished task 0.0 in stage 1.0 (TID ). bytes result sent to driver
// :: INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID ) in ms on localhost (executor driver) (/)
// :: INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
// :: INFO DAGScheduler: ResultStage (count at Persist_Cache.java:) finished in 0.027 s
// :: INFO DAGScheduler: Job finished: count at Persist_Cache.java:, took 0.028863 s
cache duration:
// :: INFO SparkContext: Invoking stop() from shutdown hook

2. Spark persist

package SparkStreaming;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class Persist_Cache {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local")
                .setAppName("Persist");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> stringJavaRDD = sc.textFile("E:/2018_cnic/learn/wordcount.txt");
        // StorageLevel.NONE stores nothing, so the second count still re-reads the file
        JavaRDD<String> persist = stringJavaRDD.persist(StorageLevel.NONE());

        long startTime = System.currentTimeMillis();
        long count = persist.count();
        long endTime = System.currentTimeMillis();
        System.out.println("no cache duration:" + (endTime - startTime));

        long startTime1 = System.currentTimeMillis();
        long count1 = persist.count();
        long endTime1 = System.currentTimeMillis();
        System.out.println("cache duration:" + (endTime1 - startTime1));
    }
}

Output: the second count is faster only because of Spark's internal optimizations, not because of persistence. StorageLevel.NONE stores nothing, which is why the log below still shows "HadoopRDD: Input split" (the file is read again) rather than a cached block being found:

// :: INFO DAGScheduler: Job  finished: count at Persist_Cache.java:, took 0.228634 s
no cache duration:
// :: INFO SparkContext: Starting job: count at Persist_Cache.java:
// :: INFO DAGScheduler: Got job (count at Persist_Cache.java:) with output partitions
// :: INFO DAGScheduler: Final stage: ResultStage (count at Persist_Cache.java:)
// :: INFO DAGScheduler: Parents of final stage: List()
// :: INFO DAGScheduler: Missing parents: List()
// :: INFO DAGScheduler: Submitting ResultStage (E:/2018_cnic/learn/wordcount.txt MapPartitionsRDD[] at textFile at Persist_Cache.java:), which has no missing parents
// :: INFO MemoryStore: Block broadcast_2 stored as values in memory (estimated size 3.1 KB, free 413.7 MB)
// :: INFO MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 1919.0 B, free 413.7 MB)
// :: INFO BlockManagerInfo: Added broadcast_2_piece0 in memory on hadoop: (size: 1919.0 B, free: 413.9 MB)
// :: INFO SparkContext: Created broadcast from broadcast at DAGScheduler.scala:
// :: INFO DAGScheduler: Submitting missing tasks from ResultStage (E:/2018_cnic/learn/wordcount.txt MapPartitionsRDD[] at textFile at Persist_Cache.java:) (first tasks are for partitions Vector())
// :: INFO TaskSchedulerImpl: Adding task set 1.0 with tasks
// :: INFO TaskSetManager: Starting task 0.0 in stage 1.0 (TID , localhost, executor driver, partition , PROCESS_LOCAL, bytes)
// :: INFO Executor: Running task 0.0 in stage 1.0 (TID )
// :: INFO HadoopRDD: Input split: file:/E:/2018_cnic/learn/wordcount.txt:+
// :: INFO Executor: Finished task 0.0 in stage 1.0 (TID ). bytes result sent to driver
// :: INFO TaskSetManager: Finished task 0.0 in stage 1.0 (TID ) in ms on localhost (executor driver) (/)
// :: INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks have all completed, from pool
// :: INFO DAGScheduler: ResultStage (count at Persist_Cache.java:) finished in 0.023 s
// :: INFO DAGScheduler: Job finished: count at Persist_Cache.java:, took 0.025025 s
cache duration:
// :: INFO SparkContext: Invoking stop() from shutdown hook
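For contrast, here is a minimal sketch (class name hypothetical, same input path as above) where persist is given a storage level that actually stores data, StorageLevel.MEMORY_AND_DISK(); with this level the second count should be served from the persisted partitions instead of re-reading the file:

package SparkStreaming;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class Persist_MemoryAndDisk {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("Persist");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> lines = sc.textFile("E:/2018_cnic/learn/wordcount.txt");

        // Persist with a level that actually stores data: memory first, spill to disk
        JavaRDD<String> persisted = lines.persist(StorageLevel.MEMORY_AND_DISK());

        long t0 = System.currentTimeMillis();
        persisted.count();   // first action: reads the file and populates the cache
        System.out.println("first count: " + (System.currentTimeMillis() - t0) + " ms");

        long t1 = System.currentTimeMillis();
        persisted.count();   // second action: served from the persisted partitions
        System.out.println("second count: " + (System.currentTimeMillis() - t1) + " ms");

        sc.stop();
    }
}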

3. persist source analysis: the StorageLevel class

class StorageLevel private(
    private var _useDisk: Boolean,      // use disk
    private var _useMemory: Boolean,    // use memory
    private var _useOffHeap: Boolean,   // use off-heap memory
    private var _deserialized: Boolean, // keep the data deserialized (i.e. not serialized)
    private var _replication: Int = 1   // number of replicas, 1 by default
)

object StorageLevel {
  val NONE = new StorageLevel(false, false, false, false)
  val DISK_ONLY = new StorageLevel(true, false, false, false)
  val DISK_ONLY_2 = new StorageLevel(true, false, false, false, 2)
  val MEMORY_ONLY = new StorageLevel(false, true, false, true)         // kept deserialized in memory
  val MEMORY_ONLY_2 = new StorageLevel(false, true, false, true, 2)
  val MEMORY_ONLY_SER = new StorageLevel(false, true, false, false)
  val MEMORY_ONLY_SER_2 = new StorageLevel(false, true, false, false, 2)
  val MEMORY_AND_DISK = new StorageLevel(true, true, false, true)      // what does not fit in memory spills to disk
  val MEMORY_AND_DISK_2 = new StorageLevel(true, true, false, true, 2)
  val MEMORY_AND_DISK_SER = new StorageLevel(true, true, false, false)
  val MEMORY_AND_DISK_SER_2 = new StorageLevel(true, true, false, false, 2)
  val OFF_HEAP = new StorageLevel(true, true, true, false, 1)          // uses off-heap memory
}

The unit of persistence is the partition.

For example, if an RDD has 3 partitions and the storage level is MEMORY_AND_DISK, Spark stores as many whole partitions in memory as will fit (a partition is never half stored) and spills the remaining partitions to disk.

MEMORY_AND_DISK_2: on which nodes is the data stored? This cannot be determined in advance, because where each partition is stored, and where its replica is stored, are both decided at runtime.
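The sketch below (class name and data hypothetical) shows how one of these levels is chosen from the Java API and how to confirm which level an RDD actually carries: JavaRDD.getStorageLevel() returns the StorageLevel, and its description() method renders it as a readable string.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class StorageLevelDemo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("StorageLevelDemo");
        JavaSparkContext sc = new JavaSparkContext(conf);

        JavaRDD<Integer> rdd = sc.parallelize(Arrays.asList(1, 2, 3), 3);

        // Serialized in memory, spill to disk, 2 replicas per partition
        rdd.persist(StorageLevel.MEMORY_AND_DISK_SER_2());

        // Prints something like "Disk Memory Serialized 2x Replicated"
        System.out.println(rdd.getStorageLevel().description());

        sc.stop();
    }
}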

4. cache source analysis: cache() simply calls persist(), whose default storage level is StorageLevel.MEMORY_ONLY

/**
 * Persist this RDD with the default storage level (`MEMORY_ONLY`).
 */
def persist(): this.type = persist(StorageLevel.MEMORY_ONLY)

/**
 * Persist this RDD with the default storage level (`MEMORY_ONLY`).
 */
def cache(): this.type = persist()
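So rdd.cache() and rdd.persist(StorageLevel.MEMORY_ONLY()) are interchangeable from the Java API. Below is a minimal sketch (class name hypothetical); note that once a storage level has been assigned it cannot be changed, and unpersist() is how the cached blocks are released:

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class CacheEqualsPersist {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setMaster("local").setAppName("CacheEqualsPersist"));
        JavaRDD<String> lines = sc.textFile("E:/2018_cnic/learn/wordcount.txt");

        lines.cache();   // equivalent to lines.persist(StorageLevel.MEMORY_ONLY())

        // The level cannot be changed once assigned; calling persist again with a
        // different level would throw an UnsupportedOperationException:
        // lines.persist(StorageLevel.MEMORY_AND_DISK());

        lines.count();      // the action triggers the actual caching
        lines.unpersist();  // drop the cached blocks when they are no longer needed

        sc.stop();
    }
}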

5. checkpoint

checkpoint writes the RDD's data to HDFS. This is the safest option, because HDFS keeps replicas of the data.

checkpoint is also lazily executed. How is it used?

sc.setCheckpointDir("path")
rdd3.checkpoint()

Note: after the job on the RDD finishes (i.e. an action operator has been triggered):

1. Spark traces back from the final RDD (the last RDD in the job), looking for RDDs on which checkpoint was called, and marks them. It then launches a new job to compute each marked RDD and writes the result to the corresponding HDFS directory.

2. At the same time, the dependencies (lineage) of the checkpointed RDD are cut off and forcibly replaced with a CheckpointRDD.

How to optimize a checkpoint call:

Because a checkpointed RDD is otherwise computed twice, you can cache the RDD (rdd3 in the snippet above) before calling checkpoint on it. Then the extra job launched after the original job finishes only has to copy the data from memory to HDFS, skipping the recomputation.
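A runnable sketch of this pattern (checkpoint directory and class name hypothetical): set the checkpoint directory, cache the RDD, mark it for checkpointing, then trigger an action. The extra checkpoint job copies the cached partitions to the checkpoint directory instead of recomputing them, and toDebugString() afterwards shows that the lineage now starts from a checkpoint RDD.

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class CheckpointDemo {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setMaster("local").setAppName("CheckpointDemo"));

        // In production this would be an HDFS path, e.g. "hdfs://namenode:9000/ckpt"
        sc.setCheckpointDir("E:/2018_cnic/learn/checkpoint");

        JavaRDD<Integer> rdd3 = sc.parallelize(Arrays.asList(1, 2, 3, 4, 5))
                                  .map(x -> x * x);

        rdd3.cache();        // cache first so the checkpoint job reuses the computed data
        rdd3.checkpoint();   // lazy: only marks the RDD, nothing is written yet

        rdd3.count();        // the action runs the job, then a second job writes the checkpoint

        // The lineage now starts from a checkpointed RDD instead of the original map
        System.out.println(rdd3.toDebugString());

        sc.stop();
    }
}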
