Spark persistence: cache, persist, checkpoint

1. Persisting with cache

cache is really just a simplified form of persist. It is lazily executed: nothing is persisted until an action operator triggers a job. Assign the return value of cache to a variable, and run subsequent jobs against that variable.

The cache operation:

package SparkStreaming;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Persist_Cache {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local")
                .setAppName("Persist");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> stringJavaRDD = sc.textFile("E:/2018_cnic/learn/wordcount.txt");
        // cache() is lazy: it only marks the RDD for caching.
        JavaRDD<String> cache = stringJavaRDD.cache();

        long startTime = System.currentTimeMillis();
        long count = cache.count();   // first action: reads the file and fills the cache
        long endTime = System.currentTimeMillis();
        System.out.println("no cache duration:" + (endTime - startTime));

        long startTime1 = System.currentTimeMillis();
        long count1 = cache.count();  // second action: served from the cache
        long endTime1 = System.currentTimeMillis();
        System.out.println("cache duration:" + (endTime1 - startTime1));
    }
}

Output (abridged; timestamps and some numeric fields were lost from the captured log). Note the line "Found block rdd_1_0 locally": the second count is answered from the cache instead of re-reading the file.

// INFO DAGScheduler: Job finished: count at Persist_Cache.java, took 0.202060 s
no cache duration:248
// INFO SparkContext: Starting job: count at Persist_Cache.java
// INFO DAGScheduler: Got job (count at Persist_Cache.java)
// INFO BlockManager: Found block rdd_1_0 locally
// INFO DAGScheduler: ResultStage (count at Persist_Cache.java) finished in 0.027 s
// INFO DAGScheduler: Job finished: count at Persist_Cache.java, took 0.028863 s
cache duration:
// INFO SparkContext: Invoking stop() from shutdown hook

2. Persisting with persist

package SparkStreaming;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class Persist_Cache {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local")
                .setAppName("Persist");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> stringJavaRDD = sc.textFile("E:/2018_cnic/learn/wordcount.txt");
        // StorageLevel.NONE: no disk, no memory, no off-heap -- nothing is persisted.
        JavaRDD<String> persist = stringJavaRDD.persist(StorageLevel.NONE());

        long startTime = System.currentTimeMillis();
        long count = persist.count();
        long endTime = System.currentTimeMillis();
        System.out.println("no cache duration:" + (endTime - startTime));

        long startTime1 = System.currentTimeMillis();
        long count1 = persist.count();
        long endTime1 = System.currentTimeMillis();
        System.out.println("cache duration:" + (endTime1 - startTime1));
    }
}

Output (abridged like above): the second count is still faster, but that is due to internal optimizations, not persistence; StorageLevel.NONE persists nothing. Note the "Input split" line below: the second job re-reads the file instead of finding a cached block.

// INFO DAGScheduler: Job finished: count at Persist_Cache.java, took 0.228634 s
no cache duration:
// INFO SparkContext: Starting job: count at Persist_Cache.java
// INFO DAGScheduler: Got job (count at Persist_Cache.java)
// INFO HadoopRDD: Input split: file:/E:/2018_cnic/learn/wordcount.txt
// INFO DAGScheduler: ResultStage (count at Persist_Cache.java) finished in 0.023 s
// INFO DAGScheduler: Job finished: count at Persist_Cache.java, took 0.025025 s
cache duration:
// INFO SparkContext: Invoking stop() from shutdown hook

3. persist source analysis

class StorageLevel private(
    private var _useDisk: Boolean,      // store on disk?
    private var _useMemory: Boolean,    // store in memory?
    private var _useOffHeap: Boolean,   // use off-heap memory?
    private var _deserialized: Boolean, // keep as deserialized objects (true) or serialized bytes (false)?
    private var _replication: Int = 1   // number of replicas per partition
)

val NONE = new StorageLevel(false, false, false, false)
val DISK_ONLY = new StorageLevel(true, false, false, false)
val DISK_ONLY_2 = new StorageLevel(true, false, false, false, 2)
val MEMORY_ONLY = new StorageLevel(false, true, false, true)    // deserialized, memory only
val MEMORY_ONLY_2 = new StorageLevel(false, true, false, true, 2)
val MEMORY_ONLY_SER = new StorageLevel(false, true, false, false)
val MEMORY_ONLY_SER_2 = new StorageLevel(false, true, false, false, 2)
val MEMORY_AND_DISK = new StorageLevel(true, true, false, true) // what doesn't fit in memory spills to disk
val MEMORY_AND_DISK_2 = new StorageLevel(true, true, false, true, 2)
val MEMORY_AND_DISK_SER = new StorageLevel(true, true, false, false)
val MEMORY_AND_DISK_SER_2 = new StorageLevel(true, true, false, false, 2)
val OFF_HEAP = new StorageLevel(true, true, true, false, 1)     // uses off-heap memory

The unit of persistence is the partition.

For example, if an RDD has 3 partitions and the storage level is MEMORY_AND_DISK, Spark stores as many whole partitions in memory as fit (a partition is never split in half) and spills the remaining partitions to disk.

MEMORY_AND_DISK_2: on how many nodes is the data stored? That cannot be determined in advance, because which node each partition is stored on is not fixed, and neither is where its replica is stored.
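As a hedged sketch of picking a level explicitly from the Java API (reusing the sample file from the examples above; Persist_Level is a made-up class name):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.storage.StorageLevel;

public class Persist_Level {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("Persist_Level");
        JavaSparkContext sc = new JavaSparkContext(conf);
        JavaRDD<String> lines = sc.textFile("E:/2018_cnic/learn/wordcount.txt");
        // Serialized bytes in memory, spilling whole partitions to disk when memory is full.
        // The _2 variants (e.g. MEMORY_AND_DISK_SER_2()) would keep 2 replicas per partition;
        // in local mode there is only one executor, so replication would not actually happen.
        lines.persist(StorageLevel.MEMORY_AND_DISK_SER());
        System.out.println(lines.count()); // first action materializes the persisted blocks
        System.out.println(lines.count()); // served from the persisted blocks
        lines.unpersist();                 // release the blocks when no longer needed
        sc.stop();
    }
}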

4. cache source analysis: cache calls persist with the default storage level, StorageLevel.MEMORY_ONLY

/**
 * Persist this RDD with the default storage level (`MEMORY_ONLY`).
 */
def persist(): this.type = persist(StorageLevel.MEMORY_ONLY)

/**
 * Persist this RDD with the default storage level (`MEMORY_ONLY`).
 */
def cache(): this.type = persist()
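From the Java API, the equivalence reads as follows (lines stands for any existing JavaRDD<String>; both calls only mark the RDD and persist nothing until an action runs):

import org.apache.spark.storage.StorageLevel;

lines.cache();                               // shorthand ...
lines.persist(StorageLevel.MEMORY_ONLY());   // ... for the explicit call
                                             // (re-persisting with the same level is a no-op)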

5. The checkpoint operation

checkpoint writes the RDD's data to HDFS, which is much safer because HDFS keeps replicas.

checkpoint is also lazily executed. How is it used?

sc.setCheckpointDir("path")
rdd3.checkpoint()

Note: after the RDD's job has finished (i.e., after an action operator has been triggered):

1. Spark traces backwards from the finalRDD (the last RDD), looking for RDDs on which checkpoint was called, and marks them. It then launches a new job to recompute each checkpointed RDD and writes the result to the corresponding HDFS directory.

2. At the same time, the checkpointed RDD's lineage is cut: its dependencies are forcibly replaced with a checkpoint RDD.

How to optimize a checkpoint call:

Because a checkpointed RDD would otherwise be computed twice, you can cache the RDD (rdd3 here) before calling checkpoint. Then the extra job launched after the RDD's job completes only needs to copy the data from memory to HDFS, skipping the recomputation. A minimal sketch of the pattern follows.
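This sketch uses a local checkpoint directory for illustration; in production it would be an HDFS path (Checkpoint_Demo and the directory name are made up):

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Checkpoint_Demo {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setMaster("local").setAppName("Checkpoint_Demo");
        JavaSparkContext sc = new JavaSparkContext(conf);
        sc.setCheckpointDir("E:/2018_cnic/learn/checkpoint"); // an HDFS path in production
        JavaRDD<String> rdd3 = sc.textFile("E:/2018_cnic/learn/wordcount.txt");
        rdd3.cache();      // so the extra checkpoint job copies from memory instead of recomputing
        rdd3.checkpoint(); // lazy: only marks the RDD
        rdd3.count();      // action: runs the job, then a second job writes rdd3 to the checkpoint dir
        System.out.println(rdd3.toDebugString()); // lineage is now rooted at a checkpoint RDD
        sc.stop();
    }
}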
