9. Spark Core Advanced (2) -- Cache
| Storage Level | Meaning |
| --- | --- |
| MEMORY_ONLY | Store RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, some partitions will not be cached and will be recomputed on the fly each time they're needed. This is the default level. |
| MEMORY_AND_DISK | Store RDD as deserialized Java objects in the JVM. If the RDD does not fit in memory, store the partitions that don't fit on disk, and read them from there when they're needed. |
| MEMORY_ONLY_SER (Java and Scala) | Store RDD as serialized Java objects (one byte array per partition). This is generally more space-efficient than deserialized objects, especially when using a fast serializer, but more CPU-intensive to read. |
| MEMORY_AND_DISK_SER (Java and Scala) | Similar to MEMORY_ONLY_SER, but spill partitions that don't fit in memory to disk instead of recomputing them on the fly each time they're needed. |
| DISK_ONLY | Store the RDD partitions only on disk. |
| MEMORY_ONLY_2, MEMORY_AND_DISK_2, etc. | Same as the levels above, but replicate each partition on two cluster nodes. |
| OFF_HEAP (experimental) | Similar to MEMORY_ONLY_SER, but store the data in off-heap memory. This requires off-heap memory to be enabled. |
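These levels are selected with `persist()`; `cache()` is shorthand for `persist(StorageLevel.MEMORY_ONLY)`. Below is a minimal Scala sketch, assuming a local master and a hypothetical input path `data/input.txt`:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

object CacheDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("CacheDemo").setMaster("local[*]")
    val sc = new SparkContext(conf)

    // Hypothetical input path; replace with a real file on your cluster.
    val lines = sc.textFile("data/input.txt")
    val words = lines.flatMap(_.split(" "))

    // cache() == persist(StorageLevel.MEMORY_ONLY), the default level.
    words.cache()

    // An explicit level: spill partitions that don't fit in memory to disk.
    // Note: re-calling persist with a different level on an already-persisted
    // RDD throws an exception, so pick one level per RDD.
    val pairs = words.map((_, 1)).persist(StorageLevel.MEMORY_AND_DISK)

    println(pairs.count()) // first action materializes and caches the RDD
    println(pairs.count()) // second action reads the cached blocks

    // Release the cached blocks once they are no longer needed.
    pairs.unpersist()
    sc.stop()
  }
}
```

Caching is lazy: nothing is stored until the first action runs; later actions on the same RDD then read from the cache instead of recomputing the lineage.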
- If your RDDs fit comfortably with the default storage level (MEMORY_ONLY), leave them that way. This is the most CPU-efficient option, allowing operations on the RDDs to run as fast as possible.
- If not, try using MEMORY_ONLY_SER and selecting a fast serialization library to make the objects much more space-efficient, but still reasonably fast to access. (Java and Scala; a serializer configuration sketch follows this list.)
- Don’t spill to disk unless the functions that computed your datasets are expensive, or they filter a large amount of the data. Otherwise, recomputing a partition may be as fast as reading it from disk.
- Use the replicated storage levels if you want fast fault recovery (e.g. if using Spark to serve requests from a web application). All the storage levels provide full fault tolerance by recomputing lost data, but the replicated ones let you continue running tasks on the RDD without waiting to recompute a lost partition.
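To illustrate the second bullet, the sketch below enables Kryo (a commonly used fast serializer) and caches an RDD with MEMORY_ONLY_SER. The `Event` case class and the generated data are hypothetical, used only for this example:

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.storage.StorageLevel

// Hypothetical record type, for illustration only.
case class Event(id: Long, payload: String)

object SerCacheDemo {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("SerCacheDemo")
      .setMaster("local[*]")
      // Kryo is the usual "fast serialization library" referred to above.
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
      // Registering classes lets Kryo write a small ID instead of the full class name.
      .registerKryoClasses(Array(classOf[Event]))
    val sc = new SparkContext(conf)

    val events = sc.parallelize(1L to 1000000L).map(i => Event(i, s"payload-$i"))

    // Serialized caching: one byte array per partition, smaller in memory
    // but more CPU-intensive to deserialize on each access.
    events.persist(StorageLevel.MEMORY_ONLY_SER)
    println(events.count())
    sc.stop()
  }
}
```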