[Repost] Spark Source Code Analysis: The Storage Module
Original article: http://blog.csdn.net/aiuyjerry/article/details/8595991
The Storage module is responsible for data storage and retrieval, covering shuffle intermediate results, intermediate stage outputs of tasks, and cached data. Below we analyze its implementation from the architecture down to source-code details. The module consists of two major parts:
- The BlockManager part handles block-related communication between master and slaves, including BlockManager status reports, heartbeats, and add/remove/update-block messages.
- The BlockStore part handles the actual data storage; depending on the storage level chosen, Spark can keep serialized data in memory, on disk, or both.
*(Figure: Storage module architecture)*

- When SparkEnv is created, it instantiates a BlockManagerMaster object and a BlockManager object.
- Depending on whether it runs on the master or on a slave, BlockManagerMaster either creates the BlockManagerMasterActor or connects to an existing one.
- BlockManager plays two roles:
  - reporting block information to the BlockManagerMaster, keeping heartbeats alive, and receiving block-related commands;
  - reading and writing block data to memory or disk through BlockStore.
- BlockManagerMessages defines the concrete format of the metadata messages exchanged with the master.
- A slave registers itself with the BlockManagerMaster through its BlockManager; during registration it creates a BlockManagerSlaveActor, which the master uses to communicate back to the slave. Currently the only such request is asking the slave to remove a block.
- BlockManagerWorker handles communication between slaves, i.e., getting and putting non-local blocks.
- BlockMessage defines the concrete format of the block messages exchanged between BlockManagerWorkers, and BlockMessageArray is the batch interface over it.
- BlockStore is the interface for persisting data; DiskStore and MemoryStore implement it, handling serialization and deserialization of data to and from disk or memory (a simplified sketch follows this list).
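Concretely, BlockStore is a small abstract class. The following is a simplified sketch of its shape, lightly adapted from the Spark source of that era; method signatures are abbreviated, so treat it as illustrative rather than exact:

```scala
// Simplified sketch of the BlockStore abstraction. DiskStore and MemoryStore
// extend it; the exact signatures in the real Spark source differ in detail.
// Assumes java.nio.ByteBuffer and scala.collection.mutable.ArrayBuffer.
abstract class BlockStore(val blockManager: BlockManager) {
  // Store an already-serialized block.
  def putBytes(blockId: String, bytes: ByteBuffer, level: StorageLevel): Unit
  // Store deserialized values; may serialize them depending on the level.
  def putValues(blockId: String, values: ArrayBuffer[Any], level: StorageLevel,
      returnValues: Boolean): PutResult

  def getBytes(blockId: String): Option[ByteBuffer]
  def getValues(blockId: String): Option[Iterator[Any]]
  def getSize(blockId: String): Long

  def remove(blockId: String): Boolean
  def contains(blockId: String): Boolean
}
```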
The control messages exchanged through these actors are listed below (a sketch of their definitions follows the list):

- Slave → Master:
  - RegisterBlockManager
  - HeartBeat
  - UpdateBlockInfo
  - GetLocations
  - GetLocationsMultipleBlockIds
  - GetPeers
  - RemoveExecutor
  - StopBlockManagerMaster
  - GetMemoryStatus
  - ExpireDeadHosts
  - GetStorageStatus
- Master → Slave:
  - RemoveBlock
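These messages are modeled as plain Scala case classes and case objects. The sketch below shows how they might be declared; field lists are abbreviated and adapted (the real definitions in spark.storage carry more metadata, e.g. UpdateBlockInfo has custom serialization), so consult the actual source for exact shapes:

```scala
// Hedged sketch of the storage control messages; field lists abbreviated.
case class RegisterBlockManager(blockManagerId: BlockManagerId, maxMemSize: Long)
case class HeartBeat(blockManagerId: BlockManagerId)
case class UpdateBlockInfo(blockManagerId: BlockManagerId, blockId: String,
    storageLevel: StorageLevel, memSize: Long, diskSize: Long)
case class GetLocations(blockId: String)
case class GetLocationsMultipleBlockIds(blockIds: Array[String])
case class GetPeers(blockManagerId: BlockManagerId, size: Int)
case class RemoveExecutor(execId: String)
case object StopBlockManagerMaster
case object GetMemoryStatus
case object ExpireDeadHosts
case object GetStorageStatus

// Master -> Slave: currently the only request is dropping a block.
case class RemoveBlock(blockId: String)
```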
Turning to the stores themselves: MemoryStore tracks blocks in a LinkedHashMap of Entry records and admits a new block through tryToPut, which only succeeds once ensureFreeSpace has made room:

```scala
case class Entry(value: Any, size: Long, deserialized: Boolean, var dropPending: Boolean = false)

private val entries = new LinkedHashMap[String, Entry](32, 0.75f, true)

private def tryToPut(blockId: String, value: Any, size: Long, deserialized: Boolean): Boolean = {
  putLock.synchronized {
    if (ensureFreeSpace(blockId, size)) {
      val entry = new Entry(value, size, deserialized)
      entries.synchronized { entries.put(blockId, entry) }
      currentMemory += size
      if (deserialized) {
        logInfo("Block %s stored as values to memory (estimated size %s, free %s)".format(
          blockId, Utils.memoryBytesToString(size), Utils.memoryBytesToString(freeMemory)))
      } else {
        logInfo("Block %s stored as bytes to memory (size %s, free %s)".format(
          blockId, Utils.memoryBytesToString(size), Utils.memoryBytesToString(freeMemory)))
      }
      true
    } else {
      // Tell the block manager that we couldn't put it in memory so that it can drop it to
      // disk if the block allows disk storage.
      val data = if (deserialized) {
        Left(value.asInstanceOf[ArrayBuffer[Any]])
      } else {
        Right(value.asInstanceOf[ByteBuffer].duplicate())
      }
      blockManager.dropFromMemory(blockId, data)
      false
    }
  }
}
```
ensureFreeSpace walks the entries map in LRU order (see below) and selects victim blocks to drop until enough memory is available. It refuses outright if the new block is larger than the whole memory budget, or if making room would require evicting another block of the same RDD:

```scala
private def ensureFreeSpace(blockIdToAdd: String, space: Long): Boolean = {
  logInfo("ensureFreeSpace(%d) called with curMem=%d, maxMem=%d".format(
    space, currentMemory, maxMemory))

  if (space > maxMemory) {
    logInfo("Will not store " + blockIdToAdd + " as it is larger than our memory limit")
    return false
  }

  if (maxMemory - currentMemory < space) {
    val rddToAdd = getRddId(blockIdToAdd)
    val selectedBlocks = new ArrayBuffer[String]()
    var selectedMemory = 0L

    entries.synchronized {
      val iterator = entries.entrySet().iterator()
      while (maxMemory - (currentMemory - selectedMemory) < space && iterator.hasNext) {
        val pair = iterator.next()
        val blockId = pair.getKey
        if (rddToAdd != null && rddToAdd == getRddId(blockId)) {
          logInfo("Will not store " + blockIdToAdd + " as it would require dropping another " +
            "block from the same RDD")
          return false
        }
        selectedBlocks += blockId
        selectedMemory += pair.getValue.size
      }
    }

    if (maxMemory - (currentMemory - selectedMemory) >= space) {
      logInfo(selectedBlocks.size + " blocks selected for dropping")
      for (blockId <- selectedBlocks) {
        val entry = entries.synchronized { entries.get(blockId) }
        // This should never be null as only one thread should be dropping
        // blocks and removing entries. However the check is still here for
        // future safety.
        if (entry != null) {
          val data = if (entry.deserialized) {
            Left(entry.value.asInstanceOf[ArrayBuffer[Any]])
          } else {
            Right(entry.value.asInstanceOf[ByteBuffer].duplicate())
          }
          blockManager.dropFromMemory(blockId, data)
        }
      }
      return true
    } else {
      return false
    }
  }
  return true
}
```
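One detail worth flagging: entries is built with the three-argument LinkedHashMap constructor, and the third argument (accessOrder = true) makes iteration visit the least-recently-accessed entries first. That single flag is what turns the eviction loop in ensureFreeSpace into an LRU policy. A minimal standalone demonstration of the java.util.LinkedHashMap behavior, with hypothetical block ids:

```scala
import java.util.LinkedHashMap

object LruOrderDemo extends App {
  // Same constructor shape as MemoryStore's map: initial capacity 32,
  // load factor 0.75, accessOrder = true (iterate LRU-first).
  val entries = new LinkedHashMap[String, Long](32, 0.75f, true)
  entries.put("rdd_0_0", 100L)
  entries.put("rdd_0_1", 200L)
  entries.put("rdd_0_2", 300L)

  entries.get("rdd_0_0") // access moves this key to the most-recently-used end

  val it = entries.keySet().iterator()
  while (it.hasNext) print(it.next() + " ")
  // prints: rdd_0_1 rdd_0_2 rdd_0_0  -- so the eviction scan hits rdd_0_1 first
}
```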
On the disk side, DiskStore lays out block files as follows:

- First, Spark creates a localDir under each directory configured in spark.local.dir. Each localDir is named spark-local-%s-%04x, where %s is the current time formatted as yyyyMMddHHmmss and %04x is a random hexadecimal number below 65536.
- Then, whenever a block is to be stored, Spark creates a subdirectory and file for it under one of the localDirs. The location is chosen as follows (a worked example follows the code below):
  - compute dirId and subDirId from the hash of the blockId;
  - look up, or lazily create, the corresponding subDir;
  - create a file named after the blockId inside that subDir.
In DiskStore this looks like:

```scala
val subDirsPerLocalDir = System.getProperty("spark.diskStore.subDirectories", "64").toInt
val subDirs = Array.fill(localDirs.length)(new Array[File](subDirsPerLocalDir))

// Figure out which local directory it hashes to, and which subdirectory in that
val hash = math.abs(blockId.hashCode)
val dirId = hash % localDirs.length
val subDirId = (hash / localDirs.length) % subDirsPerLocalDir

// Create the subdirectory if it doesn't already exist
var subDir = subDirs(dirId)(subDirId)
if (subDir == null) {
  subDir = subDirs(dirId).synchronized {
    val old = subDirs(dirId)(subDirId)
    if (old != null) {
      old
    } else {
      val newDir = new File(localDirs(dirId), "%02x".format(subDirId))
      newDir.mkdir()
      subDirs(dirId)(subDirId) = newDir
      newDir
    }
  }
}
new File(subDir, blockId)
```
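To make the mapping concrete, here is a worked example with hypothetical numbers (two local dirs and the default 64 subdirectories per dir):

```scala
object PathSelectionDemo extends App {
  // Hypothetical walk-through of the directory-selection arithmetic above.
  val localDirsLength = 2      // pretend spark.local.dir lists two directories
  val subDirsPerLocalDir = 64  // default value of spark.diskStore.subDirectories

  val hash = 123456            // stand-in for math.abs(blockId.hashCode)
  val dirId = hash % localDirsLength                           // 123456 % 2 = 0
  val subDirId = (hash / localDirsLength) % subDirsPerLocalDir // 61728 % 64 = 32

  println("%d/%02x".format(dirId, subDirId)) // prints "0/20"
  // so the block file lands at <localDirs(0)>/20/<blockId>
}
```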
- Finally, the block is written to that file using the configured compression and serialization scheme (a rough illustration follows).
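As a rough illustration only, and not Spark's actual code path (Spark plugs in a configurable Serializer and compression codec here), the shape of such a write looks like this, with Java serialization and GZIP standing in for the pluggable pieces:

```scala
import java.io.{File, FileOutputStream, ObjectOutputStream, OutputStream}
import java.util.zip.GZIPOutputStream

object BlockWriteSketch {
  // Stand-in for the real DiskStore write: serialize values to the block file,
  // optionally wrapping the stream in a compressor first.
  def writeBlock(file: File, values: Seq[AnyRef], compress: Boolean): Unit = {
    val raw: OutputStream = new FileOutputStream(file)
    val stream = if (compress) new GZIPOutputStream(raw) else raw
    val out = new ObjectOutputStream(stream)
    try values.foreach(out.writeObject) finally out.close()
  }
}
```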
$badjobTake2 = Get-SPTimerJob -ID "job-service-instance-f2de35ab-bca3-4e53-b51a-98d3b06a6930&qu ...