The storage wars: Shadow Paging, Log Structured Merge and Write Ahead Logging
I’ve been doing a lot of research on storage lately, and in general, it seems that the most popular ways of writing to disk today fall into the following categories.
- Write Ahead Logging (WAL) – many databases use some variant of this: PostgreSQL, SQLite, MongoDB, SQL Server, etc. Oracle has the Redo Log, which seems similar, but I didn’t check too deeply.
- Log Structured Merge (LSM) – a lot of NoSQL databases use this method: Cassandra, Riak, LevelDB, SQLite 4, etc.
- Shadow Paging – quite popular a long time ago (the 80s), but still somewhat in use: LMDB, Tokyo Cabinet, CouchDB (sort of).
WAL came into being for a very simple reason: it is drastically faster to write sequentially than it is to do random writes. Let us assume that you store the data on disk using some sort of tree. When you need to insert or update something in that tree, the record can be anywhere, which means you would need to do random writes and suffer the performance issues associated with them. Instead, you can write to the log and have some sort of background process that updates the on-disk data.
It also means that you really only have to update the in-memory data, flush the log, and you are safe. The recovery procedure is going to be pretty complex, but it gives you some nice performance. Note that you write everything at least twice: once to the log, and once to the data file itself. The log writes are sequential; the data writes are random.
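To make that concrete, here is a minimal sketch of the idea in Python (the `SimpleWAL` name and the JSON record format are mine, not taken from any particular database): each write is appended to the log and fsynced before the in-memory state changes, and applying the changes to the real data file is left to a later, background step.

```python
import json
import os


class SimpleWAL:
    """Minimal write-ahead log sketch: append + fsync, then update memory."""

    def __init__(self, log_path):
        # "a+" lets us both append new records and replay existing ones.
        self.log = open(log_path, "a+", encoding="utf-8")
        self.memtable = {}  # in-memory view of the latest values

    def put(self, key, value):
        # 1. Sequential write: append the record to the log and force it to disk.
        self.log.write(json.dumps({"key": key, "value": value}) + "\n")
        self.log.flush()
        os.fsync(self.log.fileno())
        # 2. Only now update the in-memory data; the random writes to the real
        #    data file can be done later by a background process.
        self.memtable[key] = value

    def recover(self):
        # Replay the log from the start to rebuild the in-memory state.
        self.log.seek(0)
        for line in self.log:
            record = json.loads(line)
            self.memtable[record["key"]] = record["value"]
```

Usage would be something like `wal = SimpleWAL("data.wal"); wal.put("user:1", "value")`, with `recover()` replaying the log after a crash.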
LSM also takes advantage of sequential write speeds, but it takes things even further: instead of updating the actual data, you wait until the log gets to a certain size, at which point you merge it with the current data file(s). That means you will usually write things multiple times. In LevelDB, for example, a lot of the effort has actually gone into reducing this cost, the cost of compacting your data, because what ends up happening is that user writes compete with the compaction writes.
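Here is a toy sketch of that shape, again in Python with invented names (`TinyLSM`, JSON runs instead of real SSTables): writes go to an in-memory table, which is flushed as a sorted run once it reaches a size limit, and `compact()` merges the runs, rewriting every surviving key yet again.

```python
import json
import os


class TinyLSM:
    """Toy LSM sketch: a memtable is flushed to sorted run files on disk, and
    compaction later merges those runs back into a single file."""

    def __init__(self, dir_path, memtable_limit=4):
        self.dir = dir_path
        self.limit = memtable_limit
        self.memtable = {}
        self.runs = []  # paths of flushed runs, oldest first
        os.makedirs(dir_path, exist_ok=True)

    def put(self, key, value):
        self.memtable[key] = value
        if len(self.memtable) >= self.limit:
            self._flush()

    def _flush(self):
        # Sequential write: dump the memtable as a new sorted run on disk.
        path = os.path.join(self.dir, "run-%d.json" % len(self.runs))
        with open(path, "w", encoding="utf-8") as f:
            json.dump(dict(sorted(self.memtable.items())), f)
        self.runs.append(path)
        self.memtable = {}

    def compact(self):
        # Merge all runs into one; newer runs overwrite older values. Every
        # surviving key is rewritten here -- this is the compaction cost that
        # ends up competing with user writes.
        merged = {}
        for path in self.runs:
            with open(path, encoding="utf-8") as f:
                merged.update(json.load(f))
        out = os.path.join(self.dir, "run-0.json")
        for path in self.runs:
            if path != out:
                os.remove(path)
        with open(out, "w", encoding="utf-8") as f:
            json.dump(dict(sorted(merged.items())), f)
        self.runs = [out]
```

Reads would check the memtable first and then the runs from newest to oldest, which is exactly why you want compaction to keep the number of runs down.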
Shadow Paging is not actually trying to optimize sequential writes. Well, that is not really fair; Shadow Paging and sequential writes are just not related. The reason I said CouchDB is sort of using shadow paging is that it uses the exact same mechanics as other shadow paging systems, but it always writes at the end of the file. That means it has excellent write speed, but it also means that it needs some way to reclaim space. And that means it uses compaction, which brings you right back to the competing writes story.
For our purposes, we will ignore the way CouchDB works and focus on systems that work like LMDB. In those sorts of systems, instead of modifying the data directly, we create a shadow page (copy on write) and modify that. Because the shadow page is only wired up to the rest of the pages on commit, this is absolutely atomic. It also means that modifying a page is going to use one page and leave another free (the old page). And that, in turn, means that you need to have some way of scavenging for free space. CouchDB does that by creating a whole new file.
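A minimal in-memory sketch of that copy-on-write dance might look like this (the `ShadowPager` class and its single-root-page model are a deliberate simplification; LMDB really works on a tree of pages): modifications go to a copy of the page, and only the commit switches the root over, which is the atomic step and which leaves the old page free.

```python
class ShadowPager:
    """Toy copy-on-write sketch: pages are never modified in place, and the
    commit is just the act of switching the root pointer."""

    def __init__(self):
        self.pages = {0: {}}   # page number -> page contents
        self.root = 0          # the page readers currently see
        self.next_page = 1
        self.shadow = None     # shadow page for the transaction in flight

    def begin(self):
        # Copy the current root page; all changes go to this shadow copy.
        self.shadow = self.next_page
        self.next_page += 1
        self.pages[self.shadow] = dict(self.pages[self.root])

    def put(self, key, value):
        self.pages[self.shadow][key] = value

    def commit(self):
        # Wiring the shadow page in (swapping the root) is the atomic step.
        # The old root page is no longer referenced, so it is now free space.
        old_root, self.root = self.root, self.shadow
        self.shadow = None
        return old_root
```

Readers that started before the commit keep seeing the old root, which is roughly how shadow paging systems hand out consistent snapshots.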
LMDB, on the other hand, scavenges free space by recording the freed pages and reusing them in the next transaction. That means that writes to LMDB can happen anywhere in the file. We can apply policies on top of that to mitigate it, but that is beside the point.
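A free list can be sketched in a few lines (again just an illustration, not LMDB’s actual freelist, which is quite a bit more involved): pages released by a commit go onto the list, and the next transaction allocates from it before growing the file, which is exactly why those writes can land anywhere.

```python
class FreeList:
    """Toy free-page list: pages freed by old transactions are reused by new
    ones instead of always growing the file."""

    def __init__(self):
        self.free = []      # page numbers available for reuse
        self.next_page = 0  # next brand-new page at the end of the file

    def release(self, page_no):
        self.free.append(page_no)

    def allocate(self):
        # Prefer a freed page (which can be anywhere in the file); only grow
        # the file when nothing is available for reuse.
        if self.free:
            return self.free.pop()
        page, self.next_page = self.next_page, self.next_page + 1
        return page
```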
Let us go back to another important aspect that we have to deal with in databases: backups. As it turns out, it is actually really simple for most LSM / WAL systems to implement them, because you can just use the logs. For LMDB, you can create a backup really easily (in fact, since we are using shadow paging, you pretty much get it for free). However, one feature that I don’t think would be possible with LMDB is incremental backups. WAL / LSM make it easy: just take the logs since a given point. But with LMDB-style databases, I don’t think this would be possible.
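For the WAL / LSM case, the incremental backup really is just that. As a sketch (the function name, directory layout, and the assumption that segment names sort in creation order are all mine): copy whatever log segments appeared since the last backup point.

```python
import os
import shutil


def incremental_backup(log_dir, backup_dir, last_segment):
    """Toy sketch: copy every log segment newer than the last one backed up.
    Assumes segment file names sort in the order they were created."""
    os.makedirs(backup_dir, exist_ok=True)
    copied = []
    for name in sorted(os.listdir(log_dir)):
        if name > last_segment:
            shutil.copy2(os.path.join(log_dir, name), backup_dir)
            copied.append(name)
    return copied
```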