[Repost] Filesystem cache: the difference between dirty_ratio and dirty_background_ratio
While tuning database performance over the past couple of days I needed to reduce the impact of the OS file cache on the database, so I looked into ways of shrinking the filesystem cache. One approach is to adjust the two kernel parameters /proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio. I read quite a few blog posts about them but could never pin down the difference between the two, until the English article below finally made it clear.
The original article follows:
Better Linux Disk Caching & Performance with vm.dirty_ratio & vm.dirty_background_ratio
by Bob Plankers on December 22, 2013, in Best Practices, Cloud, System Administration, Virtualization
This is post #16 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.”
In previous posts on vm.swappiness and using RAM disks we talked about how the memory on a Linux guest is used for the OS itself (the kernel, buffers, etc.), applications, and also for file cache. File caching is an important performance improvement, and read caching is a clear win in most cases, balanced against applications using the RAM directly. Write caching is trickier. The Linux kernel stages disk writes into cache, and over time asynchronously flushes them to disk. This has a nice effect of speeding disk I/O but it is risky. When data isn’t written to disk there is an increased chance of losing it.
There is also the chance that a lot of I/O will overwhelm the cache. Ever written a lot of data to disk all at once, and seen large pauses on the system while it tries to deal with all that data? Those pauses are a result of the cache deciding that there’s too much data to be written asynchronously (as a non-blocking background operation, letting the application process continue) and switching to writing synchronously (blocking and making the process wait until the I/O is committed to disk). Of course, a filesystem also has to preserve write order, so when it starts writing synchronously it first has to destage the cache. Hence the long pause.
The nice thing is that these are controllable options, and based on your workloads & data you can decide how you want to set them up. Let’s take a look:
$ sysctl -a | grep dirty
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000
vm.dirty_background_ratio is the percentage of system memory that can be filled with “dirty” pages — memory pages that still need to be written to disk — before the pdflush/flush/kdmflush background processes kick in to write it to disk. My example is 10%, so if my virtual server has 32 GB of memory that’s 3.2 GB of data that can be sitting in RAM before something is done.
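If you want to see roughly where that threshold sits on your own box, you can compute it from MemTotal. This is only an approximation (the kernel actually calculates the ratio against dirtyable memory, which is somewhat less than total RAM), but on the 32 GB example above it comes out to about 3.2 GB:
$ awk '/MemTotal/ {printf "~%.1f GB before background flush\n", $2 * 0.10 / 1048576}' /proc/meminfo
~3.2 GB before background flush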
vm.dirty_ratio is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk. When the system gets to this point all new I/O blocks until dirty pages have been written to disk. This is often the source of long I/O pauses, but is a safeguard against too much data being cached unsafely in memory.
vm.dirty_background_bytes and vm.dirty_bytes are another way to specify these parameters. If you set the _bytes version the _ratio version will become 0, and vice-versa.
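You can verify that toggle yourself. For instance, setting a byte-based background threshold (the 256 MB value here is just an illustration) zeroes out its ratio twin:
$ sudo sysctl -w vm.dirty_background_bytes=268435456
vm.dirty_background_bytes = 268435456
$ sysctl vm.dirty_background_ratio
vm.dirty_background_ratio = 0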
vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it’s 30 seconds. When the pdflush/flush/kdmflush processes kick in they will check to see how old a dirty page is, and if it’s older than this value it’ll be written asynchronously to disk. Since holding a dirty page in memory is unsafe this is also a safeguard against data loss.
vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check to see if work needs to be done.
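Both of the centisecs parameters are in hundredths of a second. So, to have the flush threads wake every second and consider pages expired after 10 seconds (values picked purely for illustration), you could set:
$ sudo sysctl -w vm.dirty_writeback_centisecs=100
$ sudo sysctl -w vm.dirty_expire_centisecs=1000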
You can also see statistics on the page cache in /proc/vmstat:
$ cat /proc/vmstat | egrep "dirty|writeback"
nr_dirty 878
nr_writeback 0
nr_writeback_temp 0
In my case I have 878 dirty pages waiting to be written to disk.
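Those counters are in pages, not bytes, so to get a feel for the actual volume multiply by the page size. A small sketch (with 4 KB pages, the 878 pages above work out to about 3.4 MB):
$ awk -v ps=$(getconf PAGESIZE) '/^nr_dirty / {printf "%.1f MB dirty\n", $2 * ps / 1048576}' /proc/vmstat
3.4 MB dirty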
Approach 1: Decreasing the Cache
As with most things in the computer world, how you adjust these depends on what you’re trying to do. In many cases we have fast disk subsystems with their own big, battery-backed NVRAM caches, so keeping things in the OS page cache is risky. Let’s try to send I/O to the array in a more timely fashion and reduce the chance our local OS will, to borrow a phrase from the service industry, be “in the weeds.” To do this we lower vm.dirty_background_ratio and vm.dirty_ratio by adding new numbers to /etc/sysctl.conf and reloading with “sysctl -p”:
vm.dirty_background_ratio = 5
vm.dirty_ratio = 10
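Spelled out, assuming root privileges, the whole change might look like this (tee -a and sysctl -p are just the standard mechanics for persisting sysctls; nothing here is specific to these two values):
$ echo 'vm.dirty_background_ratio = 5' | sudo tee -a /etc/sysctl.conf
$ echo 'vm.dirty_ratio = 10' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p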
This is a typical approach on virtual machines, as well as Linux-based hypervisors. I wouldn’t suggest setting these parameters to zero, as some background I/O is nice to decouple application performance from short periods of higher latency on your disk array & SAN (“spikes”).
Approach 2: Increasing the Cache
There are scenarios where raising the cache dramatically has positive effects on performance. These situations are where the data contained on a Linux guest isn’t critical and can be lost, and usually where an application is writing to the same files repeatedly or in repeatable bursts. In theory, by allowing more dirty pages to exist in memory you’ll rewrite the same blocks over and over in cache, and just need to do one write every so often to the actual disk. To do this we raise the parameters:
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
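If you’d rather trial aggressive numbers like these before persisting them, sysctl -w applies them immediately and they revert on the next reboot (a common way to experiment, not something the article prescribes):
$ sudo sysctl -w vm.dirty_background_ratio=50
$ sudo sysctl -w vm.dirty_ratio=80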
Sometimes folks also increase the vm.dirty_expire_centisecs parameter to allow more time in cache. Beyond the increased risk of data loss, you also run the risk of long I/O pauses if that cache gets full and needs to destage, because on large VMs there will be a lot of data in cache.
Approach 3: Both Ways
There are also scenarios where a system has to deal with infrequent, bursty traffic to slow disk (batch jobs at the top of the hour, midnight, writing to an SD card on a Raspberry Pi, etc.). In that case an approach might be to allow all that write I/O to be deposited in the cache so that the background flush operations can deal with it asynchronously over time:
vm.dirty_background_ratio = 5
vm.dirty_ratio = 80
Here the background processes will start writing right away when dirty data hits that 5% threshold, but the system won’t force synchronous I/O until it gets to 80% full. From there you just size your system RAM and vm.dirty_ratio to be able to consume all the written data. Again, there are tradeoffs with data consistency on disk, which translates into risk to data. Buy a UPS and make sure you can destage cache before the UPS runs out of power. :)
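One way to watch this play out is to deposit a burst into the cache and then check the counters as they drain, roughly like this (the file name and size are arbitrary test values): nr_dirty jumps immediately after the dd, then falls as the flush threads catch up.
$ dd if=/dev/zero of=/tmp/burst.dat bs=1M count=512
$ grep -E '^nr_dirty |^nr_writeback ' /proc/vmstat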
No matter the route you choose you should always be gathering hard data to support your changes and help you determine if you are improving things or making them worse. In this case you can get data from many different places, including the application itself, /proc/vmstat, /proc/meminfo, iostat, vmstat, and many of the things in /proc/sys/vm. Good luck!
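For the /proc/vmstat side of that, something as simple as sampling the dirty and writeback counters on a one-second interval gives you a before/after picture of any tuning change (one sketch among many):
$ watch -n 1 'grep -E "dirty|writeback" /proc/vmstat'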