While tuning database performance over the past couple of days I needed to reduce the impact of the operating system's file cache on the database, so I looked into ways of shrinking the filesystem cache. One of them is to adjust the two parameters /proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio. I read quite a few blog posts about them, but could never quite work out how the two differ; it was only after reading the English post below that I roughly understood the distinction.

vm.dirty_background_ratio: this parameter specifies the percentage of system memory (e.g. 5%) that dirty pages in the filesystem cache may occupy before the pdflush/flush/kdmflush background writeback processes are triggered to asynchronously flush some of the cached dirty pages out to disk.

vm.dirty_ratio: this parameter specifies the percentage of system memory (e.g. 10%) at which the system is forced to start dealing with the cached dirty pages itself (by that point there are so many dirty pages that some of them must be flushed to disk to avoid data loss); while this happens, many application processes may block because the system has switched over to handling file I/O.

I used to mistakenly assume that the dirty_ratio condition could never be reached, since the vm.dirty_background_ratio threshold would always be hit first; it turned out I had misunderstood. The vm.dirty_background_ratio threshold is indeed reached first, and it wakes up the flush processes to write dirty pages back asynchronously, but applications can keep writing while that happens. If applications are writing faster than the flush processes can drain the cache, the dirty pages will eventually climb over the bar set by vm.dirty_ratio, at which point the operating system switches to handling dirty pages synchronously and blocks the application processes.
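A quick way to see this for yourself is to watch the dirty-page counters while generating buffered write traffic in another terminal. This is only a rough sketch of my own; the file path and size are arbitrary:

# terminal 1: refresh the dirty/writeback counters once a second
watch -n 1 'grep -E "nr_dirty|nr_writeback" /proc/vmstat'

# terminal 2: buffered (non-direct) writes, so dirty pages accumulate in cache
dd if=/dev/zero of=/tmp/dirty-test bs=1M count=4096

If the writer outruns the flusher threads, nr_dirty keeps climbing even while nr_writeback is non-zero, which is exactly the situation that eventually trips vm.dirty_ratio.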

The original post is attached below:

Better Linux Disk Caching & Performance with vm.dirty_ratio & vm.dirty_background_ratio

by Bob Plankers on December 22, 2013, in Best Practices, Cloud, System Administration, Virtualization

This is post #16 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.”

In previous posts on vm.swappiness and using RAM disks we talked about how the memory on a Linux guest is used for the OS itself (the kernel, buffers, etc.), applications, and also for file cache. File caching is an important performance improvement, and read caching is a clear win in most cases, balanced against applications using the RAM directly. Write caching is trickier. The Linux kernel stages disk writes into cache, and over time asynchronously flushes them to disk. This has a nice effect of speeding disk I/O but it is risky. When data isn’t written to disk there is an increased chance of losing it.

There is also the chance that a lot of I/O will overwhelm the cache, too. Ever written a lot of data to disk all at once, and seen large pauses on the system while it tries to deal with all that data? Those pauses are a result of the cache deciding that there’s too much data to be written asynchronously (as a non-blocking background operation, letting the application process continue), and switching to writing synchronously (blocking and making the process wait until the I/O is committed to disk). Of course, a filesystem also has to preserve write order, so when it starts writing synchronously it first has to destage the cache. Hence the long pause.

The nice thing is that these are controllable options, and based on your workloads & data you can decide how you want to set them up. Let’s take a look:


$ sysctl -a | grep dirty
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000

vm.dirty_background_ratio is the percentage of system memory that can be filled with “dirty” pages — memory pages that still need to be written to disk — before the pdflush/flush/kdmflush background processes kick in to write it to disk. My example is 10%, so if my virtual server has 32 GB of memory that’s 3.2 GB of data that can be sitting in RAM before something is done.
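As a quick sanity check of that arithmetic on any given box, here is a rough sketch that turns the example ratios (10% and 20%, as shown above) into approximate byte figures from /proc/meminfo. Treat the numbers as approximations: the kernel actually computes the thresholds against dirtyable memory, not raw MemTotal.

awk '/^MemTotal/ {
    # MemTotal is reported in kB; scale it by the example ratios
    printf "background threshold (10%%): %.1f GB\n", $2 * 1024 * 0.10 / 1e9
    printf "blocking threshold (20%%):   %.1f GB\n", $2 * 1024 * 0.20 / 1e9
}' /proc/meminfo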

vm.dirty_ratio is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk. When the system gets to this point all new I/O blocks until dirty pages have been written to disk. This is often the source of long I/O pauses, but is a safeguard against too much data being cached unsafely in memory.

vm.dirty_background_bytes and vm.dirty_bytes are another way to specify these parameters. If you set the _bytes version the _ratio version will become 0, and vice-versa.
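If you prefer pinning the limits to absolute sizes rather than a percentage of RAM, a sketch of switching over at runtime looks like this (the byte values are purely illustrative, not recommendations):

sysctl -w vm.dirty_background_bytes=268435456   # 256 MB; vm.dirty_background_ratio now reads 0
sysctl -w vm.dirty_bytes=1073741824             # 1 GB; vm.dirty_ratio now reads 0
sysctl -a | grep dirty                          # confirm the _ratio values flipped to 0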

vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it’s 30 seconds. When the pdflush/flush/kdmflush processes kick in they will check to see how old a dirty page is, and if it’s older than this value it’ll be written asynchronously to disk. Since holding a dirty page in memory is unsafe this is also a safeguard against data loss.

vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check to see if work needs to be done.
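Both of those knobs are expressed in hundredths of a second, which is easy to forget. A sketch of shortening them at runtime (the values are arbitrary examples, not recommendations):

sysctl -w vm.dirty_expire_centisecs=1500     # dirty pages become eligible for writeback after 15 s
sysctl -w vm.dirty_writeback_centisecs=250   # flusher threads wake up every 2.5 s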

You can also see statistics on the page cache in /proc/vmstat:


$ cat /proc/vmstat | egrep "dirty|writeback"
nr_dirty 878
nr_writeback 0
nr_writeback_temp 0

In my case I have 878 dirty pages waiting to be written to disk.
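Note that nr_dirty counts pages, not bytes. A quick sketch to translate it, assuming getconf reports your page size correctly (typically 4096 bytes):

page_size=$(getconf PAGESIZE)
dirty_pages=$(awk '/^nr_dirty /{print $2}' /proc/vmstat)
echo "$(( dirty_pages * page_size / 1024 )) kB of dirty data waiting to be written"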

Approach 1: Decreasing the Cache

As with most things in the computer world, how you adjust these depends on what you’re trying to do. In many cases we have fast disk subsystems with their own big, battery-backed NVRAM caches, so keeping things in the OS page cache is risky. Let’s try to send I/O to the array in a more timely fashion and reduce the chance our local OS will, to borrow a phrase from the service industry, be “in the weeds.” To do this we lower vm.dirty_background_ratio and vm.dirty_ratio by adding new numbers to /etc/sysctl.conf and reloading with “sysctl -p”:


vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

This is a typical approach on virtual machines, as well as Linux-based hypervisors. I wouldn’t suggest setting these parameters to zero, as some background I/O is nice to decouple application performance from short periods of higher latency on your disk array & SAN (“spikes”).
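Before persisting the change in /etc/sysctl.conf you can also try it on the live system first; a sketch, assuming root and that you are prepared to revert if the workload suffers:

sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
sysctl -a | grep dirty   # verify the new thresholds are live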

Approach 2: Increasing the Cache

There are scenarios where raising the cache dramatically has positive effects on performance. These situations are where the data contained on a Linux guest isn’t critical and can be lost, and usually where an application is writing to the same files repeatedly or in repeatable bursts. In theory, by allowing more dirty pages to exist in memory you’ll rewrite the same blocks over and over in cache, and just need to do one write every so often to the actual disk. To do this we raise the parameters:


vm.dirty_background_ratio = 50
vm.dirty_ratio = 80

Sometimes folks also increase the vm.dirty_expire_centisecs parameter to allow more time in cache. Beyond the increased risk of data loss, you also run the risk of long I/O pauses if that cache gets full and needs to destage, because on large VMs there will be a lot of data in cache.
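A sketch of that variant in /etc/sysctl.conf form; the expiry value is just an illustration of “more time in cache,” not a recommendation:

# let dirty pages sit for up to two minutes before forced writeback (illustrative only)
vm.dirty_background_ratio = 50
vm.dirty_ratio = 80
vm.dirty_expire_centisecs = 12000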

Approach 3: Both Ways

There are also scenarios where a system has to deal with infrequent, bursty traffic to slow disk (batch jobs at the top of the hour, midnight, writing to an SD card on a Raspberry Pi, etc.). In that case an approach might be to allow all that write I/O to be deposited in the cache so that the background flush operations can deal with it asynchronously over time:


vm.dirty_background_ratio = 5
vm.dirty_ratio = 80

Here the background processes will start writing right away when it hits that 5% ceiling but the system won’t force synchronous I/O until it gets to 80% full. From there you just size your system RAM and vm.dirty_ratio to be able to consume all the written data. Again, there are tradeoffs with data consistency on disk, which translates into risk to data. Buy a UPS and make sure you can destage cache before the UPS runs out of power. :)
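If you go this route, a rough sketch for draining the cache on demand — for example from a UPS shutdown hook — is simply to force a sync and confirm the dirty counters have fallen:

sync                                                # ask the kernel to flush all dirty pages
grep -E "^(nr_dirty|nr_writeback) " /proc/vmstat    # should be near zero once sync returns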

No matter the route you choose you should always be gathering hard data to support your changes and help you determine if you are improving things or making them worse. In this case you can get data from many different places, including the application itself, /proc/vmstat, /proc/meminfo, iostat, vmstat, and many of the things in /proc/sys/vm. Good luck!
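As one example of gathering that data, a small sketch that logs the kernel’s own view of dirty and writeback memory once a second — something you can leave running before and after a tuning change:

while true; do
    # Dirty and Writeback are reported in kB in /proc/meminfo
    echo "$(date +%T)  $(awk '/^(Dirty|Writeback):/ { printf "%s %s kB  ", $1, $2 }' /proc/meminfo)"
    sleep 1
done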
