While tuning database performance over the past couple of days I needed to reduce the impact of the operating system's file cache on the database, so I looked into ways of shrinking the file system cache. One approach is to adjust the two parameters /proc/sys/vm/dirty_background_ratio and /proc/sys/vm/dirty_ratio. I read quite a few blog posts about them but could never quite work out the difference between the two; it was only after reading the English blog post below that I roughly understood how they differ.

vm.dirty_background_ratio: this parameter specifies the percentage of system memory that dirty file-system cache pages may occupy (e.g. 5%) before the pdflush/flush/kdmflush background writeback processes are triggered to asynchronously flush some of the dirty pages out to persistent storage;

vm.dirty_ratio: this parameter specifies the percentage of system memory that dirty file-system cache pages may occupy (e.g. 10%) before the system is forced to start processing them (at that point there are so many dirty pages that some must be flushed to persistent storage to avoid data loss); while this happens, many application processes may block as the system turns its attention to the file I/O.

I used to believe, mistakenly, that the vm.dirty_ratio condition could never be reached, because the vm.dirty_background_ratio threshold would always be hit first. Later I realized where I had gone wrong: the vm.dirty_background_ratio threshold is indeed reached first and triggers the flush processes to write pages back asynchronously, but applications can keep writing during that time. If the applications together write more than the flush processes can drain, the dirty pages will eventually climb over the bar set by vm.dirty_ratio, at which point the operating system switches to handling the dirty pages synchronously and blocks the writing applications.
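
To see the difference in action, here is a quick, illustrative experiment (the file path and size are arbitrary): generate a burst of buffered writes, then watch the dirty-page counter rise and drain as the flush processes catch up:

$ dd if=/dev/zero of=/tmp/dirty-test bs=1M count=1024   # 1 GB of buffered writes lands in the page cache
$ grep nr_dirty /proc/vmstat                            # dirty pages spike right after the write
$ sleep 30 && grep nr_dirty /proc/vmstat                # much lower once writeback has drained them
$ rm /tmp/dirty-test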

Here is the original post:

Better Linux Disk Caching & Performance with vm.dirty_ratio & vm.dirty_background_ratio

by Bob Plankers on December 22, 2013, in Best Practices, Cloud, System Administration, Virtualization

This is post #16 in my December 2013 series about Linux Virtual Machine Performance Tuning. For more, please see the tag “Linux VM Performance Tuning.”

In previous posts on vm.swappiness and using RAM disks we talked about how the memory on a Linux guest is used for the OS itself (the kernel, buffers, etc.), applications, and also for file cache. File caching is an important performance improvement, and read caching is a clear win in most cases, balanced against applications using the RAM directly. Write caching is trickier. The Linux kernel stages disk writes into cache, and over time asynchronously flushes them to disk. This has a nice effect of speeding disk I/O but it is risky. When data isn’t written to disk there is an increased chance of losing it.

There is also the chance that a lot of I/O will overwhelm the cache. Ever written a lot of data to disk all at once, and seen large pauses on the system while it tries to deal with all that data? Those pauses are a result of the cache deciding that there’s too much data to be written asynchronously (as a non-blocking background operation, letting the application process continue), and switching to writing synchronously (blocking and making the process wait until the I/O is committed to disk). Of course, a filesystem also has to preserve write order, so when it starts writing synchronously it first has to destage the cache. Hence the long pause.

The nice thing is that these are controllable options, and based on your workloads & data you can decide how you want to set them up. Let’s take a look:


$ sysctl -a | grep dirty
vm.dirty_background_ratio = 10
vm.dirty_background_bytes = 0
vm.dirty_ratio = 20
vm.dirty_bytes = 0
vm.dirty_writeback_centisecs = 500
vm.dirty_expire_centisecs = 3000

vm.dirty_background_ratio is the percentage of system memory that can be filled with “dirty” pages — memory pages that still need to be written to disk — before the pdflush/flush/kdmflush background processes kick in to write it to disk. My example is 10%, so if my virtual server has 32 GB of memory that’s 3.2 GB of data that can be sitting in RAM before something is done.
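
To translate that percentage into bytes on your own machine, you can compute the threshold from MemTotal (a quick illustrative one-liner; the 0.10 matches the 10% example above):

$ awk '/MemTotal/ {printf "%.1f MB of dirty data before background flush\n", $2 * 0.10 / 1024}' /proc/meminfo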

vm.dirty_ratio is the absolute maximum amount of system memory that can be filled with dirty pages before everything must get committed to disk. When the system gets to this point all new I/O blocks until dirty pages have been written to disk. This is often the source of long I/O pauses, but is a safeguard against too much data being cached unsafely in memory.

vm.dirty_background_bytes and vm.dirty_bytes are another way to specify these parameters. If you set the _bytes version the _ratio version will become 0, and vice-versa.
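
You can check that mutual exclusion yourself; a small illustrative sketch (the 64 MB value is arbitrary):

$ sudo sysctl vm.dirty_background_bytes=$((64*1024*1024))
$ sysctl vm.dirty_background_ratio
vm.dirty_background_ratio = 0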

vm.dirty_expire_centisecs is how long something can be in cache before it needs to be written. In this case it’s 30 seconds. When the pdflush/flush/kdmflush processes kick in they will check to see how old a dirty page is, and if it’s older than this value it’ll be written asynchronously to disk. Since holding a dirty page in memory is unsafe this is also a safeguard against data loss.

vm.dirty_writeback_centisecs is how often the pdflush/flush/kdmflush processes wake up and check to see if work needs to be done.
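
Both values are in hundredths of a second, so the defaults above mean the flush processes wake every 5 seconds and write out anything that has been dirty for more than 30 seconds. If you want dirty data flushed sooner, you could shorten the expiry (an illustrative value, not a recommendation):

$ sudo sysctl vm.dirty_expire_centisecs=1000   # expire dirty pages after 10 seconds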

You can also see statistics on the page cache in /proc/vmstat:


$ cat /proc/vmstat | egrep "dirty|writeback"
nr_dirty 878
nr_writeback 0
nr_writeback_temp 0

In my case I have 878 dirty pages waiting to be written to disk.
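
To watch those counters move in real time, something like this works (watch -d highlights changes between refreshes):

$ watch -d 'grep -E "dirty|writeback" /proc/vmstat'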

Approach 1: Decreasing the Cache

As with most things in the computer world, how you adjust these depends on what you’re trying to do. In many cases we have fast disk subsystems with their own big, battery-backed NVRAM caches, so keeping things in the OS page cache is risky. Let’s try to send I/O to the array in a more timely fashion and reduce the chance our local OS will, to borrow a phrase from the service industry, be “in the weeds.” To do this we lower vm.dirty_background_ratio and vm.dirty_ratio by adding new numbers to /etc/sysctl.conf and reloading with “sysctl -p”:


vm.dirty_background_ratio = 5
vm.dirty_ratio = 10

This is a typical approach on virtual machines, as well as Linux-based hypervisors. I wouldn’t suggest setting these parameters to zero, as some background I/O is nice to decouple application performance from short periods of higher latency on your disk array & SAN (“spikes”).
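
If you want to trial these values at runtime before committing them to /etc/sysctl.conf, sysctl can set them directly (they revert at the next reboot unless persisted):

$ sudo sysctl vm.dirty_background_ratio=5 vm.dirty_ratio=10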

Approach 2: Increasing the Cache

There are scenarios where raising the cache dramatically has positive effects on performance. These situations are where the data contained on a Linux guest isn’t critical and can be lost, and usually where an application is writing to the same files repeatedly or in repeatable bursts. In theory, by allowing more dirty pages to exist in memory you’ll rewrite the same blocks over and over in cache, and just need to do one write every so often to the actual disk. To do this we raise the parameters:


vm.dirty_background_ratio = 50
vm.dirty_ratio = 80

Sometimes folks also increase the vm.dirty_expire_centisecs parameter to allow more time in cache. Beyond the increased risk of data loss, you also run the risk of long I/O pauses if that cache gets full and needs to destage, because on large VMs there will be a lot of data in cache.
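
For example, a hedged sketch of that variant (the value is arbitrary, and it carries the data-loss risk described above):

$ sudo sysctl vm.dirty_expire_centisecs=6000   # let dirty pages sit for up to 60 seconds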

Approach 3: Both Ways

There are also scenarios where a system has to deal with infrequent, bursty traffic to slow disk (batch jobs at the top of the hour, midnight, writing to an SD card on a Raspberry Pi, etc.). In that case an approach might be to allow all that write I/O to be deposited in the cache so that the background flush operations can deal with it asynchronously over time:


vm.dirty_background_ratio = 5
vm.dirty_ratio = 80

Here the background processes will start writing right away when the cache hits that 5% threshold, but the system won’t force synchronous I/O until it gets to 80% full. From there you just size your system RAM and vm.dirty_ratio to be able to consume all the written data. Again, there are tradeoffs with data consistency on disk, which translates into risk to data. Buy a UPS and make sure you can destage cache before the UPS runs out of power. :)
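
As a worked example of that sizing (hypothetical numbers): if an hourly batch job bursts 4 GB of writes and vm.dirty_ratio is 80, the machine needs at least 4 / 0.80 = 5 GB of RAM available as page cache to absorb the burst without stalling, and in practice comfortably more, since the kernel and applications need RAM too:

$ echo "scale=1; 4 / 0.80" | bc   # GB of RAM needed to hold a 4 GB burst below an 80% dirty_ratio
5.0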

No matter the route you choose you should always be gathering hard data to support your changes and help you determine if you are improving things or making them worse. In this case you can get data from many different places, including the application itself, /proc/vmstat, /proc/meminfo, iostat, vmstat, and many of the things in /proc/sys/vm. Good luck!
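
As a concrete starting point for that data gathering, quick checks along these lines can bracket a change (illustrative invocations; run them while the workload is active):

$ vmstat 1 10      # watch the "bo" (blocks written out) and "wa" (I/O wait) columns
$ iostat -x 1 10   # per-device utilization and write throughput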
