Posted by William Cohen on March 10, 2014

All modern processors use page-based mechanisms to translate a user-space process's virtual addresses into physical addresses for RAM. The pages are commonly 4KB in size, and the processor can hold a limited number of virtual-to-physical address mappings in the Translation Lookaside Buffer (TLB). The number of TLB entries ranges from tens to hundreds of mappings, which limits a processor to a few megabytes of memory it can address without changing the TLB entries. When a virtual-to-physical address mapping is not in the TLB, the processor must do an expensive computation to generate a new virtual-to-physical address mapping.

To increase the amount of memory the processor can address without performing expensive TLB updates, many processors allow larger page sizes to be used. On x86_64 processors huge pages are 2MB, 512 times larger than regular 4KB pages. In ideal situations huge pages can decrease the overhead of TLB updates (misses). However, huge page use can increase memory pressure, add latency for minor page faults, and add overhead when splitting huge pages or coalescing normal-sized pages into huge pages.
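
One rough way to gauge how much a workload suffers from TLB misses is to count them with perf. The generic dTLB event names below are an assumption and may be mapped differently, or be unavailable, on a given processor; ./my_workload is a hypothetical placeholder for the program being measured:

perf stat -e dTLB-load-misses,dTLB-store-misses ./my_workload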

There are two mechanisms available for huge pages in Linux: the original hugepages mechanism and Transparent Huge Pages (THP). The original hugepages mechanism requires explicit configuration. The newer THP mechanism automatically uses larger pages for dynamically allocated memory in Red Hat Enterprise Linux 6.
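
The current THP mode can be checked through sysfs. The path below is the upstream location and may differ on a particular system; Red Hat Enterprise Linux 6, for example, exposes the same setting under /sys/kernel/mm/redhat_transparent_hugepage/enabled. The active mode is shown in square brackets:

cat /sys/kernel/mm/transparent_hugepage/enabled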

To determine whether the newer Transparent Huge Pages (THP) or the older HugePages mechanism is being used, look at the output of /proc/meminfo as below:

$ cat /proc/meminfo|grep Huge
AnonHugePages: 3049472 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB

The AnonHugePages entry lists the amount of memory that the newer Transparent Huge Page mechanism currently has in use. For this machine that is 3049472 kB, or 1489 huge pages of 2048 kB each.

In this case there are zero pages in the pool of the older hugepage mechanism, as shown by the HugePages_Total of 0. HugePages_Free shows how many pages are still available for allocation, which will be less than or equal to HugePages_Total. The number of HugePages in use can be computed as HugePages_Total - HugePages_Free. For more information about the configuration of HugePages see Tuning and Optimizing Red Hat Enterprise Linux for Oracle 9i and 10g Databases.
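
Both figures can be computed directly from /proc/meminfo. The awk one-liners below are a small sketch of that arithmetic: the first divides AnonHugePages by Hugepagesize to get the number of transparent huge pages in use, and the second subtracts HugePages_Free from HugePages_Total:

awk '/AnonHugePages/ {thp=$2} /Hugepagesize/ {print thp/$2}' /proc/meminfo
awk '/HugePages_Total/ {t=$2} /HugePages_Free/ {f=$2} END {print t-f}' /proc/meminfo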

Determining whether page fault latency is due to huge page use

Huge page use can reduce the number of TLB updates required to access large regions of memory and lower the overall cost of TLB updates, but it can increase costs and latency for other operations. When a user-space application is given a range of addresses for a memory allocation, the assignment of a physical page is deferred until the first time the page is accessed. To prevent information leakage from the previous user of the page, the kernel writes zeros over the entire page. For a 4096-byte page this is a relatively short operation and will only take a couple of microseconds. The x86 huge pages are 2MB in size, 512 times larger than the normal page, so the operation may take hundreds of microseconds and impact the operation of latency-sensitive code. Below is a simple SystemTap command-line script that shows which applications have huge pages zeroed out and how long those operations take. It will run until Ctrl-C is pressed.

stap  -e 'global huge_clear probe kernel.function("clear_huge_page").return {huge_clear [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'

Below is a run of the above SystemTap clear huge page script. The script outputs a list sorted from the executable name and process with the most huge page clears to the one with the least. The @count is the number of times that process encountered a huge page clear operation. Following that information are timing statistics displayed in microseconds of wall-clock time: @min and @max are the minimum and maximum time, respectively, to clear out a page, and @sum is the total wall-clock time. In the example below the ld process 17050 took a total of 1924 microseconds to clear out huge pages and on average those page clears took 128 microseconds.

#  stap  -e 'global huge_clear probe kernel.function("clear_huge_page").return {huge_clear [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'
^Chuge_clear["ld",17050] @count=15 @min=114 @max=148 @sum=1924 @avg=128
huge_clear["ld",27996] @count=13 @min=121 @max=160 @sum=1674 @avg=128
huge_clear["ld",19595] @count=11 @min=86 @max=181 @sum=1251 @avg=113
huge_clear["cc1",22840] @count=6 @min=108 @max=180 @sum=862 @avg=143
huge_clear["ld",15640] @count=5 @min=160 @max=599 @sum=1274 @avg=254
huge_clear["ld",27733] @count=4 @min=95 @max=145 @sum=443 @avg=110
huge_clear["cc1",24455] @count=4 @min=103 @max=159 @sum=535 @avg=133
huge_clear["cc1",20431] @count=3 @min=112 @max=172 @sum=408 @avg=136
huge_clear["cc1",21906] @count=3 @min=125 @max=159 @sum=431 @avg=143

The system may attempt to save memory by using the same physical page for multiple processes. When one of the processes attempts to modify the contents of the page, a new copy of the page needs to be made. The Copy-On-Write (COW) operation for a huge page can be observed with a script very similar to the one watching for huge pages to be zeroed out. Below is the script to watch for Copy-On-Write on huge pages; it will output data in a similar format.

stap  -e 'global huge_cow probe kernel.function("copy_user_huge_page").return {huge_cow [execname(), pid()] <<< (gettimeofday_us() - @entry(gettimeofday_us()))}'

Determining whether huge page split and collapse operations are affecting performance

Because some portions of the kernel code only work with normal-sized pages, the kernel may convert a huge page into a set of normal-sized pages using a split operation. One can identify whether split operations are occurring with the following SystemTap script:

stap -e 'probe kernel.function("split_huge_page") { printf("%s: %s(%d)\n", pp(), execname(), pid());}'

Below is an example run of the script showing which processes are performing split huge page operations. In this case a virtualized guest machine (qemu-system-x86_64) has some huge page splits.

# stap -e 'probe kernel.function("split_huge_page") { printf("%s: %s(%d)\n", pp(), execname(), pid());}'
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): qemu-system-x86(9473)
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): qemu-system-x86(9473)
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): plugin-containe(16582)
kernel.function("split_huge_page@include/linux/huge_mm.h:103"): StreamT~ns #697(2942)
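
System-wide counts of huge page splits (along with other THP events such as faults and collapses) are also kept in /proc/vmstat. The counter names, for example thp_split, thp_collapse_alloc, and thp_fault_alloc, vary somewhat between kernel versions, so treat the names as an assumption for your kernel:

grep thp_ /proc/vmstat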

The inverse of the huge page split operation is the huge page collapse operation, which converts a set of normal-sized pages into a single huge page. It is desirable to have a range of addresses need fewer TLB entries, but the conversion process is expensive because the system needs to find a candidate set of pages to group together and then copy all the memory from the possibly scattered normal-sized pages into a single huge page. The khugepaged kernel thread searches for candidate pages to collapse into a single huge page. Even if khugepaged is not successful at converting normal-sized pages into huge pages, it may still be taking processor time to search for candidate pages. You can see whether the khugepaged kernel thread is taking a significant amount of processor time with:

top -p `pidof khugepaged`
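
The scanning behavior of khugepaged is controlled by sysfs tunables. The directory below is the upstream location and is an assumption for a given kernel; Red Hat Enterprise Linux 6 places the same files under /sys/kernel/mm/redhat_transparent_hugepage/khugepaged/:

cat /sys/kernel/mm/transparent_hugepage/khugepaged/scan_sleep_millisecs
cat /sys/kernel/mm/transparent_hugepage/khugepaged/pages_to_scan
cat /sys/kernel/mm/transparent_hugepage/khugepaged/pages_collapsed
cat /sys/kernel/mm/transparent_hugepage/khugepaged/full_scans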

If you want to see when the huge page collapse operations occur, the following will note each time khugepaged is able to collapse normal-sized pages into huge pages:

stap -e 'probe kernel.function("collapse_huge_page") {  printf("%-25s: %s (%d) collapse_huge_page\n", tz_ctime(gettimeofday_s()), execname(), pid())}'

The above one-line script will generate output like the following:

$ stap -e 'probe kernel.function("collapse_huge_page") {  printf("%-25s: %s (%d) collapse_huge_page\n", ctime(gettimeofday_s()), execname(), pid())}'
Mon Oct 21 15:12:44 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:13:44 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:13:54 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:14:54 2013 : khugepaged (88) collapse_huge_page
Mon Oct 21 15:15:04 2013 : khugepaged (88) collapse_huge_page
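
If the per-event output is too verbose, a small variation on the earlier scripts simply aggregates the collapse operations per process; SystemTap prints the unread global aggregate when the script is stopped with Ctrl-C. This is a sketch built on the same collapse_huge_page probe point:

stap -e 'global collapses; probe kernel.function("collapse_huge_page") { collapses[execname(), pid()] <<< 1 }'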

TIPS:

If stap fails with an error like:

# semantic error: missing x86_64 kernel/module debuginfo [man warning::debuginfo] under '/lib/modules/3.10.0-327.ali2000.alios7.x86_64/build'

install the kernel debuginfo by running:

# debuginfo-install kernel
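
Alternatively, many distributions ship a stap-prep helper with the systemtap package that attempts to install the kernel debuginfo matching the running kernel; assuming it is available, run:

# stap-prep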
