Anatomy of a Program in Memory
Memory management is the heart of operating systems; it is crucial for both programming and system administration. In the next few posts I’ll cover memory with an eye towards practical aspects, but without shying away from internals. While the concepts are generic, examples are mostly from Linux and Windows on 32-bit x86. This first post describes how programs are laid out in memory.
Each process in a multi-tasking OS runs in its own memory sandbox. This sandbox is the virtual address space, which in 32-bit mode is always a 4GB block of memory addresses. These virtual addresses are mapped to physical memory by page tables, which are maintained by the operating system kernel and consulted by the processor. Each process has its own set of page tables, but there is a catch. Once virtual addresses are enabled, they apply to all software running in the machine, including the kernel itself. Thus a portion of the virtual address space must be reserved to the kernel:
This does not mean the kernel uses that much physical memory, only that it has that portion of address space available to map whatever physical memory it wishes. Kernel space is flagged in the page tables as exclusive to privileged code (ring 2 or lower), hence a page fault is triggered if user-mode programs try to touch it. In Linux, kernel space is constantly present and maps the same physical memory in all processes. Kernel code and data are always addressable, ready to handle interrupts or system calls at any time. By contrast, the mapping for the user-mode portion of the address space changes whenever a process switch happens:
Blue regions represent virtual addresses that are mapped to physical memory, whereas white regions are unmapped. In the example above, Firefox has used far more of its virtual address space due to its legendary memory hunger. The distinct bands in the address space correspond to memory segments like the heap, stack, and so on. Keep in mind these segments are simply ranges of memory addresses and have nothing to do with Intel-style segments. Anyway, here is the standard segment layout in a Linux process:
When computing was happy and safe and cuddly, the starting virtual addresses for the segments shown above were exactly the same for nearly every process in a machine. This made it easy to exploit security vulnerabilities remotely. An exploit often needs to reference absolute memory locations: an address on the stack, the address for a library function, etc. Remote attackers must choose this location blindly, counting on the fact that address spaces are all the same. When they are, people get pwned. Thus address space randomization has become popular. Linux randomizes the stack, memory mapping segment, and heap by adding offsets to their starting addresses. Unfortunately the 32-bit address space is pretty tight, leaving little room for randomization and hampering its effectiveness.
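You can watch the randomization yourself. Here's a minimal sketch, assuming Linux with ASLR enabled plus the glibc behavior described further down (large allocations come from the memory mapping segment rather than the heap): run it a few times and compare the addresses it prints.

```c
/* Prints one address each from the stack, the heap, and the memory
 * mapping segment. With ASLR on, these change from run to run. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    int local = 0;                 /* lives in the current stack frame */
    void *small = malloc(16);      /* small request: served from the heap */
    void *large = malloc(1 << 20); /* 1MB request: anonymous mmap in glibc */

    printf("stack: %p\n", (void *)&local);
    printf("heap:  %p\n", small);
    printf("mmap:  %p\n", large);

    free(small);
    free(large);
    return 0;
}
```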
The topmost segment in the process address space is the stack, which stores local variables and function parameters in most programming languages. Calling a method or function pushes a new stack frame onto the stack. The stack frame is destroyed when the function returns. This simple design, possible because the data obeys strict LIFO order, means that no complex data structure is needed to track stack contents – a simple pointer to the top of the stack will do. Pushing and popping are thus very fast and deterministic. Also, the constant reuse of stack regions tends to keep active stack memory in the CPU caches, speeding up access. Each thread in a process gets its own stack.
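A quick sketch of that frame-per-call behavior: each level of recursion below gets its own copy of local, and on x86 the stack grows toward lower addresses, so the printed frame addresses step downward (build without optimizations, or the compiler may collapse the frames).

```c
#include <stdio.h>

/* Each call pushes a fresh stack frame holding its own 'local'. */
static void descend(int depth) {
    int local;
    printf("depth %d: local at %p\n", depth, (void *)&local);
    if (depth < 3)
        descend(depth + 1);  /* deeper frame, lower address on x86 */
}

int main(void) {
    descend(0);
    return 0;
}
```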
It is possible to exhaust the area mapping the stack by pushing more data than it can fit. This triggers a page fault that is handled in Linux by expand_stack(), which in turn calls acct_stack_growth() to check whether it’s appropriate to grow the stack. If the stack size is below RLIMIT_STACK (usually 8MB), then normally the stack grows and the program continues merrily, unaware of what just happened. This is the normal mechanism whereby stack size adjusts to demand. However, if the maximum stack size has been reached, we have a stack overflow and the program receives a Segmentation Fault. While the mapped stack area expands to meet demand, it does not shrink back when the stack gets smaller. Like the federal budget, it only expands.
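You can query that limit directly. A minimal sketch using getrlimit(); on a typical Linux box the soft limit prints as 8388608 bytes (8MB), though an unlimited (RLIM_INFINITY) value is also possible.

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_STACK, &rl) == 0) {
        /* rlim_cur is the soft limit the kernel checks on stack growth */
        printf("soft limit: %lu bytes\n", (unsigned long)rl.rlim_cur);
        printf("hard limit: %lu bytes\n", (unsigned long)rl.rlim_max);
    }
    return 0;
}
```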
Dynamic stack growth is the only situation in which access to an unmapped memory region, shown in white above, might be valid. Any other access to unmapped memory triggers a page fault that results in a Segmentation Fault. Some mapped areas are read-only, hence write attempts to these areas also lead to segfaults.
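Here's a minimal sketch of that failure mode, assuming the 32-bit 3GB/1GB split from the diagrams above: touching kernel space (or any other unmapped address) from user mode triggers a page fault, and the kernel delivers SIGSEGV.

```c
#include <signal.h>
#include <unistd.h>

static void on_segv(int sig) {
    (void)sig;
    /* write() rather than printf(): it is async-signal-safe */
    const char msg[] = "SIGSEGV: that address is off limits\n";
    write(STDERR_FILENO, msg, sizeof msg - 1);
    _exit(1);
}

int main(void) {
    signal(SIGSEGV, on_segv);
    volatile int *p = (int *)0xC0100000;  /* inside kernel space on 32-bit */
    return *p;  /* page fault -> SIGSEGV */
}
```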
Below the stack, we have the memory mapping segment. Here the kernel maps contents of files directly to memory. Any application can ask for such a mapping via the Linux mmap() system call or CreateFileMapping() / MapViewOfFile() in Windows. Memory mapping is a convenient and high-performance way to do file I/O, so it is used for loading dynamic libraries. It is also possible to create an anonymous memory mapping that does not correspond to any files, being used instead for program data. In Linux, if you request a large block of memory via malloc(), the C library will create such an anonymous mapping instead of using heap memory. ‘Large’ means larger than MMAP_THRESHOLD bytes, 128 kB by default and adjustable via mallopt().
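For the curious, here's a bare-bones sketch of a file-backed mapping with mmap(), the same mechanism used when loading dynamic libraries. Error handling is minimal, and mmap() fails for zero-length files, so point it at any non-empty readable file.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("/etc/hostname", O_RDONLY);  /* any non-empty file */
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    /* The file's contents now appear directly in our address space. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    fwrite(p, 1, st.st_size, stdout);  /* no read() calls needed */
    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```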
Speaking of the heap, it comes next in our plunge into address space. Like the stack, the heap provides runtime memory allocation; unlike the stack, it is meant for data that must outlive the function doing the allocation. Most languages provide heap management to programs. Satisfying memory requests is thus a joint affair between the language runtime and the kernel. In C, the interface to heap allocation is malloc() and friends, whereas in a garbage-collected language like C# the interface is the new keyword.
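Here's what that joint affair looks like from the program's side. A minimal sketch, assuming glibc (sbrk() is deprecated but convenient for illustration): it watches the program break, the heap boundary moved by the brk() call described below, shift when malloc() needs more room.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    void *before = sbrk(0);            /* current end of the heap */
    void *block  = malloc(64 * 1024);  /* below MMAP_THRESHOLD: heap memory */
    void *after  = sbrk(0);

    printf("break before: %p\n", before);
    printf("break after:  %p (grew by %ld bytes)\n",
           after, (long)((char *)after - (char *)before));

    free(block);
    return 0;
}
```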
If there is enough space in the heap to satisfy a memory request, it can be handled by the language runtime without kernel involvement. Otherwise the heap is enlarged via the brk() system call to make room for the requested block. Heap management is complex, requiring sophisticated algorithms that strive for speed and efficient memory usage in the face of our programs’ chaotic allocation patterns. The time needed to service a heap request can vary substantially. Real-time systems have special-purpose allocators to deal with this problem. Heaps also become fragmented, shown below:
Finally, we get to the lowest segments of memory: BSS, data, and program text. Both BSS and data store contents for static (global) variables in C. The difference is that BSS stores the contents of uninitialized static variables, whose values are not set by the programmer in source code. The BSS memory area is anonymous: it does not map any file. If you say static int cntActiveUsers, the contents of cntActiveUsers live in the BSS.
The data segment, on the other hand, holds the contents for static variables initialized in source code. This memory area is not anonymous. It maps the part of the program’s binary image that contains the initial static values given in source code. So if you say static int cntWorkerBees = 10, the contents of cntWorkerBees live in the data segment and start out as 10. Even though the data segment maps a file, it is a private memory mapping, which means that updates to memory are not reflected in the underlying file. This must be the case, otherwise assignments to global variables would change your on-disk binary image. Inconceivable!
The data example in the diagram is trickier because it uses a pointer. In that case, the contents of pointer gonzo – a 4-byte memory address – live in the data segment. The actual string it points to does not, however. The string lives in the text segment, which is read-only and stores all of your code in addition to tidbits like string literals. The text segment also maps your binary file in memory, but writes to this area earn your program a Segmentation Fault. This helps prevent pointer bugs, though not as effectively as avoiding C in the first place. Here’s a diagram showing these segments and our example variables:
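The same variables make a compilable sketch (the string literal here is a stand-in, not the one from the diagram): build it and run nm on the binary to see cntActiveUsers land in BSS ('b'), cntWorkerBees and gonzo in data ('d'), and the literal in the read-only part of the image (modern toolchains call that section .rodata rather than text proper).

```c
#include <stdio.h>

static int   cntActiveUsers;               /* uninitialized -> BSS */
static int   cntWorkerBees = 10;           /* initialized   -> data segment */
static char *gonzo = "placeholder string"; /* pointer in data, literal in
                                              read-only memory */

int main(void) {
    printf("bss:  %p\n", (void *)&cntActiveUsers);
    printf("data: %p %p\n", (void *)&cntWorkerBees, (void *)&gonzo);
    printf("text: %p (\"%s\")\n", (void *)gonzo, gonzo);
    return 0;
}
```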
You can examine the memory areas in a Linux process by reading the file /proc/pid_of_process/maps. Keep in mind that a segment may contain many areas. For example, each memory mapped file normally has its own area in the mmap segment, and dynamic libraries have extra areas similar to BSS and data. The next post will clarify what ‘area’ really means. Also, sometimes people say “data segment” meaning all of data + bss + heap.
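A tiny sketch that makes a process dump its own map; each output line is one area, with its address range, permissions, and backing file (or [heap], [stack], or blank for anonymous mappings).

```c
#include <stdio.h>

int main(void) {
    FILE *f = fopen("/proc/self/maps", "r");
    if (!f)
        return 1;

    char line[512];
    while (fgets(line, sizeof line, f))
        fputs(line, stdout);  /* one memory area per line */

    fclose(f);
    return 0;
}
```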
You can examine binary images using the nm and objdump commands to display symbols, their addresses, segments, and so on. Finally, the virtual address layout described above is the “flexible” layout in Linux, which has been the default for a few years. It assumes that we have a value for RLIMIT_STACK. When that’s not the case, Linux reverts to the “classic” layout shown below:
That’s it for virtual address space layout. The next post discusses how the kernel keeps track of these memory areas. Coming up we’ll look at memory mapping, how file reading and writing tie into all this, and what memory usage figures mean.