
BLU Discuss list archive



memory usage



I explained buffers in my previous email.
Memory itself is chunked into pages (page size depends on the OS, hardware, 
etc.). Pages in physical memory can generally be in one of several states:
1. unused or free.
2. used - clean - means that they don't need to be paged out.
3. used - dirty - means that if the VM needs to reuse that page, it must 
first be written out to the paging area (eg. swap).
4. wired - cannot be paged out or reused. The OS will wire a page for 
several purposes. When an I/O is in progress, a page may be temporarily 
wired. The OS may permanently wire some pages for its own use. 

When a page is assigned to a process, it is mapped to a virtual address 
within a process. The system's hardware generally does the mapping.

Also note that many pages are shared. When you run a program, the 
program's executable code is generally mmaped, and will be shared by any 
other process using the same executable. Additionally, shared libraries are 
mmaped into the process. Each process also gets three kinds of data: 
initialized data (data that is initialized at compile/link time), 
uninitialized data (BSS, which is zero-filled), and heap. The heap grows as 
the process uses malloc (or other allocators). Most memory managers handle 
bss pages as page table entries, and don't create the pages until they are 
used. The program's stack is also a form of uninitialized data. I don't 
know the Linux way of mapping the stack. Most stacks grow downward from 
some virtual address, but some Unixes (eg. HP-UX on PA-RISC) grow the 
stack upward. 
The classic Unix model for a process is:
---------- top of process' virtual memory
stack   |
        v

        ^
        |
heap
bss
data
text (eg. code)
---------- bottom

Most Unixes today allocate very large virtual address spaces and depart 
from this old pre-virtual model, but conceptually that's what it looks 
like. Shared libraries fit a similar model except that their data and bss 
belong to each process, and they allocate onto the process' heap. 

In Tru64 Unix, which is a 64-bit version of OSF/1, the stack starts at the 
program's start of text and grows down toward the "zero page".

Also, most Unix (et al.) systems map a zero page that a user process cannot 
address, so that dereferencing address 0 will always result in a segfault. 
HP-UX is one of the few that allow a user process to address 0 without a 
fault. 

When physical memory starts to fill up, dirty pages get swapped out based 
on various Al Gore Ithms. Normally the LRU (least recently used) algorithm 
is used, but that depends on the memory management hardware and the way the 
OS' VM wants to work. Another algorithm uses what are called working sets. 
The VM will try to avoid a page-out operation, so within the LRU algorithm 
there is a cost subalgorithm. For example, an application program may have 
some code that is used only at startup and shutdown. If those routines are 
on a different page from the currently executing section, then when the VM 
needs some memory, it may just grab a page from the app's text segment that 
either has not been used yet, or has not been used in a while, in 
preference to data, which must first be paged out. (Paging operations are 
called page faults.)

When a system starts to use more memory than it has in its real memory 
pool, things start swapping. This can get excessive and the system 
"thrashes". I've tried to be a bit generic for several reasons: the Linux 
VM does change every once in a while, and different chips (like PowerPC) 
have different memory management modules. 
On 2 May 2002 at 11:12, Drew Taylor wrote:

> I wasn't so worried about "free" memory since I knew that it was available 
> upon demand anyway. But I was curious if anyone knows the difference 
> between "buffers" and "cached" memory? Is the buffers number the number of 
> memory buckets, and cached the actual RAM used by these buckets?

--
Jerry Feldman <gaf at blu.org>
Associate Director
Boston Linux and Unix user group
http://www.blu.org PGP key id:C5061EA9
PGP Key fingerprint:053C 73EC 3AC1 5C44 3E14 9245 FB00 3ED5 C506 1EA9





BLU is a member of BostonUserGroups
We also thank MIT for the use of their facilities.



