On Sep 14, 2010, at 3:45 PM, Derek Martin wrote:

> Yeah, except the trouble is what you said was vague, and as far as I
> can tell, didn't make any sense:

Of course it's vague. There *IS* no right answer. It depends on what
your servers are doing.

> What does that have to do with Virtual Memory (VM)?

Virtual Machine as in Java Virtual Machine?

> In any event, as Jerry pointed out, application memory leaks don't
> require a reboot to fix; they just require restarting the application.
> That will also solve any real or imagined performance degradation
> caused by memory fragmentation, since all of the associated blocks
> will be freed, and (if there's any significant pressure on memory)
> immediately reused.

Except that in production it isn't that simple, and the Linux kernel
has been *very* vulnerable to memory fragmentation issues in the past.
The 2.6 kernel has some features that alleviate most of these, but you
will still run into fragmentation issues with wired, contiguous memory
allocations.

For example, consider an application allocating and deallocating large
chunks of wired memory while the kernel is allocating and deallocating
I/O buffers. Eventually, the memory map is going to look like a chess
board on an acid trip, and restarting the application won't magically
defragment the kernel's buffers.

So back to the "ambiguity": when to reboot depends entirely on the
server and what you are making it do. There is no one answer that
covers all combinations.

--Rich P.
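[Editor's note: to make the checkerboard pattern described above
concrete, here is a minimal toy model in C. It is an illustration
only, with made-up page and chunk sizes; it simulates two allocators
interleaving in one address space and does not reflect the kernel's
actual buddy allocator.]

/* Two allocators (an "application" and the "kernel") interleave
 * allocations across a pretend memory map, then the application
 * frees all of its blocks. Total free space is large, but the
 * kernel's wired buffers pin every other chunk, so no large
 * contiguous free run remains. */
#include <stdio.h>
#include <string.h>

#define PAGES 64          /* pretend physical memory: 64 pages */
#define CHUNK  4          /* each allocation takes 4 pages     */

int main(void)
{
    char map[PAGES];              /* '.' free, 'A' app, 'K' kernel */
    memset(map, '.', sizeof map);

    /* Interleave "app" and "kernel" chunks across the whole map. */
    for (int p = 0; p + 2 * CHUNK <= PAGES; p += 2 * CHUNK) {
        memset(map + p,         'A', CHUNK);   /* wired app memory   */
        memset(map + p + CHUNK, 'K', CHUNK);   /* kernel I/O buffers */
    }

    /* Restart the application: all of its memory comes back... */
    for (int p = 0; p < PAGES; p++)
        if (map[p] == 'A')
            map[p] = '.';

    /* ...but the largest contiguous free run is still only
     * CHUNK pages, even though half the map is free. */
    int free_pages = 0, run = 0, longest = 0;
    for (int p = 0; p < PAGES; p++) {
        if (map[p] == '.') {
            free_pages++;
            if (++run > longest)
                longest = run;
        } else {
            run = 0;
        }
    }

    printf("map: %.*s\n", PAGES, map);
    printf("free pages: %d, largest contiguous run: %d\n",
           free_pages, longest);
    return 0;
}

Running it prints 32 free pages but a largest contiguous run of only
4, which is the problem a restart alone cannot fix.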