[Discuss] memory management

Matthew Gillen me at mattgillen.net
Tue Jun 23 11:38:30 EDT 2015


On 06/23/2015 10:18 AM, John Abreau wrote:
> A bit of googling turned up a page about using cgroups to limit
> firefox's memory usage.
> 
> http://jlebar.com/2011/6/15/Limiting_the_amount_of_RAM_a_program_can_use.html


ulimit and prlimit could do the trick, I suppose, but the hard limits
there would require quite a bit of use-case-specific tuning.
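For reference, that route looks something like this (a sketch only; the
2 GiB cap and the PID are made-up values you'd have to tune per use case):

```shell
# Cap the address space of a new process at ~2 GiB; ulimit -v takes KiB,
# and the limit is inherited by the child process.
(ulimit -v 2097152; exec firefox)

# Or clamp an already-running process by PID with prlimit (util-linux);
# --as takes bytes.  12345 is just a placeholder PID.
prlimit --pid 12345 --as=2147483648
```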
CGroups are much closer to what I want, but not for the rogue processes:
 I think making a cgroup for core processes and setting their swappiness
to zero actually gets me closer to what I'm looking for.

What I really wanted was for the rogue process itself to pay the cost of
memory access, instead of spreading that pain throughout the system.
But CGroups gets me close, I think:

According to this:
 https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Resource_Management_Guide/sec-memory.html

you can create a group, and set swappiness and oom-killer eligibility
for that group.  So ideally I would put certain critical things needed
for recovery (e.g. ssh daemon, agetty, maybe even the window manager;
any process that allows me to find and kill what I need to help the
system recover) in a group that would effectively exempt them from the
thrashing.
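Concretely, with the v1 memory controller that setup would be something
like the following (needs root; the mount path, the group name
"critical", and sshd as the example process are all illustrative):

```shell
# Create a cgroup for recovery-critical processes under the v1 memory
# controller.
mkdir /sys/fs/cgroup/memory/critical

# Keep the group's pages off swap entirely...
echo 0 > /sys/fs/cgroup/memory/critical/memory.swappiness

# ...and make its members ineligible for the OOM killer.
echo 1 > /sys/fs/cgroup/memory/critical/memory.oom_control

# Move the ssh daemon into the group (one PID per write to 'tasks').
echo "$(pidof -s sshd)" > /sys/fs/cgroup/memory/critical/tasks
```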

HOWEVER, there is still a problem. For instance, my current system
doesn't actually launch an 'agetty' (login) process on a virtual
terminal until you switch to it.  That means you need some 'reserved'
memory, which my quick reading of the cgroups documentation doesn't seem
to provide a way to do.  I'd be happy if there were just a small amount
of memory reserved, enough to:
 - launch agetty and login
 - launch root's login shell
 - run killall eclipse

That you can't easily do this is odd, because other system resources
have reserves (e.g. filesystems tend to have a small percentage reserved
for root, and the system-wide limit on process handles has a set-aside
for root).  For some reason, reserving a bit of mlock()'d memory is
harder...

Thanks,
Matt
