
BLU Discuss list archive



[Discuss] memory management



I'll chime in on this one more time, just to be clear about what my beef
with Linux is here.  Several people have said, in effect, "Have more
RAM" or "Have enough RAM for what you need".  That's obviously true,
but it misses the point.

For my day-to-day, I do have enough RAM.  What sometimes happens is that
the cesspool of problems that is JavaScript engines, or Eclipse, goes
completely off the rails and starts gobbling up memory.  Looking at the
forest rather than the trees, what I'm getting at here is that this will
always be an issue: vastly over-provisioning RAM might mask the problem
for a while, but eventually your day-to-day is going to start including
multiple VMs and you're back to square one.

Operating systems have concerned themselves for a long time with not
letting unprivileged processes destroy a system.  If a process tries to
touch memory that doesn't belong to it, BAM! The OS says "bad process!"
and hits it with a segmentation fault (SIGSEGV).  By default the signal
kills the offending process, or the process can catch it and try to
recover, but either way the integrity of the rest of the system is
preserved at the expense of not letting the naughty process do what it
was trying to do.
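
To make that concrete, here's a minimal sketch (my own, purely
illustrative, not anything from the thread) of the catch-and-recover
path: install a SIGSEGV handler with sigaction() and use siglongjmp()
to bail out of the faulting code.  The NULL-pointer write stands in for
"touching memory that doesn't belong to you".

    #include <stdio.h>
    #include <signal.h>
    #include <setjmp.h>

    static sigjmp_buf recover_point;

    static void on_segv(int sig)
    {
        (void)sig;
        /* Jump back past the faulting code instead of dying. */
        siglongjmp(recover_point, 1);
    }

    int main(void)
    {
        struct sigaction sa;
        sa.sa_handler = on_segv;
        sigemptyset(&sa.sa_mask);
        sa.sa_flags = 0;
        sigaction(SIGSEGV, &sa, NULL);

        if (sigsetjmp(recover_point, 1) == 0) {
            int *bad = NULL;
            *bad = 42;          /* touch memory that isn't ours */
            puts("never reached");
        } else {
            puts("caught SIGSEGV; the rest of the process is intact");
        }
        return 0;
    }

Either way the damage is confined to the one process; the kernel never
lets the bad access land.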

What strikes me as odd and wrong is that the OS doesn't seem to protect
itself from thrashing.  The system is perfectly happy to render itself
inoperative in the service of some lone process sucking up memory.

I guess we should count ourselves lucky that more of the processes we
run every day don't go off the rails in terms of memory usage.  And it
might be that this behavior isn't easy to reproduce in a test
environment: it's possible that what causes the problem is sucking up a
lot of memory and then trying to /use/ all of it constantly.  Hard to
say what the JS engine is doing...
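
If that guess is right, the pattern itself is trivial to write down.
Here's a deliberately pathological sketch (mine, not anything the JS
engine actually does); whether it thrashes or just gets picked off by
the OOM killer depends on your swap and overcommit settings, so don't
run it without a limit (ulimit -v, or a cgroup memory cap):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        /* Placeholder size: pick something larger than physical RAM. */
        const size_t SIZE = (size_t)16 * 1024 * 1024 * 1024;  /* 16 GiB */
        char *buf = malloc(SIZE);
        if (!buf)
            return 1;

        /* Grab a lot of memory, then /use/ all of it constantly: touch
         * every page in a loop so the working set never shrinks and
         * nothing can stay swapped out. */
        for (;;)
            memset(buf, 0xAA, SIZE);
    }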

I'll note that I've pushed Linux systems to resource limits in other
ways on plenty of occasions, for example by running out of file
descriptors.  While there are certainly ways that you can abuse the
system and make the /system/ run out of file descriptors, the
protections that were in place seemed reasonable: my user was out of
file descriptors and therefore couldn't start any new processes, but
'root' could still do things.
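
For what it's worth, the per-process side of that protection is the
RLIMIT_NOFILE resource limit (the system-wide ceiling is a separate
knob, /proc/sys/fs/file-max).  A minimal sketch of poking at it, with
the caveat that the numbers here are made up: an unprivileged process
can read the limit and lower its own soft limit, but only root can
raise the hard limit.

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;

        if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
            printf("fd limit: soft=%llu hard=%llu\n",
                   (unsigned long long)rl.rlim_cur,
                   (unsigned long long)rl.rlim_max);

        /* Lowering our own soft limit is always allowed; raising the
         * hard limit is reserved for root. */
        rl.rlim_cur = rl.rlim_max < 256 ? rl.rlim_max : 256;
        if (setrlimit(RLIMIT_NOFILE, &rl) != 0)
            perror("setrlimit");

        return 0;
    }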

Matt


