
[Discuss] memory management



Matthew Gillen wrote:
> If I try to run Firefox, and a few java apps (e.g., Eclipse), my
> machine thrashes about and effectively locks up because of
> out-of-memory issues.
> 
> After going on like this for literally 10 minutes, OOM-killer sometimes
> kills the right thing...

I've noticed this as well on my laptop with 6 GB RAM. Once free RAM
drops below a certain threshold, things just drop off a cliff, and even
after manually killing memory hogs, it takes a while for the needed
pages to swap back into RAM. Meanwhile the system is locked up, or
nearly so.

I've considered a low-tech solution, like having a background script
pop up a notification when free RAM drops below some threshold,
prompting me to restart various long-running, leaky processes.
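
Something like this minimal Python sketch is what I have in mind
(assuming a desktop with notify-send installed; the threshold and poll
interval are arbitrary):

  #!/usr/bin/env python
  # Poll /proc/meminfo and pop up a desktop notification when
  # available RAM drops below a threshold.
  import subprocess
  import time

  THRESHOLD_MB = 512   # arbitrary; tune to taste
  POLL_SECONDS = 30

  def available_mb():
      # MemAvailable is the kernel's estimate of RAM available to
      # new workloads without swapping (kernels 3.14 and later).
      with open("/proc/meminfo") as f:
          for line in f:
              if line.startswith("MemAvailable:"):
                  return int(line.split()[1]) // 1024  # kB -> MB
      return None

  while True:
      mb = available_mb()
      if mb is not None and mb < THRESHOLD_MB:
          subprocess.call(["notify-send", "Low memory",
                           "%d MB available; restart the leaky apps" % mb])
      time.sleep(POLL_SECONDS)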

(This wouldn't help on the occasions when I've hit a bug in a
Thunderbird add-on that resulted in runaway memory allocations.)


> The behavior when low on memory seems atrociously bad.

Yes. It's disappointing that after decades, Linux systems still have
their performance fall off a cliff when free RAM runs low. I expect
performance to be significantly impacted, but the kernel should be
swapping pages out more gradually as RAM starts running low, and the
swapping shouldn't lock up the entire machine. It seems to assume the
memory allocation for the last program that pushed things over the
limit is so important that every CPU and I/O cycle should go to
swapping to meet that need. I'd rather just see that one process block.
(Of course, just about every interaction allocates some RAM, so once
you are out, if there are no reserves, every process ends up stuck
waiting on swap.)


Dan Ritter wrote:
> - play with config settings for the kernel; see
>   https://www.kernel.org/doc/Documentation/sysctl/vm.txt
>   for documentation. Swappiness, OOM behavior, reserved
>   memory...

Swappiness has some potential to help here. I've tweaked it on
servers, but haven't experimented with it on a desktop. It could be
time-consuming to arrive at a value that provides a benefit. Does
anyone have recommended values, backed by experiments showing they
helped?

There are lots of knobs to twiddle there. I'd be surprised if the
situation couldn't be somewhat improved by adjusting them right.
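
If anyone wants to experiment, swappiness can be changed on the fly; a
minimal sketch in Python (needs root, and the value shown is a guess,
not a tested recommendation):

  # Equivalent to `sysctl vm.swappiness=10`. Not persistent across
  # reboots; add it to /etc/sysctl.conf once a good value is found.
  def set_swappiness(value):
      with open("/proc/sys/vm/swappiness", "w") as f:
          f.write(str(value))

  set_swappiness(10)  # a guess, not a tested recommendation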


Matthew Gillen wrote:
> Swappiness doesn't really help here. That just controls how eager
> linux is to swap something out.  Once you're out of physical RAM, the 
> OS is going to start swapping if it can.

That doesn't entirely make sense. If setting swappiness to zero means
swap is never used, and setting it to a high value means it is
aggressively used, then in the latter case it means the kernel is moving
stuff from RAM to swap. If it is moved to swap, it ought to free up
physical RAM.

What I do believe is that even with a high swappiness you still might
(will) get DoSed when you run out of physical RAM, but it should take
longer to reach that point.

Although I don't know the granularity at which the swapping decisions
are made. My current FF is using 2 GB of virtual memory, with 650 MB
resident; that ratio is about typical. Does increasing swappiness
result in a lower resident-to-VM ratio for FF, or does it just mean
other idle processes are more apt to be swapped out? If the latter,
and most of your RAM is taken up by a few huge foreground processes,
then yeah, increasing swappiness won't change much.
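
One way to measure it would be to snapshot the VmSize and VmRSS fields
from /proc/<pid>/status before and after changing swappiness; a quick
sketch (the firefox_pid name below is hypothetical):

  # Compare resident vs. virtual size for a process, in kB.
  def vm_stats(pid):
      stats = {}
      with open("/proc/%d/status" % pid) as f:
          for line in f:
              if line.startswith(("VmSize:", "VmRSS:")):
                  field, kb = line.split()[:2]
                  stats[field.rstrip(":")] = int(kb)
      return stats

  # e.g. vm_stats(firefox_pid) might return
  # {'VmSize': 2097152, 'VmRSS': 665600}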


> What I want for desktop environments is behavior like: if you run out 
> of memory, kill the thing that's hogging the most.

See Dan's link:
https://www.kernel.org/doc/Documentation/sysctl/vm.txt

  oom_kill_allocating_task

  If this is set to zero, the OOM killer will scan through the entire
  tasklist and select a task based on heuristics to kill.  This normally
  selects a rogue memory-hogging task that frees up a large amount of
  memory when killed.

  If this is set to non-zero, the OOM killer simply kills the task that
  triggered the out-of-memory condition. This avoids the expensive
  tasklist scan.

The default is zero, so it is supposedly trying to implement exactly
what you want. Clearly the heuristics aren't working for you. Maybe
you should disable it and use a custom OOM killer. (Or adjust the
scoring, as you suggested.)
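
Adjusting the scoring is just a write to /proc; a sketch (the PID is
hypothetical, and you need privileges to raise another process's
score):

  # /proc/<pid>/oom_score_adj ranges from -1000 (never kill) to
  # +1000 (prefer to kill first).
  def set_oom_score_adj(pid, adj):
      with open("/proc/%d/oom_score_adj" % pid, "w") as f:
          f.write(str(adj))

  # e.g., volunteer a known memory hog as the first victim:
  # set_oom_score_adj(firefox_pid, 500)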

The problem with improving OOM killer process choice is that by the time
it gets invoked, your system is already locking up. You really want a
solution that intervenes before the swapping starts.

I wonder if setting overcommit_memory to 2 would help here. See:
https://www.kernel.org/doc/Documentation/vm/overcommit-accounting

That prevents a process from allocating too much memory: if it tries,
the allocation returns an error (rather than triggering the OOM
killer). It's not clear to me whether this applies to an accumulation
of small allocations, or only to single large allocations.
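
It would be easy enough to test; a sketch (needs root to flip the
knob, and whether the allocation below is refused depends on your
commit limit, i.e. overcommit_ratio plus swap):

  # Switch to strict overcommit accounting, then attempt an
  # allocation well beyond the commit limit. In mode 2 it should
  # fail up front (a MemoryError in Python) instead of invoking
  # the OOM killer.
  with open("/proc/sys/vm/overcommit_memory", "w") as f:
      f.write("2")

  try:
      hog = bytearray(64 * 1024**3)  # try to commit 64 GB at once
      print("allocation succeeded; the commit limit is above 64 GB")
  except MemoryError:
      print("allocation refused up front; no OOM killer involved")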

 -Tom

-- 
Tom Metro
The Perl Shop, Newton, MA, USA
"Predictable On-demand Perl Consulting."
http://www.theperlshop.com/


