
Minimal test case for a Linux system that does not have swap (or run sudo swapoff -a before testing). Run the following bash one-liner as a normal user:

while true; do date; nice -20 stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 1 --timeout 10s; sleep 5s; done

and run the following bash one-liner in a high priority root shell (e.g. sudo nice -n -19 bash):

while true; do NS=$(date '+%N' | sed 's/^0*//'); let "S=998000000 - $NS"; S=$(( S > 0 ? S : 0)); LC_ALL=C sleep "0.$S"; date --iso=ns; done

The high priority process is supposed to run date every second as accurately as possible. However, even though this process runs at priority -19, the background process running at priority 20 is able to cause major delays. There seems to be no limit to the latency induced by the low priority background process, because even higher delays can be triggered by increasing the stress --timeout value.
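
To put a number on the delay, one rough approach (just a sketch, not part of the original test) is to also log a numeric timestamp from the high priority loop, e.g. by appending date '+%s.%N' >> delays.log inside it, and then post-process the log:

# report the largest interval between consecutive iterations; anything well above 1 second is latency induced by the background load
awk 'NR > 1 { d = $1 - prev; if (d > max) max = d } { prev = $1 } END { printf "max gap: %.3f s\n", max }' delays.log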

Is there a way to limit the maximum latency, and to automatically kill stress if that is what it takes? Increasing /proc/sys/vm/user_reserve_kbytes, /proc/sys/vm/admin_reserve_kbytes or /proc/sys/vm/min_free_kbytes does not seem to help.
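
For reference, this is how those reserves can be inspected and raised; the values below are purely illustrative and not a recommendation:

# show the current values (all in kilobytes)
sysctl vm.user_reserve_kbytes vm.admin_reserve_kbytes vm.min_free_kbytes

# temporarily raise them (illustrative numbers only; size them to your RAM)
sudo sysctl -w vm.admin_reserve_kbytes=262144
sudo sysctl -w vm.user_reserve_kbytes=262144
sudo sysctl -w vm.min_free_kbytes=131072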

  • CPU pinning, at least for the highest priority process, might somewhat mitigate it. I used it with some success in the past for similar real-world situations; have a look at unix.stackexchange.com/questions/417672/how-to-disable-cpu/… (a minimal pinning sketch follows these comments). Commented Feb 10, 2018 at 18:01
  • I believe that the latency is caused by near OOM situation and the high priority process still needs to launch small new processes. Pinning to another CPU does not help if there is not enough RAM to start even a small new process such as date. As far as I can see it, the problem is memory starvation, not CPU starvation. Commented Feb 10, 2018 at 19:15
  • When you have one, you usually end up having the other. Granted, there are situations where it would not help. Depending on the situation, a controlled reboot under a watchdog might be preferable to starting to kill things remotely. unix.stackexchange.com/questions/366973/… Commented Feb 10, 2018 at 19:17
  • I think I'm hitting some kernel bug. lkml.org/lkml/2017/5/20/186 Commented Feb 11, 2018 at 18:19
  • See also: elinux.org/images/a/a9/… Commented Apr 2, 2020 at 14:05
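
As a minimal sketch of the pinning idea from the first comment, assuming a 4-core machine (the core numbers and stress parameters are only illustrative):

sudo taskset -c 0 nice -n -19 bash    # reserve core 0 for the latency-sensitive shell

taskset -c 1-3 nice -n 19 stress --vm-bytes 1G --vm-keep -m 1 --timeout 10s    # keep the memory hog on the remaining cores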

2 Answers


Please consider trying* the kernel patch from this question, as it seems to do the job (avoid high latency near OOM) for me so far, even when using your code from the question to test it. I'm also avoiding a ton of disk thrashing (for example when I compile Firefox, which previously caused the OS to freeze due to running out of memory).
The patch avoids evicting Active(file) pages, thus keeping (at least) the executable code pages in RAM, so that context switches don't cause kswapd0(?) to re-read them (which would cause lots of disk reading and a frozen OS).

* or even suggesting a better way?
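
If you want to watch the eviction this patch targets while reproducing the freeze, the file-backed LRU sizes are exposed in plain /proc/meminfo (no patch required); a collapsing Active(file) value during the stress run is the symptom described above:

while true; do grep -E 'Active\(file\)|Inactive\(file\)' /proc/meminfo; sleep 1; done   # watch the file-backed pages shrink under memory pressure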

  • Interesting patch. I think it's a bit heavy handed, but triggering the OOM Killer sooner is definitely the correct behavior. I guess triggering on increased mm allocation latency is better than avoiding kswapd, but real time benchmarking could be different. Commented Aug 30, 2018 at 9:52
  • Were you able to get any dmesg output from the patch you mentioned in a comment on your question ( lkml.org/lkml/2017/5/20/186 ), which I adapted to 4.18.5 here: github.com/constantoverride/qubes-linux-kernel/blob/… ? I ask because I got no output from it (unless I missed it), and according to your previous comment I should get some output if mm allocation latency is at play. – user306023 Commented Aug 30, 2018 at 13:03
  • @MikkoRantalainen It is the same Memory allocation stall watchdog patch that you mentioned in a comment on the OP, and that's where I originally found it. But the one you just linked is older (15 March 2017, vs 20 May 2017 in the OP comment). It is a good patch and I'm keeping both (it and mine) applied. Cheers! – user306023 Commented Aug 31, 2018 at 11:03
  • The kernel does not have enough history per page to do really clever stuff. As far as I know, it basically knows whether a page has been loaded back from swap at some point in the past, but it has no idea how long ago that happened. And I'm not sure it even remembers that it had to re-read the file from disk (in the case of an executable file). Commented Sep 2, 2018 at 15:36
  • You're right that if one sets up a hard malloc timeout trigger for the OOM Killer, the system may end up killing a process even with half the memory still free. That should not happen in the normal case, but if you're not running a PREEMPT or RT kernel, I guess it could happen because of locking between different kernel threads when multiple user processes use lots of CPU. However, if you're looking for guaranteed latency, killing processes even with 50% free may be exactly what you want! Commented Sep 2, 2018 at 15:46

There are a few tools designed to avoid this particular issue, listed with increasing complexity/configurability:

  • earlyoom, probably good enough for desktop/laptop computers
  • nohang, a more configurable solution
  • Facebook's oomd, built for their own servers.
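
A minimal, hedged example of the first option (the -m flag is from the earlyoom documentation; check earlyoom --help on your version):

earlyoom -m 5    # start killing the process with the highest oom_score once available memory drops below 5% of total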
  • Thanks. I've been running earlyoom, but when I overcommit memory a lot it starts to be too trigger-happy. (I often run with MemTotal: 32 GB and Committed_AS: 45-55 GB, which makes MemAvailable often display zero even though the system keeps running fine.) Cannot run oomd due to its dependencies. I guess I need to check out nohang when I have time. Commented Mar 19, 2019 at 14:17
  • I also use zram to extend the available amount of memory; it works pretty well. I set up a zram device with 75% of my total RAM, with the lz4 algorithm, and I observe compression factors around 4 to 6. This means that when the zram device is full it takes less than 20% of my total RAM, effectively adding more than 50% of RAM (when the zram device is empty, it consumes almost no memory); a sketch of this setup follows below. – nat chouf Commented Mar 29, 2019 at 22:43
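
For reference, a sketch of that zram setup using util-linux's zramctl, assuming 32 GB of RAM (so 75% is roughly 24G; adjust the size and algorithm to your machine):

sudo modprobe zram                                     # make sure the zram module is loaded
DEV=$(sudo zramctl --find --size 24G --algorithm lz4)  # grab a free zram device, sized and compressed as in the comment
sudo mkswap "$DEV"
sudo swapon --priority 100 "$DEV"                      # prefer zram over any disk-backed swap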

