Yes this is a broad question but I would argue quite a valid one. Sometimes programs and scripts take too long or use too much memory and really start to slow my system down. Fine. Sometimes the system slows down so much that I can barely slide-show my mouse to the terminal and spam Ctrl+C. It baffles me as to why an OS does not give scheduling priority to allow the user to use the mouse, keyboard and kill things. Ever seen this?
> ./program
^C^C^C^C^C^C^C^C^C^C^C^Z^Z^Z^C^C^C^C^C^Clsdhafjkasdf
Now, Ctrl+C isn't as stern as some signals: it can be caught and even ignored by the app, but that's not the case here. Ctrl+Z would do the job just fine too, since I could kill -9 %1 right after, but it doesn't work either.
Another method might be to jump to a virtual console with Ctrl+Alt+F2, log in and kill the offending app, but since the system is so busy this doesn't work and I just get a black screen. Likewise I can't open new terminals (the window pops up but never drops me into a shell), and terminals that are already open may stop responding or fail to run commands.
I suspect one reason the system is so inoperable is that the offending program is hitting swap and pushing the core apps out of main memory. Not even the simplest command to give me a bash prompt or execute kill can get a cycle in edgeways. I have no evidence of this because I can't run top
when it happens. Are there any options to improve the chances of the original Ctrl+C working? Maybe something along the lines of increasing X and terminal priority, or automatically killing programs that use a large portion of memory or start to swap too much?
Is there any other linux-fu I could use to regain control when this happens (e.g. SysRq commands)?
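One note on SysRq: the kernel has to allow it before any of the key combos will work, and many distros restrict it by default (kernel.sysrq is a bitmask; check the current value with sysctl kernel.sysrq). A config sketch; the file name under /etc/sysctl.d/ is just an example:

```ini
# /etc/sysctl.d/90-sysrq.conf  (example file name)
# 1 enables all magic SysRq functions; 0 disables them entirely.
# Other values are a bitmask allowing only specific functions.
kernel.sysrq = 1
```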
Update: After some more tests I'm pretty sure it's apps using too much memory and hitting swap. Even after killing the app in question, others take a very long time to start being responsive as though they've been pushed out of main memory. I'd really like some way to automatically limit high memory usage programs to main memory. If it hits swap it's going to be too slow anyway so what's the point of letting it continue.
NOTE: I'm not after a solution to a specific app and don't know ahead of time when some operation is going to chew up memory. I want to solve this kind of slowdown system wide. I.e. many programs cause this. AFAIK I haven't messed with the system config and it's a pretty standard fedora install. I'm not surprised by these slowdowns but I do want more control.
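Even though a per-app fix isn't the goal, it's worth noting that a single heavy run can be capped without limiting the whole shell. A sketch using prlimit from util-linux; the 2 GiB figure is an arbitrary example, and ./program stands in for whatever is misbehaving:

```shell
# Cap the address space of just this one command, rather than everything
# started from the shell. Note prlimit takes bytes, unlike ulimit -v's kB.
prlimit --as=$((2 * 1024 * 1024 * 1024)) -- ./program
```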
I'd like to keep my window manager running, so these are last resorts I'm hoping to avoid; I generally only need them when my GPU is stuck in a loop and blocking X. If enabled, Ctrl+Alt+Backspace is a handy shortcut to kill X and all your apps, taking you back to the login screen. A more potent command, again if enabled, is Alt+SysRq+K.
Alt+SysRq+F (thanks, @Hastur) invokes the kernel's OOM killer to kill the biggest memory hog; it's quite destructive, but can help as a last resort. If none of that works, it's holding-the-power-button time.
Update: Not entirely sure of all the consequences here but @Xen2050's suggestion of ulimit
seems to solve many problems...
# Cap per-process virtual memory (soft limit) at half of physical RAM.
# /proc/meminfo reports MemTotal in kB, the same unit ulimit -v expects.
TOTAL_PHYSICAL_MEMORY=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
ulimit -Sv $(( TOTAL_PHYSICAL_MEMORY / 2 ))
Going to leave this in my bashrc and see how things go.
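A quick sanity check (assuming python3 is available) that the limit behaves as intended: in a subshell with a tight cap, a large allocation fails fast instead of pushing the machine into swap.

```shell
# With a ~50 MB virtual-memory cap, a 200 MB allocation fails immediately.
# The subshell keeps the tightened limit from affecting the current shell.
( ulimit -Sv 50000
  python3 -c 'x = bytearray(200 * 1024 * 1024)' 2>/dev/null \
    && echo "allocation succeeded" \
    || echo "allocation blocked by ulimit" )
# prints: allocation blocked by ulimit
```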
Update: Things mostly seem good, except for apps that share large libraries or map large files: their virtual size is huge even though they consume barely any actual memory and are unlikely to hit swap. There doesn't seem to be a number low enough to kill deadly swap-hitting apps while leaving regular ones (such as amarok, at 4.6 GB VIRT) running.
Related: https://unix.stackexchange.com/questions/134414/how-to-limit-the-total-resources-memory-of-a-process-and-its-children/174894, though that still leaves the issue of limiting applications once they start to hit swap a lot.
This is exactly the kind of solution it turns out I'm after: Is it possible to make the OOM killer intervene earlier?
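A related knob, if the OOM killer is the mechanism of choice: a process can raise its own oom_score_adj (no root needed to raise it), so that when memory does run out the kernel kills it first rather than something important. A sketch; the wrapper name run_expendable is made up:

```shell
# Start a risky job marked as the OOM killer's preferred victim.
# 1000 is the maximum oom_score_adj value.
run_expendable() {
    ( echo 1000 > /proc/self/oom_score_adj  # raising the score needs no root
      exec "$@" )                           # exec keeps the PID, so the score sticks
}

run_expendable ./program   # hypothetical usage with any command
```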
Have you tried launching the program with nice -n 19 beforehand? Of course, if you need it to keep a bit more priority you can try e.g. -n 15... BTW, I knew Alt+SysRq plus one of the following letters: R to take keyboard control back from X, S to sync, E to terminate all processes gracefully, I to kill them abruptly, U to remount the filesystems read-only, and B to reboot. Alt+SysRq+K I didn't know about.
The app that triggered this for me was kde2d in R, which it turns out is very slow with large datasets (I had no idea, but can definitely say I do now). Still, the issue is general and recurring, not tied to a specific task or application: I've had similar problems running filters in GIMP, big rigid-body simulations in Blender, CUDA applications, accidentally opening binary files in text editors, etc.