
Yes, this is a broad question, but I would argue it's quite a valid one. Sometimes programs and scripts take too long or use too much memory and really start to slow my system down. Fine. Sometimes the system slows down so much that I can barely slide-show my mouse over to the terminal and spam Ctrl+C. It baffles me that an OS does not give the user enough scheduling priority to use the mouse and keyboard and kill things. Ever seen this?

> ./program
^C^C^C^C^C^C^C^C^C^C^C^Z^Z^Z^C^C^C^C^C^Clsdhafjkasdf

Now, Ctrl+C isn't as stern as some other signals (it can be handled by the app and even ignored, but that's not the case here). Ctrl+Z would do the job just fine too, since I could run kill -9 %1 right after, but it doesn't work either.
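For what it's worth, Ctrl+C only sends SIGINT, and a program is free to trap or ignore it. A minimal bash illustration (not my actual script, just to show why Ctrl+C alone can't be relied on):

# A script that traps SIGINT (Ctrl+C) and keeps going; Ctrl+\ (SIGQUIT)
# or kill -9 from another shell would still stop it.
trap 'echo "ignoring Ctrl+C"' INT
while true; do sleep 1; done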

Another method might be to jump to a virtual console (Ctrl+Alt+F2), log in and kill the offending app, but since the system is busy this doesn't work and I just get a black screen. Likewise I can't open new terminals (the window pops up but fails to drop me into a shell). Other terminals that are already open may not respond or run commands.

I suspect one reason the system is so inoperable is that the offending program is hitting swap and pushing the more essential apps out of main memory. Not even the simplest command to give me a bash prompt or execute kill can get a cycle in edgeways. I have no evidence of this because I can't run top when it happens. Are there any options to improve the chances of the original Ctrl+C working? Maybe something along the lines of increasing X and terminal priority, or automatically killing programs that use a large portion of memory or start to swap too much?
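One thing I've considered trying on the priority front (a sketch only; it helps with CPU contention, not swap thrashing, and the process names Xorg and konsole are just examples from my setup): renice the X server and the terminal so they at least win the scheduling fight.

# Give the display server and the terminal emulator a scheduling edge.
# "Xorg" and "konsole" are example process names; negative niceness needs root.
sudo renice -n -5 -p $(pgrep -x Xorg)
sudo renice -n -5 -p $(pgrep -x konsole)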

Is there any other linux-fu I could use to regain control when this happens (e.g. SysRq commands)?

Update: After some more tests I'm pretty sure it's apps using too much memory and hitting swap. Even after killing the app in question, other apps take a very long time to become responsive again, as though they've been pushed out of main memory. I'd really like some way to automatically limit high-memory programs to main memory. If a program hits swap it's going to be too slow anyway, so what's the point of letting it continue?

NOTE: I'm not after a solution for a specific app, and I don't know ahead of time when some operation is going to chew up memory. I want to solve this kind of slowdown system-wide, i.e. many programs cause this. AFAIK I haven't messed with the system config and it's a pretty standard Fedora install. I'm not surprised by these slowdowns, but I do want more control.


I'd like to keep my window manager running, so these are last resorts that I'm hoping to avoid. I generally only need them when my GPU is stuck in a loop and blocking X. If enabled, Ctrl+Alt+Backspace is a handy shortcut to kill X and all your apps, taking you back to the login screen. A more potent command, again if enabled, is Alt+SysRq+K. If that doesn't work, it's holding-the-power-button time.
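Note that the Alt+SysRq combinations only work if the kernel allows them; on some distros they are disabled or restricted by default. A quick sketch to check and enable them (the sysctl.d file name is just an example):

# 0 = disabled, 1 = all SysRq functions enabled, other values are a bitmask.
cat /proc/sys/kernel/sysrq
# Enable for the running system...
sudo sysctl kernel.sysrq=1
# ...and make it persistent across reboots.
echo 'kernel.sysrq = 1' | sudo tee /etc/sysctl.d/90-sysrq.conf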


Alt+SysRq+F (thanks, @Hastur), which kills memory-hogging processes, is quite destructive but can help as a last resort. Update: Not entirely sure of all the consequences here, but @Xen2050's suggestion of ulimit seems to solve many problems...

# Total physical memory in kB, as reported by /proc/meminfo.
TOTAL_PHYSICAL_MEMORY=$(grep MemTotal /proc/meminfo | awk '{print $2}')
# Soft virtual-memory limit for new processes: half of physical RAM (ulimit -v takes kB).
ulimit -Sv $(( TOTAL_PHYSICAL_MEMORY * 4 / 8 ))

Going to leave this in my bashrc and see how things go.

Update: Things mostly seem good, except for some apps that share large libraries or map large files: their virtual size is huge even though they consume barely any actual memory and are unlikely to hit swap. There doesn't seem to be a limit low enough to kill the deadly swap-hitting apps while leaving regular ones (such as Amarok with 4.6 GB VIRT) running.

Related: https://unix.stackexchange.com/questions/134414/how-to-limit-the-total-resources-memory-of-a-process-and-its-children/174894, but that still leaves the issue of limiting applications that start hitting swap heavily.
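One knob that's related but not a complete fix (it won't save you from a genuine out-of-memory situation): lowering vm.swappiness makes the kernel less eager to push application memory out to swap. A sketch, with 10 as an arbitrary example value:

# Default is typically 60; lower values prefer dropping page cache over swapping.
sudo sysctl vm.swappiness=10
# Persist it across reboots (file name is just an example).
echo 'vm.swappiness = 10' | sudo tee /etc/sysctl.d/90-swappiness.conf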


This is exactly the kind of solution it turns out I'm after: Is it possible to make the OOM killer intervene earlier?
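A related trick for the cases where I do know in advance which program is risky: raising its oom_score_adj makes it the OOM killer's first victim instead of something important. This doesn't make the OOM killer fire earlier, it only influences what gets killed. A sketch (./program is a placeholder):

# Raising oom_score_adj (range -1000..1000) needs no root for your own processes;
# the higher the value, the more likely the OOM killer picks this process first.
./program &
echo 1000 > /proc/$!/oom_score_adj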

  • What about running the script with nice -n 19 first? Of course, if you need more priority you can try e.g. -n 15... BTW, I knew Alt+SysRq plus one of the following letters: +R to take control back from X, +S to sync, +E to terminate everything gracefully, +I to kill abruptly, +U to remount the filesystems read-only and +B to reboot. +K I didn't know...
    – Hastur
    Commented Feb 18, 2015 at 13:24
  • Sounds like you need to improve your scripts and add pauses so the system can find moments to catch up. Either whatever your script does is too much, or your script just does it the wrong way and can be optimised so it doesn't cause the slowdowns in the first place. Problems like this are often caused by bad programming, so I would start by fixing the issue itself rather than the issues it spawns.
    – LPChip
    Commented Feb 18, 2015 at 13:25
  • @LPChip Yes, I need to improve my scripts, but as good as I am I do make the occasional mistake :). I don't like having to reboot my PC when I make mistakes. The most recent case was calling kde2d in R, which it turns out is very slow with large datasets (I had no idea, but I can definitely say I do now). This issue is general and recurring, not tied to a specific task or application. I've had similar problems running filters in GIMP, big rigid-body simulations in Blender, CUDA applications, accidentally opening binary files in text editors, etc...
    – jozxyqk
    Commented Feb 18, 2015 at 13:37
  • Perhaps you should test out individual commands before placing them in a script, just to gauge their impact. And I'm sure you can build some pauses into your script while you are testing, so you can abort at each pause, like a question: Continue? [y,n]. Once the script is complete, remove those pauses.
    – LPChip
    Commented Feb 18, 2015 at 13:39
  • Sounds like the priority/niceness of your display is too low, or that of the scripts/mystery programs is too high. Or your system is generally unstable.
    – Xen2050
    Commented Feb 18, 2015 at 14:06

2 Answers


Your particular case doesn't sound like just a process using all the available CPU; it sounds more like a display problem or possibly running out of RAM. Limiting RAM should be possible with something like cgroups or ulimit / user limits.
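For example, on a systemd-based distro like Fedora you could start the suspect command inside a transient cgroup with a memory cap, roughly like this (a sketch; the property name and whether you need root/polkit authentication depend on your systemd and cgroup version):

# cgroup v1 systems use MemoryLimit=, cgroup v2 systems use MemoryMax=;
# 4G is an arbitrary example cap, and ./program is a placeholder command.
systemd-run --scope -p MemoryLimit=4G ./program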

But if you want to try limiting the CPU usage of some processes, this might work: if you know exactly which process(es) are running away with your CPU, you could use cpulimit to slow them down. I use it regularly on a low-priority process that sometimes runs away with the CPU, and it works great. It:

sends the SIGSTOP and SIGCONT signals to a process, both to verify that it can control it and to limit the average amount of CPU it consumes. This can result in misleading (annoying) job control messages that indicate that the job has been stopped (when actually it was, but immediately restarted). This can also cause issues with interactive shells that detect or otherwise depend on SIGSTOP/SIGCONT. For example, you may place a job in the foreground, only to see it immediately stopped and restarted in the background. (See also http://bugs.debian.org/558763.)

There are examples of running it in its man page, like:

   Assuming you have started `foo --bar` and you find out with  top(1)  or
   ps(1) that this process uses all your CPU time you can either

   # cpulimit -e foo -l 50
          limits  the CPU usage of the process by acting on the executable
          program file (note: the argument "--bar" is omitted)

   # cpulimit -p 1234 -l 50
          limits the CPU usage of the process by acting  on  its  PID,  as
          shown by ps(1)

   # cpulimit -P /usr/bin/foo -l 50
          same as -e but uses the absolute path name
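If you don't know the PID in advance, a small wrapper around the documented -p form works too (a sketch; "program" is a placeholder name):

# Throttle every running instance of "program" to ~50% of one core.
# Each cpulimit keeps running until its target exits.
for pid in $(pgrep -x program); do
    cpulimit -p "$pid" -l 50 &
done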

You may install the xkill application and assign it to a keyboard shortcut like Ctrl+Shift+K. Whenever any script or program lags, just press Ctrl+Shift+K and click on the application you want to kill. That's it.
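How you bind the shortcut depends on your desktop environment; as one hedged example, with xbindkeys installed you could do something like:

# Sketch: append an xkill binding to ~/.xbindkeysrc (assumes xbindkeys and xkill are installed).
cat >> ~/.xbindkeysrc <<'EOF'
"xkill"
    control+shift + k
EOF
# Start (or restart) xbindkeys to pick up the binding.
killall xbindkeys 2>/dev/null; xbindkeys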

