51

I operate a Linux system with many users, but sometimes abuse occurs: a user might run a single process that uses up more than 80% of the CPU/memory.

So is there a way to prevent this from happening by limiting the amount of CPU a process can use (to 10%, for example)? I'm aware of cpulimit, but unfortunately it only applies the limit to the individual processes I point it at. So my question is: how can I apply the limit to all currently running processes, and to processes that will be run in the future, without having to provide their IDs/paths?

5
  • Are you experiencing performance problems, or is it just the numbers that bother you? Commented Aug 24, 2014 at 12:08
  • @richard Performance problems; that's why I was trying to kill/limit/put an end to processes which seem to be using a lot of CPU, but I already did so by writing a bash script. This is also a virtual machine, if that helps. Commented Aug 24, 2014 at 12:16
  • 2
    Be careful of killing processes that may be at 100% for a very short time, as well as system processes. Consider cpulimit in conjunction with your search script. Have a policy and recommend the use of cpulimit, then search for over 10% and then limit to 5% (so users are encouraged to use cpulimit). Also make sure you can detect multiple processes adding up to more than 10% for a single user. Commented Aug 24, 2014 at 12:28
  • @richard Thanks Richard for all of these pretty useful comments! They have helped me greatly! Your suggestion to use cpulimit is way better than just killing the process, since it can be restarted by the user later on (as pointed out in one of your comments). Thank you! Commented Aug 24, 2014 at 12:31

8 Answers

48

nice / renice

nice is a great tool for 'one off' tweaks to a system.

 nice COMMAND
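
renice does the same job for a process that is already running (a sketch; the PID 1234 is a placeholder):

 renice -n 19 -p 1234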

cpulimit

cpulimit is useful if you need to run a CPU-intensive job and keeping some CPU time free is essential for the responsiveness of the system.

cpulimit -l 50 -- COMMAND
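
cpulimit can also attach to a process that is already running (a sketch; the PID 1234 is a placeholder):

cpulimit -l 50 -p 1234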

cgroups

cgroups apply limits to a set of processes, rather than to just one:

cgcreate -g cpu:/cpulimited            # create a "cpulimited" group under the cpu controller
cgset -r cpu.shares=512 cpulimited     # give it half of the default weight of 1024
cgexec -g cpu:cpulimited COMMAND_1     # run each command inside the group
cgexec -g cpu:cpulimited COMMAND_2
cgexec -g cpu:cpulimited COMMAND_3
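
For a hard cap rather than a proportional share, cgroups v1 also exposes cpu.cfs_quota_us and cpu.cfs_period_us (a minimal sketch; the values are illustrative and cap the group at roughly 10% of one CPU):

cgset -r cpu.cfs_period_us=100000 cpulimited   # 100 ms scheduling period
cgset -r cpu.cfs_quota_us=10000 cpulimited     # 10 ms of CPU time allowed per period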

Resources

http://blog.scoutapp.com/articles/2014/11/04/restricting-process-cpu-usage-using-nice-cpulimit-and-cgroups
http://manpages.ubuntu.com/manpages/xenial/man1/cpulimit.1.html

5
  • 5
    For those looking to set a hard limit on CPU usage, even when no other process is running, look at cpu.cfs_quota_us parameter (see manual)
    – Diego
    Commented Jan 18, 2016 at 14:59
  • cgroups are easier to use thanks to systemd directives... creating a unit, either system or user, for that purpose is the best option Commented Dec 7, 2016 at 9:03
  • for a running process ex.: sudo cgclassify -g cpu:cpulimited 2315444 Commented Aug 17, 2017 at 21:27
  • 2
    my understanding is that nice only sets the relative CPU compared to other processes. If no other process is using CPU, then your process will use 100% CPU, not limit to 10%.
    – johny why
    Commented Sep 19, 2019 at 17:43
  • For a hard limit, cgroups v2 now has cpu.max (on my system the minimum value that was accepted was 1000). For simpler control I've found that cputool works (and cpulimit does not). See my answer for details if you are not familiar with cgroups. Commented Sep 16, 2023 at 23:46
26

While it can be abuse for memory, it isn't for CPU: when a CPU is otherwise idle, a running process (by "running", I mean that the process isn't waiting for I/O or something else) will take 100% of the CPU time by default. And there's no reason to enforce a limit.

Now, you can set up priorities thanks to nice. If you want them to apply to all processes for a given user, you just need to make sure that the user's login shell is run with nice: the child processes will inherit the nice value. This depends on how the users log in. See Prioritise ssh logins (nice) for instance.

Alternatively, you can set up virtual machines. Indeed setting a per-process limit doesn't make much sense since the user can start many processes, abusing the system. With a virtual machine, all the limits will be global to the virtual machine.

Another solution is to set /etc/security/limits.conf limits; see the limits.conf(5) man page. For instance, you can set the maximum CPU time per login and/or the maximum number of processes per login. You can also set maxlogins to 1 for each user.
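
A hedged sketch of such limits.conf entries (the domains and values are illustrative):

# /etc/security/limits.conf
# maximum CPU time per process, in minutes:
*          hard    cpu         60
# maximum number of processes per user:
*          hard    nproc       100
# at most one concurrent login for this hypothetical user:
someuser   hard    maxlogins   1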

5
  • 1
    @GiovanniMounir I meant: one virtual machine per user.
    – vinc17
    Commented Aug 24, 2014 at 12:24
  • 1
    I see, but unfortunately this will be resource-consuming and not really useful for my purpose, as the users may require the usage of some common development packages, and I won't be installing this on every single new machine; I think it's better to leave it that way and do the monitoring automatically by a bash script. Commented Aug 24, 2014 at 12:26
  • 1
    @GiovanniMounir You can share a partition between several virtual machines.
    – vinc17
    Commented Aug 24, 2014 at 12:35
  • @GiovanniMounir You can use LXC or Docker to decrease virtualization overhead to nearly zero. Also, "liking" isn't a strong reason. For example, I'd go with your solution if you're managing a shared PHP host, because doing LXCs or virtual machines would require a rewrite of $15/$5 licensed software, which is overkill. Commented Aug 25, 2014 at 1:28
  • my understanding is that nice only sets the relative CPU compared to other processes. If no other process is using CPU, then your process will use 100% CPU, not limit to 10%.
    – johny why
    Commented Sep 19, 2019 at 17:42
12

Did you look at cgroups? There is some information on the Arch Wiki about them. Read the section about cpu.shares; it looks like it does what you need, and cgroups can operate at the user level, so you can limit all of a user's processes at once.
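
A minimal sketch of putting one user's existing processes into a low-share cgroup with the cgroup-tools commands (the group name, share value, and username are illustrative):

sudo cgcreate -g cpu:/lowprio                        # create the group
sudo cgset -r cpu.shares=128 lowprio                 # small weight relative to the default of 1024
sudo cgclassify -g cpu:lowprio $(pgrep -u someuser)  # move the user's current processes into it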

2
  • CGroups is the way to go, though. I also run (many) shared computer servers and we use cgroups to limit the maximum number of cores an entire login session can use. This way, if the person keeps starting new processes, each one gets a smaller slice. Same for memory use. You can have users automatically put into a cgroup with pam_cgroups and the cgrulesengd service. You can use a 'template' in the cgconfig file to put each user into their own cgroup. cgrulesengd acts like your script, except instead of killing processes, it just makes sure each process is in the right cgroup.
    – jsbillings
    Commented Aug 24, 2014 at 14:28
  • Even if you don't use cgroups to limit resource use, you can use it to evaluate how much resources an individual is using, by looking at the 'stat' file for each resource, then use that information for your 5 minute script.
    – jsbillings
    Commented Aug 24, 2014 at 14:31
7

For memory, what you are looking for is ulimit -v. Note that ulimit is inherited by child processes, so if you apply it to the login shell of the user at the time of login, it applies to all his processes.

If your users all use bash as login shell, putting the following line in /etc/profile should cause all user processes to have a hard limit of 1 gigabyte (more exactly, one million kilobytes):

ulimit -vH 1000000

The option H makes sure it's a hard limit, that is, the user cannot set it back up afterwards. Of course the user can still fill memory by starting sufficiently many processes at once.

For other shells, you'll have to find out what initialization files they read instead (and what other command instead of ulimit they use).

For CPU, what you wish for doesn't seem to make sense to me. What would be the use of leaving 90% of the CPU unused when only one process is running? I think what you really want is nice (and possibly ionice). Note that, like ulimit, nice values are inherited by child processes, so applying it to the login shell at login time suffices. I guess that also applies to ionice, but I'm not sure.
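
A hedged sketch of applying both at login for bash users, e.g. appended to /etc/profile (the values are illustrative):

ulimit -vH 1000000                    # hard virtual-memory limit of about 1 gigabyte
renice -n 10 -p $$ > /dev/null 2>&1   # lower the login shell's priority; children inherit it
ionice -c 3 -p $$ 2> /dev/null        # idle I/O scheduling class, if ionice is available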

9
  • Thanks for the memory suggestion! Is there any chance you can show me an example to apply this to the login shell of the user at the time of login? I'm not really sure how to do this. I'm also sorry for not being clear enough; what I'm trying to do is not allow any process to use more than 10% of the CPU. So do you think that nice will be nice enough to do this? If so, do you think you can show me an example to achieve this? Commented Aug 24, 2014 at 10:04
  • I still don't get the point of keeping the CPU 90% idle when only one process is running.
    – celtschk
    Commented Aug 24, 2014 at 10:07
  • 1
    If there are currently less than 10 processes running concurrently (and by running I mean really running, not just waiting for user input or disk I/O), then it is virtually guaranteed that one of them will have more than 10% of CPU. Otherwise the CPU would be virtually idling. And if you just kill any process that goes above 10%, I'm sure you'll have many users who will want to kill you. Or at least, will try to get you replaced by someone who has a clue about what those numbers mean, because you don't seem to.
    – celtschk
    Commented Aug 24, 2014 at 10:19
  • In contrast to @celtschk 's comment, if there are 11 or more processes running (CPU bound), then they will each be at less than 9.09%. So if I am a user of a system that bans over 10% CPU usage, I can run 11 or more processes and hide under the radar. Commented Aug 24, 2014 at 12:13
  • @richard You are right, perhaps it would be better if the script would sum up the total amount of memory/CPU used by a user, and then terminates all of this user's processes when the percentage reaches a specific amount (so would also log him out) Commented Aug 24, 2014 at 12:24
6

Since your tags have centos, you can use systemd.

For example if you want to limit user with ID of 1234:

sudo systemctl edit --force user-1234.slice

Then type and save this:

[Slice]
CPUQuota=10%

The next time that user logs in, the limit will take effect.

Man pages: systemctl, systemd.slice, systemd.resource-control...
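
To apply the limit to a session that is already running, without waiting for a re-login, systemctl set-property should also work (a sketch, using the same example UID 1234):

sudo systemctl set-property user-1234.slice CPUQuota=10%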

1
  • Somehow it did not work for me on a Linux Mint 21 based (which is in turn Ubuntu based) system, even though the man pages contained info about slices and CPUQuota. Using cgroups' cpu.max directly worked, though. Commented Sep 16, 2023 at 23:03
3

Since you state that cpulimit would not be practical in your case, I suggest you look at nice, renice, and taskset, which may come close to what you want to achieve, although taskset sets a process's CPU affinity, so it might not be immediately helpful in your case.
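
A hedged taskset example (the core number and PID are placeholders; pinning to a single core caps a process at one core's worth of CPU rather than at a percentage):

taskset -c 0 COMMAND     # start COMMAND restricted to CPU core 0
taskset -cp 0 1234       # restrict an already-running process (PID 1234) to core 0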

1
  • 1
    nice and renice? That's nice! I have looked at their manual pages, but I still don't think they can help with this as you still have to set a process ID. If you could however give me an example which involves these packages to apply the limit on all running processes/future processes that would be awesome! Commented Aug 24, 2014 at 9:54
2

If you want to limit processes that are already started, you will have to do it one by one by PID, but you can have a batch script to do that, like the one below:

#!/bin/bash
LIMIT_PIDS=$(pgrep tesseract)   # PIDs to limit; replace tesseract with your process name
echo "$LIMIT_PIDS"
for i in $LIMIT_PIDS
do
    cpulimit -p "$i" -l 10 -z &   # limit each process to 10 percent CPU
done

In my case pypdfocr launches the greedy tesseract.

Also, in some cases where your CPU is pretty good, you can just use renice like this:

watch -n5 'pidof tesseract | xargs -L1 sudo renice +19'
1

Disclaimer: this answer is for the benefit of those who find this Q&A and want to control processes run by themselves, limiting their CPU usage regardless of the current total load of the system.

I've found that on my Linux Mint system cpulimit did not help. Two ways worked, though:

  1. cputool: e.g. cputool -c 10 -- stress -c 4

(stress is (IMO) a small, useful tool to stress-test a system)

Downside: the allowed usage cannot be changed as easily once the process has started.

  2. cgroups

Code:

sudo cgcreate -g cpu:mygroup1                    # create the cgroup
cat /sys/fs/cgroup/mygroup1/cpu.max              # not necessary; shows the current setting for reference
max 100000                                       # output of the cat above: unlimited quota, 100 ms period
sudo cgset -r cpu.max="200000 100000" mygroup1   # quota of 200 ms per 100 ms period (two cores' worth)
sudo cgexec -g cpu:mygroup1 sudo -u username1 -g groupname1 stress -c 4
stress: info: [125425] dispatching hogs: 4 cpu, 0 io, 0 vm, 0 hdd

A regular user cannot start a process in the cgroup unless access rights are granted; I have not learned how to grant them (I hope it is possible), so I use sudo instead.

# in another terminal, to change the allowed usage (this syntax changes only the first value):
sudo cgset -r cpu.max=100000 mygroup1

Notes for cgroups:

cpu.max has two values: the first is the allowed time quota in microseconds for which all processes in the group may collectively run during one period; the second is the length of the period, also in microseconds. On multicore/multiprocessor systems the quota is counted across all cores while the period is not, so setting the quota to twice the period allows the group the equivalent of two cores, i.e. an overall usage of 2 divided by the total number of cores.
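
A hedged worked example tying the two values to a percentage (the group name is the one from above; the values are illustrative): on an 8-core machine, "200000 100000" allows two cores' worth of CPU, i.e. 25% of total capacity, while capping the group at roughly 10% of a single core would look like:

sudo cgset -r cpu.max="10000 100000" mygroup1   # 10 ms of CPU time per 100 ms period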

On my system min valid value is 1000, max is 1000000.

Using a second value of 100000 (the default) resulted in an additional 2x-3x speed penalty when I ran ffmpeg; using 1000000 resulted in no noticeable penalty.

It surprises me that, on GHz processors, being throttled every hundred milliseconds matters so much while being throttled every second does not.

cgroups can be used without cgcreate, cgset, and cgexec (they are in the cgroup-tools package, which on the Linux Mint distro required additional installation). IMO a good description of how to do that: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/managing_monitoring_and_updating_the_kernel/using-cgroups-v2-to-control-distribution-of-cpu-time-for-applications_managing-monitoring-and-updating-the-kernel, and how to start a process in a cgroup: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/starting_a_process.
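
For reference, a hedged sketch of the same limit using the cgroup v2 filesystem directly, without cgroup-tools (this assumes cgroup v2 is mounted at /sys/fs/cgroup and that the cpu controller is enabled in the parent's cgroup.subtree_control):

sudo mkdir /sys/fs/cgroup/mygroup1
echo "10000 100000" | sudo tee /sys/fs/cgroup/mygroup1/cpu.max    # ~10% of one core
echo $$ | sudo tee /sys/fs/cgroup/mygroup1/cgroup.procs           # move the current shell (and its children) into the group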
