253

I'm running pdftoppm to convert a user-provided PDF into a 300DPI image. This works great, except if the user provides a PDF with a very large page size. pdftoppm will allocate enough memory to hold a 300DPI image of that size in memory, which for a 100 inch square page is 100*300 * 100*300 * 4 bytes per pixel ≈ 3.5 GB. A malicious user could just give me a silly-large PDF and cause all kinds of problems.

So what I'd like to do is put some kind of hard limit on memory usage for a child process I'm about to run--just have the process die if it tries to allocate more than, say, 500MB of memory. Is that possible?

I don't think ulimit can be used for this, but is there a one-process equivalent?


12 Answers

188

Another way to limit this is to use Linux's control groups. This is especially useful if you want to limit a process's (or group of processes') allocation of physical memory distinctly from virtual memory. For example:

cgcreate -g memory:myGroup
echo 500M > /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
echo 5G > /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes

will create a control group named myGroup, cap the set of processes run under myGroup to 500 MB of physical memory with memory.limit_in_bytes, and cap physical memory plus swap combined to 5 GB with memory.memsw.limit_in_bytes. More info about these options can be found here: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/6/html/resource_management_guide/sec-memory
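
To confirm the limits were applied, you can read the same control files back (the kernel may round the values to a multiple of the page size):

cat /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes
cat /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes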

To run a process under the control group:

cgexec -g memory:myGroup pdftoppm

Note that on a modern Ubuntu distribution this example requires installing the cgroup-tools package (previously cgroup-bin):

sudo apt install cgroup-tools

and editing /etc/default/grub to change GRUB_CMDLINE_LINUX_DEFAULT to:

GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1"

and then running sudo update-grub and rebooting to boot with the new kernel boot parameters.

10
  • 7
    The firejail program will also let you start a process with memory limits (using cgroups and namespaces to limit more than just memory). On my systems I did not have to change the kernel command line for this to work!
    – Ned64
    Commented Feb 15, 2018 at 12:20
  • 3
    Do you need the GRUB_CMDLINE_LINUX_DEFAULT modification to make the setting persistent? I found another way to make it persistent here.
    – stason
    Commented Aug 5, 2018 at 18:36
  • 3
    See also: What does swapaccount=1 in GRUB_CMDLINE_LINUX_DEFAULT do? Commented Jul 22, 2019 at 11:04
  • 6
    It would be useful to note in this answer that on some distributions (eg Ubuntu) sudo is required for cgcreate, and also the later commands unless permission is given to the current user. This would save the reader from having to find this information elsewhere (eg askubuntu.com/questions/345055). I suggested an edit to this effect but it was rejected.
    – stewbasic
    Commented Jul 23, 2019 at 5:28
  • 4
    In Ubuntu 22.04, the /sys/fs/cgroup/memory directory does not exist but creating a cgroup with sudo cgcreate -g memory:myGroup does create a directory named /sys/fs/cgroup/myGroup. However, this directory has no memory.limit_in_bytes file and is not writable, not even by the superuser. The files that it does contain appear to be reporting statistics about resources used in the cgroup, but I don't see anything that controls them. I could cgexec a process in this cgroup (as superuser) and I see the peak memory use increase. Are these instructions RedHat-centric? Commented Jan 9, 2023 at 18:00
138

On any systemd-based distro you can also use cgroups indirectly through systemd-run. E.g. for your case of limiting pdftoppm to 500M of RAM, starting with cgroupsv2, you can simply do:

systemd-run --scope -p MemoryMax=500M --user pdftoppm

Previously, this required booting with the systemd.unified_cgroup_hierarchy kernel parameter, but as of Ubuntu 22.04 (cgroup-tools 2.0-2) that no longer seems to be the case: the command just worked without any changes to the kernel parameters, and systemd.unified_cgroup_hierarchy is not set.
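
If you are unsure which hierarchy your system is running, one quick check is whether the cgroup.controllers file exists at the cgroup mount root, since it only exists on the unified (v2) hierarchy:

if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
    echo "cgroups v2 (unified hierarchy)"
else
    echo "cgroups v1 (legacy or hybrid hierarchy)"
fi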

Before cgroupsv2 you could not use --user, and would instead run:

systemd-run --scope -p MemoryMax=500M pdftoppm

but without --user, this will ask you for a password every time, even though the app gets launched as your user. Don't let that trick you into thinking the command needs sudo: running it with sudo would make the command run as root, which is probably not what you intended.
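
If you want to confirm the limit was actually applied, you can give the transient scope a name and inspect it while it runs. A minimal sketch; the unit name and file names here are arbitrary placeholders:

systemd-run --user --scope --unit=pdf2ppm -p MemoryMax=500M pdftoppm -r 300 input.pdf output

# from another terminal, while the scope is still running:
systemctl --user show pdf2ppm.scope -p MemoryMax   # should print MemoryMax=524288000 (500M, base 1024)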

12
  • 1
    Short and sweet. Before I tried firejail, but that seemed to be overkill with too many side effects for just limiting memory consumption. Thanks! Commented Dec 13, 2019 at 7:31
  • 1
    @EdiD I don't know a systemd option for that, but I presume disabling the OOM killer system-wide may make apps receive ENOMEM instead of being killed. In practice, though, I can't think of many projects besides OS kernels that can gracefully handle an OOM condition. Even for pure C projects that do check malloc results, I doubt they were ever tested under OOM conditions, so they'll misbehave, most likely crash. With that said, please check if you find MemoryHigh more useful for your use case
    – Hi-Angel
    Commented Jul 10, 2020 at 12:21
  • 4
    Of note: If you use the --user option but don't have cgroupsv2 enabled (which is still the default in many distros), systemd-run will fail to set a memory limit but won't throw any errors. Github issue Commented Nov 27, 2020 at 6:54
  • 2
    I've just retested, and it does seem to work, a C hello world gets killed with systemd-run --scope -p MemoryMax=1 --user ./hello.out or 1K, but runs fine with systemd-run --scope -p MemoryMax=1M --user ./hello.out. And sudo dmesg | less shows CLI: Command line: BOOT_IMAGE=/BOOT/ubuntu_uvs1fq@/vmlinuz-5.15.0-27-generic root=ZFS=rpool/ROOT/ubuntu_uvs1fq ro. I did get the notification here as well BTW :-) Commented May 3, 2022 at 15:52
  • 1
    Tried it on browsers, Firefox and Chrome. Works perfectly. Both browsers spawn several processes, but in the end they are limited to the memory assigned in the command. I assigned 500 MB for Firefox and 2 GB for Chrome. I don't notice any performance issues.
    – Mijo
    Commented Feb 14, 2023 at 10:30
103

If your process doesn't spawn children that do most of the memory consumption, you may use the setrlimit function. A more common user interface for that is the shell's ulimit command:

$ ulimit -Sv 500000     # Set ~500 mb limit
$ pdftoppm ...

This will only limit the "virtual" memory of your process, taking into account—and limiting—the memory the invoked process shares with other processes, and the memory mapped but not reserved (for instance, Java's large heap). Still, virtual memory is the closest approximation for processes that grow really large, so these errors are mostly insignificant.
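
If you don't want the limit to stick in your interactive shell, you can set it in a subshell so it only applies to the command being launched (a small sketch; file names are placeholders):

(ulimit -Sv 500000; exec pdftoppm -r 300 input.pdf output)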

If your program spawns children, and it is they that allocate the memory, it becomes more complex, and you should write auxiliary scripts to run the processes under your control. I wrote on my blog why and how.

13
  • 4
    why is setrlimit more complex for more children? man setrlimit tells me that "A child process created via fork(2) inherits its parents resource limits. Resource limits are preserved across execve(2)"
    – akira
    Commented Feb 13, 2011 at 8:13
  • 11
    Because the kernel does not sum the vm size for all child processes; if it did it would get the answer wrong anyway. The limit is per-process, and is virtual address space, not memory usage. Memory usage is harder to measure.
    – MarkR
    Commented Feb 13, 2011 at 8:17
  • 1
    If I understand the question correctly, the OP wants the limit per subprocess (child), not in total.
    – akira
    Commented Feb 13, 2011 at 8:21
  • 5
    Just wanted to say thanks - this ulimit approach helped me with firefox's bug 622816 – Loading a large image can "freeze" firefox, or crash the system; which on a USB boot (from RAM) tends to freeze the OS, requiring hard restart; now at least firefox crashes itself, leaving the OS alive... Cheers!
    – sdaau
    Commented Apr 4, 2013 at 15:51
  • 2
    What are the soft and hard limits?
    – wsdzbm
    Commented Aug 29, 2016 at 17:16
78

There are some problems with ulimit. Here's a useful read on the topic: Limiting time and memory consumption of a program in Linux, which led to the timeout tool, which lets you cage a process (and its forks) by time or memory consumption.

The timeout tool requires Perl 5+ and the /proc filesystem mounted. After that you copy the tool to e.g. /usr/local/bin like so:

curl https://raw.githubusercontent.com/pshved/timeout/master/timeout | \
  sudo tee /usr/local/bin/timeout && sudo chmod 755 /usr/local/bin/timeout

After that, you can 'cage' your process by memory consumption as in your question like so:

timeout -m 500 pdftoppm Sample.pdf

Alternatively you could use -t <seconds> and -x <hertz> to respectively limit the process by time or CPU constraints.

The way this tool works is by checking multiple times per second whether the spawned process has exceeded its set boundaries. This means there is a small window in which a process could exceed the limit before timeout notices and kills it.

A more correct approach would hence likely involve cgroups, but those are much more involved to set up, even if you use Docker or runC, which, among other things, offer a more user-friendly abstraction around cgroups.
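
For example, with Docker the equivalent knobs are --memory and --memory-swap. A hedged sketch; the image name is a placeholder, and setting --memory-swap equal to --memory disallows using swap on top of the limit (this also requires swap accounting to be enabled in the kernel):

docker run --rm \
  --memory=500m --memory-swap=500m \
  -v "$PWD":/work -w /work \
  some-image-with-poppler \
  pdftoppm -r 300 input.pdf output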

5
  • Seems to be working for me now (again?) but here's the google cache version: webcache.googleusercontent.com/…
    – kvz
    Commented Apr 27, 2017 at 12:32
  • Can we use timeout together with taskset (we need to limit both memory and cores) ?
    – ransh
    Commented Oct 24, 2017 at 12:47
  • 28
    It should be noted that this answer is not referring to the linux standard coreutils utility of the same name! Thus, the answer is potentially dangerous if anywhere on your system, some package has a script expecting timeout to be the linux standard coreutils package! I am unaware of this tool being packaged for distributions such as debian. Commented Apr 8, 2018 at 7:03
  • Does -t <seconds> constraint kill the process after that many seconds?
    – xxx374562
    Commented Nov 26, 2018 at 2:05
  • 1
    It might also be helpful to know that -m accepts kilobytes. The example above suggests it's using MB.
    – Daniel
    Commented Nov 4, 2019 at 13:34
17

As of 2022 / Ubuntu 22.04 the below script is obsolete. Ubuntu 22.04 no longer mounts cgroups v1 by default, and systemd-run now supports everything needed. The command to run a program with a hard memory limit is

systemd-run --user --scope -p MemoryMax=<memorylimit> \
  -p MemorySwapMax=<swaplimit> <command>
  • Note that memory and swap have separate limits, unlike the memory.memsw.* control files in cgroups v1 which controlled the total amount of memory + swap used. I have so far not found a way to set a limit on the combined memory + swap.

  • There is also a MemoryHigh parameter which is less strict than MemoryMax. It won't kill the processes but starts to throttle them and aggressively swap out memory (see the sketch just below).
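
For example, to start throttling at 400M but only hard-kill above 500M (same systemd-run form as above; the values are arbitrary):

systemd-run --user --scope -p MemoryHigh=400M -p MemoryMax=500M <command>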

The below script can be adapted to run on cgroups v2 as per Ciro Santilli's answer, but with systemd-run now doing everything necessary there is no need to. I have a script similar to that from my original answer that works with systemd-run posted in a new answer.


Original answer:

I'm using the below script, which works great. It uses cgroups through cgmanager. Update: it now uses the commands from cgroup-tools. Name this script limitmem and put it in your $PATH and you can use it like limitmem 100M bash. This will limit both memory and swap usage. To limit just memory remove the line with memory.memsw.limit_in_bytes.

edit: On default Linux installations this only limits memory usage, not swap usage. To enable swap usage limiting, you need to enable swap accounting on your Linux system. Do that by setting/adding swapaccount=1 in /etc/default/grub so it looks something like

GRUB_CMDLINE_LINUX="swapaccount=1"

Then run sudo update-grub and reboot.

Disclaimer: I wouldn't be surprised if cgroup-tools also breaks in the future. The correct solution would be to use the systemd APIs for cgroup management, but there are no command-line tools for that at the moment.

edit (2021): Until now this script still works, but it goes against Linux's recommendation to have a single program manage your cgroups. Nowadays that program is usually systemd. Unfortunately systemd has a number of limitations that make it difficult to replace this script with systemd invocations. The systemd-run --user command should allow a user to run a program with resource limitations, but that isn't supported on cgroups v1. (Everyone uses cgroups v1 because docker doesn't work on cgroupsv2 yet except for the very latest versions.) With root access (which this script also requires) it should be possible to use systemd-run to create the correct systemd-supported cgroups, and then manually set the memory and swap properties in the right cgroup, but that is still to be implemented. See also this bug comment for context, and here and here for relevant documentation.

According to @Mikko's comment, using a script like this with systemd runs the risk of systemd losing track of processes in a session. I haven't noticed such problems, but I use this script mostly on a single-user machine.

#!/bin/sh

# This script uses commands from the cgroup-tools package. The cgroup-tools commands access the cgroup filesystem directly which is against the (new-ish) kernel's requirement that cgroups are managed by a single entity (which usually will be systemd). Additionally there is a v2 cgroup api in development which will probably replace the existing api at some point. So expect this script to break in the future. The correct way forward would be to use systemd's apis to create the cgroups, but afaik systemd currently (feb 2018) only exposes dbus apis for which there are no command line tools yet, and I didn't feel like writing those.

# strict mode: error if commands fail or if unset variables are used
set -eu

if [ "$#" -lt 2 ]
then
    echo Usage: `basename $0` "<limit> <command>..."
    echo or: `basename $0` "<memlimit> -s <swaplimit> <command>..."
    exit 1
fi

cgname="limitmem_$$"

# parse command line args and find limits

limit="$1"
swaplimit="$limit"
shift

if [ "$1" = "-s" ]
then
    shift
    swaplimit="$1"
    shift
fi

if [ "$1" = -- ]
then
    shift
fi

if [ "$limit" = "$swaplimit" ]
then
    memsw=0
    echo "limiting memory to $limit (cgroup $cgname) for command $@" >&2
else
    memsw=1
    echo "limiting memory to $limit and total virtual memory to $swaplimit (cgroup $cgname) for command $@" >&2
fi

# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d\  -f2`

# try also limiting swap usage, but this fails if the system has no swap
if sudo cgset -r memory.memsw.limit_in_bytes="$swaplimit" "$cgname"
then
    bytes_swap_limit=`cgget -g "memory:$cgname" | grep memory.memsw.limit_in_bytes | cut -d\  -f2`
else
    echo "failed to limit swap"
    memsw=0
fi

# create a waiting sudo'd process that will delete the cgroup once we're done. This prevents the user needing to enter their password to sudo again after the main command exits, which may take longer than sudo's timeout.
tmpdir=${XDG_RUNTIME_DIR:-$TMPDIR}
tmpdir=${tmpdir:-/tmp}
fifo="$tmpdir/limitmem_$$_cgroup_closer"
mkfifo --mode=u=rw,go= "$fifo"
sudo -b sh -c "head -c1 '$fifo' >/dev/null ; cgdelete -g 'memory:$cgname'"

# spawn subshell to run in the cgroup. If the command fails we still want to remove the cgroup so unset '-e'.
set +e
(
set -e
# move subshell into cgroup
sudo cgclassify -g "memory:$cgname" --sticky `sh -c 'echo $PPID'`  # $$ returns the main shell's pid, not this subshell's.
exec "$@"
)

# grab exit code 
exitcode=$?

set -e

# show memory usage summary

peak_mem=`cgget -g "memory:$cgname" | grep memory.max_usage_in_bytes | cut -d\  -f2`
failcount=`cgget -g "memory:$cgname" | grep memory.failcnt | cut -d\  -f2`
percent=`expr "$peak_mem" / \( "$bytes_limit" / 100 \)`

echo "peak memory used: $peak_mem ($percent%); exceeded limit $failcount times" >&2

if [ "$memsw" = 1 ]
then
    peak_swap=`cgget -g "memory:$cgname" | grep memory.memsw.max_usage_in_bytes | cut -d\  -f2`
    swap_failcount=`cgget -g "memory:$cgname" |grep memory.memsw.failcnt | cut -d\  -f2`
    swap_percent=`expr "$peak_swap" / \( "$bytes_swap_limit" / 100 \)`

    echo "peak virtual memory used: $peak_swap ($swap_percent%); exceeded limit $swap_failcount times" >&2
fi

# remove cgroup by sending a byte through the pipe
echo 1 > "$fifo"
rm "$fifo"

exit $exitcode
11
  • 1
    call to cgmanager_create_sync failed: invalid request for every process I try to run with limitmem 100M processname. I'm on Xubuntu 16.04 LTS and that package is installed. Commented Mar 12, 2017 at 9:58
  • Ups, I get this error message: $ limitmem 400M rstudio limiting memory to 400M (cgroup limitmem_24575) for command rstudio Error org.freedesktop.DBus.Error.InvalidArgs: invalid request any idea? Commented Feb 15, 2018 at 7:19
  • @RKiselev cgmanager is deprecated now, and not even available in Ubuntu 17.10. The systemd api that it uses was changed at some point, so that's probably the reason. I have updated the script to use cgroup-tools commands.
    – JanKanis
    Commented Feb 15, 2018 at 11:39
  • 1
    @mikko you're right, see also the first comment line of the script. When I wrote the scripts originally the right tools didn't exist yet. It still works for me but maybe it's time to figure out how systemd-run works.
    – JanKanis
    Commented Sep 4, 2021 at 15:30
  • 1
    @mikko I have never noticed any session handling problems, but I run this on a single user desktop. For my use case the risk is an acceptable price for my whole system not crashing due to OOM.
    – JanKanis
    Commented Sep 4, 2021 at 15:39
8

In addition to the tools from daemontools, suggested by Mark Johnson, you can also consider chpst, which is found in runit.  Runit itself is bundled in busybox, so you might already have it installed.

The man page of chpst shows the option:

-m bytes

    limit memory.  Limit the data segment, stack segment, locked physical pages, and total of all segment per process to bytes bytes each.
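
A usage sketch for the question's pdftoppm case (the -m value is in bytes, roughly 500 MB here; file names are placeholders):

chpst -m 524288000 pdftoppm -r 300 input.pdf output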

5

While this isn't what the OP originally asked for, for the sake of completeness: for processes that are already running, I use prlimit.

E.g. $ prlimit -v1073741824 --pid <xx>

This sets the maximum virtual memory (address space) limit to 1 GiB. One can set both soft limits (and carry out custom actions, such as sending an email, when they are hit) and hard limits.
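
prlimit can also launch a new command with separate soft and hard limits, specified as soft:hard. A sketch with roughly 512 MiB soft and 1 GiB hard address-space limits (file names are placeholders):

prlimit --as=536870912:1073741824 pdftoppm -r 300 input.pdf output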

3
  • 1
    prlimit(1) has never been limited to already running commands! You can do e.g. prlimit -v1073741824 pdftoppm.
    – Devon
    Commented Nov 1, 2023 at 10:22
  • @Devon Thanks for completing the completeness attempt :D , I'd focussed on just what some of the other options mentioned here can't do.
    – 0xc0de
    Commented Nov 1, 2023 at 16:32
    This should be the accepted answer. Although note that to set a hard limit you'd need to do prlimit -v:107... <xx>
    – P Varga
    Commented Jun 14 at 17:23
5

cgroupsv2 update (Ubuntu 22.04)

Things have moved around a bit. Compared to https://unix.stackexchange.com/a/125024/32558, you now need:

sudo cgcreate -a $USER:$USER -g memory:myGroup -t $USER:$USER
sudo cgset -r memory.max=500M myGroup
sudo cgset -r memory.swap.max=0 myGroup
sudo chmod o+w /sys/fs/cgroup/cgroup.procs
cgexec -g memory:myGroup mycmd arg0 arg1

The line:

sudo chmod o+w /sys/fs/cgroup/cgroup.procs

is needed for it to work without sudo: https://askubuntu.com/questions/1406329/how-to-run-cgexec-without-sudo-as-current-user-on-ubuntu-22-04-with-cgroups-v2/1450845#1450845 otherwise it fails with:

cgroup change of group failed

If you don't run that command, you can also use:

sudo cgexec -g memory:myGroup mycmd arg0 arg1

but then that runs as root, which you usually don't want; this can be checked with sudo cgexec -g memory:myGroup id.

Compared to https://unix.stackexchange.com/a/125024/32558 which was originally for v1:

  • /sys/fs/cgroup/memory/myGroup/memory.limit_in_bytes is now /sys/fs/cgroup/myGroup/memory.max
  • /sys/fs/cgroup/memory/myGroup/memory.memsw.limit_in_bytes was split: you now set the swap limit separately in /sys/fs/cgroup/myGroup/memory.swap.max rather than setting the sum

For the specific case of memory however, just use systemd-run as mentioned at: https://unix.stackexchange.com/a/536046/32558 that just worked and is by far the simplest approach.

Testing it out

malloc_touch.c

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv) {
    size_t nbytes, step;
    if (argc > 1) {
        nbytes = strtoull(argv[1], NULL, 0);
    } else {
        nbytes = 0x10;
    }
    if (argc > 2) {
        step = strtoull(argv[2], NULL, 0);
    } else {
        step = 1;
    }

    char *base = malloc(nbytes);
    assert(base);
    char *i = base;
    while (i < base + nbytes) {
        *i = 13;
        i += step;
    }
    return EXIT_SUCCESS;
}

GitHub upstream.

First we find that 1M is about the minimum at which a C hello world will run:

sudo cgset -r memory.max=1M myGroup
cgexec -g memory:myGroup ./malloc_touch.out

Then starting from there we can try to malloc 100k:

sudo cgset -r memory.max=1M myGroup
cgexec -g memory:myGroup ./malloc_touch.out 100000

OK, there was still some memory left. Then if we try 1M:

sudo cgset -r memory.max=1M myGroup
cgexec -g memory:myGroup ./malloc_touch.out 1000000

Killed as expected. Increase limit to 10M:

sudo cgset -r memory.max=10M myGroup
cgexec -g memory:myGroup ./malloc_touch.out 1000000

OK. Malloc 9M:

sudo cgset -r memory.max=10M myGroup
cgexec -g memory:myGroup ./malloc_touch.out 9000000

OK. Malloc 10M:

sudo cgset -r memory.max=10M myGroup
cgexec -g memory:myGroup ./malloc_touch.out 10000000

OK. Hmm, not sure why; expected it to die. Perhaps MiB vs MB: the 10M limit means 10 MiB (10,485,760 bytes), slightly more than the 10,000,000 bytes requested. Try 11M:

sudo cgset -r memory.max=10M myGroup
cgexec -g memory:myGroup ./malloc_touch.out 11000000

Killed as expected.

Tested on Ubuntu 22.10.

cgcreate is completely broken on Ubuntu 21.10

https://askubuntu.com/questions/1376093/is-cgroup-tools-using-cgroup-v1-or-v2

Fails with:

cgcreate: libcgroup initialization failed: Cgroup is not mounted

Apparently they moved part of the system to v2 but not the rest.

4

I'm running Ubuntu 18.04.2 LTS and JanKanis' script doesn't work for me quite as he suggests. Running limitmem 100M script limits RAM to 100 MB but leaves swap unlimited.

Running limitmem 100M -s 100M script fails silently, as cgget -g "memory:$cgname" has no parameter named memory.memsw.limit_in_bytes.

So I disabled swap:

# create cgroup
sudo cgcreate -g "memory:$cgname"
sudo cgset -r memory.limit_in_bytes="$limit" "$cgname"
sudo cgset -r memory.swappiness=0 "$cgname"
bytes_limit=`cgget -g "memory:$cgname" | grep memory.limit_in_bytes | cut -d\  -f2`
2
  • @sourcejedi added it :)
    – d9ngle
    Commented May 2, 2019 at 12:59
  • 2
    Right, I edited my answer. To enable swap limits you need to enable swap accounting on your system. There's a small runtime overhead to that so it isn't enabled by default on Ubuntu. See my edit.
    – JanKanis
    Commented May 2, 2019 at 13:18
1

Not really an answer to the question as posed, but:

Could you check the file-size, to prevent issues BEFORE trying to process a pdf? That would remove the "ridiculously large" issue.
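
One hedged sketch of such a pre-check, using pdfinfo from poppler-utils (the same package that ships pdftoppm) to look at the declared page size, which is what actually drives pdftoppm's allocation. The 20-inch threshold and file names are arbitrary, and pdfinfo only reports the first page's size by default:

#!/bin/sh
# pdfinfo prints e.g. "Page size:      612 x 792 pts (letter)"; 1 inch = 72 pts.
size=$(pdfinfo input.pdf | awk '/^Page size:/ {print $3, $5}')
if [ -z "$size" ]; then
    echo "could not read page size" >&2
    exit 1
fi
w=${size% *}; h=${size#* }
max=1440   # 20 inches in points
if [ "${w%.*}" -gt "$max" ] || [ "${h%.*}" -gt "$max" ]; then
    echo "page too large, refusing to rasterize" >&2
    exit 1
fi
pdftoppm -r 300 input.pdf output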

There are also programs that will process a pdf (there are python programs, for instance: http://theautomatic.net/2020/01/21/how-to-read-pdf-files-with-python/) whereby one could split the pdf into more manageable-sized chunks. Or do both: if the file-size is reasonable, process it; otherwise (else) split it into as many pieces as required, and process those. One could then re-combine the outputs. One might need to have some overlap between sections to prevent "border" issues.

Limiting the available memory might well force a failure to process larger files, or lead to massive memory swap issues.

1

I also like to limit the memory of some of my programs, so I have an updated script from my previous answer that works with systemd as the cgroup manager. In order not to clutter the other answer, I'll add it here as a new answer.

Usage: limitmem 500M -s 200M <program>

That will hard-limit the program to 500 MB of RAM and 200 MB of swap, for a total of 700 MB of memory usage. If a swap limit is not specified, the script below defaults to a limit of 1 GB. The new cgroups (v2) system unfortunately does not offer an integrated total memory + swap control that would allow the kernel to decide how to partition usage between RAM and swap.

If the program is a Snap package (which starts its own cgroup, thereby leaving the cgroup limitmem puts it in), add the --snap flag.

The script (save as ~/bin/limitmem)

#!/bin/sh

# strict mode: error if commands fail or if unset variables are used
set -eu

limit=''
swaplimit="1G"
snap=0

while true
do
    # parse command line args and find limits

    if [ "$#" -lt 1 -o "$1" = "-h" -o "$1" = "--help" ]
    then
            echo Usage: `basename "$0"` "[--snap] <limit> <command>..."
            echo or: `basename "$0"` "[--snap] <memlimit> -s <swaplimit> <command>..."
            echo
            echo Pass "--snap" if the command is a Snap app, which will place itself into a systemd scope
            exit 1
    fi

    if [ "$1" = "-s" ]
    then
            shift
            swaplimit="$1"
            shift
            continue
    fi

    if [ "$1" = "--snap" ]
    then
        shift
        snap=1
        continue
    fi

    if [ "$1" = -- ]
    then
        shift
            break
    fi

    if [ "$limit" = "" ]
    then
        limit="$1"
        shift
        continue
    fi

    break  # Reached start of command
done

if [ "$snap" = 0 ]
then
    exec systemd-run --user --scope -p MemoryMax="$limit" -p MemorySwapMax="$swaplimit" "$@"
else
    scopename=snap.`basename "$1"`.\*
    (
        sleep 2;
        systemctl set-property --user "$scopename" MemoryMax="$limit" MemorySwapMax="$swaplimit"
    )&
    exec "$@"
fi

I use a customized version of the above that records the processes and their cgroups, so another fish shell function I have can show a nice overview of the memory usage. This version depends on fish shell and being uid 1000:

#!/bin/sh

# strict mode: error if commands fail or if unset variables are used
set -eu

limit=''
swaplimit="1G"
snap=0

while true
do
    # parse command line args and find limits

    if [ "$#" -lt 1 -o "$1" = "-h" -o "$1" = "--help" ]
    then
            echo Usage: `basename "$0"` "[--snap] <limit> <command>..."
            echo or: `basename "$0"` "[--snap] <memlimit> -s <swaplimit> <command>..."
            echo
            echo Pass "--snap" if the command is a Snap app, which will place itself into a systemd scope
            exit 1
    fi

    if [ "$1" = "-s" ]
    then
            shift
            swaplimit="$1"
            shift
            continue
    fi

    if [ "$1" = "--snap" ]
    then
        shift
        snap=1
        continue
    fi

    if [ "$1" = -- ]
    then
        shift
            break
    fi

    if [ "$limit" = "" ]
    then
        limit="$1"
        shift
        continue
    fi

    break  # Reached start of command
done

if [ "$snap" = 0 ]
then
    exec systemd-run --user --scope -p MemoryMax="$limit" -p MemorySwapMax="$swaplimit" fish -c "set -U JCLIMITMEM \$JCLIMITMEM (basename \"$1\"):(string split -f3 -m2 : (grep 0:: /proc/self/cgroup)); exec \$argv" -- "$@"
else
    scopename=snap.`basename "$1"`.\*
    fish -c "set -U JCLIMITMEM \$JCLIMITMEM (basename \"$1\"):\"/user.slice/user-1000.slice/[email protected]/app.slice/$scopename\"; echo \$JCLIMITMEM"
    (
        sleep 2;
        systemctl set-property --user "$scopename" MemoryMax="$limit" MemorySwapMax="$swaplimit"
    )&
    exec "$@"
fi

With this fish function to show current memory usage:

function memoryusage
    set -l valid_cgroups
    for g in $JCLIMITMEM
        set -l name (string split -f1 : $g)
        set -l cgroup (string split -f2 -m1 : $g)

        if ! [ -d (fish -c "echo /sys/fs/cgroup/$cgroup" 2>/dev/null) ]
            continue
        end
        if ! contains $name:$cgroup $valid_cgroups
            set valid_cgroups $valid_cgroups $name:$cgroup
        end

        set -l c (fish -c "echo /sys/fs/cgroup/$cgroup" 2>/dev/null)

        set -l limit (cat $c/memory.max)
        #set -l max (cat "$c/memory.peak")
        set -l curr (cat "$c/memory.current")
        set -l failcnt (string split -f2 \  (grep oom\  "$c/memory.events"))

        #set -l maxp (math "floor ($max * 100 / $limit)")
        set -l currp (math "floor ($curr * 100 / $limit)")
        set -l GB 1073741824
        set -l limitgb (math "floor ($limit / $GB)")
        set -l limitgbfrac (math "floor (($limit - $limitgb*$GB) * 100 / $GB)")
        #set -l maxgb (math "floor ($max / $GB)")
        #set -l maxgbfrac (math "floor (($max - $maxgb*$GB) * 100 / $GB)")
        set -l currgb (math "floor ($curr / $GB)")
        set -l currgbfrac (math "floor (($curr - $currgb*$GB) * 100 / $GB)")

        echo $name:
        printf "limit:     %12d %9s.%02d GB)\n" $limit "($limitgb" $limitgbfrac
        #printf "max usage: %12d %5s, %2d.%02d GB)\n" $max "($maxp%" $maxgb $maxgbfrac
        printf "usage:     %12d %5s, %2d.%02d GB)\n" $curr "($currp%" $currgb $currgbfrac
        printf "failcount: %12d\n" $failcnt
        echo
    end
    set JCLIMITMEM $valid_cgroups
end
0

Ubuntu 22.04
To make the cgroup v1 based approaches above work, you need to boot your host system into cgroup v1 mode by modifying your kernel’s boot arguments to include systemd.unified_cgroup_hierarchy=false.

sudo nano /etc/default/grub

(or other editor of your choice).  Add/change

GRUB_CMDLINE_LINUX="cgroup_enable=memory swapaccount=1 systemd.unified_cgroup_hierarchy=false"

in the file.

sudo update-grub
sudo reboot
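
After rebooting, a quick way to confirm the v1 memory controller is available again:

[ -d /sys/fs/cgroup/memory ] && echo "cgroup v1 memory controller mounted"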
