
I am doing some experiments with VMs and KVM, trying to observe memory overcommitment and swap-space behavior in practice. First, let me show the system setup and the configuration I made for this experiment.

My host computer system has:

  1. Memory (DRAM): 16GB
  2. Swap space (swap partition): 32GB
  3. SSD: 512GB

and my VM has:

  1. Memory: 4GB
  2. Swap space (swap partition): 8GB
  3. Virtual disk capacity: 20GB

I then limit the VM's memory from 4GB to 1GB through the cgroup interface, so that an overcommitment situation occurs. I expected the swap space to absorb the pressure and checked with vmstat whether it did, but the VM does not seem to use the swap space as memory.
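To make the swapping (or lack of it) visible per process rather than only system-wide, a small probe along these lines can be run inside the limited cgroup while vmstat is watching (a minimal sketch for illustration; the 256 MiB step and 2 GiB total are arbitrary choices):

    /* probe.c - touch memory in 256 MiB steps and report VmRSS/VmSwap
     * from /proc/self/status, so you can watch the kernel swap this
     * process's pages (or not) under the cgroup memory limit.
     * Build: cc -O2 -o probe probe.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    static void report(void)
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];
        if (!f)
            return;
        while (fgets(line, sizeof line, f))
            if (!strncmp(line, "VmRSS:", 6) || !strncmp(line, "VmSwap:", 7))
                fputs(line, stdout);
        fclose(f);
    }

    int main(void)
    {
        const size_t step = 256UL << 20;      /* 256 MiB per iteration */
        for (int i = 1; i <= 8; i++) {        /* up to 2 GiB in total */
            char *p = malloc(step);
            if (!p) { perror("malloc"); return 1; }
            memset(p, 0xA5, step);  /* force every page to be resident;
                                       buffers are deliberately kept */
            printf("-- after touching %d x 256 MiB --\n", i);
            report();
            sleep(1);               /* give vmstat a chance to sample */
        }
        return 0;
    }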

Why did my VM not use swap space during the overcommitment situation?

  • You really don't want to start using an HDD (or really even an SSD) as primary memory.
    – davidgo
    Commented May 22, 2019 at 8:31
  • Oh, the OP has a very good motivation to do what he's doing. If he uses just a fraction of this SSD as "swap", so that the KVM host sees - say - 48GB instead of just 16GB, he will gain a lot of functionality; if the OS uses this cache properly, and the VMs are not too active across the whole range of their memory, that will be very useful indeed. (Note the SSD wear, though!)
    – P Marecki
    Commented Oct 17, 2020 at 7:10

1 Answer


I think the part you're missing is what "overcommitment" means.

Overcommitment is not a trick for actually having more resources than you've constrained the system to have. It's a technique for acting as if there were more resources.

The basic concept is that most programs allocate more memory than they actually use. For example, a program may allocate 32K of RAM to read in a file that's only 57 bytes. It reads the file in and actually uses one page of memory for it, but 32K was still allocated. (This example is from the 90s; today both the allocation and the file are bigger.) The point is that the extra memory is a commitment that will never be called in.
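Here's a minimal sketch of that effect (assuming Linux with glibc, where a large malloc is served by mmap): allocate a sizable buffer, touch a single page, and compare VmSize (what was committed) against VmRSS (what is actually resident) in /proc/self/status.

    /* Allocate 32 MiB but touch a single page; VmSize (committed) grows
     * by the full 32 MiB, VmRSS (resident) by only one page.
     * Build: cc -O2 -o commit commit.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    static long status_kb(const char *key)    /* value of a key, in kB */
    {
        FILE *f = fopen("/proc/self/status", "r");
        char line[256];
        long kb = -1;
        size_t n = strlen(key);
        if (!f)
            return -1;
        while (fgets(line, sizeof line, f))
            if (!strncmp(line, key, n)) {
                sscanf(line + n, "%ld", &kb);
                break;
            }
        fclose(f);
        return kb;
    }

    int main(void)
    {
        printf("before malloc:      VmSize=%ld kB VmRSS=%ld kB\n",
               status_kb("VmSize:"), status_kb("VmRSS:"));

        char *buf = malloc(32 << 20);   /* "allocate" 32 MiB ...       */
        if (!buf) { perror("malloc"); return 1; }
        buf[0] = 1;                     /* ... but touch only one page */

        printf("after 1-page touch: VmSize=%ld kB VmRSS=%ld kB\n",
               status_kb("VmSize:"), status_kb("VmRSS:"));
        return 0;
    }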

If I'm not mistaken, this even happens in programs that don't knowingly do it, because they load libraries that do it, and even loading those libraries does it: the dynamic loader maps address space for the whole library, but only the pages containing routines that are actually invoked get loaded. That memory will probably be shared, which minimizes the effect, but it's still there.

How much excess commitment happens depends very much on the programs invoked and how they're used. If a program is exercised very thoroughly, it will probably use more of its libraries. Programs that call calloc instead of malloc don't show up as overcommitting as much, because calloc writes to every byte of the allocated space, requiring the OS to fulfill the whole commitment up front, even if the portion of that memory really used ends up much smaller. Some programs are careful to allocate only the exact amount of memory they need, while others depend on OS overcommitment so they can be pragmatic about their data structure layouts.
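One caveat on the calloc point: a modern glibc can satisfy a large calloc with pages the kernel has already zeroed, without ever writing to them, so the following sketch forces the write explicitly with memset to show what fulfilling the whole commitment costs in resident memory:

    /* Compare an untouched allocation with one written end to end; the
     * written one forces the kernel to back the whole commitment.
     * Build: cc -O2 -o backed backed.c */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/resource.h>

    static long max_rss_kb(void)        /* peak resident set, in kB */
    {
        struct rusage ru;
        getrusage(RUSAGE_SELF, &ru);
        return ru.ru_maxrss;
    }

    int main(void)
    {
        size_t sz = 64 << 20;           /* 64 MiB */

        char *lazy = malloc(sz);        /* committed, never touched */
        if (!lazy) { perror("malloc"); return 1; }
        printf("after untouched malloc: peak RSS %ld kB\n", max_rss_kb());

        char *eager = malloc(sz);
        if (!eager) { perror("malloc"); return 1; }
        memset(eager, 0, sz);           /* force all pages to be backed */
        printf("after written malloc:   peak RSS %ld kB\n", max_rss_kb());

        free(lazy);
        free(eager);
        return 0;
    }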

If the applications in the VM you've limited to 1GB actually touch a full gigabyte's worth of pages, it will start using the 8GB of swap you've configured it to be able to use. But that isn't overcommitment. The system isn't technically overcommitted until allocations go past the 9GB of virtual memory (1GB of RAM plus 8GB of swap) it actually has.
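To see overcommitment itself in isolation, a sketch like the following asks for a commitment the machine can't possibly back (the 64 GiB figure is chosen to exceed your host's 16GB of RAM plus 32GB of swap; a 64-bit build is assumed):

    /* Ask for far more memory than RAM + swap.  With
     * vm.overcommit_memory=1 ("always overcommit") the kernel grants it,
     * because no page is backed until first touched; the default
     * heuristic (0) refuses a single allocation this obviously
     * impossible, and strict mode (2) caps commitments near RAM + swap.
     * Build: cc -O2 -o over over.c */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t sz = 64UL << 30;   /* 64 GiB: more than the host's 16GB
                                     RAM plus 32GB swap described above */
        char *p = malloc(sz);
        if (p) {
            printf("granted a %zu GiB commitment the machine cannot back\n",
                   sz >> 30);
            /* writing to all of it would eventually wake the OOM killer */
        } else {
            perror("malloc");     /* expected under modes 0 and 2 */
        }
        free(p);
        return 0;
    }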

That having been said, this is Linux, and my recollection from trying to push Linux to its maximum memory allocation with overcommitment disabled is that it tends to fail a bit early: the kernel keeps some reserve for things like a small amount of file cache. That experience is over a decade old, but I don't see a particular reason for it to have changed; having memory available for file I/O is important.
