I think the part you're missing is what "overcommitment" means.
Overcommitment is not a trick for actually having more resources than you've constrained the system to have; it's a technique for acting as if there were more.
The basic concept is that most programs allocate more memory than they actually use. For example, a program may allocate 32K of RAM to read in a file that's only 57 bytes. It reads the file in and actually uses one page of memory for it, but 32K was still allocated. (This example is from the 90s; today the allocation is bigger and the file is bigger, but the point stands: the extra memory is still a commitment, even though it will never be used.)
If I'm not mistaken, this even happens in programs that don't knowingly do it, because the libraries they load do it, and even loading those libraries does it: the dynamic linker maps enough address space for the whole library, but only the pages containing code that actually runs get faulted in. Library mappings are usually shared, which minimizes the effect, but it's still there.
How much excess commitment happens depends very much on the programs invoked and how they're used. A program that's exercised very thoroughly will probably use more of its libraries. Programs that call calloc instead of malloc don't show up as overcommitting as much, because calloc zeroes every byte of the allocated space; when that zeroing actually writes to the pages, the OS has to fulfill the commitment up front, even if the portion of that memory that will really be used is much smaller. (Modern allocators can sometimes satisfy a large calloc with already-zeroed pages from the kernel, which avoids the writes.) Some programs are careful to allocate only the exact amount of memory they need, while others depend on the OS supporting overcommitment so they can be pragmatic in arranging their data structures.
If the applications on your server that you've limited to 1GB actually manage to touch a full GB worth of pages, they'll start using the 8GB of swap you've configured. But that isn't overcommitment. Technically, nothing is overcommitted until allocations go past the 9GB of VM the system actually has.
That having been said, this is Linux, and my recollection from trying to push Linux to maximum memory allocation without enabling overcommitment is that it tends to fail a bit early: the kernel kept some memory in reserve for things like a small amount of file cache. That experience is over a decade old, but I don't see any particular reason for it to change; having memory available for file I/O is important.