I'm wondering how operating systems let child processes know how much memory is available to them.
Say the whole computer has 1GB of RAM built in. The operating system runs and uses 100MB (I have no idea how much an operating system actually uses), so there is 900MB left.
Then you run 10 programs, and each program creates 10 child processes, so there are 100 child processes in total. The question is: what do these parent and child processes see as the total memory available to them?
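For concreteness, here is a minimal sketch of the kind of layout I mean, using `fork()` on a POSIX system (the count of 10 children is just my example number):

```c
#include <stdio.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void) {
    // One parent spawning 10 children; each child inherits the
    // parent's view of memory at the moment of the fork.
    for (int i = 0; i < 10; i++) {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: this is where it would ask "how much memory
            // is available to me?" -- the subject of my question.
            _exit(0);
        }
    }
    while (wait(NULL) > 0) {}  // parent reaps all children
    return 0;
}
```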
As a second part to the question, to make it a little more complex: say the applications have been running for a while, and now there is 500MB available on the computer (say the operating system used 100MB more, and the applications used 300MB more, bringing the 900MB down to 500MB). The question now is what these parent and child processes see as the available memory at this point. Is it the same as before or different, and if different, how?
The reason for the question is that I've read about virtual memory, which is described as:
virtual memory [is a] technique that provides an "idealized abstraction of the storage resources that are actually available on a given machine" which "creates the illusion to users of a very large (main) memory."
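That description suggests a process isn't limited to what's physically free at the moment. For example, on Linux (with its default overcommit settings, if I understand them correctly) a sketch like the following can reserve more virtual address space than the machine has physical RAM, assuming a 64-bit build; whether it succeeds depends on the kernel's overcommit policy:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    // Try to reserve 2 GiB of *virtual* address space. On a 1 GiB
    // machine this can still succeed, because no physical pages are
    // committed until the memory is actually touched.
    size_t two_gib = (size_t)2 * 1024 * 1024 * 1024;
    void *p = malloc(two_gib);
    if (p == NULL) {
        printf("allocation refused\n");
        return 1;
    }
    printf("got 2 GiB of virtual address space at %p\n", p);
    // Touching every page would force physical backing and could
    // trigger swapping or the out-of-memory killer.
    free(p);
    return 0;
}
```

Setting that illusion aside, though, my question is about the concrete number a process would be told.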
So basically, it sounds like each process in case (1) will get told either "you have 1GB of available memory" or "you have 900MB of available memory". I am not sure which it would actually say: the total on the whole computer, or the total minus the operating system's usage.
Then for case (2), it would read "1GB available", "900MB available", "600MB available", or "500MB available". Same situation: I'm not sure which it would say.
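When I say "get told", I'm imagining something like the following query, which I believe works on Linux via glibc's `sysconf` extensions (`_SC_PHYS_PAGES` and `_SC_AVPHYS_PAGES`). Would the parent and all the children simply see the same system-wide numbers here?

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    // glibc extensions: total and currently-free physical pages.
    long page_size = sysconf(_SC_PAGESIZE);
    long long total_mb =
        (long long)sysconf(_SC_PHYS_PAGES) * page_size / (1024 * 1024);
    long long avail_mb =
        (long long)sysconf(_SC_AVPHYS_PAGES) * page_size / (1024 * 1024);

    printf("total physical RAM:     %lld MB\n", total_mb);
    printf("available physical RAM: %lld MB\n", avail_mb);
    return 0;
}
```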
It could also be different from those values. The operating system could somehow apportion the available memory among the 100 child processes, perhaps dividing it evenly. So if there is 500MB left on the computer, each process would be told "you have 500 / 100 == 5MB of available space". But if that were the case, and one process used up its 5MB while 495MB remained free overall, I wonder whether it would be allowed to start using more and get told a new number. This is why I don't think this is how it's typically done; rather, it seems the OS would probably report what is available on the computer as a whole (so 1GB).
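The closest real mechanism I've come across to this per-process quota idea is resource limits, which are opt-in rather than imposed automatically. A sketch of capping one process's address space with `setrlimit` (the 64MB figure is just an illustration I picked):

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/resource.h>

int main(void) {
    // Cap this process's virtual address space at 64 MB
    // (soft and hard limits set to the same value).
    struct rlimit lim = { 64 * 1024 * 1024, 64 * 1024 * 1024 };
    if (setrlimit(RLIMIT_AS, &lim) != 0) {
        perror("setrlimit");
        return 1;
    }
    // An allocation that would exceed the cap now fails, even if
    // the machine as a whole has plenty of free RAM.
    void *p = malloc(128 * 1024 * 1024);
    printf("128MB malloc %s\n", p ? "succeeded" : "failed");
    free(p);
    return 0;
}
```

But as far as I can tell, nothing like this happens by default, which again makes me think each process just sees the system-wide picture.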
Also, the reason I think it would always say "1GB" is that I am not sure there is a way to determine how much memory a single process is using (or whether the operating system even knows how much memory it itself is using). If the OS does know how much it is using, then it seems like it would report 900MB.
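For the per-process part, I did find that the OS apparently does track this. For example, `getrusage` reports a process's peak resident set size (in kilobytes on Linux, if I'm reading the man page right):

```c
#include <stdio.h>
#include <sys/resource.h>

int main(void) {
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        // ru_maxrss: peak resident set size; kilobytes on Linux.
        printf("peak RSS of this process: %ld KB\n", ru.ru_maxrss);
    }
    return 0;
}
```

So at least the "does the OS know?" half of my doubt seems answered, but I still don't know what that implies for the number reported as available.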
Another piece of the confusion: if memory usage is constantly changing, and the OS tells each process the total memory minus the used memory, then you would have to constantly re-check how much memory is available before trying to use more. That is, you couldn't cache the figure when the program starts. A program might sit idle for a few hours, having started with 100MB available "on the computer", and then check again only to find "oh wait, there is only 5MB available". For some reason that seems like undesirable behavior, but I'm not sure.
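To illustrate what I mean by "you couldn't cache it", here is a sketch (same hypothetical Linux/glibc setup as above) where the available figure is just a snapshot that is already stale by the second read:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

// Snapshot of currently-free physical memory, in MB. Only valid at
// the instant it is taken.
static long long avail_mb(void) {
    return (long long)sysconf(_SC_AVPHYS_PAGES) * sysconf(_SC_PAGESIZE)
           / (1024 * 1024);
}

int main(void) {
    printf("available at start: %lld MB\n", avail_mb());

    // Allocate and touch 100MB so physical pages really get committed.
    size_t n = 100 * 1024 * 1024;
    char *p = malloc(n);
    if (p != NULL)
        memset(p, 0, n);

    // The first number is already stale: this process (or any other
    // process on the machine) has changed it in the meantime.
    printf("available now:      %lld MB\n", avail_mb());

    free(p);
    return 0;
}
```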
Any help in understanding, generally, how an OS tells child processes how much memory is available at different points in time would be appreciated. Thank you.