
I recently started working on systems with NUMA nodes, and noticed something strange.

/proc/meminfo shows the system has 128461 MB:

[root@nfvis node]# head -n4 /proc/meminfo
MemTotal: 131544388 kB
MemFree: 334016 kB
MemAvailable: 49968 kB
Buffers: 4296 kB

The system has two NUMA nodes, and here's the memory distribution across them:

[root@nfvis node]# head -n4 node0/meminfo
Node 0 MemTotal: 66751388 kB
Node 0 MemFree: 308952 kB
Node 0 MemUsed: 66442436 kB
Node 0 Active: 309552 kB

[root@nfvis node]# head -n4 node1/meminfo
Node 1 MemTotal: 67108864 kB
Node 1 MemFree: 22872 kB
Node 1 MemUsed: 67085992 kB
Node 1 Active: 0 kB

Node 0 shows 65186 MB and node 1 shows 65536 MB, which adds up to 130722 MB, i.e. 2261 MB more than what /proc/meminfo reports. I am not able to understand where the system lost this 2261 MB of memory. It would be really helpful if someone could point me in the right direction.
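For reference, here's a quick sketch of how I compare the two totals; it just sums the MemTotal lines from the standard procfs/sysfs locations:

#!/bin/bash
# Compare the system-wide MemTotal with the sum of the per-node totals.
sys_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)

node_kb=0
for f in /sys/devices/system/node/node*/meminfo; do
    # Per-node lines look like: "Node 0 MemTotal: 66751388 kB"
    kb=$(awk '/MemTotal:/ {print $4}' "$f")
    node_kb=$((node_kb + kb))
done

echo "System MemTotal:    $((sys_kb / 1024)) MB"
echo "Sum of node totals: $((node_kb / 1024)) MB"
echo "Difference:         $(((node_kb - sys_kb) / 1024)) MB"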

1 Answer


Some small difference is expected, since you have no way to freeze the entire system and then read numbers that are consistent with each other. Even if you could freeze the system, you might be preventing some process from updating the very files that you intend to read.

The following answer, from the post Relation between Linux /proc/meminfo and /sys/devices/system/node/nodex/meminfo, explains the problem in more detail:

The code for proc is in fs/proc/meminfo.c, for the sysfs files it's in drivers/base/node.c. Comparing them might give you some hints.
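If you want to look at them side by side, something like this works. The function names below are from recent mainline kernels; the v5.4 tag is only an example, so adjust it to whatever kernel you are actually running:

# Shallow-clone a stable kernel tree and locate the two implementations.
git clone --depth 1 --branch v5.4 \
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git
grep -n 'meminfo_proc_show' linux/fs/proc/meminfo.c    # builds /proc/meminfo
grep -n 'node_read_meminfo' linux/drivers/base/node.c  # builds node*/meminfo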

Note that you'll probably never get the numbers to add up 100%, because you can't atomically read the content of all the files, so the values will change while you're reading them.
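You can watch this happen by sampling the same counter from both places back to back; on a busy system the two reads rarely agree, simply because memory was allocated or freed in between:

# Read MemFree system-wide and summed across nodes, back to back.
for i in 1 2 3; do
    sys=$(awk '/^MemFree:/ {print $2}' /proc/meminfo)
    nodes=$(awk '/MemFree:/ {sum += $4} END {print sum}' \
        /sys/devices/system/node/node*/meminfo)
    echo "sample $i: system=${sys} kB, node sum=${nodes} kB, delta=$((sys - nodes)) kB"
done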

There also seems to be an inconsistency in the total RAM reported via both methods. One explanation for that is that free_init_mem doesn't appear to be NUMA aware, and increments total_ram_pages but does not do any NUMA accounting.
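If that explanation applies to your kernel, the memory handed back by free_init_mem should be visible in the boot log, assuming the ring buffer hasn't rotated; the exact wording varies between kernel versions:

dmesg | grep -i 'freeing'
# Typical lines (values illustrative, not from this system):
#   Freeing unused kernel memory: 2268K
#   Freeing initrd memory: 19804K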

Although that answer dates from 2011 and NUMA support in Linux has improved considerably since then, the core point stands: you are reading numbers that change while you read them, so you should treat them as approximations at best.

I also suspect that some small amount of system RAM may not be taken into account when these values are calculated.
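One place to check is the kernel's boot-time memory summary, which records how much RAM was reserved before MemTotal was established (again, the exact format varies by kernel version):

dmesg | grep 'Memory:'
# Typical line (numbers illustrative, not from this system):
#   Memory: 131544388K/134217728K available (8204K kernel code, 1264K rwdata,
#   3872K rodata, 1484K init, 1316K bss, 2673340K reserved, 0K cma-reserved)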

  • ~2 GB is a little more than a small difference. I went through the mentioned answer; it does not explain which of the two I should use for measuring system memory.
    – shriroop_
    Commented Mar 13, 2020 at 22:07
  • I don't think meminfo counts all the memory, which is partly why the numbers don't add up. In addition, I don't see why adding the NUMA nodes should give the entire system memory, as the common system kernel does not participate in the nodes.
    – harrymc
    Commented Mar 14, 2020 at 6:38
  • /proc/meminfo only includes the non-reserved / non-internal-GPU-assigned RAM. There's also an odd naming issue: since meminfo added MemAvailable, it's easy to confuse it with the "available" memory dmesg reports during boot, but the two terms don't refer to the same thing. In meminfo it refers to RAM that is not in use, while in dmesg it refers to the RAM available for the system to use after boot. Roughly, anyway.
    – Lizardx
    Commented Jun 8, 2023 at 21:07
