
Similar to this post, I would like to place a named shared memory segment (created via shm_open() + mmap() on CentOS 7) on a specific NUMA node (not necessarily the local one). That post suggests this can be achieved with numa_move_pages().
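For context, this is roughly the approach I have in mind (a minimal sketch; the segment name, size, and target node are placeholders, and I compile with -lrt -lnuma): create and map the segment, touch the pages so they are actually allocated, then ask numa_move_pages() to migrate them to the chosen node.

```c
#include <fcntl.h>
#include <numa.h>       /* numa_move_pages() */
#include <numaif.h>     /* MPOL_MF_MOVE */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define SEG_NAME    "/my_segment"      /* placeholder name (shows up under /dev/shm) */
#define SEG_SIZE    (2 * 1024 * 1024)  /* placeholder size */
#define TARGET_NODE 1                  /* placeholder NUMA node */

int main(void)
{
    /* Create (or open) the named shared memory segment and size it. */
    int fd = shm_open(SEG_NAME, O_CREAT | O_RDWR, 0600);
    if (fd < 0 || ftruncate(fd, SEG_SIZE) != 0) { perror("shm_open/ftruncate"); return 1; }

    char *p = mmap(NULL, SEG_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch every page so physical pages exist before trying to move them. */
    memset(p, 0, SEG_SIZE);

    /* Build the per-page arrays and ask the kernel to migrate each page. */
    long page_size = sysconf(_SC_PAGESIZE);
    unsigned long npages = SEG_SIZE / page_size;
    void **pages  = malloc(npages * sizeof(void *));
    int   *nodes  = malloc(npages * sizeof(int));
    int   *status = malloc(npages * sizeof(int));
    for (unsigned long i = 0; i < npages; i++) {
        pages[i] = p + i * page_size;
        nodes[i] = TARGET_NODE;
    }
    if (numa_move_pages(0 /* this process */, npages, pages, nodes, status, MPOL_MF_MOVE) != 0)
        perror("numa_move_pages");

    /* status[i] now holds the node each page ended up on (or a negative errno). */
    printf("page 0 is on node %d\n", status[0]);
    return 0;
}
```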

I have a few more questions:

  1. If another process (running on a core local to a different NUMA node) later starts and mmap()s the same named shared memory segment, will the OS decide to move the segment to a NUMA node local to that process? If yes, how can I prevent it?

  2. Is there any other situation in which a named shared memory segment will be moved to another NUMA node after I have placed it with numa_move_pages()?

  3. Given a named shared memory segment in /dev/shm, how can I check which NUMA node it belongs to?

I looked into numactl, and its --membind option is close to what I want, but I am not sure what happens if two different processes use --membind with two different nodes. Which one wins? I guess I can test it out once #3 is answered.

Thanks!

1 Answer


I can only answer points 1 and 3.

Point 1:

As far as I remember from my teachers, and from what this link says: a page on a NUMA machine can be migrated closer to the CPU that accesses it most often. In other words: if your page is allocated on bank 0 but the CPU that is directly connected to bank 1 uses it much more often, then your page is moved to bank 1.

Point 3:

Given a named shared memory segment I don't know how to get its NUMA node directly, but given a pointer into that shared memory you can query its memory policy (and the node a page currently resides on) by calling get_mempolicy():

if flags specifies MPOL_F_ADDR, then information is returned about the policy governing the memory address given in addr. This policy may be different from the process's default policy if mbind(2) or one of the helper functions described in numa(3) has been used to establish a policy for the memory range containing addr.

from the man page of get_mempolicy() here
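A minimal sketch of that idea (untested; the segment name is a placeholder, and you would link with -lrt -lnuma): map the segment, fault the first page in, then ask get_mempolicy() with MPOL_F_NODE | MPOL_F_ADDR which node that page lives on. You can step through the mapping in page-size increments the same way to check every page.

```c
#include <fcntl.h>
#include <numaif.h>     /* get_mempolicy(), MPOL_F_NODE, MPOL_F_ADDR */
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    /* Open an existing named segment (placeholder name). */
    int fd = shm_open("/my_segment", O_RDWR, 0600);
    if (fd < 0) { perror("shm_open"); return 1; }

    size_t size = 4096;  /* enough to inspect the first page */
    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Fault the page in; a page that was never touched has no node to report. */
    (void)*(volatile char *)p;

    /* With MPOL_F_NODE | MPOL_F_ADDR, "mode" receives the node the page lives on. */
    int node = -1;
    if (get_mempolicy(&node, NULL, 0, p, MPOL_F_NODE | MPOL_F_ADDR) != 0) {
        perror("get_mempolicy");
        return 1;
    }
    printf("first page of the segment is on NUMA node %d\n", node);
    return 0;
}
```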
