
I have a memory pool class that manages memory blocks of fixed sizes, a bit like a primitive malloc() that can only return blocks of a few predefined sizes. Unlike malloc(), it is guaranteed to always be O(1), which makes it suitable for time-critical code.
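For illustration, here is a minimal sketch of such a pool, stripped down to a single block size and no upper bound yet; the class name and layout are invented, and blockSize must be at least sizeof(void*) and a multiple of the pointer alignment:

    #include <cstddef>
    #include <vector>

    // Simplified sketch: one block size, O(1) allocate/deallocate via a free list.
    class SimpleBlockPool {
    public:
        SimpleBlockPool(std::size_t blockSize, std::size_t blockCount)
            : storage_(blockSize * blockCount) {
            // Pre-link all blocks into a singly linked free list.
            for (std::size_t i = 0; i < blockCount; ++i) {
                void* block = storage_.data() + i * blockSize;
                *static_cast<void**>(block) = freeList_;
                freeList_ = block;
            }
        }

        // O(1): pop the head of the free list; nullptr when the pool is exhausted.
        void* allocate() {
            if (freeList_ == nullptr) return nullptr;
            void* block = freeList_;
            freeList_ = *static_cast<void**>(block);
            return block;
        }

        // O(1): push the block back onto the free list.
        void deallocate(void* block) {
            *static_cast<void**>(block) = freeList_;
            freeList_ = block;
        }

    private:
        std::vector<unsigned char> storage_;  // backing memory for all blocks
        void* freeList_ = nullptr;            // head of the free list
    };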

To prevent the pool from consuming too much memory, the creator can place an upper bound on the pool, and this bound raises an interesting question:

For memory management purposes, every memory block must have a header that consumes some memory as well. How much is system dependent, e.g. it depends on how big pointers are, and is thus not easy to predict for unknown target platforms. The header is also always the same size, no matter how big the memory block is, so the relative overhead varies and grows as the block gets smaller. That means the total overhead at runtime is hard to predict even if the exact header size is known, as it depends on how many blocks of which size will be in use, which may not even be known in advance in many cases.
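To make this concrete, here is a toy calculation with a hypothetical header layout; the real header is private and will differ between platforms:

    #include <cstdio>
    #include <cstddef>

    // Hypothetical header: one pointer plus one size field, so its size
    // already depends on the platform's pointer width.
    struct BlockHeader {
        BlockHeader* next;
        std::size_t  blockSize;
    };

    int main() {
        const std::size_t header = sizeof(BlockHeader);  // e.g. 16 bytes on a 64-bit system
        const std::size_t sizes[] = {32, 256, 4096};
        for (std::size_t block : sizes) {
            std::printf("block %4zu B -> header %2zu B = %.1f%% overhead\n",
                        block, header, 100.0 * header / block);
        }
        return 0;
    }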

The question is, should the overhead count towards the upper bound of the pool or not?

What speaks for counting it is that the bound is absolute. If I set the bound to 512 kiB, I know for sure that no matter what I do later on in my code, this pool will never use more than 512 kiB of memory. This is very useful on systems with very limited resources, as I can sum up all my pools and know the worst-case memory consumption for certain.

What speaks against it is that I won't know how much usable memory I will get out of the pool, as that depends on the size of an unknown header (the header is private) and on the total overhead, which in turn depends on the sizes of the blocks I request at runtime. If the bound only counts the usable memory, I know that I can get 512 kiB of memory blocks out of that pool and can thus store 512 kiB of data for sure. What I won't know is how much the total memory consumption will be if I use all of those 512 kiB, as it varies depending on how many small and how many big blocks I have requested.
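A toy comparison of the two policies, again assuming a 16-byte header and a 512 kiB bound, shows how far the numbers drift apart for small versus large blocks:

    #include <cstdio>
    #include <cstddef>

    int main() {
        const std::size_t bound  = 512 * 1024;  // 512 kiB
        const std::size_t header = 16;          // assumed header size
        const std::size_t sizes[] = {32, 4096}; // two extreme block sizes

        for (std::size_t block : sizes) {
            // Policy A: the bound counts header + payload.
            const std::size_t blocksA = bound / (block + header);
            const std::size_t usableA = blocksA * block;

            // Policy B: the bound counts payload only.
            const std::size_t blocksB = bound / block;
            const std::size_t totalB  = blocksB * (block + header);

            std::printf("block %4zu B | A: %6zu B usable within %zu B | "
                        "B: %7zu B consumed for %zu B usable\n",
                        block, usableA, bound, totalB, bound);
        }
        return 0;
    }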

Programmatically it's easy to implement the pool either way, so it boils down to what behavior I desire, and I see good reasons for either way. And yes, I could even implement the pool in such a way that this behavior can be selected during creation, but I don't think that would be a good idea. The class should behave one way or the other, but it should always behave the same way; everything else will only cause confusion IMHO and lead to subtle bugs.

  • What will be the typical system where your allocator will get used? Would the designer of such a system accept an unknown, unbounded overhead in memory usage in order to be guaranteed they can allocate 512 kiB from your allocator in arbitrarily large or small chunks? Or would they want an upper bound on the total memory used by your allocator and accept that some of it will be used for internal administration? Commented Dec 17, 2021 at 6:45
  • This is one of the things that is hard to say. The class is part of a framework that helps implement network protocols in user space. When used on a normal PC or a server, memory is not such an issue, as these devices have plenty of it and they also have swap space. Yet when used on an embedded device with limited RAM and no swap space, memory usage is far more critical, I guess. On the other hand, knowing how much memory you can get out of the pool can be critical as well for some protocols.
    – Mecki
    Commented Dec 17, 2021 at 9:35

1 Answer


The question is, should the overhead count towards the upper bound of the pool or not?

Yes. Because the upper bound isn't the amount of usable memory. It can only be the consumed memory.

That isn't to say that your class can't spit out a usable-memory calculation once settings are chosen. Your job here isn't to pretend that there isn't any overhead; it's to make clear that these are two different numbers.
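For example, the pool could report both numbers side by side; this is only a sketch with an invented header size and invented method names:

    #include <cstddef>

    class FixedBlockPool {
    public:
        explicit FixedBlockPool(std::size_t upperBound) : bound_(upperBound) {}

        // The hard limit on total consumed memory, headers included.
        std::size_t capacity() const { return bound_; }

        // Conservative estimate of usable payload if the caller only ever
        // requests blocks of blockSize.
        std::size_t usableEstimate(std::size_t blockSize) const {
            const std::size_t perBlock = blockSize + kHeaderSize;
            return (bound_ / perBlock) * blockSize;
        }

    private:
        static constexpr std::size_t kHeaderSize = 16;  // assumption for this sketch
        std::size_t bound_;
    };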

  • How would you then deal with the problem that a user requires a pool for holding up to 384 kiB of data? What bound should that user set? Setting 384 kiB is certainly too little. Of course, you can always add some "safety margin", but it's hard to tell how big it must be. E.g. on Linux the receive buffer size of a UDP socket includes overhead, so when you set it to 512 kiB, the kernel will in fact set it to 1 MiB (doubling whatever you requested) to compensate for any possible overhead, but that seems very wasteful to me.
    – Mecki
    Commented Dec 17, 2021 at 0:11
  • @Mecki, such a user needs to estimate the allocation pattern they will use and how much overhead that pattern incurs. Based on that, they can set a proper bound on the memory available to your allocator. Commented Dec 17, 2021 at 6:41
