
I'm learning Rust and curious about the purpose of being able to choose memory sizes for data types (e.g. i8 vs i32). I'm not clear yet on how this is an advantage. It can definitely save some memory, but I ran into a similar thread that stated memory is the least of the concerns.

  • Whether or not memory consumption is a concern depends very much on the platform you are targeting. The world contains more tiny embedded devices with minimal available resources than ever.
    – KilianFoth
    Commented Feb 3, 2022 at 12:37
  • @KilianFoth, thanks. I was wondering if this could help from a security point of view as well? For example, in financial applications, could we reduce the risk of values exceeding certain limits, and thus minimise the impact?
    – Lionel
    Commented Feb 3, 2022 at 13:07

2 Answers


The answer is as always "it depends". If it was a feature with no value, nobody would have put it in the language. If it was a feature which always had value, it would be the default.

Situations in which you should do it:

  • You have an actual functional reason to care how many bits your data types have; most likely these days this means you know your values will overflow an i32 (or the equivalent for non-integer types).
  • You have actually profiled your code in production and found that the hot path can be improved by using a specific size.
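
As a sketch of that first bullet (with hypothetical numbers): counting milliseconds exceeds an i32 within about 25 days, so the choice of a wider type is forced by the data, not by taste:

```rust
fn main() {
    // i32::MAX is 2_147_483_647; counting milliseconds exceeds that
    // after roughly 25 days, so a millisecond timestamp needs i64.
    let ms_per_year: i64 = 1_000 * 60 * 60 * 24 * 365;
    assert!(ms_per_year > i32::MAX as i64);
}
```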

Situations in which you shouldn't do it:

  • Essentially everything else: when neither of the above applies, the default-sized types are the sensible choice.


The primary value in Rust of choosing a size is getting a defined, uniform range. There are very few ways to actually observe the in-memory size of a value in Rust. Overflow, however, is very observable: adding 1 to a u8 containing 255 behaves visibly differently from adding 1 to any smaller value. And for a Rust integer you always know the range from the type alone: 0 to (2^n)-1 for an unsigned uN, and -(2^(n-1)) to (2^(n-1))-1 for a signed iN.
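
That observability can be demonstrated directly; here is a minimal sketch using the standard checked/wrapping arithmetic methods:

```rust
fn main() {
    let x: u8 = 255; // u8 range is 0..=255, i.e. 0 to 2^8 - 1

    // checked_add returns None instead of overflowing
    assert_eq!(x.checked_add(1), None);

    // wrapping_add makes modular arithmetic explicit
    assert_eq!(x.wrapping_add(1), 0);

    // The range is always knowable from the type alone
    assert_eq!(u8::MAX, 255);
    assert_eq!((i8::MIN, i8::MAX), (-128, 127));

    // A plain `x + 1` here would panic in a debug build and wrap
    // in a release build (unless overflow checks are enabled).
}
```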

So why do we need a range at all? We could make integers hold arbitrarily large values, as Haskell's Integer and Python's int do, but that requires dynamic memory allocation, and Rust is designed not to require an allocator for core functionality.
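
A small illustration of the trade-off: every fixed-width integer has a compile-time-known size and needs no allocator, a guarantee an arbitrary-precision integer cannot make:

```rust
use std::mem::size_of;

fn main() {
    // Every fixed-width integer's size is known at compile time,
    // so storing one never requires the heap.
    assert_eq!(size_of::<u8>(), 1);
    assert_eq!(size_of::<u32>(), 4);
    assert_eq!(size_of::<i64>(), 8);

    // By contrast, an arbitrary-precision integer (Python's int,
    // Haskell's Integer) needs a buffer that grows with the value,
    // which is why it requires an allocator.
}
```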

C and C++ don't specify exact ranges in the standard; instead they specify a set of properties (minimum sizes, and relative sizes between types) that the ranges must satisfy, and leave the exact ranges implementation-defined. As a result, switching to a new platform has often surfaced bugs where overflow happens at unexpected values. If the codebase is well validated, that still means significant manual effort to fix, and often significant added complexity. If it is not, these issues can hide as latent bugs that may not surface until late in production.

So compilers have sort of standardized on a convention in those languages as to what the ranges are in most scenarios.

Alternatively we could be like Java and C#, where the types are named int and short but always have a defined range in the standard. In practice this is effectively the same as what Rust does, except that you have to memorize each type's range rather than reading it off the name as 0 to (2^n)-1 for any n-bit unsigned type.
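
The "range follows from the name" point can be checked mechanically; a small sketch:

```rust
fn main() {
    // For any n-bit unsigned type, the max is 2^n - 1; nothing to memorize.
    assert_eq!(u16::MAX as u128, (1u128 << 16) - 1);
    assert_eq!(u32::MAX as u128, (1u128 << 32) - 1);

    // Signed types: min is -2^(n-1), max is 2^(n-1) - 1.
    assert_eq!(i16::MIN as i128, -(1i128 << 15));
    assert_eq!(i16::MAX as i128, (1i128 << 15) - 1);
}
```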

So yes, in a lot of cases you could just use i32 for everything. But that still means using a type that carries the implicit assumption that your values never exceed (2^31)-1.
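
Where that assumption meets wider data, making the conversion explicit surfaces it at the boundary; a sketch using the standard TryFrom conversion:

```rust
use std::convert::TryFrom;

fn main() {
    // i32 silently carries the assumption that values fit in 2^31 - 1.
    let big: i64 = 3_000_000_000; // exceeds i32::MAX (2_147_483_647)

    // try_from makes the assumption checkable at the boundary.
    assert!(i32::try_from(big).is_err());
    assert!(i32::try_from(1_000i64).is_ok());
}
```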

  • C and C++ specify a minimum range, which is generally enough and allows implementations for even (by modern standards) slightly wacky systems. Anyway, if you need a fixed-size type, C and C++ will gladly serve you too. Commented Mar 5, 2022 at 18:56
  • @Deduplicator A minimum range doesn't help you when the code is written against tests that have only been run on systems with more than the minimum ranges. Rust made a conscious decision that the behavior of integer types should never depend on the platform implementation. And if you look at the C and C++ fixed-size types, they are all named by size, exactly like the Rust ones. Commented Mar 5, 2022 at 20:16
  • I never said that minimum ranges are always good enough, but they are better than nothing, and more adaptable to the underlying machine than fixed ones. And considering fixed-size types exist too if you need them, what's to complain about? Anyway, change the platform, and you have to re-test everything anyway. The point was that "C and C++ don't specify a range in the standard" is somewhere between unhelpfully devoid of useful information and deliberately misleading. Commented Mar 5, 2022 at 21:18
  • @Deduplicator With fixed-size types you still have to test everything, but it is far less likely that anything will actually break and need code fixes. Significant amounts of C and C++ code assume the target platform uses 32-bit ints, despite that being well above the minimum range. In the Rust and Java models, that assumption would obviously fail during initial development because the tests wouldn't pass. In C and C++, you are left with a system that needs significant rewriting if it ever has to run on a platform with 16-bit ints. Commented Mar 6, 2022 at 12:39
  • By "don't specify a range", I meant they don't specify an exact range that every implementation must always have as its max and min; they specify a range of valid ranges. Commented Mar 6, 2022 at 12:40
