0

Is there a logical reason why the integer is upgraded to 32+ bits? I was trying to make an 8-bit mask, and found myself a bit disappointed that the upgrade will corrupt my equations.

sizeof( quint8(0)); // 1 byte
sizeof(~quint8(0)); // 4 bytes

Usually something like this is done for a good reason, but I do not see any reason why a bitwise operator would essentially need to add more bits. It would seem to me that this would hurt performance [slightly] because now you have more bits to allocate and evaluate.

Why does C++ [Or other languages] do this?

4
  • 1
    What is quint8()? That's not a C++ standard function. Commented May 27, 2020 at 21:13
  • 3
    stackoverflow.com/a/40873288 Commented May 27, 2020 at 21:22
  • @πάνταῥεῖ typedef unsigned char quint8; /* 8 bit unsigned */ << It's part of Qt's qglobal.h.
    – Anon
    Commented May 27, 2020 at 21:25
  • 1
    You should retag your question accordingly. Commented May 27, 2020 at 21:28

2 Answers

4

The rule in C++, which was inherited from C, is that all operands that are smaller than int or unsigned int are first converted to one of those (with int being preferred if it can represent all values of the original type) before the operation is performed.

The type int (and unsigned int) is expected to match the native register size for non-floating-point registers of the processor.

This rule exists because it allows compilers to generate more efficient code: intermediate results in a larger expression don't have to be converted back to a smaller data type, which could involve copying the value back and forth to a temporary memory location.

If you really need (some of) the intermediate results to be truncated, you can either use an explicit cast or store them in a variable of the smaller type, as in the sketch below.
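A minimal sketch of both options, using the standard std::uint8_t (quint8 is just Qt's typedef for unsigned char, so the behaviour is the same); the printed sizes assume a typical platform where int is 4 bytes:

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t mask = 0;

        // ~mask is evaluated as int, so the result is int-sized (typically 4 bytes).
        std::cout << sizeof(~mask) << '\n';                              // usually 4

        // An explicit cast truncates the result back to the 8-bit type.
        std::cout << sizeof(static_cast<std::uint8_t>(~mask)) << '\n';   // 1

        // Storing the intermediate result in a variable of the smaller
        // type has the same effect (implicit conversion back to 8 bits).
        std::uint8_t inverted = ~mask;
        std::cout << sizeof(inverted) << '\n';                           // 1
        std::cout << static_cast<int>(inverted) << '\n';                 // 255
    }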

2

The C++ language specification follows the C language specification in being counter-intuitive here. It's defined so that when evaluating integer expressions, every operand narrower than int is first converted to int (or unsigned int), and only then is the expression evaluated.

This also applies to unsigned values: an unsigned type narrower than int gets converted to the signed int.
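A small sketch of how that signed promotion can break an equation like the one in the question (std::uint8_t stands in for quint8 here):

    #include <cstdint>
    #include <iostream>

    int main() {
        std::uint8_t x = 0;

        // x is promoted to (signed) int before ~ is applied,
        // so ~x is -1, not 255.
        std::cout << (~x == 0xFF) << '\n';   // 0 (false)
        std::cout << ~x << '\n';             // -1

        // Truncating back to 8 bits restores the expected bit pattern.
        std::cout << (static_cast<std::uint8_t>(~x) == 0xFF) << '\n';   // 1 (true)
    }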

My guess is that this behavior simplified compiler implementation.
