
I have always assumed that dividing a double by an integer leads to faster code, because the compiler will select better microcode for the computation:

double a;
double b = a/3.0;
double c = a/3; // will compute faster than b

For a single operation it does not matter, but for repetitive operations it can make a difference. Is my assumption always correct, or is it compiler or CPU dependent?

The same question applies to multiplication; i.e., will 3 * a be faster than 3.0 * a?

  • If the second operand is constant, there will be no difference. Commented Aug 15, 2021 at 9:08
  • As far as I can tell, there is no mixing of types internally in the arithmetic/floating-point unit. At the machine level both operands are of the same type... converted previously if needed. a and 3.0 have the same type (double)... a and 3 require an (implicit) conversion (by the compiler, not at runtime).
    – pmg
    Commented Aug 15, 2021 at 9:13
  • Read this first :) Floating point is handled in hardware (unless you have a small processor): stackoverflow.com/questions/4584637/… Commented Aug 15, 2021 at 9:13
  • Is there a difference between C and C++? If so, the question should clarify which language is meant. If not, this should be addressed in an answer.
    – mkrieger1
    Commented Aug 15, 2021 at 9:15
  • @mkrieger1 I mean mostly C++, but I think it applies the same to C because the compilers' arithmetic rules are the same.
    – kstn
    Commented Aug 15, 2021 at 9:20

2 Answers


Your assumption is not correct because both your divide operations will be performed with two double operands. In the second case, c = a/3, the integer literal will be converted to a double value by the compiler before any code is generated.
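
One quick way to see this, assuming GCC or Clang (or Compiler Explorer) is at hand, is to compile a small sketch like the one below with optimizations (e.g. g++ -O2 -S) and compare the assembly emitted for the two functions; the function names are purely illustrative, and both are expected to compile to the same division instruction:

// divides by a double literal
double div_by_double(double a) {
    return a / 3.0;
}

// divides by an int literal; the compiler converts 3 to 3.0 before code generation
double div_by_int(double a) {
    return a / 3;
}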

From this Draft C++ Standard:

8.3 Usual arithmetic conversions          [expr.arith.conv]

1    Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

(1.3) – Otherwise, if either operand is double, the other shall be converted to double.


Note that, in this Draft C11 Standard, §6.3.1.8 (Usual arithmetic conversions) has equivalent (indeed, near-identical) text.
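
As a small illustration of that rule (a sketch, assuming a C++17 compiler for std::is_same_v), the type of the mixed expression can be checked at compile time; a/3 is already a double expression before any code is generated:

#include <type_traits>

double a = 1.0;

// both expressions have type double after the usual arithmetic conversions
static_assert(std::is_same_v<decltype(a / 3), double>, "a/3 has type double");
static_assert(std::is_same_v<decltype(a / 3.0), double>, "a/3.0 has type double");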

  • Note that, for a target architecture where there is an optimization to be had when dividing a double by an int, any decent compiler will use such code for a literal like 3.0 just as it would for 3 - it will surely spot that the fractional part is zero. Commented Aug 15, 2021 at 13:42

There is no difference. The integer operand is implicitly converted to a double, so the two expressions are exactly equivalent.
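
The same holds for the multiplication part of the question: 3 * a and 3.0 * a become the same double multiplication once the conversion is applied. A minimal compile-time sketch (the value of a is chosen only for illustration):

constexpr double a = 2.5;

// 3 is converted to 3.0, so both products are the same double value
static_assert(3 * a == 3.0 * a, "3 * a and 3.0 * a are equivalent");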
