
Most Smalltalk dialects currently implement a naive, inexact floating-point modulus (fmod/remainder).

I have just changed this to improve Squeak/Pharo's (and eventually other Smalltalks') adherence to the standards (IEEE 754, ISO/IEC 10967), as I have already done for other state-of-the-art floating-point operations.

However, for these changes to be adopted, I anticipate that adherence to the standards alone will not be enough to convince my peers, so explaining in which circumstances this exactness really matters would help me a lot. I could not find a good example by myself so far.

Does anyone here know why/when/where (in other words, in which algorithm) such exactness of the modulus would matter?
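
For concreteness, here is a small sketch in C (only because C's fmod returns the truncated remainder, which is always exactly representable) of the kind of discrepancy I mean; the inputs 4.0 and 0.1 make the naive formula report a false exact division:

```c
#include <math.h>
#include <stdio.h>

/* Naive remainder: x - truncate(x/y)*y, with the division and the
 * multiplication each rounded to nearest.  Both roundings can lose
 * information, so the result is not the exact remainder. */
static double naive_rem(double x, double y)
{
    return x - trunc(x / y) * y;
}

int main(void)
{
    double x = 4.0, y = 0.1;
    /* 4.0/0.1 rounds to exactly 40.0 and 40.0*0.1 rounds to exactly 4.0,
     * so the naive formula claims that 0.1 divides 4.0 exactly. */
    printf("naive: %.17g\n", naive_rem(x, y)); /* prints 0 */
    printf("fmod : %.17g\n", fmod(x, y));      /* prints ~0.099999999999999978 */
    return 0;
}
```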

  • I think you might get better answers on Computational Science, since such issues are more important in their (sub-)domain. In any case, the question is on-topic here, and you should give our answerers a few days before reposting.
    – Raphael
    Commented May 4, 2014 at 10:36
  • I've seen code relying on fmod/modf exactness that made me shudder, but the possibility that a language might dare to implement a naive, inexact floating-point modulus seems even scarier. Example code: (1) take the remainder; (2) stop if it is zero; (3) multiply it by 2 and go to (1). One can do some useful work during this process, but the crucial point is that its termination relies on the exactness of the remainder and of the multiplication by 2 (a sketch of such a loop appears after these comments). I'm not sure whether I should give a more complete answer here, because Computational Science seems more appropriate for this question. Commented May 4, 2014 at 12:41
  • One guess: normalizing the input of a trigonometric function.
    – user4577
    Commented May 4, 2014 at 19:58
  • @ThomasKlimpel I'm interested if you find references. Note that the naive remainder is defined as (x - ((x/y) truncated * y)) with IEEE round-to-nearest-even operations; we can prove that exactRem(x,y)==0 => naiveRem(x,y)==0. The problem is the converse: false positives for exact division, like naiveRem(4.0,0.1)==0.0, which unfortunately fit naive expectations in many cases!
    – aka.nice
    Commented May 6, 2014 at 15:06
  • @PaulA.Clayton Yes, for sine in degrees, maybe... Though my guess is that the naive rem works as well as the exact rem up to approximately 1e16 degrees, because the set bits of 360 span only 6 bits, and because the division by 360 seems never to round up for predecessors of multiples of 360... For radians, a decent library requires multiple precision anyway; does an exact rem limited to double precision really help in such a case?
    – aka.nice
    Commented May 6, 2014 at 17:42
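
Regarding the remainder-doubling loop Thomas Klimpel describes above, here is a minimal C sketch of one way it can do useful work: printing the binary digits of (x mod y)/y. The iteration bound is only a safety net; reaching zero relies on every remainder and every doubling being exact.

```c
#include <math.h>
#include <stdio.h>

/* (1) Take the remainder.  (2) Stop if it is zero.  (3) Double it and
 * go back to (1).  Each pass yields one binary digit of (x mod y)/y. */
static void binary_fraction(double x, double y)
{
    double r = fmod(x, y);                     /* step (1): exact remainder */
    printf("(%g mod %g)/%g = 0.", x, y, y);
    for (int k = 0; r != 0.0 && k < 64; ++k) { /* step (2), plus a safety bound */
        double d = 2.0 * r;                    /* step (3): exact, barring overflow */
        putchar(d >= y ? '1' : '0');           /* next binary digit of r/y */
        r = fmod(d, y);                        /* exact remainder again */
    }
    putchar('\n');
}

int main(void)
{
    binary_fraction(5.0, 8.0); /* 5/8 = 0.101 in binary; the loop stops after 3 digits */
    return 0;
}
```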

1 Answer


Note that an inexact floating-point implementation affects the weather.

There have been tests running weather predictions with the same inputs on different hardware, and the predictions diverged. If you are running an iterative algorithm, then a small rounding difference here or there can result in a butterfly effect that changes sunshine into rain.
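
To make that concrete with something far smaller than a weather model, here is a toy C sketch: a chaotic map iterated from two starting values one ulp apart. Watch the two columns drift apart as the iterations accumulate.

```c
#include <math.h>
#include <stdio.h>

/* Toy butterfly effect: the logistic map x <- 3.9*x*(1-x), started from
 * two values that differ by a single ulp.  The gap between the two
 * trajectories grows, on average exponentially, until they have nothing
 * in common -- which is what a one-ulp rounding difference can do to an
 * iterative algorithm. */
int main(void)
{
    double a = 0.5;
    double b = nextafter(0.5, 1.0); /* 0.5 plus one ulp */
    for (int i = 1; i <= 100; ++i) {
        a = 3.9 * a * (1.0 - a);
        b = 3.9 * b * (1.0 - b);
        if (i % 20 == 0)
            printf("step %3d: a = %.15f  b = %.15f\n", i, a, b);
    }
    return 0;
}
```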

The rounding rules in the standards (IEEE 754, ISO/IEC 10967) have been carefully thought out so that numeric algorithms behave predictably, with the best achievable accuracy, and reproduce the same result every time. If you do not follow the standard, numerical algorithms designed for those rounding rules will break, and iterative algorithms like weather predictions can effectively give a random result.

(and doesn't that say something about weather predictions? :)

  • On the other hand, if the butterfly effect changes sunshine to rain, then your results were not useful anyway.
    – gnasher729
    Commented May 22, 2018 at 22:21
  • Once upon a time, I saved float data in ASCII with not enough digits. One client wanted to show me a problem, but after restoring the data from the ASCII file, the problem vanished. I said that being a few ulps off should not matter, and that if his problem was ill-conditioned, there was nothing I could do anyway. He said that that was his business; mine was to provide software enabling reproducibility of his own problems. He was right.
    – aka.nice
    Commented May 23, 2018 at 8:03
  • That's why you should output floating-point numbers for saves as hexadecimals using %a (see the sketch below). Commented May 24, 2018 at 10:56
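
A minimal sketch of that %a round trip in C: when the precision is omitted, %a prints enough hexadecimal digits for an exact representation, and scanning the string back with %la reproduces the value bit for bit.

```c
#include <stdio.h>

/* Save a double as hexadecimal floating point and read it back. */
int main(void)
{
    double x = 0.1, y = 0.0;
    char buf[64];

    snprintf(buf, sizeof buf, "%a", x); /* e.g. "0x1.999999999999ap-4" */
    sscanf(buf, "%la", &y);             /* hexadecimal form reads back exactly */

    printf("saved as %s, round trip is %s\n", buf, x == y ? "exact" : "lossy");
    return 0;
}
```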
