
I'm doing unit testing and I've got this line:

assertEquals(1.1886027926838422606868849265505866347, 1.18860279268384230000000000000000000000,0);

With a delta of 0, the two values should have to be exactly equal in order to pass, and they are clearly not. However, this test passes; try it yourself.

Changing the delta to 1E-50 still passes.

Why is it passing when they are two very different numbers?

  • Probably because Java doesn't support numbers that big.
    – PC Luddite
    Commented Aug 25, 2015 at 2:46
  • @luoluo how is this similar to that question?
    – Aequitas
    Commented Aug 25, 2015 at 2:55

2 Answers


This is because the Java compiler rounds both numeric literals to the same double value.

Run this experiment:

System.out.println(1.1886027926838422606868849265505866347);
System.out.println(1.18860279268384230000000000000000000000);

This prints the same number (demo):

1.1886027926838423
1.1886027926838423

The double primitive type can only handle up to 16 decimal places, so it cannot represent these numbers all the way to the last digit.
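You can also check the rounding directly: the two literals compare equal with `==`, and `Math.ulp` shows how coarse the spacing between adjacent doubles is at this magnitude (a minimal sketch; the class name is mine):

```java
public class LiteralRounding {
    public static void main(String[] args) {
        double a = 1.1886027926838422606868849265505866347;
        double b = 1.18860279268384230000000000000000000000;

        // Both literals round to the same 64-bit double.
        System.out.println(a == b);      // true

        // The spacing between adjacent doubles near 1.19 is about 2.2e-16,
        // much larger than the difference between the written literals.
        System.out.println(Math.ulp(a));
    }
}
```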

If you want full precision, use BigDecimal instead.
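For example, when the same values are built from string literals, BigDecimal keeps every written digit and the two numbers no longer compare equal (a sketch; the class name is mine):

```java
import java.math.BigDecimal;

public class BigDecimalPrecision {
    public static void main(String[] args) {
        // The String constructor preserves every written digit exactly.
        BigDecimal a = new BigDecimal("1.1886027926838422606868849265505866347");
        BigDecimal b = new BigDecimal("1.18860279268384230000000000000000000000");

        // compareTo ignores trailing zeros, so it compares numeric value only.
        System.out.println(a.compareTo(b) == 0);  // false
        System.out.println(b.subtract(a));        // prints the tiny difference
    }
}
```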

  • You say 16 decimal places, so if I multiply everything by 10^16 can I have another 16 decimal places?
    – Aequitas
    Commented Aug 25, 2015 at 2:56
  • @Aequitas No. The (roughly) 16 decimal places is a limitation of the language. The number has to be rounded. docs.oracle.com/javase/tutorial/java/nutsandbolts/…
    – PC Luddite
    Commented Aug 25, 2015 at 3:01
  • 1
    @Aequitas Not exactly: the number of correct digits you get would stay the same, but it would get re-distributed among the whole and the fractional part (demo). Commented Aug 25, 2015 at 3:02
  • so it's 16 SF not DP ?
    – Aequitas
    Commented Aug 25, 2015 at 3:05
  • 2
@ThePcLuddite To be more accurate, it's a limitation of the IEEE 754 double-precision binary floating-point standard that most CPUs and programming languages use to store double values (for languages that support double).
    – Andreas
    Commented Aug 25, 2015 at 3:58

The difference between the two numbers is too small for a double to represent, so they compare as equal. A double gives you roughly 16 significant decimal digits of precision.
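As a quick illustration of that limit, an increment well below about 1e-16 simply vanishes when added to 1.0 (a sketch; the class name is mine):

```java
public class PrecisionLimit {
    public static void main(String[] args) {
        // 1e-17 is below the resolution of doubles near 1.0, so it is lost.
        System.out.println(1.0 + 1e-17 == 1.0);  // true

        // 1e-15 is above that resolution and survives the addition.
        System.out.println(1.0 + 1e-15 == 1.0);  // false
    }
}
```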
