194

According to this java.sun page, == is the equality comparison operator for floating point numbers in Java.

However, when I type this code:

if(sectionID == currentSectionID)

into my editor and run static analysis, I get: "JAVA0078 Floating point values compared with =="

What is wrong with using == to compare floating point values? What is the correct way to do it? 

6
  • 32
Because comparing floats with == is problematic, it's unwise to use them as IDs; the names in your example code suggest that's what you are doing. Long integers (longs) are preferred and are the de facto standard for IDs. Commented Jul 6, 2009 at 17:35
  • 12
    Mandatory link :-) What Every Computer Scientist Should Know About Floating-Point Arithmetic
    – nos
    Commented Jul 6, 2009 at 19:49
  • 4
    Yeah, was that just a random example or do you actually use floats as IDs? Is there a reason? Commented Feb 13, 2010 at 20:12
  • 7
    "for float fields, use the Float.compare method; and for double fields, use Double.compare. The special treatment of float and double fields is made necessary by the existence of Float.NaN, -0.0f and the analogous double constants; see the Float.equals documentation for details." (Joshua Bloch: Effective Java)
    – lbalazscs
    Commented Nov 3, 2014 at 0:58
  • @nos Old comment, I know, but the link seems to be broken nowadays (at least in my browser). The updated link should be this: docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html Commented Jun 7, 2022 at 13:32

21 Answers

221

The correct way to test floats for 'equality' is:

if(Math.abs(sectionID - currentSectionID) < epsilon)

where epsilon is a very small number like 0.00000001, depending on the desired precision.
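For instance, wrapped up as a small helper method (the method name and the 1e-6f tolerance below are only illustrative, not standard API):

public static boolean approximatelyEqual(float a, float b, float epsilon) {
    // treat the two floats as equal if they differ by less than the tolerance
    return Math.abs(a - b) < epsilon;
}

// usage: if (approximatelyEqual(sectionID, currentSectionID, 1e-6f)) { ... }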

8
  • 29
    See the link in the accepted answer (cygnus-software.com/papers/comparingfloats/comparingfloats.htm) for why a fixed epsilon isn't always a good idea. Specifically, as the values in the floats being compared get large (or small), the epsilon is no longer appropriate. (Using epsilon is fine if you know your float values are all relatively reasonable, though.)
    – P.T.
    Commented Dec 13, 2011 at 2:05
  • 1
@P.T Could he multiply epsilon by one of the numbers and change the check to if(Math.abs(sectionID - currentSectionID) < epsilon*sectionID) to tackle that issue? Commented Nov 6, 2014 at 23:11
  • 3
    This may even be the best answer so far, but it is still flawed. Where do you get the epsilon from? Commented Apr 27, 2015 at 15:34
  • 1
    @MichaelPiefel it already says: "depending on the desired precision". Floats by their nature are kind of like physical values: you're only interested in some limited number of positions depending on the total inaccuracy, any differences beyond that are considered moot. Commented Aug 28, 2016 at 22:29
  • But the OP really only wanted to test for equality, and since this is known to be unreliable, has to use a different method. Still, I don’t fathom he knows what his “desired precision” even is; so if all you want is a more reliable equality test, the question remains: Where do you get the epsilon from? I proposed using Math.ulp() in my answer to this question. Commented Aug 29, 2016 at 7:15
57

Floating point values can be off by a little bit, so they may not report as exactly equal. For example, setting a float to "6.1" and then printing it out again, you may get a reported value of something like "6.099999904632568359375". This is fundamental to the way floats work; therefore, you don't want to compare them for exact equality, but rather compare within a range: that is, check whether the difference between the float and the number you want to compare it to is less than a certain absolute value.

This article on the Register gives a good overview of why this is the case; useful and interesting reading.
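If you want to see the effect yourself, BigDecimal's double constructor shows the exact value actually stored for the nearest float to 6.1 (a minimal sketch):

float f = 6.1f;
System.out.println(f);                           // 6.1  (Float.toString rounds for display)
System.out.println(new java.math.BigDecimal(f)); // 6.099999904632568359375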

5
@kevindtimm : so you will do your equality tests like so then: if (number == 6.099999904632568359375) any time you wish to know if number equals 6.1... Yes you are correct... everything in the computer is strictly deterministic, it's just that the approximations used for floats are counter-intuitive when doing math problems.
    – Newtopian
    Commented Jul 22, 2009 at 1:29
  • Floating point values are only nondeterministically imprecise on very specific hardware. Commented Jun 3, 2011 at 18:57
  • 1
    @Stuart I could be mistaken, but I don't think the FDIV bug was non-deterministic. The answers given by the hardware did not conform to specification, but they were deterministic, in that the same calculation always produced the same incorrect result
    – Gravity
    Commented Jul 27, 2011 at 4:43
  • @Gravity You can argue that any behavior is deterministic given a specific set of caveats. Commented Aug 21, 2011 at 4:41
  • 1
    Floating point values are not imprecise. Every floating point value is exactly what it is. What may be imprecise is the result of a floating point calculation. But beware! When you see something like 0.1 in a program, that's not a floating point value. That's a floating point literal---a string that compiler converts into a floating point value by doing a calculation. Commented Nov 6, 2015 at 18:24
23

Just to give the reason behind what everyone else is saying.

The binary representation of a float is kind of annoying.

In binary, most programmers know the correlation between 1b=1d, 10b=2d, 100b=4d, 1000b=8d

Well it works the other way too.

.1b=.5d, .01b=.25d, .001b=.125d, ...

The problem is that there is no exact way to represent most decimal numbers like .1, .2, .3, etc. All you can do is approximate them in binary. The system does a little fudge-rounding when the numbers print so that it displays .1 instead of .10000000000001 or .099999999999 (which are probably just as close to the stored representation as .1 is).

Edit from comment: The reason this is a problem is our expectations. We fully expect 2/3 to be fudged at some point when we convert it to decimal, either .7 or .67 or .666667... But we don't automatically expect .1 to be rounded in the same way as 2/3, and that's exactly what's happening.

By the way, if you are curious, the number it stores internally is a pure binary representation using a binary "scientific notation". So if you told it to store the decimal number 10.75d, it would store 1010b for the 10, and .11b for the .75. So it would store .101011, and then it saves a few bits at the end to say: move the decimal point four places right.

(Although technically it's no longer a decimal point, it's now a binary point, but that terminology wouldn't have made things more understandable for most people who would find this answer of any use.)
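For the curious, here is a small sketch of what that looks like in the actual IEEE 754 encoding, which normalizes to 1.xxxxx (and omits the leading 1, as a comment below points out) rather than .1xxxxx:

int bits = Float.floatToIntBits(10.75f);
System.out.println(Integer.toBinaryString(bits));
// prints 1000001001011000000000000000000
// sign 0, exponent 10000010 (130 = 127 + 3), stored mantissa 01011000...
// i.e. an implicit-leading-one 1.01011b shifted 3 places: 1010.11b = 10.75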

4
  • 1
    @Matt K - um, not fixed point; if you "save a few bits at the end to say move the decimal point [N] bits to the right", that is floating point. Fixed point takes the position of the radix point to be, well, fixed. Also, in general, since shifting the binamal (?) point can always be made to leave you with a '1' in the leftmost position, you will find some systems that omit the leading '1', devoting the space thus liberated (1 bit!) to extending the range of the exponent.
    – JustJeff
    Commented Jul 6, 2009 at 18:52
  • The problem has nothing to do with binary vs. decimal representation. With decimal floating-point, you still have things like (1 / 3) * 3 == 0.9999999999999999999999999999.
    – dan04
    Commented Oct 28, 2010 at 12:44
  • 2
    @dan04 yes, because 1/3 has no decimal OR binary representation, it does have a trinary representation and would convert correctly that way :). The numbers I listed (.1, .25, etc) all have perfect decimal representations but no binary representation--and people are used to those having "exact" representations. BCD would handle them perfectly. That's the difference.
    – Bill K
    Commented Oct 28, 2010 at 17:19
  • 1
    This should have way more upvotes, since it describes the REAL problem behind the issue.
    – Levite
    Commented Mar 25, 2015 at 7:51
19

What is wrong with using == to compare floating point values?

Because it's not true that 0.1 + 0.2 == 0.3
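A one-line check of that claim (the literals here are doubles; as a comment below notes, with float literals the rounding happens to land back on 0.3f):

System.out.println(0.1 + 0.2);        // 0.30000000000000004
System.out.println(0.1 + 0.2 == 0.3); // false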

2
  • 8
    what about Float.compare(0.1f+0.2f, 0.3f) == 0 ? Commented Jan 26, 2016 at 4:31
  • 0.1f + 0.2f == 0.3f but 0.1d + 0.2d != 0.3d. By default, 0.1 + 0.2 is a double. 0.3 is a double as well. Commented Jun 28, 2019 at 0:29
19

As of today, the quick & easy way to do it is:

if (Float.compare(sectionID, currentSectionID) == 0) {...}

However, the docs do not clearly specify the margin of difference (the epsilon from @Victor's answer) that is always present in calculations on floats, but it should be something reasonable, as it is part of the standard language library.

Yet if a higher or customized precision is needed, then

float epsilon = Float.MIN_NORMAL;  
if(Math.abs(sectionID - currentSectionID) < epsilon){...}

is another solution option.
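For reference, Float.compare imposes a total ordering, so it only disagrees with == at the edge cases (a quick sketch):

System.out.println(Float.compare(1.0f, 1.0f) == 0);           // true
System.out.println(Float.compare(Float.NaN, Float.NaN) == 0); // true, although NaN == NaN is false
System.out.println(Float.compare(-0.0f, 0.0f) == 0);          // false, although -0.0f == 0.0f is true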

1
  • 2
The docs which you linked state "the value 0 if f1 is numerically equal to f2", which makes it the same as doing (sectionId == currentSectionId), which is not accurate for floating points. The epsilon method is the better approach, which is in this answer: stackoverflow.com/a/1088271/4212710
    – typoerrpr
    Commented Aug 22, 2019 at 1:43
14

I think there is a lot of confusion around floats (and doubles), so it is good to clear it up.

  1. There is nothing inherently wrong with using floats as IDs in a standard-compliant JVM [*]. If you simply set the float ID to x, do nothing with it (i.e. no arithmetic) and later test for y == x, you'll be fine. There is also nothing wrong with using them as keys in a HashMap. What you cannot do is assume equalities like x == (x - y) + y, etc. That being said, people usually use integer types as IDs, and you can observe that most people here are put off by this code, so for practical reasons it is better to adhere to conventions. Note that there are as many different double values as there are long values, so you gain nothing by using double. Also, generating the "next available ID" can be tricky with doubles and requires some knowledge of floating-point arithmetic. Not worth the trouble.

  2. On the other hand, relying on numerical equality of the results of two mathematically equivalent computations is risky. This is because of the rounding errors and loss of precision when converting from decimal to binary representation. This has been discussed to death on SO.

[*] When I said "standard-compliant JVM" I wanted to exclude certain brain-damaged JVM implementations. See this.
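A minimal sketch of both points (the values below are arbitrary illustrations): plain assignment preserves equality, while arithmetic that loses precision does not.

float x = 0.1f;     // used as an "ID"
float copy = x;
System.out.println(copy == x);        // true: no arithmetic was performed on it

float y = 1e8f;
System.out.println((x - y) + y == x); // false: the subtraction swallows x entirely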

9
  • When using floats as IDs, one must be careful to either ensure that they are compared using == rather than equals, or else ensure that no float which compares unequal to itself gets stored in a table. Otherwise, a program which tries to e.g. count how many unique results can be produced from an expression when fed various inputs may regard every NaN value as unique.
    – supercat
    Commented Jul 27, 2014 at 17:54
The above refers to Float, not to float.
    – quant_dev
    Commented Aug 6, 2017 at 22:27
  • What's talking about Float? If one tries to build a table of unique float values and compares them with ==, the horrible IEEE-754 comparison rules will result in the table being flooded with NaN values.
    – supercat
    Commented Aug 7, 2017 at 14:26
  • 1
    This looks like the right answer when we talk about using float as an ID column. However, I wonder how the "float behaviour" of the DB plays into this equation. So, then that may take us back to the Epsilon method. In the end, probably a bad idea to use float as ID.
    – Teddy
    Commented Sep 21, 2023 at 15:13
  • 1
    @Teddy You're correct, the way DBs handle floats is adding additional complexity here, probably not worth the trouble.
    – quant_dev
    Commented Oct 11, 2023 at 9:30
8

Floating point values are not reliable, due to roundoff error.

As such they should probably not be used as key values, such as sectionID. Use integers instead, or long if int doesn't contain enough possible values.

4
  • 2
    Agreed. Given that these are IDs, there is no reason to complicate things with floating point arithmetic.
    – Yohnny
    Commented Jul 6, 2009 at 17:34
  • 2
    Or a long. Depending on how many unique IDs get generated in the future, an int may not be large enough. Commented Jul 6, 2009 at 18:23
  • How precise is double compared to float? Commented Mar 17, 2017 at 17:54
  • 1
    @ArvindhMani doubles are much more precise, but they are also floating point values, so my answer was meant to include both float and double. Commented Mar 20, 2017 at 13:16
7

In addition to previous answers, you should be aware that there are strange behaviours associated with -0.0f and +0.0f (they are == but not equals) and Float.NaN (it is equals but not ==) (hope I've got that right - argh, don't do it!).

Edit: Let's check!

import static java.lang.Float.NaN;
public class Fl {
    public static void main(String[] args) {
        System.err.println(          -0.0f   ==              0.0f);   // true
        System.err.println(new Float(-0.0f).equals(new Float(0.0f))); // false
        System.err.println(            NaN   ==               NaN);   // false
        System.err.println(new Float(  NaN).equals(new Float( NaN))); // true
    }
} 

Welcome to IEEE 754.

3
  • If something is ==, then they are identical down to the bit. How could they not be equals()? Maybe you have it backwards?
    – Matt K
    Commented Jul 6, 2009 at 18:15
  • @Matt NaN is special. Double.isNaN(double x) in Java is actually implemented as { return x != x; }...
    – quant_dev
    Commented Jul 6, 2009 at 19:10
  • 2
    With floats, == doesn't mean that numbers are "identical to the bit" (the same number can be represented with different bit patterns, though only one of them is normalized form). As well, -0.0f and 0.0f are represented by different bit patterns (the sign bit is different), but compare as equal with == (but not with equals). Your assumption that == is bitwise comparison is, generally speaking, wrong. Commented Jul 22, 2009 at 1:27
7

This is a problem not specific to Java. Using == to compare two floats/doubles/any decimal type number can potentially cause problems because of the way they are stored. A single-precision float (as per IEEE standard 754) has 32 bits, distributed as follows:

1 bit - Sign (0 = positive, 1 = negative)
8 bits - Exponent (a special (bias-127) representation of the x in 2^x)
23 bits - Mantissa. The actual number that is stored.

The mantissa is what causes the problem. It's kinda like scientific notation, only the number in base 2 (binary) looks like 1.110011 x 2^5 or something similar. But in binary, the first digit is always a 1 (except for the representation of 0).

Therefore, to save a bit of memory space (pun intended), IEEE decided that the 1 should be assumed. For example, a mantissa of 1011 really is 1.1011.

This can cause some issues with comparison, especially with 0, since 0 cannot possibly be represented exactly in a float. This is the main reason == is discouraged, in addition to the floating point math issues described by other answers.

Java has a unique problem in that the language is universal across many different platforms, each of which could have its own unique float format. That makes it even more important to avoid ==.

The proper way to compare two floats (not-language specific mind you) for equality is as follows:

if(ABS(float1 - float2) < ACCEPTABLE_ERROR)
    //they are approximately equal

where ACCEPTABLE_ERROR is #defined or some other constant equal to 0.000000001 or whatever precision is required, as Victor mentioned already.

Some languages have this functionality or this constant built in, but generally this is a good habit to be in.

3
  • 4
    Java has a defined behavior for floats. It is not platform dependent.
    – Yishai
    Commented Jan 22, 2010 at 0:21
  • 1
    The term used in the IEEE-754 standard is “significand,” not “mantissa.” The leading bit of the significand is 1 only if the exponent field is 1-254. If the exponent field is 0, the leading bit of the significand is 0. The statement “0 cannot possibly be represented exactly in a float” is false; 0 is represented with all bits zero (and with the leading bit set to 1 which is distinguished as −0, which equals +0). This does not cause any issue with comparison, and it is not “the main reason == is discouraged.” Commented Sep 9, 2020 at 12:03
  • Re “The proper way to compare two floats”: There is no general solution for comparing floating-point numbers that contain errors from previous operations.. Commented Sep 9, 2020 at 12:04
7

Here is a very long (but hopefully useful) discussion about this and many other floating point issues you may encounter: What Every Computer Scientist Should Know About Floating-Point Arithmetic

0
4

First of all, are they float or Float? If one of them is a Float, you should use the equals() method. Also, probably best to use the static Float.compare method.

4

You can use Float.floatToIntBits().

Float.floatToIntBits(sectionID) == Float.floatToIntBits(currentSectionID)
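This is essentially what Float.equals does under the hood, so unlike plain ==, it treats NaN as equal to itself and distinguishes -0.0f from 0.0f (floatToIntBits canonicalizes all NaNs to a single bit pattern):

System.out.println(Float.floatToIntBits(Float.NaN) == Float.floatToIntBits(Float.NaN)); // true
System.out.println(Float.floatToIntBits(-0.0f) == Float.floatToIntBits(0.0f));          // false
System.out.println(-0.0f == 0.0f);                                                      // true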
2
  • 1
    You are on the right track. floatToIntBits() is the right way to go, but it would be easier to just use Float's built in equals() function. See here: stackoverflow.com/a/3668105/2066079 . You can see that the default equals() utilizes floatToIntBits internally.
    – dberm22
    Commented Apr 4, 2014 at 13:03
  • 1
    Yes if they are Float objects. You can use above equation for primitives.
    – aamadmi
    Commented Apr 4, 2014 at 18:08
4

The following automatically uses the best precision:

/**
 * Compare two floats for (almost) equality. Will check whether they are
 * at most 5 ULP apart.
 */
public static boolean isFloatingEqual(float v1, float v2) {
    if (v1 == v2)
        return true;
    float absoluteDifference = Math.abs(v1 - v2);
    float maxUlp = Math.max(Math.ulp(v1), Math.ulp(v2));
    return absoluteDifference < 5 * maxUlp;
}

Of course, you might choose more or less than 5 ULPs (‘unit in the last place’).

If you’re into the Apache Commons library, the Precision class has compareTo() and equals() with both epsilon and ULP.
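A quick usage sketch of the helper above:

System.out.println(isFloatingEqual(0.1f + 0.2f, 0.3f)); // true
System.out.println(isFloatingEqual(1.0f, 1.0001f));     // false: roughly 840 ULPs apart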

2
  • when changing float to double, this method doesn't work as isDoubleEqual(0.1+0.2-0.3, 0.0) == false
    – hychou
    Commented Sep 26, 2017 at 2:31
  • It seems you need more like 10_000_000_000_000_000L as the factor for double to cover this. Commented Sep 26, 2017 at 11:24
3

You may want it to be ==, but 123.4444444444443 != 123.4444444444442

3

If you *have to* use floats, the strictfp keyword may be useful.

http://en.wikipedia.org/wiki/strictfp
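A minimal sketch (the class name is made up): strictfp pins intermediate float/double arithmetic to IEEE 754 semantics on every platform, and since Java 17 that behaviour is the default, so the keyword is effectively a no-op there.

public strictfp class SectionMath {
    static float difference(float a, float b) {
        return a - b; // rounds identically on every JVM
    }
}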

1
  • Or may be even more useful for different architectures.
    – joey rohan
    Commented Apr 3, 2015 at 7:57
2

Two different calculations which produce equal real numbers do not necessarily produce equal floating point numbers. People who use == to compare the results of calculations usually end up being surprised by this, so the warning helps flag what might otherwise be a subtle and difficult to reproduce bug.

2

Are you dealing with outsourced code that would use floats for things named sectionID and currentSectionID? Just curious.

@Bill K: "The binary representation of a float is kind of annoying." How so? How would you do it better? There are certain numbers that cannot be represented in any base properly, because they never end. Pi is a good example. You can only approximate it. If you have a better solution, contact Intel.

1

As mentioned in other answers, doubles can have small deviations. And you could write your own method to compare them using an "acceptable" deviation. However ...

There is an Apache Commons class for comparing doubles: org.apache.commons.math3.util.Precision

It contains some interesting constants: SAFE_MIN and EPSILON, which are the maximum possible deviations of simple arithmetic operations.

It also provides the necessary methods to compare, equate, or round doubles (using ULPs or absolute deviation).
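A usage sketch, assuming Commons Math 3.x is on the classpath:

import org.apache.commons.math3.util.Precision;

public class PrecisionDemo {
    public static void main(String[] args) {
        System.out.println(Precision.equals(0.1 + 0.2, 0.3, 1e-9));    // true: absolute tolerance
        System.out.println(Precision.equals(0.1 + 0.2, 0.3, 1));       // true: within 1 ULP
        System.out.println(Precision.compareTo(0.1 + 0.2, 0.3, 1e-9)); // 0: equal within eps
    }
}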

0

As a one-line answer, I can say you should use:

Float.floatToIntBits(sectionID) == Float.floatToIntBits(currentSectionID)

To help you learn more about using the related operators correctly, I'll elaborate on some cases here. Generally, there are three ways to test strings in Java: you can use ==, .equals(), or Objects.equals().

How are they different? == tests for reference equality in strings, meaning it finds out whether the two objects are the same. On the other hand, .equals() tests whether the two strings are logically equal in value. Finally, Objects.equals() tests for nulls in the two strings and then determines whether to call .equals().

Ideal operator to use

Well, this has been the subject of lots of debate because each of the three operators has its own unique set of strengths and weaknesses. For example, == is often the preferred option when comparing object references, but there are cases where it may seem to compare string values as well.

However, what you get can be a misleading result, because Java creates the illusion that you are comparing values when in reality you are not. Consider the two cases below:

Case 1:

String a="Test";
String b="Test";
if(a==b) ===> true

Case 2:

String nullString1 = null;
String nullString2 = null;
//evaluates to true
nullString1 == nullString2;
//throws an exception
nullString1.equals(nullString2);

So, it's way better to use each operator for testing the specific attribute it's designed for. But in almost all cases, Objects.equals() is the more universal option, which is why experienced web developers opt for it.

Here you can get more details: http://fluentthemes.com/use-compare-strings-java/

-2

The correct way would be

java.lang.Float.compare(float1, float2)
4
  • 7
    Float.compare(float1, float2) returns an int, so it cannot be used instead of float1 == float2 in the if condition. Moreover, it doesn't really solve the underlying problem which this warning is referring to -- that if floats are results of numerical calculation, float1 != float2 may occur just due to rounding errors.
    – quant_dev
    Commented Jul 6, 2009 at 19:07
  • 1
    right, you cannot copy paste, you have to check the doc first.
    – Eric
    Commented Jul 6, 2009 at 19:17
  • 2
    What you can do instead of float1 == float2 is Float.compare(float1,float2) == 0.
    – deterb
    Commented Jul 19, 2009 at 18:23
  • 29
    This doesn't buy you anything - you still get Float.compare(1.1 + 2.2, 3.3) != 0 Commented Jul 22, 2009 at 1:25
-3

One way to reduce rounding error is to use double rather than float. This won't make the problem go away, but it does reduce the amount of error in your program, and float is almost never the best choice, IMHO.
