I came across an operation with some confusion.
var a = 0.1;
var b = 0.2;
var c = 0.3;
console.log(a); // 0.1
console.log(b); // 0.2
console.log(c); // 0.3
But,
console.log(a+b+c) // 0.6000000000000001
While
console.log(a+(b+c)) // 0.6
I understand that JavaScript uses binary floating point and thus can't accurately represent 0.1, 0.2 or 0.3. But what do the brackets around (b+c) do? Is there any conversion or rounding up here?
Many thanks,
Asked Sep 19, 2014 at 2:09 by MikeNQ. 5 comments:
- Well, in the first case you are doing (0.1 + 0.2) + 0.3 = 0.3 + 0.3 and in the second case you do 0.1 + (0.2 + 0.3) = 0.1 + 0.5. I guess the rounding error in the first case is larger than in the second case. – Felix Kling, Sep 19, 2014 at 2:16
- I had the same thought as Felix, but JavaScript's "number" type is double-precision floating point. That ought to give much better precision than the example. (I've reproduced the OP's example in FF and Chrome, and also found that (a+b)+c gives the same result as a+b+c, so the problem is order of evaluation, and not the presence of parentheses.) – Bob Brown, Sep 19, 2014 at 2:22
- Yes, what I mean is that, since the brackets change the order, why does rounding behave differently on (0.2 + 0.3) than on (0.1 + 0.2)? – MikeNQ, Sep 19, 2014 at 2:33
- A masochist could work this out by converting the values to IEEE 754 DP floating point binary and doing the arithmetic. I think I'll wait for a mathematician to come along. – Bob Brown, Sep 19, 2014 at 2:56
- @BobBrown, "...that ought to give much better precision than the example." The example has 16 digits of precision. That is about the limit of precision for double-precision binary floats. – Solomon Slow, Sep 22, 2014 at 22:38
3 Answers

How a JavaScript Number is Defined
A JavaScript number is represented in IEEE 754 double-precision binary floating point (binary64): scientific notation, using 2 as the base. A number has 64 bits, split into 3 parts (from high to low bits):
- The first bit is the sign: 0 - positive; 1 - negative
- The next 11 bits are the exponent part
- The last 52 bits are the mantissa / fraction
So, a float number is calculated as: (-1) ^ sign * (2 ^ exponent) * significand
Note: since the exponent of a scientific notation can be either positive or negative, the actual exponent of a binary64 number is obtained by subtracting the exponent bias (the middle value, 1023) from the stored 11-bit exponent value.
The standard also defines the significand value to be between [1, 2).
As the first digit of the significand is always 1, it is implied and not stored in the bit pattern. So the significand actually has 53 bits of precision; the stored 52 bits are just the mantissa / fraction part.
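This bit layout can be inspected directly from JavaScript. The helper below (decodeFloat64 is a hypothetical name written for illustration, not part of the original answer) extracts the three fields using a DataView:

```javascript
// Decode the sign, exponent and mantissa fields of a binary64 number.
// decodeFloat64 is a hypothetical helper written for illustration.
function decodeFloat64(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  const bits = view.getBigUint64(0);
  return {
    sign: Number(bits >> 63n),                        // 1 sign bit
    exponent: Number((bits >> 52n) & 0x7FFn) - 1023,  // 11 exponent bits, bias removed
    mantissa: (bits & 0xFFFFFFFFFFFFFn).toString(2).padStart(52, "0"), // 52 fraction bits
  };
}

const f = decodeFloat64(0.1);
console.log(f.sign);     // 0
console.log(f.exponent); // -4
console.log(f.mantissa); // "1001100110011001100110011001100110011001100110011010"
```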
0.1, 0.2 and 0.3 in binary64 Format
Based on the standard, it's not hard to convert 0.1, 0.2 and 0.3 to binary64 format (you can calculate them either manually or with this tool: http://bartaz.github.io/ieee754-visualization/):
0.1
0 01111111011 1001100110011001100110011001100110011001100110011010
and in scientific notation, it is
1.1001100110011001100110011001100110011001100110011010 * 2^-4
Note: the significand part is in binary format, and the following numbers are in the same format.
0.2
0 01111111100 1001100110011001100110011001100110011001100110011010
and in scientific notation, it is
1.1001100110011001100110011001100110011001100110011010 * 2^-3
0.3
0 01111111101 0011001100110011001100110011001100110011001100110011
and in scientific notation, it is
1.0011001100110011001100110011001100110011001100110011 * 2^-2
Steps to Add Up 2 binary64 Numbers
Step 1 - Align the exponents
- Shift the significand of the number which has the smaller exponent:
- Shift the significand to the right
- Increase the exponent by 1 for every shift, until both exponents are the same
- After the shift, the significand should be rounded
Step 2 - Add up the significands
- If the added-up significand does not satisfy the [1, 2) requirement, shift it into that range and adjust the exponent accordingly
- After the shift, the significand should be rounded
0.1 + 0.2 + 0.3 == 0.6000000000000001
As explained above, 0.1 has exponent -4 and 0.2 has exponent -3, so we need to align the exponents first:
Shift 0.1
from
1.1001100110011001100110011001100110011001100110011010 * 2^-4
to
0.1100110011001100110011001100110011001100110011001101 * 2^-3
Then add the significand
0.1100110011001100110011001100110011001100110011001101
with
1.1001100110011001100110011001100110011001100110011010
we get added up significand value:
10.0110011001100110011001100110011001100110011001100111
But it is not in the range [1, 2), so we need to right-shift it (with rounding) to:
1.0011001100110011001100110011001100110011001100110100 (* 2^-2)
then add it to
0.3 (1.0011001100110011001100110011001100110011001100110011 * 2^-2)
we get:
10.0110011001100110011001100110011001100110011001100111 * 2^-2
Again, we need to shift and round it, and finally we get the value:
1.0011001100110011001100110011001100110011001100110100 * 2^-1
which is exactly the value of 0.6000000000000001 (decimal).
With the same workflow, you can calculate 0.1 + (0.2 + 0.3).
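If you prefer not to do the bit arithmetic by hand, a quick sketch can confirm that the two evaluation orders produce different binary64 values (bitsOf is a hypothetical helper, not part of the original answer):

```javascript
// Compare the exact 64-bit patterns of the two evaluation orders.
function bitsOf(x) {
  const view = new DataView(new ArrayBuffer(8));
  view.setFloat64(0, x);
  return view.getBigUint64(0).toString(2).padStart(64, "0");
}

const left = 0.1 + 0.2 + 0.3;   // evaluates as (0.1 + 0.2) + 0.3
const right = 0.1 + (0.2 + 0.3);

console.log(left === right);    // false: the results differ by one unit in the last place
console.log(bitsOf(left));
console.log(bitsOf(right));
```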
Tools
This web page, http://bartaz.github.io/ieee754-visualization/, helps you quickly convert a decimal number to binary64 format; you can use it to verify the calculation steps.
If you are processing single-precision binary float numbers, you can refer to this tool: http://www.h-schmidt.net/FloatConverter/IEEE754.html
The general problem is described in Is floating point math broken?.
In the remainder I will just look at the difference between the two computations.
From my comment:
Well, in the first case you are doing (0.1 + 0.2) + 0.3 = 0.3 + 0.3 and in the second case you do 0.1 + (0.2 + 0.3) = 0.1 + 0.5. I guess the rounding error in the first case is larger than in the second case.
Let's have a closer look at the actual values in this computation:
var a = 0.1;
var b = 0.2;
var c = 0.3;
console.log(' a:', a.toPrecision(21));
console.log(' b:', b.toPrecision(21));
console.log(' c:', c.toPrecision(21));
console.log(' a + b:', (a + b).toPrecision(21));
console.log(' b + c:', (b + c).toPrecision(21));
console.log(' a + b + c:', (a + b + c).toPrecision(21));
console.log('a + (b + c):', (a + (b + c)).toPrecision(21));
The output is
a: 0.100000000000000005551
b: 0.200000000000000011102
c: 0.299999999999999988898
a + b: 0.300000000000000044409
b + c: 0.500000000000000000000
a + b + c: 0.600000000000000088818
a + (b + c): 0.599999999999999977796
So, it's clear that both computations have rounding errors, but the errors are different because you are performing the additions in a different order. It just happens that a + b + c produces a larger error.
The console seems to round the number to 16 significant digits:
> (a + b + c).toPrecision(16)
"0.6000000000000001"
> (a + (b + c)).toPrecision(16)
"0.6000000000000000"
That's why the second computation will simply output 0.6. If the console rounded to 17 significant digits, things would look different:
> (a + b + c).toPrecision(17)
"0.60000000000000009"
> (a + (b + c)).toPrecision(17)
"0.59999999999999998"
That's not a problem specific to JavaScript; you would get similar surprises in other languages too.
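A common way to cope with this in application code (a sketch under the usual caveats, not something from the answers above) is to compare with a relative tolerance based on Number.EPSILON instead of exact equality:

```javascript
// Treat two numbers as equal when they differ by at most a few ulps.
// nearlyEqual is a hypothetical helper written for illustration.
function nearlyEqual(x, y, ulps = 4) {
  const scale = Math.max(Math.abs(x), Math.abs(y));
  return Math.abs(x - y) <= ulps * Number.EPSILON * scale;
}

console.log(0.1 + 0.2 + 0.3 === 0.6);           // false
console.log(nearlyEqual(0.1 + 0.2 + 0.3, 0.6)); // true
```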
Please read this: What Every Programmer Should Know About Floating-Point Arithmetic