In JavaScript, everyone knows the famous calculation: 0.1 + 0.2 = 0.30000000000000004. But why does JavaScript print this value instead of printing the more accurate and precise 0.300000000000000044408920985006?
- 6 Possible duplicate of Is floating point math broken? – Sulthan Allaudeen Commented Mar 20, 2018 at 12:42
- @SulthanAllaudeen I knew the result already, but I do not know why JS does not print the full decimal and instead keeps only 17 digits. – Guang Lin Commented Mar 20, 2018 at 13:04
- 2 @SulthanAllaudeen: This is not a duplicate of that question. General information about how floating-point arithmetic works does not answer a question about how JavaScript formats numbers when converting floating-point values to decimal strings. Different languages make different choices (e.g., fixed number of significant digits, least number of significant digits needed to uniquely distinguish value, as many digits as needed to show exact value). This question ought to be answered by explaining what the JavaScript (ECMAScript) specification says about this. – Eric Postpischil Commented Mar 20, 2018 at 13:14
1 Answer
The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. (You can request more or fewer digits by using the toPrecision method.)
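For example, here is how the default conversion compares to an explicit request for more or fewer digits (an illustrative sketch added here, not part of the original answer; it assumes a conforming engine, where toPrecision accepts at least 21 significant digits):

```js
const sum = 0.1 + 0.2;

console.log(String(sum));          // "0.30000000000000004"     (just enough digits)
console.log(sum.toPrecision(21));  // "0.300000000000000044409" (more digits on request)
console.log(sum.toPrecision(5));   // "0.30000"                 (fewer digits on request)
```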
JavaScript uses IEEE-754 basic 64-bit binary floating-point for its Number type. Using IEEE-754, the result of .1 + .2 is exactly 0.3000000000000000444089209850062616169452667236328125. This results from:
- Converting “.1” to the nearest value representable in the Number type.
- Converting “.2” to the nearest value representable in the Number type.
- Adding the above two values and rounding the result to the nearest value representable in the Number type. (The sketch after this list illustrates each step.)
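Each of these three steps can be observed by asking for extra digits (a small illustration added here, not part of the original answer):

```js
// Step 1: “.1” rounds to the nearest representable Number value.
console.log((0.1).toPrecision(21));        // "0.100000000000000005551"
// Step 2: “.2” rounds to the nearest representable Number value.
console.log((0.2).toPrecision(21));        // "0.200000000000000011102"
// Step 3: the sum of those two Number values is rounded yet again.
console.log((0.1 + 0.2).toPrecision(21));  // "0.300000000000000044409"
```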
When formatting this Number value for display, “0.30000000000000004” has just enough significant digits to uniquely distinguish the value. To see this, observe that the neighboring values are (a short sketch for inspecting them follows this list):
- 0.299999999999999988897769753748434595763683319091796875,
- 0.3000000000000000444089209850062616169452667236328125, and
- 0.300000000000000099920072216264088638126850128173828125.
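These three adjacent Number values can be inspected by stepping through the underlying bit patterns (a hedged sketch, not from the original answer; the nextUp helper is hypothetical and only handles positive, finite inputs):

```js
// Hypothetical helper: the next representable double above a positive, finite x,
// obtained by incrementing the raw 64-bit pattern.
function nextUp(x) {
  const f = new Float64Array([x]);
  new BigUint64Array(f.buffer)[0] += 1n;
  return f[0];
}

const low  = 0.3;          // "0.3" parses to the lower neighbor
const mid  = nextUp(low);  // one ulp above it
const high = nextUp(mid);  // one ulp above that

console.log(low.toPrecision(21));   // "0.299999999999999988898"
console.log(mid.toPrecision(21));   // "0.300000000000000044409"
console.log(high.toPrecision(21));  // "0.300000000000000099920"
console.log(mid === 0.1 + 0.2);     // true: the middle neighbor is the sum
```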
If the conversion to a decimal numeral produced only “0.3000000000000000”, it would be nearer to 0.299999999999999988897769753748434595763683319091796875 than to 0.3000000000000000444089209850062616169452667236328125. Therefore, another digit is needed. When we have that digit, “0.30000000000000004”, then the result is closer to 0.3000000000000000444089209850062616169452667236328125 than to either of its neighbors. Therefore, “0.30000000000000004” is the shortest decimal numeral (neglecting the leading “0” which is there for aesthetic purposes) that uniquely distinguishes which possible Number value the original value was.
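The same point can be verified by parsing the candidate numerals back to Number values (an illustrative check, not part of the original answer):

```js
const sum = 0.1 + 0.2;

// Sixteen significant digits land on the wrong neighbor (the one "0.3" itself parses to).
console.log(Number("0.3000000000000000") === sum);  // false
console.log(Number("0.3000000000000000") === 0.3);  // true

// Seventeen significant digits recover the original Number value exactly.
console.log(Number("0.30000000000000004") === sum); // true
```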
This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, which is one of the steps in converting a Number value m to a decimal numeral for the ToString operation:
Otherwise, let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible.
The phrasing here is a bit imprecise. It took me a while to figure out that by “the Number value for s × 10^(n−k)”, the standard means the Number value that is the result of converting the mathematical value s × 10^(n−k) to the Number type (with the usual rounding). In this description, k is the number of significant digits that will be used, and this step is telling us to minimize k, so it says to use the smallest number of digits such that the numeral we produce will, when converted back to the Number type, produce the original number m.
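In code, the “k is as small as possible” condition can be approximated like this (a rough sketch, assuming toPrecision as a stand-in for producing a numeral with k significant digits):

```js
const m = 0.1 + 0.2;

// Find the smallest number of significant digits k whose numeral,
// when converted back to a Number value, yields m again.
let k = 1;
while (Number(m.toPrecision(k)) !== m) {
  k++;
}

console.log(k);                 // 17
console.log(m.toPrecision(k));  // "0.30000000000000004"
console.log(String(m));         // "0.30000000000000004" (the default conversion uses the same k)
```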