
How does JavaScript determine the number of digits to produce when formatting floating-point values? - Stack Overflow


In JavaScript, everyone knows the famous calculation: 0.1 + 0.2 = 0.30000000000000004. But why does JavaScript print this value instead of printing the more accurate and precise 0.300000000000000044408920985006?

asked Mar 20, 2018 at 12:39 by Guang Lin; edited Mar 20, 2018 at 14:30 by Eric Postpischil
  • Possible duplicate of Is floating point math broken? – Sulthan Allaudeen Commented Mar 20, 2018 at 12:42
  • @SulthanAllaudeen I already knew the result, but I do not know why JS does not print the full decimal and instead keeps only 17 digits. – Guang Lin Commented Mar 20, 2018 at 13:04
  • @SulthanAllaudeen: This is not a duplicate of that question. General information about how floating-point arithmetic works does not answer a question about how JavaScript formats numbers when converting floating-point values to decimal strings. Different languages make different choices (e.g., a fixed number of significant digits, the least number of significant digits needed to uniquely distinguish the value, or as many digits as needed to show the exact value). This question ought to be answered by explaining what the JavaScript (ECMAScript) specification says about this. – Eric Postpischil Commented Mar 20, 2018 at 13:14

1 Answer

The default rule for JavaScript when converting a Number value to a decimal numeral is to use just enough digits to distinguish the Number value. (You can request more or fewer digits by using the toPrecision method.)
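
For example, in a browser or Node.js console (a quick illustrative sketch, not part of the original answer; the expected outputs in the comments follow from the specification):

    const sum = 0.1 + 0.2;

    console.log(String(sum));         // "0.30000000000000004" – default: just enough digits
    console.log(sum.toPrecision(5));  // "0.30000"              – fewer digits on request
    console.log(sum.toPrecision(21)); // "0.300000000000000044409" – more digits on request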

JavaScript uses IEEE-754 basic 64-bit binary floating-point for its Number type. Using IEEE-754, the result of .1 + .2 is exactly 0.3000000000000000444089209850062616169452667236328125. This results from:

  • Converting “.1” to the nearest value representable in the Number type.
  • Converting “.2” to the nearest value representable in the Number type.
  • Adding the above two values and rounding the result to the nearest value representable in the Number type.
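
These exact decimal expansions can be inspected from JavaScript itself with toFixed, which rounds to a requested number of fraction digits (up to 100 in current engines); a small sketch:

    // Exact decimal expansions of the two operands and of the rounded sum.
    console.log((0.1).toFixed(55));
    // 0.1000000000000000055511151231257827021181583404541015625
    console.log((0.2).toFixed(55));
    // 0.2000000000000000111022302462515654042363166809082031250
    console.log((0.1 + 0.2).toFixed(55));
    // 0.3000000000000000444089209850062616169452667236328125000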

When formatting this Number value for display, “0.30000000000000004” has just enough significant digits to uniquely distinguish the value. To see this, observe that the neighboring values are:

  • 0.299999999999999988897769753748434595763683319091796875,
  • 0.3000000000000000444089209850062616169452667236328125, and
  • 0.300000000000000099920072216264088638126850128173828125.

If the conversion to a decimal numeral produced only “0.3000000000000000”, it would be nearer to 0.299999999999999988897769753748434595763683319091796875 than to 0.3000000000000000444089209850062616169452667236328125. Therefore, another digit is needed. When we have that digit, “0.30000000000000004”, then the result is closer to 0.3000000000000000444089209850062616169452667236328125 than to either of its neighbors. Therefore, “0.30000000000000004” is the shortest decimal numeral (neglecting the leading “0” which is there for aesthetic purposes) that uniquely distinguishes which possible Number value the original value was.
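
The round trip is easy to check in a console (a small sketch, not part of the original answer):

    const sum = 0.1 + 0.2;

    // 16 significant digits parse back to the lower neighbor (the same
    // Number as the literal 0.3), so they are not enough.
    console.log(Number("0.3000000000000000") === sum); // false
    console.log(Number("0.3000000000000000") === 0.3); // true

    // 17 significant digits convert back to the original Number value.
    console.log(Number("0.30000000000000004") === sum); // true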

This rule comes from step 5 in clause 7.1.12.1 of the ECMAScript 2017 Language Specification, which is one of the steps in converting a Number value m to a decimal numeral for the ToString operation:

Otherwise, let n, k, and s be integers such that k ≥ 1, 10^(k−1) ≤ s < 10^k, the Number value for s × 10^(n−k) is m, and k is as small as possible.

The phrasing here is a bit imprecise. It took me a while to figure out that by “the Number value for s × 10^(n−k)”, the standard means the Number value that results from converting the mathematical value s × 10^(n−k) to the Number type (with the usual rounding). In this description, k is the number of significant digits that will be used, and this step tells us to minimize k: use the smallest number of digits such that the numeral we produce will, when converted back to the Number type, produce the original number m.
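
That minimization of k can be illustrated with a small helper (hypothetical, not the algorithm the specification or engines actually use): increase the number of significant digits until toPrecision produces a string that converts back to the original Number.

    // Hypothetical helper: smallest count of significant digits k whose
    // decimal string converts back to exactly the same Number value m.
    // For finite Numbers the loop always returns by k = 17.
    function shortestDigits(m) {
      for (let k = 1; k <= 21; k++) {
        const s = m.toPrecision(k);
        if (Number(s) === m) {
          return { k, s };
        }
      }
      return { k: 21, s: m.toPrecision(21) };
    }

    console.log(shortestDigits(0.1 + 0.2)); // { k: 17, s: '0.30000000000000004' }
    console.log(shortestDigits(0.25));      // { k: 2, s: '0.25' }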
