You can find a lot about floating-point precision errors and how to avoid them in JavaScript, for example "How to deal with floating point number precision in JavaScript?", which deals with the problem by simply rounding the number to a fixed number of decimal places.
My problem is slightly different: I get numbers from the backend (some with the rounding error) and want to display them without the error.
Of course I could just round the number to a set number of decimal places with value.toFixed(X). The problem is that the numbers can range from 0.000000001 to 1000000000, so I can never know for sure how many decimal places are valid.
(See this Fiddle for my unfruitful attempts.) Code:
var a = 0.3;
var b = 0.1;
var c = a - b; // is 0.19999999999999998, is supposed to be 0.2
// c.toFixed(2) = 0.20
// c.toFixed(4) = 0.2000
// c.toFixed(6) = 0.200000
var d = 0.000003;
var e = 0.000002;
var f = d - e; // is 0.0000010000000000000002 is supposed to be 0.000001
// f.toFixed(2) = 0.00
// f.toFixed(4) = 0.0000
// f.toFixed(6) = 0.000001
var g = 0.0003;
var h = 0.0005;
var i = g + h; // is 0.0007999999999999999, is supposed to be 0.0008
// i.toFixed(2) = 0.00
// i.toFixed(4) = 0.0008
// i.toFixed(6) = 0.000800
My question now is: is there an algorithm that intelligently detects how many decimal places are reasonable and rounds the numbers accordingly?
asked Dec 4, 2017 at 13:43 by LocalHorst; edited Dec 4, 2017 at 14:04

- Please also post your code here; fiddles get deleted at some point, while the question could later be useful for others. – KIMB-technologies, Dec 4, 2017 at 13:48
- You cannot "guess" which number is correct, because for all you know the rounding error was the actual number. However, if you know at most how many decimal places there should be (from your example, 6 decimals), then it is trivial: 1) round to 6 decimals, 2) print using normal string conversion. – gcasar, Dec 4, 2017 at 13:53
- What about starting at the left and going right as long as each digit is new, and cutting off once the same digit repeats three times? – KIMB-technologies, Dec 4, 2017 at 13:54
- This is a simple way that rounds to 6: Number(value.toFixed(6)). It will output 0.2, 0.0008 and 0.000001 in your examples. – gcasar, Dec 4, 2017 at 13:55
- I wish people would stop promiscuously closing floating-point problems as duplicates. The question asks a specific question that is not a duplicate of the purported original and is not answered by the top and accepted answer there. – Eric Postpischil, Dec 4, 2017 at 15:05
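The rounding suggested in the comment above can be checked directly. This is a minimal sketch (the helper name is mine; the fixed limit of 6 decimals is the comment's assumption):

```javascript
// Round to a fixed 6 decimal places; Number() then drops the trailing zeros.
const roundTo6 = value => Number(value.toFixed(6));

console.log(roundTo6(0.3 - 0.1));           // 0.2
console.log(roundTo6(0.000003 - 0.000002)); // 0.000001
console.log(roundTo6(0.0003 + 0.0005));     // 0.0008
```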
3 Answers

When a decimal numeral is rounded to binary floating-point, there is no way to know, just from the result, what the original number was or how many significant digits it had. Infinitely many decimal numerals will round to the same result.
However, the rounding error is bounded. If it is known that the original number had at most a certain number of digits, then only decimal numerals with that number of digits are candidates. If only one of those candidates differs from the binary value by less than the maximum rounding error, then that one must be the original number.
If I recall correctly (I do not use JavaScript regularly), JavaScript uses IEEE-754 64-bit binary. For this format, it is known that any 15-digit decimal numeral may be converted to this binary floating-point format and back without error. Thus, if the original input was a decimal numeral with at most 15 significant digits, and it was converted to 64-bit binary floating-point (and no other operations were performed on it that could have introduced additional error), and you format the binary floating-point value as a 15-digit decimal numeral, you will have the original number.
The resulting decimal numeral may have trailing zeroes. It is not possible to know (from the binary floating-point value alone) whether those were in the original numeral.
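The round trip described above can be sketched in a couple of lines (the helper name is mine; toPrecision(15) does the 15-significant-digit formatting, and Number() parses the result back):

```javascript
// Format the double to 15 significant decimal digits, then parse it back.
// If the original input had at most 15 significant digits, this recovers it.
const to15Digits = x => Number(x.toPrecision(15));

console.log(to15Digits(0.3 - 0.1));           // 0.2
console.log(to15Digits(0.000003 - 0.000002)); // 0.000001
console.log(to15Digits(0.0003 + 0.0005));     // 0.0008
```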
In order to fix issues where:
0.3 - 0.1 => 0.199999999
0.57 * 100 => 56.99999999
0.0003 - 0.0001 => 0.00019999999
You can do something like:
const fixNumber = num => Number(num.toPrecision(15));
A few examples:
fixNumber(0.3 - 0.1) => 0.2
fixNumber(0.0003 - 0.0001) => 0.0002
fixNumber(0.57 * 100) => 57
A one-liner solution, thanks to Eric's answer:
const fixFloatingPoint = val => Number.parseFloat(val.toFixed(15))
fixFloatingPoint(0.3 - 0.1) // 0.2
fixFloatingPoint(0.000003 - 0.000002) // 0.000001
fixFloatingPoint(0.0003 + 0.0005) // 0.0008
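Note that the two one-liners are not interchangeable: toFixed(15) keeps 15 decimal places, while toPrecision(15) keeps 15 significant digits. A quick side-by-side using the 0.57 * 100 example from the previous answer:

```javascript
// toFixed(15) counts decimal places, so once the integer part uses up
// significant digits it no longer removes the noise; toPrecision(15) does.
const fixFloatingPoint = val => Number.parseFloat(val.toFixed(15));
const fixNumber = num => Number(num.toPrecision(15));

console.log(fixFloatingPoint(0.57 * 100)); // 56.99999999999999 (noise survives)
console.log(fixNumber(0.57 * 100));        // 57
```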