
Are JSON numbers always double-precision floating-point numbers? - Stack Overflow


I have two conflicting mindsets:

Mindset 1:

JSON numbers are always double-precision floating point numbers. Therefore:

  • There is no semantic difference between 1 and 1.0: both represent exactly the same number;
  • 12345678901234567890 is actually 12345678901234567000 because 12345678901234567890 cannot be accurately represented as a double-precision floating-point number.

Mindset 2:

JSON numbers cannot always be interpreted as double-precision floating-point numbers. JSON is a data-interchange format that is distinct from JavaScript. The belief that JSON numbers are always double-precision floating-point numbers stems from conflating JavaScript with JSON, and from the idiosyncrasies of JavaScript's default JSON parser and serializer, which interpret them that way. Therefore:

  • 1 and 1.0 need not be the same. In particular, the presence or absence of the trailing .0 can be used to encode type information. Many programming languages, such as Java or C#, distinguish between integers and floating-point numbers. It is reasonable to demand that integers in such languages always be serialized without the trailing .0, while floating-point numbers always be serialized with it.
  • 12345678901234567890 and 12345678901234567000 are not the same number. Certain widely used parsers interpret them, by default, as the same number because they coerce every JSON number into a double-precision floating-point number, but that is a property of those parsers, not of JSON itself.
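Concrete implementations illustrate the point of Mindset 2. For instance, Python's standard json module parses integer literals into arbitrary-precision ints, so large integers survive exactly, and the decimal point decides the resulting type:

```python
import json

# Python parses a JSON integer literal as an arbitrary-precision int,
# so all 20 digits are preserved exactly.
big = json.loads("12345678901234567890")
print(big)        # 12345678901234567890

# The presence or absence of a decimal point determines the parsed type:
print(type(json.loads("1")))    # <class 'int'>
print(type(json.loads("1.0")))  # <class 'float'>
```

JavaScript's JSON.parse, by contrast, would return 12345678901234567000 for the same input, which is exactly the divergence the two mindsets disagree about.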

Which - if any - of these two mindsets is correct?

Googling seems to yield conflicting results.

  • https://json-schema./understanding-json-schema/reference/numeric says this: "JSON does not have distinct types for integers and floating-point values. Therefore, the presence or absence of a decimal point is not enough to distinguish between integers and non-integers. For example, 1 and 1.0 are two ways to represent the same value in JSON." - so it appears this is consistent with Mindset 1;
  • https://developer.mozilla./en-US/docs/Web/JavaScript/Reference/Global_Objects/JSON talks about using JSON to serialize and deserialize large numbers that cannot be stored as double-precision floating-point numbers - so it appears to be consistent with Mindset 2.
asked Mar 3 at 18:05, edited Mar 3 at 18:15 by user29873842

1 Answer


The JSON format does not set limits on the numbers it can represent: the following JSON is valid:

1e999999999999

...even though it represents a number that far exceeds the capacity of a double-precision floating point number.

Similarly, you can have this valid JSON:

1234567890123456789.01234567890123456789

...even though double-precision floating point numbers cannot represent that many significant digits.
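Both limits are visible in practice in Python's json module, which by default coerces non-integer JSON numbers to binary64 floats but lets you opt out via its parse_float hook:

```python
import json
from decimal import Decimal

# Default parsing coerces to a binary64 float: the huge exponent
# overflows to infinity.
print(json.loads("1e999999999999"))   # inf

# Supplying parse_float=Decimal keeps every significant digit intact.
d = json.loads("1234567890123456789.01234567890123456789",
               parse_float=Decimal)
print(d)   # 1234567890123456789.01234567890123456789
```

Both documents are valid JSON; what differs is how each implementation chooses to represent the value.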

Such concerns are not inherent to the JSON format, but to the implementations that read and write JSON. The RFC 8259 standard touches on this in section 6, on numbers:

This specification allows implementations to set limits on the range and precision of numbers accepted. Since software that implements IEEE 754 binary64 (double precision) numbers [IEEE754] is generally available and widely used, good interoperability can be achieved by implementations that expect no more precision or range than these provide, in the sense that implementations will approximate JSON numbers within the expected precision. A JSON number such as 1E400 or 3.141592653589793238462643383279 may indicate potential interoperability problems, since it suggests that the software that created it expects receiving software to have greater capabilities for numeric magnitude and precision than is widely available.

Note that when such software is used, numbers that are integers and are in the range [-(2**53)+1, (2**53)-1] are interoperable in the sense that implementations will agree exactly on their numeric values.
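The [-(2**53)+1, (2**53)-1] boundary the RFC mentions is easy to check directly, since it is simply the range in which every integer has an exact binary64 representation:

```python
# Every integer up to 2**53 - 1 is exactly representable as a binary64
# double, so the round trip through float is lossless.
exact = 2**53 - 1
print(float(exact) == exact)             # True

# Just past the boundary, distinct integers collapse:
# 2**53 + 1 rounds to the nearest representable double, which is 2**53.
print(float(2**53 + 1) == float(2**53))  # True
```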

This means that the first article you quoted is not entirely accurate, specifically the statement that "the presence or absence of a decimal point is not enough to distinguish between integers and non-integers".

Although in practice this might be true, this really is an implementation aspect. We can imagine implementations for which it would be enough to distinguish between integers and non-integers. This is not the business of the JSON format itself.
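Indeed, such implementations already exist. Python's json module, to take one example, serializes ints without a decimal point and floats with one, so the integer/non-integer distinction survives a round trip:

```python
import json

# The serializer writes ints without a decimal point and floats with one,
# so the type distinction is encoded in the JSON text itself.
print(json.dumps(1))    # 1
print(json.dumps(1.0))  # 1.0

# On the way back in, the parser restores the original types.
rt = json.loads(json.dumps({"count": 1, "ratio": 1.0}))
print(type(rt["count"]), type(rt["ratio"]))
```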
