
math - JavaScript: Efficient integer arithmetic - Stack Overflow


I'm currently writing a compiler for a small language that compiles to JavaScript. In this language, I'd quite like to have integers, but JavaScript only supports Number, which is a double-precision floating point value. So, what's the most efficient way to implement integers in JavaScript? And how efficient is this compared to just using Number?

In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.

asked Apr 18, 2011 at 20:38 by Michael Williamson

7 Answers 7


So, what's the most efficient way to implement integers in JavaScript?

The primitive number type is as efficient as it gets. Many modern JS engines support JIT compilation, so it should be almost as efficient as native floating-point arithmetic.

In particular, overflow behaviour should be consistent with other languages: for instance, adding one to INT_MAX should give INT_MIN. Integers should either be 32-bit or 64-bit.

You can achieve the semantics of standard 32-bit integer arithmetic by noting that JavaScript converts "numbers" to 32-bit integers for bitwise operations. >>> (unsigned right shift) converts its operand to an unsigned 32-bit integer while the rest (all other shifts and bitwise AND/OR) convert their operand to a signed 32-bit integer. For example:

  • 0xFFFFFFFF | 0 yields -1 (signed cast)
  • (0xFFFFFFFF + 1) | 0 yields 0 (overflow)
  • -1 >>> 0 yields 0xFFFFFFFF (unsigned cast)
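These coercion rules can be packaged into helpers. The function names below are hypothetical (not from the answer), but the wrap-around behaviour is exactly what the bitwise operators guarantee:

```javascript
// Hypothetical helpers: wrap results to 32-bit integer semantics
// using JavaScript's bitwise coercion rules.

// Signed 32-bit addition: `| 0` truncates the double-precision result
// to int32, so INT_MAX + 1 wraps to INT_MIN, matching C-style overflow.
function int32Add(a, b) {
  return (a + b) | 0;
}

// Unsigned 32-bit cast: `>>> 0` reinterprets the value as uint32.
function toUint32(x) {
  return x >>> 0;
}

console.log(int32Add(0x7FFFFFFF, 1)); // -2147483648 (INT_MAX + 1 wraps)
console.log(toUint32(-1));            // 4294967295  (0xFFFFFFFF)
```

Because the coercion is a single machine-level operation in JIT-compiled engines, this costs little more than the plain floating-point addition.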

I found this implementation of BigIntegers in JavaScript: http://www-cs-students.stanford.edu/~tjw/jsbn/

Perhaps this will help?

Edit: Also, the Google Closure library implements 64-bit integers: http://code.google.com/p/closure-library/source/browse/trunk/closure/goog/math/long.js

These essentially just build convenience objects, though, and won't do anything to improve the fundamental data-type efficiency.
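The split-representation technique that goog.math.Long uses can be sketched in a few lines. This is an assumed, minimal illustration (not Closure's actual API): a 64-bit value is stored as two signed 32-bit halves, and addition propagates the carry from the low half into the high half.

```javascript
// Hypothetical 64-bit integer as a pair of signed 32-bit halves.
function makeInt64(high, low) {
  return { high: high | 0, low: low | 0 };
}

function int64Add(a, b) {
  // Add the low halves as unsigned 32-bit values so the carry is visible.
  const lowSum = (a.low >>> 0) + (b.low >>> 0);
  const carry = lowSum > 0xFFFFFFFF ? 1 : 0;
  // makeInt64 masks both halves back to 32 bits via `| 0`.
  return makeInt64(a.high + b.high + carry, lowSum);
}

const one = makeInt64(0, 1);
const maxUint32 = makeInt64(0, 0xFFFFFFFF); // low half all ones
const sum = int64Add(maxUint32, one);       // carries into the high half
console.log(sum.high, sum.low);             // 1 0
```

Every 64-bit operation decomposes into several 32-bit ones like this, which is why emulated 64-bit arithmetic is noticeably slower than plain Numbers.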

On a modern CPU, if you restrict your integer values to the range ±2^52, then using a double will be barely less efficient than using a long.

The IEEE 754 double type has a 53-bit significand, so you can easily represent the 32-bit integer range and then some.
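The exact-integer limit is easy to demonstrate: every integer with magnitude up to 2^53 is represented exactly in a double, and precision is lost immediately beyond that.

```javascript
// Doubles carry a 53-bit significand, so integers are exact up to 2^53;
// past that point, adjacent odd integers can no longer be distinguished.
const limit = Math.pow(2, 53); // 9007199254740992

console.log(limit - 1 === limit - 2); // false: still exact below 2^53
console.log(limit + 1 === limit);     // true: 2^53 + 1 rounds back to 2^53
```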

In any event, the rest of Javascript will be much more of a bottleneck than the individual CPU instructions used to handle arithmetic.

All JavaScript numbers are Numbers; there is no way around this. JavaScript has no byte or int type. Either work within those limitations or write your compiler in a lower-level language.

The only other option, if you truly want native integers, is to modify one of the JavaScript engines (say, V8) and extend the language to expose native C integer types.

Well, you can either use JavaScript's number type, which is most likely computed using your CPU's native primitives, or layer a complete package of operators and functions on top of an emulated series of bits...

... If performance and efficiency is your concern, stick with the doubles.

The most efficient approach would be to use numbers, adding operations to ensure that arithmetic on the simulated integers always gives an integer result: a division, for example, would have to be truncated, and a multiplication would either have to be checked for overflow or masked down to fit in the integer range.
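Both fix-ups described above can be done with single built-in operations. A sketch with hypothetical helper names (`intDiv`/`intMul` are not from the answer); note that Math.imul arrived later, in ES2015:

```javascript
// Truncate division to an integer result: `| 0` truncates toward zero,
// matching C integer division (so -7 / 2 gives -3, not -4).
function intDiv(a, b) {
  return (a / b) | 0;
}

// Mask multiplication down to 32 bits: Math.imul (ES2015) keeps only
// the low 32 bits of the product, i.e. C-style int32 multiplication.
function intMul(a, b) {
  return Math.imul(a, b);
}

console.log(intDiv(7, 2));             // 3
console.log(intDiv(-7, 2));            // -3 (truncation, not flooring)
console.log(intMul(0x10000, 0x10000)); // 0: 2^32 masked to 32 bits
```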

This of course means that floating point operations in your language will be considerably faster than integer operations, defeating most of the purpose of having an integer type in the first place.

Note that ECMA-262 Edition 3 added Number.prototype.toFixed, which takes a precision argument telling how many digits after the decimal point to show. Use this method well and you won't mind the disparity between finite precision base 2 and the "arbitrary" or "appropriate" precision base 10 that we use every day. - Brendan Eich
