As far as I know, there are two main ways of storing 2D data. One, a 2D array:
var array = [
[1, 2, 3],
[4, 5, 6],
[7, 8, 9]
];
// access element at (1, 1)
array[1][1];
The other, a flat array with a stored width:
var array = [1, 2, 3, 4, 5, 6, 7, 8, 9];
var width = 3;
// access element at (1, 1)
array[1 * width + 1];
It's said all over the Internet that "multidimensional" arrays are bad and perform super poorly compared to flat arrays with a stored width. (Also, typed arrays can be used to speed up the second variant even more.) However, I really dislike the syntax you must use to access a point with the flat array, and I think you should code as close as possible to what you mean. Also, it's a lot of (I think) unnecessary math every frame.
I'm in a situation where I need to handle large 2D arrays quickly (you guessed it, a game) and I want to know if flattening my arrays is worth the possible performance gain. So, is it?
asked Feb 25, 2014 at 4:33 by rvighne
- Benchmark it and find out. Or, better yet, don't. If you encapsulate the array properly, it should cost you virtually nothing to transition from one method of storage to the other, after the game is done. Then you'll actually be able to produce useful benchmarks instead of synthetic ones. – user229044 ♦ Commented Feb 25, 2014 at 4:36
- @meagar: Because the second version will involve a lot of juggling from one type of coordinate to the other in this scenario (and besides will be a pain to write), will that cancel out the benefits? That's a little hard to benchmark. – rvighne Commented Feb 25, 2014 at 4:41
- Neither version should involve anything except a function invocation, something like getData(x, y) and setData(x, y, value). – user229044 ♦ Commented Feb 25, 2014 at 4:42
- Why start optimizing there? I'd start with basic refactoring first, then algorithms, then memoization, and finally, only if all that fails, I would consider changing a logical data structure for the sake of performance. – elclanrs Commented Feb 25, 2014 at 4:42
- Note that you can store the width as a property of the array: array.width = 3. – RobG Commented Feb 25, 2014 at 4:43
3 Answers
A 1D array may have a performance benefit due to index look-up times. With a 2D+ array, one index has to be looked up first, then another in the resulting array, and so forth. There is an extra cost for each level, though it may be microscopic all in all.
If you really want a gain in performance, then consider using Typed Arrays. These are actual low-level binary buffers that you can use with 8-bit, 16-bit, and 32-bit integer values, signed and unsigned, as well as 32- and 64-bit floats.
If your numbers are all of one type, then you can use typed arrays. You use them like any other array, with index look-up, but performance can be many times better, as they are optimized by the JS engine(s):
var array = new Uint8Array(9); // 9 elements × 8 bits each = 9 bytes
// write
array[0] = 1;
array[1] = 2;
...etc.
// read
var value1 = array[0];
...etc.
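The same pattern written out in full for a 3×3 grid, with the 2D index math included (a sketch; the width and row/column order are my assumptions, not from the answer above):

```javascript
var width = 3;
var grid = new Uint8Array(width * width); // 3×3 grid, one byte per cell

// write the value 7 at row 2, column 1
grid[2 * width + 1] = 7;

// read it back
var v = grid[2 * width + 1]; // → 7
```

Note that a Uint8Array stores unsigned bytes, so written values are taken modulo 256.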
I think I've found an ideal solution to my own problem! Here is a benchmark.
UPDATE: The original benchmark was incorrect, and the updated version shows that flat typed arrays perform just slightly better than 2D typed arrays.
2D arrays turned out to be much slower than the flattened version, just as everyone warns. However, if the second level is a Typed Array, then it performs better than a flat array (even if the flattened one is typed as well). Thanks to @Ken for pointing me to typed arrays.
This will allow me to code how I think (array[x][y]), while also having superior performance.
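Concretely, that layout is a plain array whose rows are typed arrays. A sketch of the structure described above (the sizes are illustrative):

```javascript
var width = 3, height = 3;
var grid = [];
for (var x = 0; x < width; x++) {
  grid[x] = new Uint8Array(height); // each row is a typed array
}

grid[1][2] = 9; // natural 2D syntax, typed storage underneath
grid[1][2];     // → 9
```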
I created some jsperf tests for 1000x1000 arrays (larger than the OP used in their jsperf), and added methods for returning a single nested array / flat array slice.
var arrDim = 1000;
var clArr = new Array(arrDim);
for (var i = 0; i < arrDim; i++) {
clArr[i] = new Uint8Array(arrDim);
}
clArr.getSlice = function(i) {
return this[i];
}
clArr.setVal = function(i, j, val) {
this[i][j] = val;
}
clArr.getVal = function(i, j) {
return this[i][j];
}
var flArr = new Uint8Array(Math.pow(arrDim, 2));
flArr.getSlice = function(i) {
var start = i*arrDim;
return this.slice(start, start + arrDim);
}
flArr.setVal = function(i, j, val) {
this[i*arrDim + j] = val;
}
flArr.getVal = function(i, j) {
return this[i*arrDim + j];
}
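As a quick sanity check that the two layouts hold identical data before benchmarking them (my own addition, using a small grid rather than the 1000×1000 one above):

```javascript
var arrDim = 4; // small size for the check; the benchmark uses 1000

var clArr = [];
for (var i = 0; i < arrDim; i++) clArr[i] = new Uint8Array(arrDim);

var flArr = new Uint8Array(arrDim * arrDim);

// write the same values through both layouts
for (var i = 0; i < arrDim; i++) {
  for (var j = 0; j < arrDim; j++) {
    clArr[i][j] = i * arrDim + j;
    flArr[i * arrDim + j] = i * arrDim + j;
  }
}

// every cell should agree
for (var i = 0; i < arrDim; i++) {
  for (var j = 0; j < arrDim; j++) {
    if (clArr[i][j] !== flArr[i * arrDim + j]) throw new Error('mismatch');
  }
}
```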
- jsperf - get values
- jsperf - set values
- jsperf - get slice
For getting and setting, Chrome 60 shows pretty much the same speed for the two methods, and in EdgeHTML 15 the nested array is ~50% quicker.
But most importantly, for getting a whole row (the slice methods), the nested array is much, much quicker.
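One likely reason: `flArr.getSlice` calls `slice`, which copies a full row, while the nested version just returns an existing row object. If a non-copying view is acceptable, `subarray` gives the flat layout the same cheap row access; a sketch not part of the original benchmark:

```javascript
var arrDim = 1000;
var flArr = new Uint8Array(arrDim * arrDim);
flArr[5 * arrDim + 2] = 42;

// subarray returns a view over the same buffer — no bytes are copied
var row = flArr.subarray(5 * arrDim, 6 * arrDim);
row[2];            // → 42
row[0] = 7;        // writes through to the underlying buffer
flArr[5 * arrDim]; // → 7
```

The trade-off is aliasing: unlike `slice`, changes through the view are visible in the original array (and vice versa).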