I just ran into a very interesting issue when someone posted a jsperf benchmark that conflicted with a previous, nearly identical, benchmark I ran.
Chrome does something drastically different between these two lines:
new Array(99999); // jsperf ~50,000 ops/sec
new Array(100000); // jsperf ~1,700,000 ops/sec
benchmarks: http://jsperf.com/newarrayassign/2
I was wondering if anyone has any clue as to what's going on here!
(To clarify, I'm looking for some low-level details on the V8 internals, such as whether it's using a different data structure for one size vs. the other, and what those structures are.)
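The jsperf comparison above can be approximated in Node.js with the sketch below. The absolute numbers depend heavily on the V8 version, so only the relative difference between the two sizes is meaningful; the time window is an arbitrary choice.

```javascript
// Rough stand-in for the jsperf benchmark: count how many constructor
// calls fit into a fixed time window for each array size.
function opsPerSecond(size, windowMs = 200) {
  const start = Date.now();
  let ops = 0;
  while (Date.now() - start < windowMs) {
    new Array(size); // the allocation under test
    ops++;
  }
  return Math.round(ops / (windowMs / 1000));
}

console.log("99999 :", opsPerSecond(99999));
console.log("100000:", opsPerSecond(100000));
```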
asked Jul 15, 2011 at 18:16 by user578895; edited Apr 13, 2022 at 11:58 by Dharman. 3 Answers
Just because this sounded pretty interesting, I searched through the V8 codebase for a static defined as 100000, and I found kInitialMaxFastElementArray, a constant which is subsequently used in the built-in ArrayConstructInitializeElements function. While I'm not a C++ programmer and don't know the nitty-gritty here, you can see that it uses an if statement to check whether the requested length is smaller than 100,000, and returns at different points based on that.
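To show the shape of that branch, here is a JavaScript sketch of the logic described above. Only the constant's name comes from the V8 source; the function, the labels, and the comments about allocation cost are my illustrative assumptions, not V8's actual code.

```javascript
// Illustrative sketch of the threshold check, not real V8 code.
const kInitialMaxFastElementArray = 100000; // constant found in the V8 source

function chooseElementsKind(length) {
  if (length < kInitialMaxFastElementArray) {
    // Below the threshold: assume a flat "fast elements" backing store is
    // allocated up front, making each constructor call more expensive.
    return "fast";
  }
  // At or above the threshold: assume a dictionary-style store that
  // allocates lazily, so the constructor itself returns almost immediately.
  return "dictionary";
}

console.log(chooseElementsKind(99999));  // "fast"
console.log(chooseElementsKind(100000)); // "dictionary"
```

This would account for the benchmark result: the "slower" small case is paying for its full backing store inside the constructor, while the larger case defers that work.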
Well, there is always some threshold number when you design algorithms that adapt to the size of the data (for example, SharePoint changes the way it works when you add 1,000 items to a list). So my guess would be that you have found that actual number, and the performance differs because different data structures or algorithms are used on each side of it.
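As a generic illustration of that pattern (all names here are made up, not tied to V8 or SharePoint): a container that allocates eagerly below a threshold and defers allocation above it will show exactly this kind of step change in constructor cost at the boundary.

```javascript
// Generic size-adaptive container sketch; names are illustrative only.
const THRESHOLD = 100000;

function createStore(size) {
  if (size < THRESHOLD) {
    // Small sizes: pay for a dense backing store immediately.
    return { kind: "dense", data: new Array(size).fill(0) };
  }
  // Large sizes: defer allocation; nothing is stored until first write.
  return { kind: "sparse", data: new Map() };
}

console.log(createStore(10).kind);     // "dense"
console.log(createStore(200000).kind); // "sparse"
```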
I don't know what operating system you're using, but if this is Linux, I'd suspect that Chrome (i.e. malloc) is allocating memory from a program-managed heap (its size determined using the sbrk system call, with the free lists managed by the C standard library), but when you reach a certain size threshold, it switches to using mmap to ask the kernel to allocate large chunks of memory that don't interfere with the sbrk-managed heap.
Doug Lea describes how malloc works in the GNU C Library better than I could; he wrote it.
Or maybe 100000 hits some kind of magic threshold for the amount of space needed, such that it triggers the garbage collector more frequently when trying to allocate memory.
65,536 or 16,777,216 or some other power of 2 – user578895, Jul 15, 2011 at 18:24