So, I am programming a 2D JavaScript physics simulation. The performance is good, but I'm going through making optimizations to make it better. Because the program works with a lot of physical geometry, I make several Pythagorean theorem calculations in the program: about five in all, and together they run about one million times per second. So I figured it would boost performance if I put that simple Pythagorean theorem code into a new function and called it; after all, that way the browser has less compiling to do. But when I ran the code in Firefox, I got... a 4000000% increase in the execution time of that calculation.
How? It's the same code: Math.sqrt(x*x+y*y), so how does adding it as a function slow it down? I assume the reason is that a function takes time just to be called, before executing any code, and that a million of these delays per second adds up?
That seems rather alarming to me. Would this also apply to predefined JS functions? That seems unlikely; if it doesn't, how do they avoid it?
The code used to go like this:
function x()
{
    dx = nx - mx;
    dy = ny - my;
    d = Math.sqrt(dx * dx + dy * dy);
    doStuff(...
}
What I tried was this:
function x()
{
    dx = nx - mx;
    dy = ny - my;
    d = hypo(dx, dy);
    doStuff(...
}

function hypo(x, y)
{
    return Math.sqrt(x * x + y * y);
}
Thanks!
edited Nov 21, 2013 at 20:37; asked Apr 12, 2012 at 23:17 by mindoftea

Comments:
- Is your function defined outside of the scope that is being run a million times a second? – alex
- And it's not true that the browser has "less compiling to do" because you put it in a function... it should be about the same, really, especially since compilation is a startup thing. But @alex probably got the reason for your slowdown :) – Ry-
- It is important to have the exact code you are talking about. Try using jsfiddle. – Otto Allmendinger
- Note that when you call a function, the browser has to allocate memory for new variables and copy the values into them (i.e. for the parameters of the function). This would no doubt add overhead. – joshuahealy
- +1 for not putting the code on jsfiddle and posting it here instead, which is the proper way to ask here, and for a well-asked question (after the edit adding code). A jsfiddle link as an additional reference to your post would have been fine, but having the actual code here so it's available to future users who run across it or search for it is the right way to do it. :) – Ken White
2 Answers
Function calls are negligible, or even optimized away entirely, in precompiled languages, which JS has never been. Beyond that, a great deal depends on the browser.
They're the death of all performance in interpreted languages, which is what JS primarily was until fairly recently. Most modern browsers have JIT (just-in-time) compilers, which is a huge upgrade from the JS interpreters of the past, but I believe function calls to another scope still cost some overhead, because JS's call object has to determine what is actually being called, and that means marching up and down various scope chains.
So as a general rule: if you care about IE8 and lower, and older versions of Chrome and Firefox, avoid function calls, period, especially inside loops. For the JIT browsers, I would expect a function defined inside the other function to be generally beneficial (but I would still test, as this is brand-new technology for IE9 and relatively new for everybody else).
One other thing to be wary of: if a function is particularly complex, JITs may not do anything to optimize it.
https://groups.google.com/forum/#!msg/closure-compiler-discuss/4B4IcUJ4SUA/OqYWpSklTE4J
But the important thing to understand is that when something is locked down and only called inside one context, like a function within a function, it should be easy for a JIT to optimize. Defined outside of a function, the engine has to determine exactly which definition of that function is being called. It could be in an outer function. It could be global. It could be a property of the window object's constructor's prototype, etc. In a language where functions are first-class, meaning their references can be passed around as arguments the same way you pass data around, you can't really avoid that lookup step outside of your current context.
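To illustrate why the engine can't assume a fixed call target when a function is defined in an outer scope: the binding can change between calls, so each call must resolve the name again. This is a hypothetical sketch (the name scale and its values are made up for the example):

```javascript
// A globally bound function: any call site must resolve "scale"
// through the scope chain, because the binding can be replaced.
var scale = function (v) { return v * 2; };

function run() {
    var results = [];
    results.push(scale(5)); // resolves to the doubling version → 10
    scale = function (v) { return v * 3; }; // rebinding changes the target
    results.push(scale(5)); // now resolves to the tripling version → 15
    return results;
}

console.log(run()); // [10, 15]
```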
So try defining hypo inside x to see what happens.
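A minimal, self-contained sketch of that suggestion, with the coordinates passed in as parameters and the original doStuff call omitted (those details are assumptions, not part of the original code):

```javascript
function x(nx, ny, mx, my) {
    // hypo is defined inside x, so the call target lives in a single,
    // easily analyzed scope that a JIT can often inline.
    function hypo(a, b) {
        return Math.sqrt(a * a + b * b);
    }
    var dx = nx - mx;
    var dy = ny - my;
    return hypo(dx, dy); // distance between the two points
}

console.log(x(3, 4, 0, 0)); // 5
```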
Another couple of general tips from the interpreted age that might still be valuable in JITs:
The '.' operator, as in someObject.property, is a process worth caching. It costs overhead, as there is an associated lookup process every time you use it. I imagine Chrome would not preserve the results of this lookup, since alterations to parent objects or prototypes could change what it actually references outside of a given context. In your example, if x is being used by a loop (likely okay, or even helpful, if x is defined in the same function as the loop in JITs; murder in an interpreter), I would try assigning Math.sqrt to a var before using it in hypo. Having too many references to things outside the context of your current function might cause some JITs to decide it's not worth the trouble to optimize, but that's pure speculation on my part.

The following is probably the fastest way to loop over an array:
//assume a giant array called someArray
var i = someArray.length; //note the property lookup being cached here
//'someArray.reverse()' first if original order is important
while (i--) {
    //now do stuff with someArray[i];
}
Doing it this way can be helpful because it basically merges the decrement step and the loop condition into just the decrement, completely removing the need for a left/right comparison operator. Note that in JS the postfix decrement operator means that i is evaluated first and then decremented before it's used inside the block, and while (0) evaluates to false, which is what ends the loop.
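Here is a runnable sketch of that pattern; the array contents are placeholder values chosen just to show the loop visiting every element:

```javascript
var someArray = [10, 20, 30];
var sum = 0;
var i = someArray.length; // cache the .length property lookup once

// Postfix i-- evaluates the old value for the loop test, then
// decrements, so the body sees indices 2, 1, 0 (reverse order)
// and the loop ends when the test sees 0.
while (i--) {
    sum += someArray[i];
}

console.log(sum); // 60
```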
To my surprise, caching the lookup as suggested by Erik doesn't do much to improve performance on my browser (Chromium, Linux); it seems to hurt performance instead: http://jsperf.com/inline-metric-distance
var optimizedDistance = (function () {
    var sqrt = Math.sqrt;
    return function (x, y) { return sqrt(x * x + y * y); };
})();
is slower than
var unoptimizedDistance = function (x, y) {
    return Math.sqrt(x * x + y * y);
};
Even calling an alias is slower
var _sqrt = Math.sqrt; // _sqrt is slower than Math.sqrt!
But then again, this is not an exact science and real life measurements can still vary.
Nonetheless, I'd go with using Math.sqrt.