Sree Kotay blogs about JavaScript a bit. (If you’re interested in more technical details, I’d recommend checking out Simon Willison’s “A (Re)-Introduction to JavaScript” presentation.) In Sree’s blog, he writes:
Part of understanding the distinction, in the trivial case, comes from the (obvious) understanding of basic JS optimization, that:
for (i=0; i<100; i++) a.b.c.d(v);
...is A LOT slower, at least in JavaScript, than:
var f=a.b.c.d;
for (i=0; i<100; i++) f(v);

...because after all, JS is a dynamic language.

I'll provide some specific JavaScript performance tips and benchmarks below to make the point clear.
Now, I intuitively understand and agree with Sree that the latter should be faster, but exactly how much faster? Are symbol lookups in modern JavaScript engines actually that slow? Don't modern JavaScript interpreters use JIT compilation and bytecode optimization, so that if you write the former code, it gets optimized behind the scenes into the latter form? (I'm not sure whether that's even possible through static analysis -- I'm just throwing the question out there.)
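For what it's worth, here's a contrived sketch (my own, not from Sree's post) of why an engine can't blindly hoist the lookup: nothing stops the call itself from rebinding a.b.c.d mid-loop, so a cached reference and a fresh lookup can legitimately disagree.

var calls = 0;
var a = { b: { c: { d: function(arg) {
    calls++;
    if (calls == 1) {
        // The first call rebinds the property; a hoisted
        // reference taken before the loop never sees this.
        a.b.c.d = function(arg) { return arg.toUpperCase(); };
    }
    return arg;
} } } };

var f = a.b.c.d;  // hoisted once, before the loop
for (var i = 0; i < 3; i++) {
    document.write("<p>" + a.b.c.d("hi") + " vs. " + f("hi") + "</p>");
}
// After the first pass, a.b.c.d("hi") prints "HI" while f("hi")
// still prints "hi" -- the two forms aren't interchangeable.

Unless the engine can prove no such rebinding happens, it has to re-resolve the chain on every pass.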
Supposing it's not possible to optimize away the inefficiency of the first form ... what kind of performance penalty are we talking about? 1%? 10%? Is it a material difference that should drive a best practice around coding convention to avoid it? Out of sheer laziness, I'm only going to benchmark in Firefox 1.5.0.1 here on my 2.2 GHz Dell C840:
<script language="JavaScript">
var v = "Hello, world.";
var a = { b: { c: { d: function(arg) { return arg; } } } };

document.write("<p>a.b.c.d(\"Hello, world.\") = " + a.b.c.d("Hello, world.") + "</p>");

// One million calls through the full property chain.
var start = new Date();
for (var i = 0; i < 1000000; i++) {
    a.b.c.d(v);
}
var now = new Date();
document.write("<p>Difference: " + (now - start) + "</p>");

// One million calls through a cached function reference.
start = new Date();
var f = a.b.c.d;
for (var i = 0; i < 1000000; i++) {
    f(v);
}
now = new Date();
document.write("<p>Difference: " + (now - start) + "</p>");
</script>
The output:
a.b.c.d("Hello, world.") = Hello, world. Difference: 2374 Difference: 1652
Since the JavaScript Date object reports time in milliseconds, we're seeing one million iterations in 2374 milliseconds, or about 2.4 microseconds per iteration, for the first form vs. one million iterations in 1652 milliseconds, or about 1.7 microseconds per iteration, for the second form. That's a difference of roughly 0.7 microseconds per iteration: the cached form takes about 30% less time than the chained form ((2374 - 1652) / 2374, or about 30.4%). (My math skills are really weak, so please double-check my numbers and let me know if I've gotten anything wrong.)
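As a quick sanity check on that arithmetic, using the timings measured above:

var slow = 2374, fast = 1652, iters = 1000000;
// Convert total milliseconds to microseconds per iteration.
document.write("<p>" + (slow / iters * 1000) + " us/iter (chained)</p>");  // 2.374
document.write("<p>" + (fast / iters * 1000) + " us/iter (cached)</p>");   // 1.652
// Relative savings of the cached form.
document.write("<p>" + ((slow - fast) / slow * 100) + "% less time</p>");  // ~30.4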
Okay, so a 30% savings is nothing to scoff at, but shaving 0.7 microseconds per iteration isn't worth optimizing away when there are plenty of other coding practices where much more time is wasted. In other words, 90% of a script's execution time likely isn't spent anywhere near that lookup overhead, so it's not where you should be focusing your optimization efforts.
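If you want to find out where the time actually goes, a minimal Date-based timing helper is enough to compare candidate hot spots. (The timeIt helper is just my own sketch; the example reuses a, v, and f from the benchmark above.)

function timeIt(label, iterations, fn) {
    var start = new Date();
    for (var i = 0; i < iterations; i++) {
        fn();
    }
    var elapsed = new Date() - start;  // milliseconds
    document.write("<p>" + label + ": " + elapsed + " ms</p>");
    return elapsed;
}

// Example: re-run the two forms from the benchmark.
timeIt("chained lookup", 1000000, function() { a.b.c.d(v); });
timeIt("cached lookup",  1000000, function() { f(v); });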
Tags:
javascript,
programming,
optimization
Hey Dossy, I hadn’t finished part 3, where I get into the numbers, but I’d think 30% is still a pretty big deal, IF you’re writing applications at scale – in the trivial case, totally agree with you – who cares – that’s indeed part of the point of Javascript.
Also, dom/activex/livescript nested references are even slower than “native” javascript object ones – so that can have more of an impact.
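(A sketch of that, along the lines of the benchmark above but caching a nested DOM reference instead; it assumes the page already contains a form with at least one field, and the iteration count is arbitrary:)

var start = new Date();
for (var i = 0; i < 100000; i++) {
    var x = document.forms[0].elements[0].value;  // walks the DOM chain every pass
}
document.write("<p>Uncached: " + (new Date() - start) + " ms</p>");

start = new Date();
var field = document.forms[0].elements[0];  // resolve the DOM reference once
for (var i = 0; i < 100000; i++) {
    var x = field.value;
}
document.write("<p>Cached: " + (new Date() - start) + " ms</p>");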
Additionally, none of the browser javascript interpreters can do squat meaningfully to optimize (even JScript.NET, as I’ll show in pt 3) because unfortunately, Javascript is inherently a dynamic language – the function invocation (or even property invocation) may cause volatility (e.g. self-modification) with respect to the calling function/object. Doesn’t mean you CAN’T deal with it, but none do.
Incidentally, though, I wasn’t trying to make a big deal out of that optimization, per se – I was only trying to suggest that understanding the WHY of the perf delta will help you understand the value of other language features like the “prototype” property.
Hope that helps clarify :)