Part of understanding the distinction, in the trivial case, comes from an (obvious) bit of basic JS optimization: the first of these two loops has to resolve the property chain a.b.c.d on every iteration, while the second calls a reference that was resolved once up front (f = a.b.c.d):

for (let i = 0; i < 100; i++) a.b.c.d(v);
for (let i = 0; i < 100; i++) f(v);
Supposing it's not possible to optimize away the inefficiency of the first form ... what kind of performance penalty are we talking about? 1%? 10%? Is it a material difference that should drive a best practice around coding convention to avoid it? Out of sheer laziness, I'm only going to benchmark in Firefox 126.96.36.199 here on my 2.2 GHz Dell C840:
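A benchmark along these lines can be sketched as follows. This is a rough harness, not the one actually run; the object graph, iteration count, and Date.now() timing are assumptions made for illustration:

```javascript
// Sketch of a micro-benchmark comparing the two call forms.
// The object graph and iteration count here are made up for illustration.
const a = { b: { c: { d: function (v) { return v; } } } };
const ITERATIONS = 1e6;
const v = "Hello, world.";

// Crude wall-clock timer; Date.now() resolution is coarse, so the
// iteration count needs to be high enough to swamp the noise.
function time(fn) {
  const start = Date.now();
  fn();
  return Date.now() - start;
}

// Form 1: the engine may re-resolve a.b.c.d on every iteration.
const chained = time(() => {
  for (let i = 0; i < ITERATIONS; i++) a.b.c.d(v);
});

// Form 2: resolve the chain once, then call through the cached reference.
// Note: caching the method this way loses its `this` binding; if the
// method relies on `this`, use a.b.c.d.bind(a.b.c) instead.
const f = a.b.c.d;
const cached = time(() => {
  for (let i = 0; i < ITERATIONS; i++) f(v);
});

console.log("chained:", chained, "ms; cached:", cached, "ms");
```

Both forms call the same function, so the difference between the two timings is the cost of the repeated property lookups.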
a.b.c.d("Hello, world.") = Hello, world.
Difference: 2374  (a.b.c.d form)
Difference: 1652  (f form)
Okay, so 29% overhead is nothing to scoff at, but shaving 0.7 microseconds per iteration isn't worth optimizing away when, I'm guessing, there are plenty of other coding practices where much more time is wasted. In other words, 90% of the time spent executing a script likely isn't in that 29% of overhead, so it's not where you should focus your optimization efforts.