
Sheogorath's Blog

JavaScript performance optimization


After explaining in my previous article how to optimize the load time of JavaScript, let’s talk about how to improve the runtime performance of JavaScript.

JavaScript is a really flexible programming language that allows you to add interactive elements in nearly every browser on almost every device. Even more, there are currently many frameworks that let you write smartphone and desktop apps in HTML5, CSS and JavaScript.

Besides the client area, server-side JavaScript is getting more and more popular. While on the client side you may be able to work a bit sloppily, on the server side you need high performance.

Don’t use JavaScript

This may sound crazy, but it is the best thing you can do: whenever you can avoid the usage of JavaScript, just use other methods.

If you want to create, for example, a drop-down menu, use CSS instead of JavaScript, because CSS is highly optimized on the client itself and uses native, pre-compiled functions of the browser.

If you use those predefined functions, you save the time for compiling the JavaScript and your application reduces its codebase. On top of that, the native client-side API is optimized for the platform and may use the GPU and other techniques you can’t (and don’t want to) control from your JavaScript file.

Understand how JavaScript works

The engine

To optimize JavaScript you should know how it works. First of all, there are two types of JavaScript engines:

  1. interpreters
  2. just-in-time compilers (short: JIT compilers), which are more common

As you may expect, just-in-time compilation doesn’t work without an interpreter, so nearly every JavaScript engine uses both techniques.

V8 is the JavaScript engine used in Google’s browser Chrome as well as in Node.js. It is currently the fastest JavaScript engine and in some cases even faster than GCC-compiled code.

This is possible because it uses dynamic code optimization. It starts by pre-parsing and compiling the code the first time it runs. By checking the return values of a function and improving its structure, it builds highly optimized bytecode for that function.

As you may know, a JavaScript function can return strings, integers, floating-point numbers and much more. If you stay on the same data type, V8 will always use the optimized function, optimized for the first return type. But what happens if you return a floating-point number instead of an integer?

If something “unexpected” happens, the engine falls back to the old, unoptimized pre-parsed code and builds new bytecode optimized to handle both integers and floating-point numbers. Then it lets the newly optimized function replace the old one and continues.

If a callee procedure starts returning floating-point values where it was returning integer values before, the optimized procedure is de-optimized -- a relatively expensive process of recompiling the original procedure to the unoptimized form, replacing the optimized function on the stack with the unoptimized version, and then continuing the computation.

The problem you run into is that it’s just-in-time compilation. This means the engine can de-optimize and re-optimize your code anywhere, even in a critical loop.

This leads to the conclusion: build your functions in a way that they always return the same data type.
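As a small sketch of this rule, compare a function that mixes return types with one that stays monomorphic (the function and variable names here are made up for illustration):

```javascript
// BAD: returns a number on a hit but a string on a miss,
// so the engine cannot settle on one optimized version.
function findPriceMixed(prices, item) {
    if (item in prices)
        return prices[item];
    return "not found";
}

// GOOD: always returns a number; -1 signals a miss.
// The return type stays the same, so optimized bytecode sticks.
function findPrice(prices, item) {
    if (item in prices)
        return prices[item];
    return -1;
}

var prices = { apple: 2, pear: 3 };
console.log(findPrice(prices, "apple")); // 2
console.log(findPrice(prices, "plum"));  // -1
```

The caller now handles one consistent type, and the engine never has to throw away its optimized bytecode for this function.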

API usage takes a lot of time

JavaScript itself is nice but nearly useless (in browsers) without its best friend the DOM.

The Document Object Model (DOM) is a cross-platform and language-independent convention for representing and interacting with objects in HTML, XHTML, and XML documents.


A big performance killer is accessing DOM elements. While JavaScript itself can be optimized in a simple way, it isn’t that easy with the DOM. Changing the DOM triggers rendering functions, which takes a lot of time and resources. That’s the reason why you should “push” changes from your script to the DOM as rarely as possible.

Accessing the HTML DOM is very slow, compared to other JavaScript statements.


W3Schools also suggests storing references to DOM elements in your own variables if you access them more than once.

var exampleDOMElement = document.getElementById("example");
exampleDOMElement.innerHTML = "\\o/";
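The same idea applies to writes: collect your output first and push it to the DOM once. A minimal sketch of this batching; the fakeElement object below is a stand-in for a real DOM element (e.g. from document.getElementById) so the snippet runs outside a browser:

```javascript
// Stand-in for a real DOM element; on a real element, every
// innerHTML assignment would trigger re-rendering.
var fakeElement = { innerHTML: "" };

var items = ["one", "two", "three"];

// BAD (in a browser): fakeElement.innerHTML += "<li>...</li>"
// inside the loop would touch the DOM once per item.

// GOOD: collect everything in a plain string first ...
var html = "";
for (var i = 0; i < items.length; i++)
    html += "<li>" + items[i] + "</li>";

// ... and push it to the (fake) DOM exactly once.
fakeElement.innerHTML = html;

console.log(fakeElement.innerHTML); // <li>one</li><li>two</li><li>three</li>
```

String concatenation happens in pure JavaScript, which the engine optimizes well; only the single final assignment crosses the expensive DOM boundary.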

Objects and references

An important ingredient of fast JavaScript code are objects, because passing a reference is always faster than copying a variable.

Building an object in JavaScript is simple:

var exampleObject = {type: "example",
                     value: "nothing",
                     getValue: function() {
                        return ("This " + this.type + " has value " + this.value);
                     }};

If you now call a function with exampleObject as a parameter, it works like a “call by reference”.

function changeValue(a) {
   a.value = "something";
}

alert(exampleObject.getValue()); // results: This example has value nothing
changeValue(exampleObject);
alert(exampleObject.getValue()); // results: This example has value something

With this technique you can handle really big data in a fast way.

This works for all object types like JSON objects, arrays, hashes, etc. But unlike in Java, strings in JavaScript are a native data type which is NOT an object.
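You can see that difference directly: mutating a parameter inside a function only sticks for objects, while a string parameter is a local copy. A small sketch (names made up for illustration):

```javascript
function tryToChange(obj, str) {
    obj.value = "changed"; // the reference points at the caller's object
    str += " changed";     // operates on a local copy of the primitive
    return str;
}

var myObject = { value: "original" };
var myString = "original";

tryToChange(myObject, myString);

console.log(myObject.value); // "changed"  -- object passed by reference
console.log(myString);       // "original" -- string copied, caller unaffected
```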

As you already know, JavaScript optimizes its code at runtime. You may also know that JavaScript allows you to modify an object on the fly, which is nothing new. If you only change values, it’s no problem at all, but if you change the object’s structure, the engine will de-optimize it and re-optimize it later. This slows down your code incredibly. What does “change the structure” mean? It’s commonly known as object mutation.

If you take exampleObject and change its structure, it looks like this:

exampleObject.additionalValue = 5;

This adds a new property to the object. This slows it down a bit, but it’s still “okay”. A bigger problem is this:

exampleObject.doSomething = function (a, b, c) {
    var result;
    result = a + b + c;
    return result;
};

This function has many problems. First of all, it’s added to an existing object at runtime; doing that in a loop will cost a lot of time. Another problem is that the result may come in many types: it may concatenate strings or add up integers or floating-point numbers. You should never do something like that.
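Instead of mutating the object at runtime, give it its complete structure once, at creation time. One way to sketch this is a constructor function, so every instance shares the same shape and methods (the Point name is just an example):

```javascript
// GOOD: all properties are defined once, in the constructor,
// so every instance has the same structure and stays optimizable.
function Point(x, y) {
    this.x = x;
    this.y = y;
}

// Methods live on the prototype instead of being attached
// to individual instances at runtime.
Point.prototype.lengthSquared = function () {
    return this.x * this.x + this.y * this.y;
};

var p = new Point(3, 4);
console.log(p.lengthSquared()); // 25
```

Creating thousands of such objects in a loop is cheap, because the engine sees the same structure every time.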

JavaScript high-performance in practice

To be honest, all this stuff above is only useful if you handle a large amount of data in a short time. If your attitude is just “I want to make my script run a bit faster”, you don’t need to know how crazy JavaScript works internally. In 9 out of 10 cases the runtime of JavaScript is nothing compared to the load time, so you should optimize the loading of JavaScript first.

After you have done that, let’s improve the JavaScript handling…

Caching, caching and caching

Besides the possibilities of the primary browser caching, you can also cache parts of your JavaScript computation. In this context, caching means storing already generated data in variables if you use it more than once.
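The generic pattern behind this is memoization: keep a result store next to an expensive function and only compute values you haven’t seen before. A minimal sketch, where slowSquare stands in for any genuinely expensive computation:

```javascript
// A plain object acts as the result cache.
var squareCache = {};

function slowSquare(n) {
    // stands in for something genuinely expensive
    return n * n;
}

function cachedSquare(n) {
    if (!(n in squareCache))
        squareCache[n] = slowSquare(n); // compute once ...
    return squareCache[n];              // ... reuse on every later call
}

console.log(cachedSquare(12)); // computed: 144
console.log(cachedSquare(12)); // served from the cache: 144
```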

The Sieve of Eratosthenes is nothing “new”. It’s a simple algorithm to get all prime numbers lower than n.

Let’s say you need it multiple times and use the “common” algorithm.

In most cases it looks like this (I just implemented the pseudocode from Wikipedia):

function SieveOfEratosthenesUncached(n) {
    // define array of possible primes
    var primes = [];
    for (var i = 0; i < n; ++i)
        primes[i] = true;

    // find possible primes
    for (var i = 2; i <= Math.sqrt(n); i++)
        if (primes[i])
            for (var j = Math.pow(i, 2), k = 0; j < n; k++, j = (Math.pow(i, 2) + (i * k)))
                primes[j] = false;

    // prepare possible primes as return
    var results = [];
    for (var i = 2; i < primes.length; ++i)
        if (primes[i])
            results.push(i);

    return results;
}

After this I also implemented an optimized version with caching for running multiple times:

function SieveOfEratosthenesCached(n, cache) {
    var primes = cache;
    for (var i = cache.length; i < n; ++i)
        primes[i] = true;

    // find possible primes
    // Improve speed by moving Math.sqrt(n) out of the loop condition
    for (var i = 2, l = Math.sqrt(n); i <= l; ++i)
        if (primes[i])
            for (var j = Math.pow(i, 2), k = 0; j < n; k++, j = (Math.pow(i, 2) + (i * k)))
                primes[j] = false;

    // prepare possible primes as return
    var results = [];
    for (var i = 2, l = primes.length; i < n && i < l; ++i)
        if (primes[i])
            results.push(i);

    return results;
}

Notice that the cache has to be a predefined array. I use a default array of var cache = [true, true];. This is okay because everyone knows 0 and 1 are not prime, and the result loop starts at 2 anyway.
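Usage then looks like this: the cache array doubles as the sieve itself, so you pass it in and it grows across calls. A self-contained sketch with a compact reimplementation of the sieve (so the snippet runs on its own):

```javascript
// Compact cached sieve: the cache array IS the sieve,
// so a later call with a bigger n only extends it.
function sieve(n, cache) {
    var primes = cache;
    for (var i = cache.length; i < n; ++i)
        primes[i] = true;                 // extend cache to n
    for (var i = 2; i * i < n; ++i)
        if (primes[i])
            for (var j = i * i; j < n; j += i)
                primes[j] = false;        // cross out multiples
    var results = [];
    for (var i = 2; i < n && i < primes.length; ++i)
        if (primes[i])
            results.push(i);
    return results;
}

var cache = [true, true]; // 0 and 1 are never prime
console.log(sieve(30, cache)); // [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
// The second call reuses and extends the already filled cache:
console.log(sieve(50, cache).length); // 15 primes below 50
```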

Check the difference yourself. Don’t worry if your browser freezes for a few seconds:

First tests, to check that the results are identical (the original page runs these calls interactively and shows the prime count and duration for each):

  1. SieveOfEratosthenesCached(10000000, cache)
  2. SieveOfEratosthenesCached(20000000, cache)
  3. SieveOfEratosthenesCached(10000000, cache)

Penetration test

100 times, get all prime numbers below a random number lower than 1000. (On the original page, press Start to get the results.)

As you can see the cached version is always faster.

That’s what I thought before writing this article.

Yes, while testing my code I noticed that in the second part, the penetration test, the duration of the cached version grows if you run it multiple times in Google Chrome. I was really confused! While the uncached version always took about 100 ms, the cached version starts at less than 40 ms but grows to more than 500 ms! In Firefox there was no such effect; the cached version was always twice as fast.

The next extreme: IE11. While the uncached version of the penetration test took more than 2000 ms, the cached version only took about 100 ms. In Microsoft Edge the cached version was always 4 times faster than the uncached one.

To be honest I really didn’t expect that the cached code can be slower than the uncached version.

But anyway! The point of this is that you should cache results which you need more than once. It regularly saves your application time and improves your JavaScript speed.

Besides those “result caches”, it’s always useful to cache objects you assign to the DOM. Especially images and elements which trigger HTTP requests should be cached. This can save a lot of time at runtime.

Use Attributes

If you write an application with critical parts, like drawing something onto a canvas, you should minimize the frequency of function calls. Functions are risky because they may return something unexpected or throw an exception. Besides that, a function stack frame has to be set up, etc. All of this consumes time in critical parts of your program.

As example data I use 200,000 objects of the following shape:

{
  id: 1,
  value: Math.random() * 2500,
  getValue: function() {
    return this.value;
  }
}
(The original page includes an interactive benchmark: pressing the “Start” button measures how many milliseconds the function calls took compared to the direct value accesses.)

While the test results in Firefox, Edge and IE11 are nearly identical, and it makes no real difference whether you call this super-simple function or access the attribute directly, in my tests with Chrome the direct access of the value was more than 6 times faster than the function call. And remember, that is just a trivial function call. If you pre-calculate your data, you can make your critical code sections up to 6 times faster.
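The comparison from that test, as a self-contained sketch (the object count is reduced from 200,000 so it runs quickly; absolute timings will vary by engine):

```javascript
// Build a batch of example objects like in the test above.
var data = [];
for (var i = 0; i < 1000; i++)
    data.push({
        id: i,
        value: Math.random() * 2500,
        getValue: function () { return this.value; }
    });

// Variant 1: via function call.
var sumCalls = 0;
for (var i = 0; i < data.length; i++)
    sumCalls += data[i].getValue();

// Variant 2: via direct attribute access --
// the same result, without the call overhead.
var sumDirect = 0;
for (var i = 0; i < data.length; i++)
    sumDirect += data[i].value;

console.log(sumCalls === sumDirect); // true
```

Wrap each loop in console.time()/console.timeEnd() to reproduce the measurement in your own browser.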


Wow. That whole article was hard to write. Mainly because I did the theory first and was sure Google’s V8 engine is the fastest JavaScript engine, like I’ve read multiple times. But in the practical part it was primarily the most indeterministic engine.

From daily browsing and working with canvas I know V8 is much faster than Mozilla’s SpiderMonkey or Microsoft’s Chakra JScript engine. But in some of these trivial tests it was much slower than its competitors.

Let’s find a useful summary of this article:

  1. Avoid API interaction (DOM)
  2. Use objects!
  3. Cache your results
  4. Pre-calculate data for critical sections

I really hope you enjoyed the article and found something useful. Use Mastodon to get in touch with me!
