Best Practices to Measure Execution Time in JavaScript


There are many cases in which you’d want to measure execution time in JavaScript. You might want to detect and fix performance issues. Or you may need to get usage telemetry, like how much time it took for the user to press some button. Or you might need it as a feature, like showing the user how much time has passed. Whatever the reason, this kind of timer functionality is a bit tricky in JavaScript. While other languages have some sort of Stopwatch class, JavaScript has several different APIs that vary in accuracy and browser implementations. In this article, you’ll see the best ways to measure time, the pros and cons of each approach, as well as some best practices.

1. Using Date.now()

One of the easiest ways to measure execution time is to use Date.now(), which returns the number of milliseconds that have passed since January 1, 1970 UTC, also known as Unix Epoch Time. You can use this to get the timestamp before code execution and after code execution. Then, subtract them to get the duration. Something like this:
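A minimal sketch of that pattern (doSomeWork here is a placeholder for whatever code you want to measure):

```javascript
// doSomeWork is a stand-in for the code you want to measure.
function doSomeWork() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) {
    total += i;
  }
  return total;
}

// Capture the timestamp before and after, then subtract.
const startTime = Date.now();
doSomeWork();
const durationMs = Date.now() - startTime;

console.log(`Execution took ${durationMs} ms`);
```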

Seems like a simple enough solution, right?

It really is, except that it’s limited in its accuracy. Date.now() returns whole milliseconds, so it’s impossible to get to higher resolutions like microseconds. With that in mind, let’s see what other options exist out there.

NOTE: Using Date.now() is the same as using new Date().getTime(), but it’s best to use Date.now() to avoid initializing an unnecessary Date object.

2. Using performance.now()

The performance.now() API was introduced specifically to be able to measure high-precision performance tasks. It returns a DOMHighResTimeStamp, which is the number of milliseconds that have passed since the document started. The value is a floating-point number that can reach a resolution of a microsecond (1000 microseconds == 1 millisecond).

You can use it like this:
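For example (again, doSomeWork stands in for the code under measurement):

```javascript
// doSomeWork is a stand-in for the code you want to measure.
function doSomeWork() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) total += i;
  return total;
}

// performance.now() returns fractional milliseconds, so the
// subtraction can be sub-millisecond (subject to browser rounding).
const start = performance.now();
doSomeWork();
const duration = performance.now() - start;

console.log(`Execution took ${duration} ms`);
```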

Even though the intention of this API is to reach high-res timestamps, it’s purposely rounded to be less accurate. Yes, that sounds strange, but there’s a good reason for this. Some very smart people discovered that using accurate timestamps exposes a major security vulnerability where potential attackers can use timestamp information to extract private data. This has to do with modern processors speculating execution results and leaving observable data in memory as a side effect (see Spectre attack). Pretty complicated stuff, but the important thing to understand is that all APIs that produce any kind of accurate timestamp were limited to round their results. This is true for the performance.now() API as well. It’s rounded to 1-millisecond resolution in Firefox and to 100us in Chromium (us stands for microseconds).

By the way, Date.now() is rounded to 2 ms in Firefox (by default), and can be rounded up to 100 ms if the setting privacy.resistFingerprinting is enabled. So even after the new limitations, performance.now() will have better accuracy than Date.now().

There is a way to override the low-resolution limitation and allow performance.now() to output a higher resolution. This is done by adding a couple of headers that tell the browser that this site is cross-origin-isolated:
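Assuming your server can set response headers, the two headers in question look like this:

```http
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```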

When a site is cross-origin-isolated, it’s not vulnerable to cross-origin attacks where other sites can use the Spectre vulnerability and extract private data.

Note that when adding those headers, you won’t be able to show pop-ups or iframes from origins other than yours unless they are themselves configured to be loadable cross-origin.

This API is supported by all browsers in their latest versions.

3. Using performance.mark()

When investigating performance issues, I often find myself trying to find out which part of my functionality is responsible for the perf problem. This means I’ll need to measure and log every part’s execution time. One way to do that is with multiple variables, like so:
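A sketch of the multiple-variables approach (stepA and stepB are placeholders for the parts you want to time separately):

```javascript
// stepA and stepB are stand-ins for the parts of the functionality
// you want to time separately.
function stepA() { /* ... first part of the work ... */ }
function stepB() { /* ... second part of the work ... */ }

// Each phase needs its own pair of start/duration variables.
const startA = performance.now();
stepA();
const durationA = performance.now() - startA;

const startB = performance.now();
stepB();
const durationB = performance.now() - startB;

console.log(`stepA: ${durationA} ms, stepB: ${durationB} ms`);
```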

But that’s a bit cumbersome.

Alternatively, you can use the performance.mark() and performance.measure() APIs, like this:
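A sketch of the same measurement using marks and measures (the mark and measure names are arbitrary strings you choose):

```javascript
// stepA and stepB are stand-ins for the parts you want to time.
function stepA() { /* ... first part of the work ... */ }
function stepB() { /* ... second part of the work ... */ }

// Place named marks around each phase.
performance.mark("A_start");
stepA();
performance.mark("A_end");

performance.mark("B_start");
stepB();
performance.mark("B_end");

// Each measure computes the duration between two marks.
performance.measure("stepA", "A_start", "A_end");
performance.measure("stepB", "B_start", "B_end");

console.log(performance.getEntriesByType("measure"));
```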

This will log an array of PerformanceMeasure objects:
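Each entry has roughly the following shape — the numeric values here are purely illustrative and will differ on every run:

```javascript
// Illustrative shape only; startTime and duration values vary per run.
const measures = [
  { name: "stepA", entryType: "measure", startTime: 12.4, duration: 3.2 },
  { name: "stepB", entryType: "measure", startTime: 15.7, duration: 1.8 },
];
```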

At first glance, this doesn’t look like a huge improvement. You’ll still need to parse those PerformanceMeasure objects to log this data in a way that’s going to be easy to analyze later. But the big advantage of this API is that it’s really easy to measure execution time across several functions or classes, without needing to define nasty global variables. Well, you still need to define some strings if you want to set the mark IDs as consts (e.g. const markStart = "mark_start"). But even so, imagine instead having to define a bunch of global variables (startA, durationA, startB, durationB, …).

The accuracy of performance.mark() and performance.measure() is the same as performance.now()’s: you get rounded numbers that can become high-resolution timestamps if you define cross-origin isolation.

This API is supported by all browsers in their latest versions.

Best practices for efficient perf analysis

Once you’ve added all the time measurements to your code, you’ll deploy it to production and analyze the results in bulk. It takes time for your changes to reach production, and even more time to get significant results. If you miss something and need to add more changes, you’ll have to wait for the whole process again, so it’s better to do it right the first time. Here are some things to consider when adding performance measurements:

  • Adding too many logs can hurt performance and skew the results. Try not to add something that executes a thousand times per second.

  • Make sure your time measurements don’t cause any exceptions or logical errors. For example, some of your customers might use a really old browser that doesn’t support the performance API. You can check this with if (window.performance) { ... }. Or create a Stopwatch class that uses performance.now() if it’s supported, and Date.now() otherwise.
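A minimal sketch of such a Stopwatch class (the name and API are illustrative, not a built-in):

```javascript
// Prefers performance.now() when available and falls back to Date.now()
// on very old browsers. Class name and API are illustrative.
class Stopwatch {
  constructor() {
    this._now =
      typeof performance !== "undefined" && typeof performance.now === "function"
        ? () => performance.now()
        : () => Date.now();
    this._start = null;
  }

  start() {
    this._start = this._now();
  }

  // Returns the elapsed milliseconds since start().
  stop() {
    if (this._start === null) throw new Error("Stopwatch was not started");
    const elapsed = this._now() - this._start;
    this._start = null;
    return elapsed;
  }
}

const sw = new Stopwatch();
sw.start();
for (let i = 0; i < 1e5; i++); // some work to measure
const elapsedMs = sw.stop();
console.log(`Elapsed: ${elapsedMs} ms`);
```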

  • What happens when there’s an exception during execution? Will you still log the execution time? And is it going to be correct? A common bug happens when you mark the execution start, but the execution end is never reached because of an exception, so the end timestamp remains at its default or previous value. As a result, the duration might be negative. One way to deal with it is by using try..finally, like this:
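A sketch of the try..finally pattern (doWork is a placeholder for code that might throw):

```javascript
// doWork is a stand-in for the measured code, which might throw.
function doWork() { /* ... */ }

let lastDuration = 0;

function measuredWork() {
  const start = performance.now();
  try {
    doWork();
  } finally {
    // Runs whether doWork() returned normally or threw,
    // so the duration is always computed from the same start.
    lastDuration = performance.now() - start;
    console.log(`doWork took ${lastDuration} ms`);
  }
}

measuredWork();
```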

  • Do you suspect what’s going to be the performance issue? Can you place additional measurements in advance? If you do, you can save time waiting for another deployment to production. It’s not the cleanest approach, but it can certainly save some time.

  • Consider in advance the query you’re going to use to analyze the results. Make sure to report measurements in a format that’s going to be easy to analyze later, and to correlate them to the relevant features, users, state, and whatever else might be relevant.

  • Include in the logs whatever metadata you think will be useful, like the session ID, application state, browser info, etc.

Finishing up

We saw three different methods to measure execution time in JavaScript. All of them are rounded to about millisecond resolution by default, although the performance APIs can reach microsecond resolution if you add cross-origin isolation. This isolation is easy if you don’t embed 3rd-party iframes or pop-ups, but very difficult if you do.

We also talked about a bunch of best practices so that you’ll be most efficient when analyzing the results later on. Think ahead of what can go wrong and what to do to make sure analyzing the results will be as easy and effective as possible. Cheers everyone.


