Using Browsers’ performance tools to find bottlenecks in your Web App

Jakub Janczyk · Published in Pragmatists · 9 min read · Jan 9, 2020


Among the different ways of measuring the performance of your web application, there’s one that stands out from the crowd. And it doesn’t require you to install any new tools besides the one you’re probably already using. As you might have guessed — it’s the browser of your choice. I’m using Chrome for my web development, so in this article, I want to show you how to use the integrated Dev Tools both to measure performance and to troubleshoot slow applications. I also want to demonstrate how to pinpoint why your application might be slow, so that you can fix it easily. And no matter which browser you prefer, you can do much the same in any other modern browser. Let’s get started!

When you open Chrome Dev Tools (F12, Ctrl+Shift+I, or right-click + Inspect), you’ll find a tab called “Performance”, as in the following screenshot.

It’s a really powerful tool that is ready to use, no effort required. But I have to admit it can seem complicated at first — at least it did for me. Don’t, however, let this discourage you. It’s really worth learning how it works, thanks to the many benefits it provides. I’m now going to introduce you to it. If you’re already familiar with it, perhaps you’ll still find something useful — something you didn’t know. Let’s dive in!

Why do we need to measure performance?

For years, web applications have been getting increasingly complex. After all, we create them in order to perform more and more operations. That’s okay — browsers are getting better as well, enabling them to handle those operations faster. However, as with everything — there are limitations. In the case of web applications, the effect of reaching the limit of a browser’s capabilities is that your application will freeze (and possibly, even the browser tab will crash)! This is really bad for user experience.

In light of the above, you might want to measure your application’s performance and find any bottlenecks you might have. In my example, simply by measuring the performance once, we were able to detect and fix a piece of our code that was responsible for a very significant percentage of computations on a page.

Chrome Dev Tools Performance Tab

Example usage of Chrome Performance Tools

The basic control for the Performance tab is the record button in the top left corner. As the instruction says, clicking it starts recording what’s happening on your page. I’ll show you what the results of such a recording look like in a moment. Before you click it, however, it’s worth exploring the available options. You can choose to add screenshots or memory snapshots to your results. Important options are hidden under the cog in the top right corner. Opening that menu reveals settings for slowing down your network or throttling the CPU. These matter because you’re probably developing your application on quite a powerful machine, in an environment you control — but it’s very likely that your users will have slower machines. Throttling lets you easily put yourself in their position.

Recording a Profile

When we have basic options set up, it’s time to start recording our application’s behavior and see some results!

As you start recording, you’ll see something like the following.

You can measure your performance for as long as you want, but be aware that your browser might crash if you do it for too long (due to the memory required for storing intermediate results). Once you’ve started recording, it’s time to perform some actions on your page; specifically, the ones you want to measure. Just go ahead and click some buttons, type some text — do whatever your users would be doing on your page. When you’re ready, click “Stop”.
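Besides interacting with the page by hand, you can also annotate a recording from your own code with the User Timing API. Marks and measures created this way show up in the “Timings” track of the Performance profile, which makes it easy to correlate your own operations with what the browser was doing. A minimal sketch (the function being timed is hypothetical):

```javascript
// Hypothetical work we want to see in the Performance profile.
function mergeBatches() {
  let total = 0;
  for (let i = 0; i < 1e6; i++) total += i;
  return total;
}

// Mark the start and end of the interesting operation...
performance.mark('batch-merge-start');
mergeBatches();
performance.mark('batch-merge-end');

// ...and turn the two marks into a named measure. It will appear
// in the "Timings" track of a Chrome Performance recording.
performance.measure('batch-merge', 'batch-merge-start', 'batch-merge-end');

const [entry] = performance.getEntriesByName('batch-merge');
```

The same API works in Node.js (via the global `performance` object), so you can reuse such instrumentation in tests.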

Now, here’s where all the magic happens!

Profile details

As a result of Performance measurements, there will be several charts and numbers available. First of all, at the very bottom, you’ll find a timeline presenting what was happening in your application in a given time frame.

Colors let you distinguish what was actually happening — for example, orange tells us that time was spent executing JavaScript, while purple indicates the rendering phase, and so on. You’ll see all the colors with descriptions in a moment.

Following this, you have several more detailed timelines, as shown in the screenshot below. There you can discover details about network interactions, the Main thread, user interactions (like mouse clicks), and several more.

Flame Graphs

I’m going to focus here on the Main thread timeline, because that’s where our code is by default. It contains a nice flame graph that shows you exactly what was going on in your application — what functions were called, which event handlers were fired, etc.

At the top, there is usually a top-level function/handler being called. Going down, you can follow the call stack. The bar’s length indicates how long it took for a given function to execute. This is one of the places where you can find out which function is taking a long time to run, and as a result, slowing down your application. I will get back to that later.

Time distributions

Next, below all of this you have several more tabs, as in the screenshot.

Of these, I find the first two the most useful. The first one, Summary, shows you the percentage/time distribution of what is taking the longest in your application. These are split into several categories defined by the browser. You can see here the assignment of colors to the different kinds of processing.

The other one is even more intriguing. It lets you discover which functions were executing for the longest time period.

Self Time indicates the time that was actually spent inside a given function (without time spent in other functions, called from this one). Total Time shows time from entering the function to exiting it completely (including all nested function calls). This is the other tool that lets you easily identify bottlenecks in your code.
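The distinction is easiest to see with two nested functions. In the sketch below (illustrative names, not from the article’s code base), `outer` would show a large Total Time in the profiler, because it waits for `inner`, but a tiny Self Time, because it does almost no work of its own:

```javascript
// Almost all of the expensive work lives here, so in a profile
// "inner" would have a large Self Time.
function inner() {
  let s = 0;
  for (let i = 0; i < 5e6; i++) s += i;
  return s;
}

// "outer" mostly just calls inner(): its Total Time includes all of
// inner's time, but its Self Time is close to zero.
function outer() {
  const before = performance.now();
  inner();
  const after = performance.now();
  return after - before; // roughly inner's share of outer's total time
}

const elapsed = outer();
```

So when hunting bottlenecks, sort by Self Time to find where the work actually happens, and use Total Time to see which call paths lead there.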

Now, with a theoretical introduction behind us, let me show you a more concrete example.

Finding bottlenecks in real applications

In one of the projects I was working on, I had a situation you might have encountered yourself. Our application was working fine, until we ran it with the data of the biggest client that would be using it. It became unusable, with very frequent freezes making it impossible to do anything. Luckily, we had Chrome Dev Tools to identify the issue.

To give a bit more context: as part of this application, we were loading some data in batches. Every 5 seconds, after the previous batch had finished loading, another one came in. This data was then merged and displayed in the form of charts. Every 5 seconds: new data arrived, charts were updated, and so on. Nothing really complicated.
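In code, the loop looked roughly like the sketch below (all names are made up for illustration; the real app used React/Redux and Lodash for the merge):

```javascript
// Data accumulated so far, rendered as charts.
let chartData = [];

// Naive merge of a new batch, deduplicating by id.
function mergeBatch(existing, batch) {
  const seen = new Set(existing.map(point => point.id));
  return existing.concat(batch.filter(point => !seen.has(point.id)));
}

function handleBatch(batch) {
  chartData = mergeBatch(chartData, batch);
  // in the real app, the charts were redrawn here
}

// A new batch arrived roughly every 5 seconds:
// setInterval(() => fetchBatch().then(handleBatch), 5000);
```

The important detail for what follows is that the merged list keeps growing, so the merge step runs on a larger input every 5 seconds.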

Initial measure

When we first ran our Performance measurements, using a significant volume of data, we saw this chart.

As you can see, there is visible processing of data, happening around 5 seconds after the previous occurrence (as expected). But every subsequent processing takes much longer than the previous one! Something was clearly wrong here. Harnessing the full potential of Dev Tools, I decided to delve deeper into the issue.

Detecting an issue

At first, I looked into the flame graph for the Main thread. For the sake of clarity, the following screenshot presents only part of the 90-second time frame I used — around 20 seconds of it. If you ignore most of the top bars (which are React/Redux stuff), you can see that at the bottom there is one last bar labeled (anonymous). Below that, there are several really small bars.

When I focused on the long bar, it turned out that this one function took 18 seconds to run — out of around 20 seconds of processing!

Then, when I focused on one of the smaller bars below, it wasn’t that bad — up to a few milliseconds for each.

However, given their sheer number, they added up to a combined 18 seconds. And I knew which function was being executed so many times: it was baseUniq, from the Lodash library.

So, something here probably needed fixing. But where exactly was it called? The label (anonymous) doesn’t give us much information (it was an anonymous arrow function, with no name, of course). However, if you look closely, you can see that a few bars above it there is one labeled myFunction, probably coming from our code base. We now knew where to look for the bottleneck in our code and could do something about it.

But before I did, I looked into the Bottom-Up tab available in the Performance Dev Tools. I sorted it by the Self Time column, and everything became clear — baseUniq took almost 35 seconds out of the 90-second time frame!

That’s huge. I expanded the call stack details for this function and was able to see not only which of my functions called it, but also what my (anonymous) function was — it turned out to be unionWith from Lodash! I was using it to merge the lists of existing and new data for our charts, since there could be duplicates and we wanted to avoid them.
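To see why a comparator-based union gets slower with every batch, here is a rough plain-JS sketch of what such a function does (an illustration of the general algorithm, not Lodash’s actual implementation):

```javascript
// Roughly what a comparator-based union such as
// _.unionWith(existing, incoming, comparator) has to do:
// for every incoming item, scan everything merged so far.
function unionWithNaive(existing, incoming, eq) {
  const result = existing.slice();
  for (const item of incoming) {
    // O(n) comparator calls per item -> O(n * m) overall,
    // and n grows with every batch that arrives.
    if (!result.some(present => eq(present, item))) {
      result.push(item);
    }
  }
  return result;
}

const merged = unionWithNaive([1, 2], [2, 3], (a, b) => a === b);
```

With a deep-equality comparator on objects, each of those pairwise comparisons is itself expensive, which matches the ever-growing bars we saw in the profile.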

Of course, an approach like this works best with non-minified code, where file and function names are not obfuscated. Otherwise, it might be harder to identify the exact place in your code that is causing trouble. Either profile in development mode, or use source maps to get readable names.
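If your build pipeline uses webpack, enabling source maps for production builds is a one-line change (a minimal sketch, assuming webpack; other bundlers have equivalent options):

```javascript
// webpack.config.js — emit full source maps so that production
// profiles in Dev Tools show readable file and function names.
module.exports = {
  mode: 'production',
  devtool: 'source-map',
};
```

Dev Tools picks the source maps up automatically, as long as they are served alongside (or reachable from) the minified bundles.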

After a fix

I won’t bore you with more details of the fix. Long story short, simply switching to unionBy almost removed the bottleneck. Now the chart looks much nicer, as you can see in the screenshot below. The results in the Bottom-Up tab are also far less alarming.
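The reason the switch helps is that a key-based union can deduplicate with hash lookups instead of pairwise comparator calls. A plain-JS sketch of the idea (again an illustration, not Lodash’s actual code, assuming each item carries an `id` field):

```javascript
// Roughly what a key-based union such as
// _.unionBy(existing, incoming, 'id') can do: one Set lookup per
// item instead of scanning the whole merged list -> roughly O(n + m).
function unionByKey(existing, incoming, key) {
  const seen = new Set();
  const result = [];
  for (const item of existing.concat(incoming)) {
    const k = item[key];
    if (!seen.has(k)) {
      seen.add(k);
      result.push(item); // first occurrence of each key wins
    }
  }
  return result;
}

const merged = unionByKey(
  [{ id: 1 }, { id: 2 }],
  [{ id: 2 }, { id: 3 }],
  'id'
);
```

Going from quadratic comparator calls (with deep equality on each) to a linear pass over the data is exactly the kind of change that turns 35 seconds into a negligible value.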

After that, thanks to the help of Dev Tools, we were able to make a few more optimizations that significantly improved our application’s performance.

Summary

Even though this might be quite an extreme example (going from 35 seconds on one function, to an almost negligible value), I think it nicely shows how using Performance measurements can help keep your applications faster, more responsive and reliable in general.

I hope you find it useful, and if you haven’t already done so, you’ll review your app’s performance. Who knows? Maybe there are some pieces of code you could optimize to make your users even happier!

Stay tuned for upcoming articles in this series about improving your applications’ performance. The next one will cover everything you need to know about Web Workers — namely, how we’ve used them to take even more load off the main thread and make our application even smoother!
