Optimizing AngularJS: 1200ms to 35ms

Edit: Due to the level of interest, we’ve released the source code to the work described here: https://github.com/scalyr/angular.

Here at Scalyr, we recently embarked on a full rewrite of our web client. Our application is a broad-spectrum monitoring and log analysis tool. Our home-grown log database executes most queries in tens of milliseconds, but each interaction required a page load, taking several seconds for the user.

A single-page application architecture promised to unlock the backend’s blazing performance, so we began searching for an appropriate framework, and identified AngularJS as a promising candidate. Following the “fail fast” principle, we began with our toughest challenge, the log view.

This is a real acid test for an application framework. The user can click on any word to search for related log messages, so there may be thousands of clickable elements on the page; yet we want instantaneous response for paging through the log. We were already prefetching the next page of log data, so the user interface update was the bottleneck. A straightforward AngularJS implementation of the log view took 1.2 seconds to advance to the next page, but with some careful optimizations we were able to reduce that to 35 milliseconds. These optimizations proved to be useful in other parts of the application, and fit in well with the AngularJS philosophy, though we had to break a few rules to implement them. In this article, we’ll discuss the techniques we used.


A log of GitHub updates, from our live demo.

An AngularJS log viewer

At heart, the Log View is simply a list of log messages. Each word is clickable, and so must be placed in its own DOM element. A simple implementation in AngularJS might look like this:

<span class='logLine' ng-repeat='line in logLinesToShow'><span class='logToken' ng-repeat='token in line'>{{token | formatToken}} </span><br></span>
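The controller code behind this template isn’t shown here, but the ng-repeat expressions imply that logLinesToShow is an array of lines, each itself an array of token strings. A minimal sketch of producing that shape (tokenizeLog is a hypothetical helper, not taken from the Scalyr code):

```javascript
// Hypothetical helper: split raw log text into the nested array shape the
// template above iterates over -- one array per line, one entry per
// clickable word. The real controller presumably does something similar.
function tokenizeLog(rawText) {
  return rawText.split('\n').map(function (line) {
    return line.split(' ');
  });
}
```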

One page can easily have several thousand tokens. In our early tests, we found that advancing to the next log page could take several agonizing seconds of JavaScript execution. Worse, unrelated actions (such as clicking on a navigation dropdown) now had noticeable lag. The conventional wisdom for AngularJS says that you should keep the number of data-bound elements below 200. With an element per word, we were far above that level.


Using Chrome’s JavaScript profiler, we quickly identified two sources of lag. First, each update spent a lot of time creating and destroying DOM elements. If the new view has a different number of lines, or any line has a different number of words, Angular’s ng-repeat directive will create or destroy DOM elements accordingly. This turned out to be quite expensive.

Second, each word had its own change watcher, which AngularJS would invoke on every mouse click. This was causing the lag on unrelated actions like the navigation dropdown.

Optimization #1: Cache DOM elements

We created a variant of the ng-repeat directive. In our version, when the number of data elements is reduced, the excess DOM elements are hidden but not destroyed. If the number of elements later increases, we re-use these cached elements before creating new ones.
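This isn’t the actual sly-repeat source, but the reuse strategy can be sketched with plain objects standing in for DOM elements (RepeatCache and its fields are hypothetical names):

```javascript
// Simplified model of the element-caching strategy: reuse cached elements
// where possible, create new ones only when the cache runs out, and hide
// (not destroy) any excess elements left over from a previous, longer list.
function RepeatCache(createElement) {
  this.createElement = createElement; // factory for a new "element"
  this.elements = [];                 // every element ever created, in order
}

RepeatCache.prototype.sync = function (items) {
  for (var i = 0; i < items.length; i++) {
    if (i >= this.elements.length) {
      this.elements.push(this.createElement()); // cache miss: create
    }
    this.elements[i].hidden = false;            // reuse and show
    this.elements[i].data = items[i];           // rebind the data
  }
  // Hide, but keep, any elements beyond the current item count.
  for (var j = items.length; j < this.elements.length; j++) {
    this.elements[j].hidden = true;
  }
  return this.elements;
};
```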

Optimization #2: Aggregate watchers

All that time spent invoking change watchers was mostly wasted. In our application, the data associated with a particular word can never change unless the overall array of log messages changes. To address this, we created a directive that “hides” the change watchers of its children, allowing them to be invoked only when the value of a specified parent expression changes. With this change, we avoided invoking thousands of per-word change watchers on every mouse click or other minor event.  (To accomplish this, we had to slightly break the AngularJS abstraction layer. We’ll say a bit more about this in the conclusion.)
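A simplified model of this aggregation, with a plain guard function standing in for the parent expression (AggregatedWatchers is a hypothetical name; the real directive works by intercepting Scope.$watch, as described in the conclusion):

```javascript
// Simplified model of watcher aggregation: child watchers are collected
// rather than registered globally, and run only when the value of a guard
// expression has changed since the last digest.
function AggregatedWatchers(getGuardValue) {
  this.getGuardValue = getGuardValue;
  this.lastGuard = undefined;
  this.watchers = [];
}

AggregatedWatchers.prototype.watch = function (fn) {
  this.watchers.push(fn);
};

// Called on every digest; returns how many child watchers actually ran.
AggregatedWatchers.prototype.digest = function () {
  var guard = this.getGuardValue();
  if (guard === this.lastGuard) return 0; // skip thousands of watchers
  this.lastGuard = guard;
  this.watchers.forEach(function (fn) { fn(); });
  return this.watchers.length;
};
```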

Optimization #3: Defer element creation

As noted, we create a separate DOM element for each word in the log. We could get the same visual appearance with a single DOM element per line; the extra elements are needed only for mouse interactivity. Therefore, we decided to defer the creation of per-word elements for a particular line until the mouse moves over that line.

To implement this, we create two versions of each line. One is a simple text element, showing the complete log message. The other is a placeholder which will eventually be populated with an element per word. The placeholder is initially hidden. When the mouse moves over that line, the placeholder is shown and the simple version is hidden. Showing the placeholder causes it to be populated with word elements, as described next.
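The lazy-population idea can be sketched in isolation (LazyLine is a hypothetical name; splitting on spaces stands in for building the per-word DOM elements):

```javascript
// Simplified model of deferred per-word element creation: the tokenized
// version of a line is built only the first time the line is shown (on
// mouseenter), never during the initial page render.
function LazyLine(text) {
  this.text = text;   // simple one-element version, always available
  this.tokens = null; // per-word version, built on demand
}

LazyLine.prototype.show = function () {
  if (this.tokens === null) {
    this.tokens = this.text.split(' '); // stand-in for creating word elements
  }
  return this.tokens;
};
```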

Optimization #4: Bypass watchers for hidden elements

We created one more directive, which prevents watchers from being executed for an element (or its children) when the element is hidden. This supports Optimization #1, eliminating any overhead for extra DOM nodes which have been hidden because we currently have more DOM nodes than data elements. It also supports Optimization #3, making it easy to defer the creation of per-word nodes until the tokenized version of the line is shown.

Here is what the code looks like with all these optimizations applied. Our custom directives are the attributes prefixed with sly-.

<span class='logLine' sly-repeat='line in logLinesToShow' sly-evaluate-only-when='logLines'><div ng-mouseenter='mouseHasEntered = true'><span ng-show='!mouseHasEntered'>{{line | formatLine}} </span><div ng-show='mouseHasEntered' sly-prevent-evaluation-when-hidden><span class='logToken' sly-repeat='token in line'>{{token | formatToken}} </span></div></div></span>



sly-repeat is our variant of ng-repeat, which hides extra DOM elements rather than destroying them. sly-evaluate-only-when prevents inner change watchers from executing unless the “logLines” variable changes, indicating that the user has advanced to a new section of the log. And sly-prevent-evaluation-when-hidden prevents the inner repeat clause from executing until the mouse moves over this line and the div is displayed.

This shows the power of AngularJS for encapsulation and separation of concerns. We’ve applied some fairly sophisticated optimizations without much impact on the structure of the template. (This isn’t the exact code we’re using in production, but it captures all of the important elements.)


To evaluate performance, we added code to measure the time from a mouse click until Angular’s $digest cycle finishes (meaning that we are finished updating the DOM). The elapsed time is displayed in a widget on the side of the page. We measured performance of the “Next Page” button while viewing a Tomcat access log, using Chrome on a recent MacBook Pro. Here are the results (each number is the average of 10 trials):

                      Data already cached   Data fetched from server
Simple AngularJS      1190 ms               1300 ms
With Optimizations    35 ms                 201 ms

These figures do not include the time the browser spends in DOM layout and repaint (after JavaScript execution has finished), which is around 30 milliseconds in each implementation. Even so, the difference is dramatic; Next Page time dropped from a “stately” 1.2 seconds, to an imperceptible 35 ms (65 ms with rendering).

The “data fetched from server” figures include time for an AJAX call to our backend to fetch the log data. This is unusual for the Next Page button, because we prefetch the next page of logs, but may be applicable for other UI interactions. But even here, the optimized version updates almost instantly.
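The timing harness itself is just a monkey-patch around $digest (in the real app it is installed via a $provide decorator on $rootScope). A generic sketch, wrapping a plain object’s method so it runs standalone:

```javascript
// Generic sketch of the measurement approach: wrap a method so each call
// records its elapsed time and reports it to a listener. In the real app
// the wrapped method is $rootScope.$digest; here a plain object's method
// stands in so the sketch is self-contained.
function instrument(obj, methodName, onTiming) {
  var original = obj[methodName];
  obj[methodName] = function () {
    var start = Date.now();
    var result = original.apply(this, arguments);
    onTiming(Date.now() - start); // elapsed milliseconds
    return result;
  };
}
```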


This code has been in production for two months, and we’re very happy with the results. You can see it in action at the Scalyr Logs demo site. After entering the demo, click the “Log View” link, and play with the Next / Prev buttons. It’s so fast, you’ll find it hard to believe you’re seeing live data from a real server.

Implementing these optimizations in a clean manner was a fair amount of work. It would have been simpler to create a single custom directive that directly generated all of the HTML for the log view, bypassing ng-repeat. However, this would have been against the spirit of AngularJS, bearing a cost in code maintainability, testability, and other considerations. Since the log view was our test project for AngularJS, we wanted to verify that a clean solution was possible. Also, the new directives we created have already been used in other parts of our application.

We did our best to follow the Angular philosophy, but we did have to bend the AngularJS abstraction layer to implement some of these optimizations. We overrode the Scope’s $watch method to intercept watcher registration, and then had to do some careful manipulation of Scope’s instance variables to control which watchers are evaluated during a $digest.

Next time

This article covered a set of techniques we used to optimize JavaScript execution time in our AngularJS port. We’re big believers in pushing performance to the limit, and these are just some of the tricks we’ve used. In upcoming articles, we’ll describe techniques to reduce network requests, network latency, and server execution time. We may also discuss our general experience with AngularJS and the approach we took to structuring our application code — if you’re interested in this, let us know in the comments. For more discussion on AngularJS performance, you can also read this article by Gleb Bahmutov.

Obligatory plug

At Scalyr, we’re all about improving the DevOps experience through better technology. If you’ve read this far, you should probably read more about the Scalyr log management tool.

64 thoughts on “Optimizing AngularJS: 1200ms to 35ms”

  1. Nice work, I’m impressed. I’d love to see some of these changes finding their way to Angular core, as we are building more and more applications that are starting to test the limits of Angular.

  2. Great work. That log app is amazing. So fast.
    Question – Since you overrode angular scope’s $watch method, does that mean if in angular’s next release they modify the $watch method you’re going to modify yours too?

    1. Yes, with the current implementation’s approach, if the AngularJS team changed the $scope.$watch implementation to use a different non-public variable that we depend on, our change would break and we would have to fix it. This is the minor way the optimization ‘breaks the abstraction layer’, but it can be addressed either through a change in the public AngularJS API or some other techniques we are currently investigating based on other feedback from this post.

  3. Looks like some nifty tricks, and ones that I might use in the future, but I don’t quite follow the “why.” It seems like an awful lot of hacks just to get a single word. You could do this all with a single DOM element. Use an element with “white-space: pre;” to preserve line breaks (your screenshot shows all of the text styled the same). This one would never get updated unless you paginate.

    To find which word was clicked, just bind a click event to the single element and use some JavaScript-fu on the click position to determine which word was clicked.


    Here is an example where I’m keeping a background copy of the data split on newlines. Then when you click, it divides Y by line height to pick the correct line out of the array and then drops that into a hidden element and loops through to get the word for the x position. This feature could be dropped into a directive instead of using several directives that kind of break the way angular works.

    1. Thanks, Ken, for the example of how to determine which word was clicked using a hidden DOM element. We didn’t go into all of the details of the ‘why’ for the sake of brevity, but we perform other operations with the tokens after they have been clicked. For example, we add different background colors to indicate a word has been selected, and we even allow for a drag-selection operation over all of the words. For those, it is much easier to have the line broken up into separate DOM elements. Of course there might be more tricks we could do, so we are always looking for other suggestions. Thanks!

  4. I’m currently trying to optimise my AngularJS project so I’m very interested in taking a look at these customised directives. Please share the code soon!

  5. Great write up. Just wondering – can you give a breakup of the performance boost you got with each optimization. We’re trying to optimize our Angular app and were wondering how we should prioritize these optimizations. Thanks.

  6. Really good article – thanks for posting.

    “We may also discuss our general experience with AngularJS and the approach we took to structuring our application code — if you’re interested in this, let us know in the comments.”

    Yes please

  7. Excellent writeup!
    I’m working on an AngularJS-fronted application now, having recently come from an EmberJS-fronted project. You’ve hit a couple of optimization concepts I’ve been trying to sort out (and have saved me some time testing by stating your experience here), and have given me a few new ones to consider… thank you for sharing!

    +1 for “We may also discuss our general experience with AngularJS and the approach we took to structuring our application code — if you’re interested in this, let us know in the comments.”

  8. All of this should have been replaced with two event delegations: one for row hovering that adds spans on the fly (no pre-generation; too many DOM elements), and one click handler that finds the text value from the content of the event source.

    This whole article describes how angular is making developers forget best practice.

  9. > Implementing these optimizations in a clean manner was a fair amount of work. It would have been simpler to create a single custom directive that directly generated all of the HTML for the log view, bypassing ng-repeat.

    So in the end, not using angular would have saved a lot of time, and probably even yielded better performance.
    https://tbpl.mozilla.org/ can generate ~7000 elements which are a lot more sophisticated than “tokens” and grouped in a more complex way than `for X in line` in ~280ms of which ~210ms is creating the DOM through `.innerHTML`. So the DOM in fact does account for quite a lot.

    Still, we do it ahead of time, and not only on hover.

    And you know the best thing? You can maintain, profile and debug that code. When something is slow, you know exactly which function it is, what to optimize. With angular you spend 98% of your time in $digest and you have absolutely no chance to know why.

  10. First, I’d like to say that I’m an AngularJS noob.
    In Optimization #2, you said “All that time spent invoking change watchers was mostly wasted”. Is that because AngularJS tries to allow users to write $scope.variable = “new value” and doesn’t want to use observables like Knockout?

    1. Knockout would likely have a similar performance characteristic in the unoptimized version. It would be easier to implement the hacks described above with the already existing knockout bindings.

      Event delegation and on demand row replacement would still outperform such a solution and be less complex

  11. Hi Scalyr,
    Very informative article, thanks for sharing.
    One clarification needed: may I know which tool you are using for this performance measurement?
    How can we really come up with performance metrics for our changes?

    1. As noted briefly in the post: we added a bit of JavaScript code to our page for this purpose. The code measures the time from a mouse click until Angular’s $digest cycle finishes (meaning that we are finished updating the DOM). It then displays that interval in a widget on the side of the page. It’s just a few lines of JavaScript.

      1. If you don’t mind me asking, is the detection of the $digest cycle finishing coming from a $rootScope.$evalAsync? I’m trying to measure some performance as well, so I am interested in the answer as well.

      2. Replying to David’s comment. As with anything in Angular, there are probably several different ways to detect the finish of the $digest cycle. The approach we used was a $provide decorator on the $rootScope service to monkey-patch $rootScope.$digest. In the override method, we simply record the start time, invoke the original $digest method, and then capture the end time. We have a listener registration interface that allows other services to be notified of how long the latest $digest cycle took.

  12. Both your original solution and the “optimisations” are hideously inefficient. Take a look at this working, widely compatible solution I whipped up in a few minutes: http://jsfiddle.net/vm6X3/2/. Does *not* require making extra random elements. Licensed under the MIT License. Go wild. You will need the 1.3alpha version of `rangy` and its `textrange` module.

    1. I have to agree that making every single word a watchable expression was extremely inefficient. There is absolutely no problem IMHO with venturing outside of the Angular digest cycle for optimization purposes, and a simple click handler (like others pointed out) would’ve been a much simpler solution.

  13. Is there a link to the demo? The linked page does not have an option for a demo any more, and I’m very interested in trying this out.

      1. Sign up gives me:

        Oops! Scalyr was unable to display this page (code 500).

        The error has been logged and someone will investigate shortly.

    1. Sorry about the error — that’s what I get for posting too quickly. The demo link I gave you was bad, and would lead to an error if you then try to sign up. I’ve fixed the link. Meanwhile, if you just go to http://www.scalyr.com, you should be able to sign up from there.

  14. You mentioned you added code to evaluate the performance. Can you please provide the logic or code snippets you used to measure it, from mouse click to DOM update? I am in need of such a solution.
    I will be thankful to you. Thanks!!

  15. I’m curious to know what you feel have been the overall benefits of using AngularJS in your site rewrite and whether they have outweighed all of the costs, including this time you have spent shoe-horning functionality like the log page into it.

    All frameworks give you advantages so long as your needs fit within the boundaries of what they can deliver. But if you build applications that are complex, or need to run especially fast as is your case, you always end up having to hack the framework, which might take longer than if you’d just written it all from scratch using lower level libraries.

    I am investigating Angular as a framework for my next project, and trying to work out what gotchas will bite me down the track. Your blog was excellent in helping to assess that, but didn’t leave me with a sense of whether it was all worth it! 🙂

  16. AngularJS 1.3 introduces one-time binding to reduce watching and digest loops which is relevant to this.


    One-time binding
    An expression that starts with :: is considered a one-time expression. One-time expressions will stop recalculating once they are stable, which happens after the first digest if the expression result is a non-undefined value.

  17. Big help; I have been trying to filter and refresh a large list using ng-repeat, with major lag issues, and this completely solved it.
    Thank you.

    A question, though: what of things like this?
    $scope.destinations = {
      someplace: 20,
      anotherplace: 30
    };
    ng-repeat="(name, count) in destinations"

  18. Why use sly-prevent-evaluation-when-hidden? To me it sounds like it does the same as ng-if.
    What is the difference?

      1. Seems one of the optimization techniques is based on not creating/destroying so many DOM elements. As I understand it, seems in some scenarios like their log viewer this approach of “hiding + avoiding evaluation” is noticeably faster than “destroying DOM / rendering DOM”. I reckon in many other cases you can simply use ng-if.

  19. Thank you very much. I’m really impressed with your slyRepeat directive! For my application the execution time goes from 9700ms to 400ms (with the AJAX query). This is a must-have! And thank you for making this code open source.

  20. Great work guys.
    Are you planning to bump up your Angular version in the near future?
    I’m using Angular 1.2.26 @ metric.pt with sly-repeat and sly-show and up until now I didn’t have any problems (still in dev for a week or so). We still need to work some issues in our interfaces that are too heavy in scopes and bindings but we have an interface that contains a list of user files, imported into our platform and some user lists are unusable if a user requests more than 1 month of imported data.

  21. Brilliant write up guys.
    Reading articles like this is when I wish I was working at a startup or a small company.
    Instead I’m stuck working on enterprise systems where the use of technology is limited by corporate policies.


  23. Interesting article. I have just a small comment. As you were already prefetching the next page of log data, a straightforward AngularJS implementation of the log view may appear to take 0ms plus rendering time by pre-creating the next page also 🙂

  24. Thank you very much for this article and sharing the code. I just fixed an unusable “smart-table + ng-repeat” page by simply adding `sly-evaluate-only-when` (on Angular 1.4.4). Congrats for the clean and angular-focused approach. A couple of points:

    1) Your technique 2 for aggregating watchers (using `sly-evaluate-only-when`) is exactly what I was looking for. Why does Angular insist on evaluating so many watchers every cycle, when many times we know they won’t change? In my opinion this optimization technique should be built into Angular (i.e. `ng-evaluate-only-when` and `ng-evaluate-always`). Do you know if the Angular team is aware of this?

    2) Using this module seems to be preventing the Batarang Chrome extension from working, so I no longer have performance indicators (number of watchers, cycles per second…). Do you have any clue why this could be happening, and eventually how to resolve it?

  25. Hi,
    I am building dropdown controls, and the options for each dropdown, dynamically on the server side using a string builder. While building them, I apply ng-model and ng-change to each dropdown control.
    I have more than ten dropdown list controls, each with more than 500 options, all built dynamically.
    AngularJS is taking too much time to load the dropdown controls, and I am hitting an issue (localhost is not responding due to a long-running script).
    Could you suggest something?

  26. Is this still valid in AngularJS 1.6 or Angular (2+)? We use Kendo UI (the jQuery-based one, not the Angular-based one) with AngularJS and we’re suffering; a typical page has more than 4000 watchers! We’ve tried one-time binding (::) but it can only do so much.

      1. I’m afraid I didn’t make myself clear. I’m not asking whether you still support your library with the latest versions of Angular, I’m just asking if Angular still suffers from the same issues on the latest versions.

        1. Ah gotcha. They have made some improvements along these lines in Angular 2 with how they reworked component-level change detection, but it doesn’t completely solve the problem. It is very easy for a naive user of Angular to run into the same performance problems we discussed in our blog post.

        2. Maybe I can help you. I’m working on a new project (migrating an existing project from ExtJS to AngularJS), and the problem was that the backend used to serve big objects, all of which were then bound to the DOM. This kind of application is doomed to be slow because of the way Angular binding works (dirty checking); if you are having performance issues, maybe you should look for another technology or change the app design.

          1. Thank you Daniel, but I’m afraid I cannot do much to change the business or the back-end. We already told our business analysts that they should strive to keep their forms simple, with as few fields/components as possible at any given time (which would be a better UX for the users too), but in the end if the client wants a huge form with lots of grids (as they usually do in my line of work), we need to deliver.
            But just out of curiosity, why would you want to migrate from ExtJs to AngularJS in the first place? Imho, the latest ExtJs is far, far better to use in business/corporate applications, because with Angular you still need to pick a library of widgets, especially complex ones like data grids. In a way, it’s like the old days of jQuery and jQuery UI. And these libraries, especially if they’re not written specifically for Angular (which is the usual case), come with their own baggage. In fact, Sencha recently released ExtJs for React, which makes me think that they chose React over Angular because of these performance issues.

          2. One other option is to look into using $broadcast and $on to trigger events for an excessively large collection of objects

          3. I’m from Argentina; here the small companies can’t afford to pay developers plus a lot of licences, and paying for a front-end framework is not on the chiefs’ minds, so ExtJS is not an option. I understand what you told me about the forms and the back-end. Here at my work they tried to do only a front-end refactor (replacing ExtJS with AngularJS), but the result was a messy front-end, and the ideas of reusability and atomicity were lost. Today, when they saw the result, they were convinced that a better UX is not only a better user experience: it’s about not showing more data than is needed, which translates into smaller queries and forces us to have a less coupled back-end. If the idea is only to change the front-end, maybe AngularJS is not the framework you are looking for.
