The Angular conversion part 4: What we couldn’t automate

Amanda DaSilva
Published in Grubhub Bytes · Feb 19, 2019 · 6 min read



(This is the fourth and final part of our series on our conversion from AngularJS to Angular. Part one discussed our history with frameworks and why we decided to make the change. Part two covered some of the preliminary steps we took before running our conversion script. And part three gave you the script and detailed our process.)

With the JavaScript landscape constantly changing, a framework conversion is a challenge that most web teams will eventually tackle. On a high level, this kind of conversion is a matter of changing your application dependencies and updating the codebase to function with those dependencies. In reality, there is a lot more to a framework conversion than just making the application “work.” You also need the application to work well, and that requires adopting the best practices and ideologies of the new framework.

As the previous post in this series discusses, we were able to automate the boilerplate of the conversion from AngularJS to Angular fairly quickly. Completing the project, however, was a largely handcrafted process. This final post covers some of the lessons we learned while taking our app from something that “works” to a performant Angular app.

Automating a big chunk of our Angular conversion allowed us to start running our code within the new framework relatively quickly. But this first version of our converted app was a disappointment. Scrolling was janky, our bundles were huge, and the tooling was slow. It turns out that the Angular core team was right — migrating from AngularJS to Angular is not fully automatable. Our team spent the next few months learning Angular and its tooling to get our application in a state we could be proud of.

Zones

Angular made significant updates to its change detection triggers, and understanding those updates was key to improving our own app performance. Change detection in Angular is powered by a library called Zone.js. This library monkey patches global asynchronous functions on the page and triggers change detection cycles when those asynchronous tasks complete. This is clever because most applications don’t need to update the view unless there is a user interaction or new data on the page. This is the default change detection that ships out of the box with Angular.

In almost all cases, the default change detection works well and has good performance. However, sometimes you don’t want to use the default. One big exception we found is for asynchronous events that fire rapidly, such as scroll events. In the first version of our application, our pages had poor performance during scrolling. This was the result of our scroll listener triggering hundreds of change detection cycles during scrolling.

To solve this problem, Angular’s NgZone provider exposes a method, runOutsideAngular, that lets engineers run a function outside of the Angular zone so it does not automatically trigger change detection. After learning this, we made sure all rapid-fire events were explicitly run outside of zones. When these events needed to update the DOM, we triggered change detection explicitly, and only when necessary, using Angular’s ChangeDetectorRef provider. This simple change gave us a huge performance boost and resolved our scrolling jank.

An example of running code outside of NgZone

Aside from performance, Zones also have a huge impact on end-to-end testing with Protractor. After the conversion, we noticed that our e2e tests were running painfully slowly. We learned that Protractor uses the Angular zone to handle test synchronization: essentially, Protractor waits for all zone tasks to complete before moving on to the next step in the testing sequence. As a result, several long-running setTimeouts for things like notifications were causing serious delays during our tests. The solution was to run the setTimeout outside of zones, as described above, and then explicitly trigger change detection when the callback finally fires.

Lazy Loading

The new Angular is a framework that requires compilation, which dramatically changes the way you ship and deploy apps. The recommended way to ship a production app is now Ahead-of-Time (AOT) compilation: the AOT compiler runs as part of the build process and outputs performance-optimized code.

Introducing the AOT compiler to our Webpack production build required us to make many changes to our build process and codebase because it introduces and enforces a lot of stylistic and architectural patterns. Some significant changes include:

  • AOT templates can only access public class members.
  • There is no ability to add custom loaders for things like HTML files.
  • The compiler requires your code to use only a subset of JavaScript that is understood by the AOT collector.

We found that with so much happening inside a black box, it was challenging to create the custom tooling our application required.

One challenge we faced was adding support for our route-independent lazy modules. For example, we have an Angular module that contains all of our modal components. Since modals never load immediately after app bootstrap, we lazy load this module 10 seconds after the initial page load to reduce page load time. This presented a problem after the Angular upgrade, since the prescribed way to connect and compile a lazy module in Angular is through the router, and this module is route independent. We went through several implementations utilizing both Webpack plugins and Angular tools before arriving at a simpler solution that utilizes the router.

We solved this by adding an unreachable route referencing the module, which the Angular compiler would then compile in the context of our application, just the way we needed. To load lazy modules, we built a service that utilizes the loader functionality on the Angular router. Once a module is loaded, this service initializes it with an Angular Injector and provides references to the module’s component factories. These factory references are consumed by our lazy-module-outlet directive, which handles initializing a component from a factory and attaching it to the view. This solution is available on NPM.
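The unreachable-route trick can be sketched like this (the path, guard, and module names are illustrative, and the loadChildren string form matches the Angular versions of that era; in the real app the array is typed as Routes from @angular/router):

```typescript
// A guard that never allows activation: the route exists only so the AOT
// compiler discovers and compiles the lazy modals module with the app.
class NeverActivateGuard {
  canActivate(): boolean {
    return false; // the route can never actually be activated
  }
}

const routes = [
  // ...the real application routes...
  {
    path: 'unreachable-modals', // illustrative; never linked or navigated to
    loadChildren: './modals/modals.module#ModalsModule', // lazy module reference
    canActivate: [NeverActivateGuard],
  },
];
```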

An additional benefit to leveraging the Angular router in this way is the ability to manually pre-load any route modules when we anticipate users will need them soon — e.g., load the restaurant module after a user starts interacting with the search page. This gives us greater control of when modules are loaded in our application and has improved our user experience both on the initial load and when moving between routes.
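A simplified sketch of that manual pre-loading idea (Angular’s real preloading hooks work through the router and Observables; here it is reduced to a registry of loader functions, with illustrative names):

```typescript
// Simplified sketch: a registry of lazy-module loaders keyed by route
// path, triggered manually when we predict the user will need a module.
type ModuleLoader = () => Promise<unknown>;

class OnDemandPreloader {
  private loaders = new Map<string, ModuleLoader>();
  readonly loaded: string[] = [];

  register(path: string, load: ModuleLoader): void {
    this.loaders.set(path, load);
  }

  // e.g. preload('restaurant') once the user starts interacting with search
  preload(path: string): void {
    const load = this.loaders.get(path);
    if (load && !this.loaded.includes(path)) {
      load(); // kick off the chunk download in the background
      this.loaded.push(path);
    }
  }
}
```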

Unit Testing

The performance of the application our diners use is generally our biggest priority, but the performance of our developer tooling is also important for maintaining a healthy codebase, and the Angular conversion introduced a lot of new tooling. Our testing suites in particular were hugely impacted by these changes. In addition to Zones significantly slowing down our Protractor tests, our unit tests were running disappointingly slowly after the initial conversion.

Although building our huge spec bundles was time consuming, we were surprised to find that the unit tests themselves were also taking much longer to run post conversion. After investigating with the Chrome performance tab, we identified the cause: the new Angular testing tools recommend calling TestBed.configureTestingModule before each component test. It turns out that this function has very poor performance and was responsible for the slowdown. However, it is also a vital tool for testing components.

We refactored our unit tests and created a tool that lets us call TestBed.configureTestingModule only once for each component we unit test. This tool prevents the TestBed from resetting between tests and instead resets only the providers between each unit test. Implementing this decreased our component unit test runtime by about 70% for a suite of ~1200 component tests, a huge improvement for developers who run the unit tests repeatedly during development and for our continuous integration times.

Total post-build runtime of 1193 component tests:

  • Without improvement: ~100s
  • With improvement: ~30s

Code:

Our solution for speeding up Angular component unit tests with Jasmine

In Conclusion…

Finishing the conversion that we initially automated required us to get to know new tooling, adapt our code to a new set of best practices, and develop some creative solutions. This post mentions a few key learning moments, but there were many more we weren’t able to include, such as translations, code splitting, and server-side rendering.

Automating the first part of our conversion allowed us to get our application bootstrapped quickly. This enabled our engineers to spend more time learning and solving problems and less time migrating boilerplate. As much as we wanted to automate the entire process, we were lucky to strike this balance between automation and manual development. We hope that sharing these experiences will be helpful to other teams going through similar challenges.

Do you want to learn more about opportunities with our team? Visit the Grubhub careers page.
