Improving frontend performance with code splitting in React

Overview

The Open edX platform consists of numerous micro-frontends (MFEs): Single-Page Applications (SPAs) implemented with React and supplied data by RESTful APIs powered by Django services.

When a user loads a micro-frontend in the browser, a request waterfall is started. The browser must first parse the HTML markup before loading any specified CSS, JavaScript, or images:

1. |-> Markup
2.    |-> CSS
2.    |-> JS
2.    |-> Image

Then, the request waterfall continues serially if you fetch CSS inside a JS file (double waterfall):

1. |-> Markup
2.    |-> JS
3.       |-> CSS

And if that CSS fetches a background image, it becomes a triple waterfall, where the image is blocked by several other requisite requests:

1. |-> Markup
2.    |-> JS
3.       |-> CSS
4.          |-> Image

Each level of the waterfall represents at least one roundtrip to the server, unless the resource is locally cached. Because of this, the negative effects of request waterfalls are highly dependent on the user's latency. Consider the example of the triple waterfall, which actually represents 4 server roundtrips.

With 250ms latency, which is not uncommon on 3G networks or in bad network conditions, we end up with a total time of 4 × 250 = 1000ms, counting latency alone. If we were able to flatten that to the first example with only 2 roundtrips, we would get 500ms instead, possibly loading that background image in half the time! (source)

Request waterfalls may be spotted and analyzed via the browser devtools “Network” tab or Chrome’s “Performance” tab’s generated report (as shown below for frontend-app-learning):

[Image: Performance insights in Chrome DevTools for frontend-app-learning]
[Image: Zoomed in on the request waterfall]

In the above screenshot, the request waterfall is as follows:

Note that the user sees a blank white screen until after Step 4 above, which is when the actual MFE application code is rendered and begins making its own requests. As stated above, the 935.*.js JavaScript file is by far the largest bottleneck in terms of the request waterfall, taking nearly 12s to download on Fast 3G in Chrome’s “Performance” test.

What JavaScript bundles are output by Webpack today?

When running npm run build for an MFE (e.g., locally or in CI/CD during a release), Webpack transforms source files into compiled assets by transpiling modern JavaScript (ES6+) syntax for browser compatibility and then bundling source files together into a standardized set of chunks.

Our shared Webpack configuration across all Open edX MFEs (provided by @openedx/frontend-build) generates 3 JavaScript file chunks by default:

  1. Vendor. Installed third-party libraries (node_modules).

  2. Application. Custom source code for the MFE.

  3. Runtime. Webpack internals to help load JavaScript chunks.

As alluded to above, the vendor chunk is by far the largest bottleneck in terms of frontend performance. See documentation on Code Splitting support in Webpack.
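For illustration, a Webpack configuration of roughly the following shape yields those three chunks. This is a sketch only; the actual configuration shipped in @openedx/frontend-build may differ in its details:

```javascript
// Illustrative sketch only -- the actual @openedx/frontend-build Webpack
// configuration may differ. This shape yields the three chunks described
// above: a vendor chunk, an application chunk, and a runtime chunk.
const config = {
  optimization: {
    // Extract Webpack's module-loading bootstrap into its own "runtime" chunk.
    runtimeChunk: 'single',
    splitChunks: {
      cacheGroups: {
        // Everything imported from node_modules lands in one "vendors" chunk;
        // remaining application source code becomes the main chunk.
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
};

module.exports = config;
```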

The Webpack production build for Open edX MFEs also generates a bundle analyzer report (see dist/report.html), which can be useful for identifying larger bundles and splitting them appropriately. As seen below, this MFE relies on a quite expensive/large NPM package (Plotly) but does not use code splitting. In fact, in this MFE, Plotly is only used for one specific tab under a single page route, yet it represents 1 MB of the total 1.65 MB for the vendor chunk (542.*.js).

How do we measure frontend performance?

In addition to viewing and analyzing performance reports and request waterfalls as demonstrated above, there are also KPIs such as Core Web Vitals that are important to address, as they relate more to the user experience itself. More qualitatively, analyzing filmstrips is also a viable option.

Core Web Vitals

Core Web Vitals are metrics defined by Google to “provide unified guidance for quality signals that are essential to delivering a great user experience on the web.” (source)

  • Largest Contentful Paint (LCP). Measures loading performance.

    • “time from when the page starts loading to when the largest text block or image element is rendered on the screen.”

  • First Input Delay (FID). Measures interactivity.

    • “time from when a user first interacts with your site (i.e. when they click a link, tap a button, or use a custom, JavaScript-powered control) to the time when the browser is actually able to respond to that interaction.”

  • Cumulative Layout Shift (CLS). Measures visual stability.

    • “cumulative score of all unexpected layout shifts that occur between when the page starts loading and when its lifecycle state changes to hidden.”
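These metrics can also be collected in the field from the application itself, for example with Google's web-vitals NPM package. The sketch below assumes that package is installed; it is not necessarily in use by Open edX MFEs today:

```javascript
// Sketch: measuring Core Web Vitals in the browser with the "web-vitals"
// NPM package (an assumption -- not necessarily used by Open edX MFEs today).
import { onCLS, onFID, onLCP } from 'web-vitals';

// Each callback fires with a metric object ({ name, value, rating, ... })
// that could be forwarded to an analytics backend such as New Relic.
onLCP((metric) => console.log('LCP', metric.value));
onFID((metric) => console.log('FID', metric.value));
onCLS((metric) => console.log('CLS', metric.value));
```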

An example of a tool (New Relic Browser) showcasing the Largest Contentful Paint (LCP) for one MFE is shown below. Note that every Open edX MFE (at least as deployed at *.edx.org) falls in the “Poor” LCP category (> 4s).

CI

Typically, Open edX engineers don’t run npm run build themselves or look at the Webpack Bundle Analyzer report to know when there are opportunities for code splitting in their micro-frontends. Given this, there has been some discussion in the Frontend Working Group around integrating with BundleWatch so that contributors to micro-frontends can have better observability into the effects of their PRs on bundle size.

https://docs.openedx.org/projects/openedx-proposals/en/latest/best-practices/oep-0067/decisions/frontend/0007-bundle-size.html

What is code splitting?

As demonstrated above, the default Webpack configuration for Open edX MFEs results in primarily 3 generated JavaScript chunks in the npm run build output. Each JavaScript chunk must be downloaded and parsed before anything is rendered in the UI. These chunks combined make up the entire application, irrespective of what page route the user is requesting and/or how the user is interacting with the UI.

As a result, bundlers like Webpack support code splitting, which can create appropriately sized bundles that are dynamically loaded at runtime, as needed. This code splitting approach improves performance by “lazy loading” only the things that are currently needed by the user.
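The primitive that makes this possible is the dynamic import() expression, which bundlers like Webpack treat as a “split point”: the imported module is emitted as its own chunk and fetched only when that code path actually runs. A minimal sketch, using Node’s built-in node:path module as a stand-in for a heavy dependency:

```javascript
// Dynamic import() defers loading a module until the code path actually
// executes. When Webpack sees import(), it emits a separate chunk for the
// target module. Here, Node's built-in "node:path" module stands in for a
// heavy dependency purely for illustration.
async function buildReportPath() {
  // The module is only loaded the first time this function runs.
  const path = await import('node:path');
  return path.join('dist', 'report.html');
}

buildReportPath().then((reportPath) => {
  console.log(reportPath);
});
```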

For example, using the bundle analyzer report from above, we saw that Plotly was quite large, making up the majority of the vendor chunk generated by Webpack despite only being necessary for a small piece of the application. The Webpack configuration could instead be modified to enforce a maximum generated file size per chunk (Webpack recommends 244 KiB) by setting a maxSize for vendor chunks.

When running this Webpack configuration on the example MFE with Plotly, we can see that instead of Plotly being bundled in with every other package in node_modules, it has its own distinct file (326.*.js) that can be dynamically loaded and won’t block the UI for users.
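A sketch of what such a configuration change could look like (not the exact frontend-build configuration). Webpack treats splitChunks.maxSize as a hint, splitting chunks larger than the given byte size where possible:

```javascript
// Sketch of enforcing a maximum chunk size (not the exact frontend-build
// configuration). Chunks larger than maxSize bytes are split where possible,
// so a large package such as Plotly ends up in its own chunk.
const config = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      // Webpack's recommended asset size limit is 244 KiB.
      maxSize: 244 * 1024,
    },
  },
};

module.exports = config;
```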

During npm run build, you may also notice Webpack output warnings when generated chunks exceed the recommended size limits.

How do I code split in React?

React.lazy

Documentation
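A minimal sketch of React.lazy; the module path below is hypothetical:

```jsx
import React from 'react';

// React.lazy takes a function that calls a dynamic import(). Webpack emits
// a separate chunk for the imported module, which is only fetched the first
// time the component renders. "./PlotlyChart" is a hypothetical module path.
const PlotlyChart = React.lazy(() => import('./PlotlyChart'));
```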

Suspense

Documentation
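A sketch of Suspense wrapping a lazily loaded component; the component name and module path are hypothetical:

```jsx
import React, { Suspense } from 'react';

// Hypothetical lazily loaded component; the module path is an assumption.
const PlotlyChart = React.lazy(() => import('./PlotlyChart'));

// Suspense renders the fallback UI while the deferred chunk downloads,
// then swaps in the real component once it resolves.
export default function AnalyticsTab() {
  return (
    <Suspense fallback={<div>Loading chart…</div>}>
      <PlotlyChart />
    </Suspense>
  );
}
```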

OTHER:

LazyRouteFunction | React Router API Reference
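A sketch of route-based code splitting with React Router’s lazy route functions (assuming React Router v6.9+; the route path and module are hypothetical):

```jsx
import { createBrowserRouter } from 'react-router-dom';

// React Router calls a route's "lazy" function only when the route first
// matches, so Webpack emits a separate chunk for the imported module.
// "./routes/analytics" is a hypothetical module exporting Component, etc.
const router = createBrowserRouter([
  {
    path: '/analytics',
    lazy: () => import('./routes/analytics'),
  },
]);
```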

Considerations for code splitting

Code splitting is fairly straightforward in React, though there are a few important considerations:

  • Be intentional about deciding which application components or third-party packages to code split. Don’t go overboard introducing code splitting for everything.

    • Route-based code splitting.

      • Defer loading code for page routes the user is not currently viewing.

      • It’s worth noting that route-based code splitting is largely built into the architecture in a world with Piral and/or Next.js.

    • Component-based code splitting.

      • Lazy load specific, expensive/large components or dependencies, where relevant.

  • Because code splitting defers the loading of code until it becomes relevant to the user’s interactions, these dynamic imports are worth considering from a UX/UI perspective.

    • For example, if a user is on a very slow network, the dynamic import of a deferred JavaScript chunk could take a perceivable amount of time, in which case some sort of fallback UI should be presented to the user.

    • Additionally, so that screen content is always consistent for users, if an already shown/rendered component suspends, it should not be replaced by the fallback UI, as this would be disorienting from the user’s perspective. See more details here.

  • Similarly, due to the dynamic nature of lazy loading JavaScript code, considerations should be made for error handling (e.g., should a deferred JavaScript chunk fail to load due to a network outage).

    • The React documentation recommends using error boundaries for this purpose. See more detail here.
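Putting these considerations together, the sketch below combines Suspense (for the loading fallback) with a hand-rolled error boundary (for failed chunk loads); the component names and module path are hypothetical:

```jsx
import React, { Suspense } from 'react';

// Minimal error boundary, per the React docs' recommendation. Libraries
// such as react-error-boundary offer a more complete implementation.
class ChunkErrorBoundary extends React.Component {
  constructor(props) {
    super(props);
    this.state = { hasError: false };
  }

  static getDerivedStateFromError() {
    return { hasError: true };
  }

  render() {
    if (this.state.hasError) {
      // Shown if the deferred chunk fails to load (e.g., a network outage).
      return <div>Something went wrong. Please try again.</div>;
    }
    return this.props.children;
  }
}

// Hypothetical lazily loaded component.
const PlotlyChart = React.lazy(() => import('./PlotlyChart'));

export default function AnalyticsTab() {
  return (
    <ChunkErrorBoundary>
      <Suspense fallback={<div>Loading chart…</div>}>
        <PlotlyChart />
      </Suspense>
    </ChunkErrorBoundary>
  );
}
```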

Examples of code splitting in Open edX

  • frontend-app-discussions began introducing code splitting with React.lazy and Suspense in May 2023 (PR).

  • frontend-app-learning uses code splitting in a handful of places (dating back to 4 years ago), though not broadly.

    • In some places it uses Suspense without React.lazy, and in others it uses React.lazy without Suspense.

Summary

Adopting a strategy around code splitting may significantly improve the frontend performance of the Open edX micro-frontends by deferring or lazy loading certain code until it becomes relevant for the user. Code splitting should help improve Core Web Vitals metrics such as Largest Contentful Paint.

While migrating to a Piral- and/or Next.js-based architecture would bring route-based code splitting largely on its own, we can take more incremental steps toward code splitting to improve frontend performance in the current Open edX MFE architecture.