Overview

The Open edX platform consists of numerous micro-frontends (MFEs): Single-Page Applications (SPAs) implemented with React and backed by RESTful APIs powered by Django services.

When a user loads a micro-frontend in the browser, a request waterfall is started. The browser must first parse the HTML markup before loading any specified CSS, JavaScript, or images:

1. |-> Markup
2.   |-> CSS
2.   |-> JS
2.   |-> Image

Then, the request waterfall continues serially if you fetch CSS inside a JS file (double waterfall):

1. |-> Markup
2.   |-> JS
3.     |-> CSS

And if that CSS fetches a background image, it becomes a triple waterfall, where the image is blocked by several other requisite requests:

1. |-> Markup
2.   |-> JS
3.     |-> CSS
4.       |-> Image

Each waterfall represents at least one roundtrip to the server, unless the resource is locally cached. Because of this, the negative effects of request waterfalls are highly dependent on the user's latency. Consider the example of the triple waterfall, which actually represents 4 server roundtrips.

With 250ms latency, which is not uncommon on 3G networks or in bad network conditions, we end up with a total of 4 × 250 = 1000ms counting latency alone. If we were able to flatten that to the first example, with only 2 roundtrips, we would get 500ms instead, possibly loading that background image in half the time! (source)
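The arithmetic above can be sketched as a trivial model (purely illustrative; real load times also include download and parse time):

```javascript
// Back-of-the-envelope waterfall cost: each serial roundtrip adds one full
// network latency before the next resource can even start downloading.
const latencyMs = 250; // not uncommon on 3G or in bad network conditions

const waterfallCost = (roundtrips) => roundtrips * latencyMs;

console.log(waterfallCost(4)); // triple waterfall (Markup -> JS -> CSS -> Image): 1000
console.log(waterfallCost(2)); // flattened (Markup -> parallel CSS/JS/Image): 500
```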

Request waterfalls may be spotted and analyzed via the browser devtools' “Network” tab or the report generated by Chrome's “Performance” tab (as shown below for frontend-app-learning):

[Screenshots: browser devtools “Network” tab and Chrome “Performance” report for frontend-app-learning]

In the above screenshot, the request waterfall is as follows:

1. |-> Markup
2.   |-> app.*.css (Bundles application CSS, including Paragon)
3.   |-> JS, including the following:
3a.    |-> 935.*.js (third-party node_modules; delays rendering of UI for almost 12s on Fast 3G)
3b.    |-> app.*.js (MFE application code)
4.       |-> /api/mfe_config/v1
5.         |-> /api/notifications/count
6.         |-> /api/course_home/...

Note that the user sees a blank white screen until after Step 4 above, which is when the actual MFE application code is rendered and begins making its own requests. As stated above, the 935.*.js JavaScript file is by far the largest bottleneck in terms of the request waterfall, taking nearly 12s to download on Fast 3G in Chrome’s “Performance” test.

What JavaScript bundles are output by Webpack today?

When running npm run build for an MFE (e.g., locally or in CI/CD during a release), Webpack transforms source files into compiled assets by transpiling the code for browser compatibility and then bundling source files together into a standardized set of chunks.

Our shared Webpack configuration across all Open edX MFEs (provided by @openedx/frontend-build) generates 3 JavaScript file chunks by default:

  1. Vendor. Installed third-party libraries (node_modules).

  2. Application. Custom source code for the MFE.

  3. Runtime. Webpack internals to help load JavaScript chunks.

As alluded to above, the vendor chunk is by far the largest bottleneck in terms of frontend performance. See documentation on Code Splitting support in Webpack.
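For reference, a simplified sketch of a Webpack configuration that yields this three-chunk layout (the actual settings live in @openedx/frontend-build and differ in detail):

```javascript
// webpack.config.js (simplified sketch, not the exact frontend-build config)
module.exports = {
  optimization: {
    // 3. Runtime: emit Webpack's chunk-loading internals as their own chunk.
    runtimeChunk: 'single',
    splitChunks: {
      cacheGroups: {
        // 1. Vendor: everything imported from node_modules.
        vendors: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
          chunks: 'all',
        },
      },
    },
  },
  // 2. Application: all remaining source code lands in the main entry chunk.
};
```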

The Webpack production build for Open edX MFEs also generates a bundle analyzer report (see dist/report.html), which can be useful for identifying larger bundles and appropriately splitting them. As seen below, this MFE relies on a quite expensive/large NPM package (Plotly), but does not use code splitting. In fact, in this MFE, Plotly is only used for one specific tab under a single page route, yet it represents 1 MB of the total 1.65 MB for the vendor chunk (542.*.js).

[Screenshot: Webpack Bundle Analyzer report (dist/report.html) showing Plotly within the vendor chunk]

How do we measure frontend performance?

In addition to viewing and analyzing performance reports and request waterfalls as demonstrated above, there are also KPIs, such as Core Web Vitals, that are important to address because they relate more directly to the user experience itself. More qualitatively, analyzing filmstrips is also a viable option.

Core Web Vitals

Core Web Vitals are metrics defined by Google to “provide unified guidance for quality signals that are essential to delivering a great user experience on the web.” (source)

An example of a tool (New Relic Browser) showcasing the Largest Contentful Paint (LCP) for one MFE is shown below. Note that, by LCP, every Open edX MFE (at least as deployed at *.edx.org) falls into the “Poor” category (> 4s).

CI

Typically, Open edX engineers don’t run npm run build themselves or inspect the Webpack Bundle Analyzer report to find opportunities for code splitting in their micro-frontends. Given this, there has been some discussion within the Frontend Working Group around integrating BundleWatch so that contributors to micro-frontends have better observability into the effects of their PRs on bundle size.

https://docs.openedx.org/projects/openedx-proposals/en/latest/best-practices/oep-0067/decisions/frontend/0007-bundle-size.html

What is code splitting?

As demonstrated above, the default Webpack configuration for Open edX MFEs results in primarily 3 generated JavaScript chunks in the npm run build output. Each JavaScript chunk must be downloaded and parsed before anything is rendered in the UI. These chunks combined make up the entire application, irrespective of what page route the user is requesting and/or how the user is interacting with the UI.

As a result, bundlers like Webpack support code splitting which can create appropriately sized bundles that can be dynamically loaded at runtime, as needed. This code splitting approach improves performance by helping you “lazy load” only the things that are currently needed by the user.
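The building block behind this lazy loading is the dynamic import() expression, which returns a Promise for the module; under Webpack, each dynamic import() is emitted as a separately fetched chunk. A minimal illustration of the mechanics (using Node's built-in node:path module only so the snippet runs anywhere; in an MFE the import target would be an expensive module such as Plotly):

```javascript
// Dynamic import() loads a module on demand and returns a Promise for it.
// Under Webpack, each dynamic import() becomes its own chunk, downloaded
// only when this code path actually executes.
async function loadOnDemand() {
  // 'node:path' is a stand-in target so this example is self-contained.
  const path = await import('node:path');
  return path.join('dist', 'report.html');
}
```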

For example, using the bundle analyzer report from above, we saw that Plotly was quite large, making up the majority of the vendor chunk generated by Webpack despite only being necessary for a small piece of the application. The Webpack configuration could instead be modified to enforce a maximum generated file size per chunk (Webpack recommends 244 KiB), for instance by setting a maxSize for vendor chunks:
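A sketch of such a configuration change (illustrative values, not the canonical @openedx/frontend-build settings):

```javascript
// webpack.config.js (sketch): enforce a maximum generated chunk size so that
// oversized vendor packages (e.g., Plotly) end up in their own smaller files.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      // Split any chunk larger than Webpack's recommended 244 KiB limit.
      maxSize: 244 * 1024,
    },
  },
};
```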

When running this Webpack configuration on the example MFE with Plotly, we can see that instead of Plotly being bundled in with every other package from node_modules, it gets its own distinct file (326.*.js) that can be dynamically loaded and won’t block the UI for users.

During npm run build, you may notice Webpack output warnings such as the following:

WARNING in asset size limit: The following asset(s) exceed the recommended size limit (244 KiB).
This can impact web performance.
Assets: 
  b97a8c7a57187f694566727836060631.gif (468 KiB)
  8ecb9b1c7d41d7196375abdc97a53b4b.gif (649 KiB)
  app.972d77415883dbdb0cb9.css (1010 KiB)
  app.972d77415883dbdb0cb9.js (974 KiB)
  93.0acb2cf330737d1f41cc.js (4.56 MiB)
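These warnings come from Webpack's performance hints, whose default threshold is 250,000 bytes (~244 KiB); they can be tuned, or escalated to hard errors, in the configuration:

```javascript
// webpack.config.js (sketch): tune Webpack's asset-size warnings.
module.exports = {
  performance: {
    hints: 'warning',       // use 'error' to fail the build instead
    maxAssetSize: 250000,   // bytes (~244 KiB); Webpack's default
    maxEntrypointSize: 250000,
  },
};
```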

How do I code split in React?

React.lazy

Documentation

Suspense

Documentation

OTHER:

https://reactrouter.com/en/main/route/lazy
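Putting these pieces together, a typical component-level split looks like the following sketch (CoursePlotlyTab and ./CoursePlotlyTab are hypothetical names used for illustration):

```jsx
import React, { Suspense } from 'react';

// React.lazy takes a function returning a dynamic import(); Webpack emits the
// imported module as its own chunk, downloaded the first time it renders.
// './CoursePlotlyTab' is a hypothetical module path for illustration.
const CoursePlotlyTab = React.lazy(() => import('./CoursePlotlyTab'));

export default function AnalyticsPage() {
  return (
    // Suspense renders the fallback while the lazy chunk is still downloading.
    <Suspense fallback={<div>Loading…</div>}>
      <CoursePlotlyTab />
    </Suspense>
  );
}
```

React Router (v6.4+) offers a similar route-level mechanism via the lazy route property linked above, deferring a route module's download until the user navigates to it.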

Considerations for code splitting

Code splitting is fairly straightforward in React, though there are a few important considerations:

Examples of code splitting in Open edX

Summary

Adopting a strategy around code splitting may help to significantly improve the frontend performance of the Open edX micro-frontends by deferring or lazy loading certain code until it becomes relevant for the user. Code splitting should help improve Core Web Vitals metrics such as Largest Contentful Paint.

While migrating to a Piral- and/or Next.js-based architecture would bring code splitting along with it, we can take more incremental steps toward code splitting to improve frontend performance within the current Open edX MFE architecture.