Release Testing Strategy Brainstorm

Let's analyze the current community release testing strategy and decide whether, based on both product and technical requirements, it warrants improvement. If so, how can we assist the community in doing so? And, in particular, what can we do to:

  • Help guarantee that every feature in each release is as well tested as possible, so as to minimize the introduction of last-minute bugs;

  • Help the Product Working Group communicate with whoever is doing the testing, so that the testing effort can be factored into the plan for upcoming releases.

And finally:

  • How much of the testing should be delegated to the development teams? To the reviewers? To a dedicated testing team?

General testing categories

  • Feature evaluation/user acceptance testing

  • Feature validation

  • Testing at scale

    • Edge cases

    • Performance

Part 0. Overview of current edx.org testing

  • Feature shipped but off by default

  • “Real world” beta testing

    • Via course waffle flags (see the sketch at the end of this list)

    • Subset of users that are beta testers

    • Uses actual courses, with real users

  • Multi-stage roll-out of new features (for bigger features, usually)

    • Subset of orgs/courses

    • Allowlist/denylist for defaulting it to on

    • Extreme example: the Learning MFE roll-out (6 months to 1 year)

  • End-to-end testing

    • Cypress

  • Reverting commits

    • Possible because edx.org runs on master and controls it

  • Monitoring + Continuous Delivery

    • Detecting new errors against specific release versions (down to specific PRs)

  • No specific philosophy/industry pattern adopted

  • Coursegraph:

    • Graph database where modulestore data is dumped

    • Page of canned queries

    • Example: “How many people use an LTI tool with this exact URL?”

    • Evaluate the priority/impact of features/bugs

    • Makes it possible to replicate a wide variety of content

  • Bug reporting pipeline (CRs)
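
For reference, the course waffle flag mechanism mentioned above is how edx.org keeps a shipped feature off by default and turns it on for a subset of courses and beta testers. A minimal sketch, assuming edx-platform’s CourseWaffleFlag helper; the flag name and the gated behavior are made up for illustration:

    # Minimal sketch of gating a feature behind a course waffle flag.
    # Assumes edx-platform's CourseWaffleFlag helper; the flag name and
    # the gated behavior below are hypothetical.
    from openedx.core.djangoapps.waffle_utils import CourseWaffleFlag

    # Hypothetical flag; it can be enabled per course, per org, or globally.
    ENABLE_SHINY_FEATURE = CourseWaffleFlag("my_app.enable_shiny_feature", __name__)

    def unit_experience(course_key):
        # Courses (or beta-tester cohorts) with the flag on get the new
        # behavior; everyone else keeps the existing code path.
        if ENABLE_SHINY_FEATURE.is_enabled(course_key):
            return "new_experience"
        return "legacy_experience"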

Part I.  Overview of the current community testing strategy

Links to existing documentation:

Part II. What should probably continue for Redwood?

List the things that have worked well so far, and likely shouldn’t be messed with.

  • BTR should continue to function as gatekeepers for a release

Part III. What should we change for Redwood?

  1. Product should take ownership of a “master” test spreadsheet

    • The master version should be ready by the cutoff date

    • The test manager still instantiates the release-specific version of the spreadsheet

  2. Cut the official testing period in half

    1. Pros: more features land in Redwood

      1. “If the test cycle was smaller, we’d increase the chance of features getting into each release”

    2. We (Axim) know BTR has taken the brunt of the testing work, and are willing to help find resources to get testing done faster (more automation, more documentation, etc.)

  3. Do more point releases

    1. On a cadence, or at every (important) backport?

  4. Codify the process on docs.openedx.org

    1. Document what the test manager needs to do: write this doc proactively (by this Friday)

  5. Move from spreadsheet to GitHub issues?

    • One issue for each test case

      • Tracks the whole lifecycle of the test case

    • Script to convert the product-owned master spreadsheet to issues (a sketch follows this list)

      • Happens at every release

      • Smart enough to create new issues if new rows are added

  6. Process suggestions

    1. Critical BTR roles should probably rotate mandatorily every two terms

    2. Critical roles should probably have a primary and secondary

  7. Make the test plan/state more visible/discoverable

    • A “folder” pinned to the Slack channel would be great

  8. Milestones (with dates) should be set

    1. Week 1: 25%, week 2: 50%, and so on.

  9. Maintainers are responsible for fixing bugs or getting the authors of the feature in question to do so
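
A rough sketch of the spreadsheet-to-issues script from item 5 above, assuming the product-owned master spreadsheet is exported as CSV and using the GitHub REST API. The repository name, column names, label, and release prefix are placeholders; re-running the script only creates issues for rows that don’t have one yet:

    # Sketch: create one GitHub issue per test case row in the master
    # spreadsheet (exported as CSV). Repo, column names, label, and the
    # release prefix are placeholders.
    import csv
    import os
    import requests

    REPO = "openedx/wg-build-test-release"  # placeholder repository
    API = f"https://api.github.com/repos/{REPO}/issues"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    def existing_titles():
        """Titles of issues already in the repo, so re-runs only add new rows."""
        titles, page = set(), 1
        while True:
            resp = requests.get(API, headers=HEADERS,
                                params={"state": "all", "per_page": 100, "page": page})
            resp.raise_for_status()
            batch = resp.json()
            if not batch:
                return titles
            titles.update(issue["title"] for issue in batch)
            page += 1

    def create_issues(csv_path, release="redwood"):
        seen = existing_titles()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                title = f"[{release}] {row['Title']}"  # assumed column name
                if title in seen:
                    continue  # row already has an issue; skip it
                requests.post(API, headers=HEADERS, json={
                    "title": title,
                    "body": row.get("Description", ""),
                    "labels": ["release-testing"],
                }).raise_for_status()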

Part IV. What should probably change for future releases?

Things that haven’t worked very well at achieving stated objectives, aren’t currently done (but probably should be), etc. Anything to add, remove, or modify.

Part V. The weeds: how, exactly?

How can we help implement the changes? Are there specific patterns or tools to recommend? Who should lead this? Should it continue to be delegated to BTR? What are the next steps?

  • Studio MFE project as guinea pig

    • Half has already been tested “the edx.org way”

    • The other half has not

    • Ideally with end-user testing

  • What is preventing us from automating testing?

    • We need to invest in it

    • Product validates the test that gets run continuously

    • At the integration level we don’t need to test every single iteration; mostly the basic flows

    • Won’t replace all testing, but will save some BTR time

    • Once we build the tests, who’s responsible for maintaining them?

    • Exasperation with bok-choy

      • Maybe Cypress is viable

        • Not for Redwood

  • OEP-37: Dev Data

    • Tutor plugin that sets up test data

    • Making it a REST API would abstract out the model definitions (see the hypothetical sketch at the end of this section)

    • Bonus: would make it easier to provision the platform

    • A prerequisite for testing at scale

  • Testing at scale

    • Expensive and difficult

    • We should do it eventually, but definitely not for Redwood
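
On the OEP-37 item above: a purely hypothetical sketch of what provisioning dev data over REST could look like from a test runner’s side. The endpoint, payload shape, and field names are all invented; the point is that a caller could ask for “a course with N sections and these users” without knowing anything about the underlying Django models:

    # Hypothetical sketch of a dev-data provisioning call. The endpoint,
    # payload shape, and fields do not exist today; they illustrate how a
    # REST layer would hide the model definitions from the caller.
    import requests

    LMS = "http://local.openedx.io:8000"  # typical Tutor dev URL; adjust as needed

    def provision_test_course(token):
        payload = {
            "course": {"org": "TestX", "number": "QA101", "run": "2024"},
            "users": [
                {"username": "qa_learner", "roles": ["learner"]},
                {"username": "qa_author", "roles": ["course_author"]},
            ],
            "content": {"sections": 3, "problems_per_section": 5},
        }
        resp = requests.post(
            f"{LMS}/api/dev-data/v1/provision/",  # hypothetical endpoint
            json=payload,
            headers={"Authorization": f"Bearer {token}"},
            timeout=60,
        )
        resp.raise_for_status()
        return resp.json()  # e.g. the created course key and user ids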