...

  • How much of the testing should be delegated to the development teams? To the reviewers? To a dedicated testing team?

...

General testing categories

  • Feature evaluation/user acceptance testing

  • Feature validation

  • Testing at scale

    • Edge cases

    • Performance

...

  • BTR should continue to function as gatekeepers for a release

Part III. What should we change for Redwood?

  1. Product should take ownership of a “master” test spreadsheet

    • The master version should be ready by the cutoff date

    • The test manager still instantiates the release-specific version of the spreadsheet

  2. Cut the official testing period in half

    1. Pros: more features land in Redwood

      1. “If the test cycle was smaller, we’d increase the chance of features getting into each release”

    2. We (Axim) know BTR has taken the brunt of the testing work, and we are willing to help find resources to get testing done faster (more automation, more documentation, etc.)

  3. Do more point releases

    1. On a cadence, or at every (important) backport?

  4. Codify the process on docs.openedx.org

    1. What the test manager needs to do: write this doc proactively (this Friday)

  5. Move from spreadsheet to Github issues?

    • One issue for each test case

      • Tracks the whole lifecycle of the issue

    • Script to convert the product-owned master spreadsheet to issues (a sketch of such a script follows this list)

      • Happens at every release

      • Smart enough to create new issues if new rows are added

  6. Process suggestions

    1. Critical BTR roles should probably rotate mandatorily every two terms

    2. Critical roles should probably have a primary and secondary

  7. Make the test plan/state more visible/discoverable

    • A progress bar with milestones/target dates

    • A “folder” pinned to the Slack channel would be great

  8. Milestones (with dates) should be set

    1. Week 1: 25%, week 2: 50%, and so on.

  9. Maintainers are responsible for fixing bugs or getting the authors of the feature in question to do so
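
A minimal sketch of the spreadsheet-to-issues conversion script from item 5, assuming the product-owned master spreadsheet is exported as CSV with "Test case" and "Steps" columns. The repository name, column names, and labels are placeholders for illustration, not decisions from this meeting; the GitHub calls use the standard REST endpoints for listing and creating issues.

    import csv
    import os

    import requests

    # Hypothetical target repository and spreadsheet columns; adjust to whatever
    # BTR and Product actually settle on.
    REPO = "openedx/redwood-test-plan"
    API = f"https://api.github.com/repos/{REPO}/issues"
    HEADERS = {
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Accept": "application/vnd.github+json",
    }

    def existing_titles():
        """Collect titles of test-case issues already created (open and closed)."""
        titles, page = set(), 1
        while True:
            resp = requests.get(API, headers=HEADERS, params={
                "state": "all", "labels": "test-case", "per_page": 100, "page": page,
            })
            resp.raise_for_status()
            batch = resp.json()
            if not batch:
                return titles
            # The issues endpoint also returns pull requests; skip those.
            titles.update(i["title"] for i in batch if "pull_request" not in i)
            page += 1

    def sync(csv_path):
        """Create one issue per spreadsheet row, skipping rows that already have one."""
        seen = existing_titles()
        with open(csv_path, newline="") as f:
            for row in csv.DictReader(f):
                title = f"[Redwood test] {row['Test case']}"
                if title in seen:
                    continue  # created on a previous run: stay idempotent
                requests.post(API, headers=HEADERS, json={
                    "title": title,
                    "body": row.get("Steps", ""),
                    "labels": ["test-case", "redwood"],
                }).raise_for_status()

    if __name__ == "__main__":
        sync("master-test-plan.csv")

Rerun at every release, or whenever the master spreadsheet grows: because it only creates issues for titles it has not seen before, it gives the "create new issues if new rows are added" behaviour without duplicating existing ones.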

...

  • Proper retesting after bugs are fixed / new features drop

  • Documented process for testers

    • Full procedure, from opening tickets to closing them

  • Continuous testing?

  • Product owns definition of test cases

  • Automate testing

  • Performance testing

  • Testing with multiple configurations

    • e.g. Forums On/Off

    • A sandbox for end-users to test

  • Testing with complex data

    • Generate data (which we currently don’t have)

    • A place we control where there’s data that can be used, manually or otherwise

      • Sample or real complex courses

    • Invite users to test new features in sandboxes

      • “Here’s how to use it, can you give us some feedback”

      • Have some place where the data “sticks around” and can be reused

  • UAT (User Acceptance Testing)

  • Testing window

    1. Should we start testing sooner?

      1. PR/Sandbox testing is good

        1. Can be used to create the test cases that get used later

      2. Post-cutoff testing is still necessary, to catch regressions

    2. Re-testing point releases?

      1. Probably worth testing a subset of the cases, ideally the ones that were affected by changes

    3. Changing the release cadence? Adding to the cadence with intermediate releases?

      1. Good question to ask the community

      2. The translation workflow should be taken into account

  • Spontaneous bug reporting from deployments running master

    • edX.org has provided this for us for a long time

    • “Click here to report a bug”

    • Other open source projects: some users run master and report bugs

    • Find partners to run master? Consortium?

      • Problem: Micro-upgrades?

    • Run it ourselves?

...

How can we help implement the changes? Are there specific patterns or tools to recommend? Who should lead this - should it continue to be delegated to BTR? What are the next steps?

  • Studio MFE project as guinea pig

    • Half has already been tested “the edx.org way”

    • Half has not

    • Ideally with end-user testing

  • What is preventing us from automating testing?

    • We need to invest in it

    • Product validates the test that gets run continuously

    • At the integration level we don’t need to test every single iteration - mostly basic stuff (see the smoke-test sketch at the end of these notes)

    • Won’t replace all testing, but will save some BTR time

    • Once we build the tests, who’s responsible for maintaining them?

    • Exasperation with bok-choy

      • Maybe Cypress is viable

        • Not for Redwood

  • OEP-37: Dev Data

    • Tutor plugin that sets up test data

    • Making it a REST API would abstract out the model definitions (see the sketch at the end of these notes)

    • Bonus: would make it easier to provision the platform

    • A pre-requisite for testing at scale

  • Testing at scale

    • Expensive and difficult

    • We should do it eventually, but definitely not for Redwood
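
On the automation question above: a minimal sketch, assuming pytest and requests, of the kind of "basic stuff" integration check that could run continuously against a sandbox. The base URL is a placeholder, the two endpoints are the LMS heartbeat and course-listing APIs, and this is meant to complement BTR's manual passes, not replace them.

    import os

    import requests

    # Placeholder: point this at whatever sandbox runs the release candidate.
    LMS = os.environ.get("LMS_BASE_URL", "http://lms.example.com")

    def test_lms_heartbeat():
        """The LMS heartbeat endpoint should answer 200 when the stack is up."""
        resp = requests.get(f"{LMS}/heartbeat", timeout=10)
        assert resp.status_code == 200

    def test_course_listing_is_reachable():
        """The public course listing API should return a JSON payload with results."""
        resp = requests.get(f"{LMS}/api/courses/v1/courses/", timeout=10)
        assert resp.status_code == 200
        assert "results" in resp.json()

Run with pytest on a schedule against the sandbox; the same checks could double as a quick sanity pass after point releases.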
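
On the OEP-37 point: one way to read "making it a REST API would abstract out the model definitions" is sketched below. The endpoint and payload shape are entirely hypothetical; the idea is that a test or provisioning script describes the data it needs declaratively, and the platform decides which models to create, so callers never import edx-platform model code.

    import requests

    # Entirely hypothetical endpoint and payload shape, sketching the OEP-37 idea.
    DEV_DATA_API = "https://sandbox.example.com/api/dev-data/v1/seed"

    payload = {
        "courses": [{
            "org": "AximX", "number": "TEST101", "run": "2024",
            "sections": 4, "units_per_section": 3,
        }],
        "users": [{
            "username_prefix": "redwood-tester", "count": 50,
            "enroll_in": "course-v1:AximX+TEST101+2024",
        }],
    }

    resp = requests.post(DEV_DATA_API, json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json())  # e.g. identifiers of the created objects, for use in tests

The same call could be made by a Tutor plugin at init time, by the test cases tracked in GitHub issues, or by the load-generation tooling that testing at scale would need, which is why dev data reads as a prerequisite for the later items.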