...
How much of the testing should be delegated to the development teams? To the reviewers? To a dedicated testing team?
...
General testing categories
Feature evaluation/user acceptance testing
Feature validation
Testing at scale
Edge cases
Performance
...
BTR should continue to function as gatekeepers for a release
Part III. What should we change for Redwood?
Product should take ownership of a “master” test spreadsheet
The master version should be ready by the cutoff date
The test manager still instantiates the release-specific version of the spreadsheet
Cut the official testing period in half
Pros: more features land in Redwood
“If the test cycle was smaller, we’d increase the chance of features getting into each release”
We (Axim) know BTR has taken the brunt of it, and are willing to help find resources to get testing done faster (more automation, more documentation, etc)
Do more point releases
On a cadence, or at every (important) backport?
Codify the process on docs.openedx.org
What the test manager needs to do: write this doc proactively (this Friday)
Move from spreadsheet to Github issues?
One issue for each test case
Tracks the whole lifecycle of the issue
Script to convert the product-owned master spreadsheet to issues
Happens at every release
Smart enough to create new issues if new rows are added
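The "create new issues only for new rows" behavior could be kept idempotent by diffing spreadsheet rows against existing issue titles. A minimal sketch of that diff step (the CSV column names and the "[Test] ..." title convention are assumptions; a real script would then call the GitHub API to create the missing issues):

```python
import csv
import io

def issues_to_create(spreadsheet_csv, existing_titles):
    """Return the spreadsheet rows that have no matching GitHub issue yet,
    so re-running the script at each release only adds new test cases.
    The column names and "[Test] ..." title format are hypothetical."""
    rows = csv.DictReader(io.StringIO(spreadsheet_csv))
    return [row for row in rows
            if f"[Test] {row['Test case']}" not in existing_titles]

# Example: one issue already exists, so only "Course import" needs one
sheet = "Test case,Area\nLogin,LMS\nCourse import,Studio\n"
new_rows = issues_to_create(sheet, existing_titles={"[Test] Login"})
```

Keying on issue titles is just one option; a hidden marker in the issue body or a label per spreadsheet row ID would survive title edits better.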
Process suggestions
Critical BTR roles should probably rotate mandatorily every two terms
Critical roles should probably have a primary and secondary
Make the test plan/state more visible/discoverable
A progress bar with milestones/target dates
A “folder” pinned to the Slack channel would be great
Milestones (with dates) should be set
Week 1: 25%, week 2: 50%, and so on.
Maintainers are responsible for fixing bugs or getting the authors of the feature in question to do so
...
Proper retesting after bugs are fixed / new features drop
Documented process for testers
Full procedure, from opening tickets to closing them
Continuous testing?
Product owns definition of test cases
Automate testing
Performance testing
Testing with multiple configurations
eg. Forums On/Off
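Running the same automated suite against every combination of feature flags could start from a simple configuration matrix. A sketch, assuming illustrative flag names (not the real Open edX settings):

```python
from itertools import product

# Illustrative feature flags -- not the actual Open edX setting names
FLAGS = {"ENABLE_DISCUSSIONS": (True, False), "ENABLE_NOTES": (True, False)}

def config_matrix(flags):
    """Enumerate every combination of flag values, so each automated
    test run can be repeated once per configuration."""
    names = sorted(flags)
    return [dict(zip(names, combo))
            for combo in product(*(flags[n] for n in names))]

# Two binary flags -> four configurations to test against
matrix = config_matrix(FLAGS)
```

A full cross-product grows fast; in practice a curated list of supported configurations (or pairwise combinations) would keep run time manageable.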
A sandbox for end-users to test
Testing with complex data
Generate data (which we currently don’t have)
A place we control where there’s data that can be used, manually or otherwise
Sample or real complex courses
Invite users to test new features in sandboxes
“Here’s how to use it, can you give us some feedback”
Have some place where the data “sticks around” and can be reused
UAT (User Acceptance Testing)
Testing window
Should we start testing sooner?
PR/Sandbox testing is good
Can be used to create the test cases that get used later
Post-cutoff testing is still necessary, to catch regressions
Re-testing point releases?
Probably worth testing a subset of the cases - ideally the ones affected by the changes
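Selecting that subset could be as simple as tagging each test case with a component and filtering by what the point release touched. A sketch, where the "component" tag is an assumption about how the master spreadsheet would be annotated:

```python
def affected_cases(test_cases, changed_components):
    """Pick the test cases tagged with a component touched by the point
    release, so only those get re-run. The "component" field on each
    case is hypothetical annotation, not an existing spreadsheet column."""
    changed = set(changed_components)
    return [case for case in test_cases if case["component"] in changed]

cases = [
    {"name": "Login flow", "component": "LMS"},
    {"name": "Course import", "component": "Studio"},
    {"name": "Forum posting", "component": "discussions"},
]
# A Studio-only point release would re-run just the Studio cases
subset = affected_cases(cases, ["Studio"])
```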
Changing the release cadence? Adding to the cadence with intermediate releases?
Good question to ask the community
The translation workflow should be taken into account
Spontaneous bug reporting from deployments running master
edX.org has provided this for us for a long time
“Click here to report a bug”
Other open source projects: Some run master, report bugs
Find partners to run master? Consortium?
Problem: Micro-upgrades?
Run it ourselves?
...
How can we help implement the changes? Are there specific patterns or tools to recommend? Who should lead this - should it continue to be delegated to BTR? What are the next steps?
Studio MFE project as guinea pig
Half has already been tested “the edx.org way”
Half has not
Ideally with end-user testing
What is preventing us from automating testing?
We need to invest in it
Product validates the tests that get run continuously
At the integration level we don’t need to test every single iteration - mostly basic stuff
Won’t replace all testing, but will save some BTR time
Once we build the tests, who’s responsible for maintaining them?
Exasperation with bok-choy
Maybe Cypress is viable
Not for Redwood
Tutor plugin that sets up test data
Making it a REST API would abstract out the model definitions
Bonus: would make it easier to provision the platform
A pre-requisite for testing at scale
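From the client side, abstracting provisioning behind a REST API might look like the sketch below. The endpoint schema, payload fields, and course-key format shown are entirely hypothetical, since the API does not exist yet:

```python
import json

def course_payload(org, number, run, num_sections=3):
    """Build the JSON body a hypothetical test-data provisioning endpoint
    might accept. Hiding model details behind a payload like this is what
    would let the Tutor plugin (or anything else) create data without
    touching the platform's ORM directly."""
    return {
        "course_key": f"course-v1:{org}+{number}+{run}",
        "sections": [{"title": f"Section {i + 1}"} for i in range(num_sections)],
    }

# The plugin would POST this body to the (yet-to-be-built) endpoint
body = json.dumps(course_payload("OpenedX", "Demo101", "2024"))
```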
Testing at scale
Expensive and difficult
We should do it eventually, but definitely not for Redwood