...
How much of the testing should be delegated to the development teams? To the reviewers? To a dedicated testing team?
Prongs
Feature evaluation/user acceptance testing
Feature validation
Testing at scale
Edge cases
Performance
Part 0. Overview of current edx.org testing
Feature shipped but off by default
“Real world” beta testing
Via course waffle flags (see the sketch below)
Subset of users that are beta testers
Uses actual courses, with real users
Multi-stage roll-out of new features (for bigger features, usually)
Subset of orgs/courses
Allowlist/denylist for defaulting it to on
Extreme example: the Learning MFE roll-out, which took 6 months to a year
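A minimal sketch of the waffle flag mechanism, assuming edx-platform's CourseWaffleFlag helper; the flag name and render functions below are hypothetical, not real platform code:
```python
# Hypothetical course-scoped flag; the flag name and the render functions
# are made up for illustration.
from openedx.core.djangoapps.waffle_utils import CourseWaffleFlag

# Toggled globally, per-course (allowlist/denylist), or for beta testers
# via Django admin, with no redeploy.
NEW_WIDGET_FLAG = CourseWaffleFlag("my_app.enable_new_widget", __name__)

def render_widget(course_key):
    if NEW_WIDGET_FLAG.is_enabled(course_key):
        return render_new_widget(course_key)    # gated roll-out path
    return render_legacy_widget(course_key)     # default for everyone else

def render_new_widget(course_key):
    return "new widget"                         # stand-in implementation

def render_legacy_widget(course_key):
    return "legacy widget"                      # stand-in implementation
```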
End-to-end testing
Cypress
Reverting commits
Because they’re running on master and have control of it
Monitoring + Continuous Delivery
Detecting new errors against specific release versions (down to specific PRs)
No specific philosophy/industry pattern adopted
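One common industry pattern for the per-release error detection above, shown with the Sentry SDK purely as an example (the notes don't say which monitoring stack edx.org actually uses):
```python
# Release tagging with the Sentry SDK (illustrative only; edx.org's
# monitoring tooling may be something else entirely). Tagging each deploy
# with its git SHA lets the dashboard attribute new errors to a specific
# release, and from there to the PRs it contains.
import sentry_sdk

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    release="edx-platform@a1b2c3d",                        # deploy's git SHA
    environment="production",
)
```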
Coursegraph:
Graph database where modulestore data is dumped
Page of canned queries
Example: “How many people use an LTI tool with this exact URL?” (see the query sketch below)
Evaluate the priority/impact of features/bugs
Possible to replicate variety of content
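A sketch of what that canned LTI query might look like, assuming Coursegraph is the usual Neo4j dump of the modulestore; the node label and property names are guesses at the schema, so verify against the real Coursegraph before relying on them:
```python
# Hypothetical canned query against Coursegraph (Neo4j). The node label
# (lti_consumer) and properties (launch_url, course_key) are assumptions
# about the schema.
import os

from neo4j import GraphDatabase

driver = GraphDatabase.driver(
    "bolt://coursegraph.example.com:7687",          # placeholder host
    auth=("neo4j", os.environ["NEO4J_PASSWORD"]),
)

QUERY = """
MATCH (b:lti_consumer)
WHERE b.launch_url = $url
RETURN b.course_key AS course, count(*) AS uses
ORDER BY uses DESC
"""

with driver.session() as session:
    for record in session.run(QUERY, url="https://tool.example.com/launch"):
        print(record["course"], record["uses"])
```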
Bug reporting pipeline (CRs)
Part I. Overview of the current community testing strategy
...
Announcement: https://discuss.openedx.org/t/join-the-test-team-for-quince-and-meet-the-test-champions-from-palm/11503
Test Sheet: https://docs.google.com/spreadsheets/d/1oVhrnRjk7zxUHMNyU8bJ-TLKyaSukLmN3jWom17O7pI/edit#gid=0
Project Board: https://github.com/orgs/openedx/projects/28/views/5
Example release blocker: https://github.com/openedx/wg-build-test-release/issues/318
Part II. What should probably continue for Redwood?
List the things that have worked well so far, and likely shouldn’t be messed with.
BTR should continue to function as gatekeepers
Part III. What should we change for Redwood?
Product should take ownership of a “master” test spreadsheet
The master version should be ready by the cutoff date
The test manager still instantiates the release-specific version of the spreadsheet
Cut the testing period in half
Pros: more features land in Redwood
“If the test cycle was smaller, we’d increase the chance of features getting into each release”
We know BTR has taken the brunt of the testing load, and we're willing to help find resources to get testing done faster (more automation, more documentation, etc.)
Codify the process on docs.openedx.org
What the test manager needs to do: write this doc proactively (this Friday)
Move from spreadsheet to Github issues?
One issue for each test case
Tracks the whole lifecycle of the issue
Script to convert the product-owned master spreadsheet to issues (sketched below)
Happens at every release
Smart enough to create new issues if new rows are added
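A sketch of that conversion script, assuming a CSV export of the master sheet; the target repo, column names, and title convention are placeholders, not an agreed format:
```python
# Hypothetical spreadsheet-to-issues sync. Repo, CSV columns, and the
# "[Test]" title convention are assumptions.
import csv
import os

import requests

REPO = "openedx/wg-build-test-release"   # placeholder target repo
API = f"https://api.github.com/repos/{REPO}/issues"
HEADERS = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}

def existing_titles():
    """Collect titles of all issues so re-runs only create new ones."""
    titles, page = set(), 1
    while True:
        batch = requests.get(
            API, headers=HEADERS,
            params={"state": "all", "per_page": 100, "page": page},
        ).json()
        if not batch:
            return titles
        titles.update(issue["title"] for issue in batch)
        page += 1

def sync(csv_path):
    seen = existing_titles()
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            title = f"[Test] {row['Test case']}"   # assumed column name
            if title in seen:
                continue                            # idempotent: row already has an issue
            requests.post(API, headers=HEADERS,
                          json={"title": title, "body": row.get("Steps", "")})
```
Re-running it after new rows land only files issues for the new rows, which is the "smart enough" behavior above.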
Process suggestions
Critical BTR roles should probably rotate mandatorily every two terms
Critical roles should probably have a primary and secondary
Make the test plan/state more visible/discoverable
A progress bar with milestones/target dates
A “folder” pinned to the Slack channel would be great
Maintainers are responsible for fixing bugs, or for getting the authors of the feature in question to do so
Part IV. What should probably change for future releases?
Things that haven't worked well at achieving their stated objectives, aren't currently done (but probably should be), etc. Anything to add, remove, or modify.
Proper retesting after bugs are fixed / new features drop
Documented process for testers
Full procedure, from opening tickets to closing them
Continuous testing?
Product owns definition of test cases
Automate testing
Performance testing
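For performance testing, a small load-test harness could be a starting point; a minimal Locust sketch (Locust is one option, not something these notes prescribe), with placeholder paths:
```python
# locustfile.py -- minimal load-test sketch; the paths are illustrative,
# not an agreed-on performance suite. Run with:
#   locust -f locustfile.py --host https://lms.example.com
from locust import HttpUser, between, task

class LmsUser(HttpUser):
    wait_time = between(1, 5)   # seconds between simulated user actions

    @task(3)
    def dashboard(self):
        self.client.get("/dashboard")

    @task(1)
    def course_listing(self):
        self.client.get("/courses")
```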
Testing with multiple configurations
e.g. Forums On/Off
A sandbox for end-users to test
Testing with complex data
Generate data, which we currently don't have (see the OLX sketch below)
A place we control where there’s data that can be used, manually or otherwise
Sample or real complex courses
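One way to generate that complex data: script oversized OLX courses and import them into a test instance. A sketch that writes a simplified single-file OLX tree (real course exports split blocks across per-type directories, but this shows the shape; the sizes are arbitrary):
```python
# Sketch: generate a large course as simplified inline OLX. Names and
# sizes are arbitrary stand-ins for "complex content".
from xml.etree import ElementTree as ET

def big_course(chapters=50, units_per_chapter=20):
    course = ET.Element(
        "course", org="TestX", course="TST101", url_name="run1",
        display_name="Generated complex course",
    )
    for c in range(chapters):
        chapter = ET.SubElement(course, "chapter", display_name=f"Section {c}")
        seq = ET.SubElement(chapter, "sequential", display_name=f"Subsection {c}")
        for u in range(units_per_chapter):
            vertical = ET.SubElement(seq, "vertical", display_name=f"Unit {c}.{u}")
            html = ET.SubElement(vertical, "html")
            html.text = f"Filler content {c}.{u}"
    return ET.ElementTree(course)

big_course().write("course.xml", encoding="unicode")
```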
Invite users to test new features in sandboxes
“Here’s how to use it, can you give us some feedback”
Have some place where the data “sticks around” and can be reused
UAT (User Acceptance Testing)
Testing window
Should we start testing sooner?
PR/Sandbox testing is good
Can be used to create the test cases that get used later
Post-cutoff testing is still necessary, to catch regressions
Re-testing point releases?
Probably worth testing a subset of the cases, ideally the ones affected by the changes
Changing the release cadence? Adding to the cadence with intermediate releases?
Good question to ask the community
The translation workflow should be taken into account
Spontaneous bug reporting
edX.org has provided this for us for a long time
“Click here to report a bug”
Other open source projects: Some run master, report bugs
Find partners to run master? Consortium?
Problem: Micro-upgrades?
Run it ourselves?
Part V. The weeds: how, exactly?
How can we help implement the changes? Are there specific patterns or tools to recommend? Who should lead this - should it continue to be delegated to BTR? What are the next steps?
Studio MFE
Half has already been tested “the edx.org way”
Half has not
Ideally with end-user testing
What is preventing us from automating testing?
We need to invest in it
Product validates the tests that get run continuously
At the integration level we don't need to test every single iteration - mostly basic stuff (see the smoke-test sketch below)
Won’t replace all testing, but will save some BTR time
Once we build the tests, who's responsible for maintaining them?
Exasperation with bok-choy
Maybe Cypress is viable
Not for Redwood
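The "mostly basic stuff" level mentioned above could start as small as a smoke suite; a pytest sketch with a placeholder host (/heartbeat is edx-platform's health-check endpoint):
```python
# Smoke-test sketch: only checks that key LMS pages respond. The host is
# a placeholder; the paths are edx-platform LMS URLs.
import pytest
import requests

LMS_URL = "https://lms.example.com"

@pytest.mark.parametrize("path", ["/heartbeat", "/login", "/register"])
def test_page_responds(path):
    resp = requests.get(f"{LMS_URL}{path}", timeout=10)
    assert resp.status_code == 200, f"{path} returned {resp.status_code}"
```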
Tutor plugin that sets up test data (sketched below)
Making it a REST API would abstract out the model definitions
Bonus: would make it easier to provision the platform
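A sketch of that Tutor plugin using Tutor's v1 hooks API; the management command it calls is hypothetical, a stand-in for whatever data-generation tooling gets built:
```python
# Hypothetical Tutor plugin that seeds test data during
# "tutor local do init". The create_test_course command does not exist
# today; substitute the real data-generation command once it does.
from tutor import hooks

hooks.Filters.CLI_DO_INIT_TASKS.add_item(
    (
        "cms",  # run inside the Studio container
        "./manage.py cms create_test_course --org TestX --size large",
    )
)
```
Wrapping the same seeding logic in a REST API, as suggested above, would let external tools provision data without depending on platform model definitions.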