Teak.1 Retrospective 🌳⏪

How to contribute 📝

1๏ธโƒฃ Review the template and add your thoughts under the relevant sections.
2๏ธโƒฃ Be specific! Share what worked well, what didnโ€™t, and any actionable suggestions for improvement.
3๏ธโƒฃ Tag yourself in your contributions (e.g., [Your Name]: Your feedback here).
4๏ธโƒฃ Keep it constructive. The goal is to learn and refine our process.

Where to focus your feedback 🔎

✅ Successes & achievements – What went well? What should we continue doing?
❌ Challenges & improvements – What issues did we face? How could we handle them better next time?
📈 Action items for Ulmo – What concrete steps should we take to improve?

Some key areas to consider:

  • Product 🏗️

  • Testing process 🔍

  • Bug triage & fixing 🐛

  • Security 🔒

  • Backports ⏪

  • Release notes 📄


I. Context, data, and highlights 📈

  • The Teak master branch was cut on 06/26, ~three weeks later than originally planned.

  • The master cut was moved by a few days to give critical PRs time to be merged.

  • Testing completion reached 96.9%, with a lot of it accomplished during test-a-thons!

II. Successes and achievements 🏅

  • The community helped a lot with triaging and fixing issues.

  • Async check-ins helped us find problems relatively early.

  • We got new contributors involved through well-labeled BTR issues!

  • Test-a-thons gave us high test completion rates, although we needed some extra help at the end.

  • The new issue flow made it easier to work with maintainers (transferring issues after triage).

III. We should keep ✅

  • Automated New Toggle/Waffle flag additions/removals (Priceless!)

  • Test-a-thons!

  • Using BTR issues to welcome new contributors

  • Opening test-failure issues in the BTR repo and then moving them to the source repo to improve collaboration with maintainers

  • Doing async check-ins before the release cut

  • Prioritizing and focusing on high-priority issues, mainly release blockers

  • Finding help within organizations actively participating in the BTR

IV. What didn't go so well, and what could we do better ❌

Product

  • We need to ensure Transifex is ready for a new release and that all new translations are committed to the release/<release_name> branch before tagging.

Testing

  • We need to make it clear to testers which tests require a technical skill set or comfort with setting/resetting admin permissions. (Note: the persona can be a user, but the tester's role and/or skill set is something else that should be evident from the testing sheet.)

    • Use the Developer/Operator Persona.

    • Update test instructions to make it clear that users should read the tests and confirm they feel comfortable with the testing setup and instructions before claiming the test.

  • For tests that verify email works, we need to make it clear to testers how to use Proton Mail to test. Is this still required? (Is this something that requires a technical skill set, or just clear instructions for setting up Proton Mail?)

    • This may no longer be an issue as of the Teak release: email was set up in the sandbox, so testers could test without having to set up Proton Mail.

  • For every test, we need to make it clear to testers where to navigate in the LMS or CMS to conduct it (are they logged in or out? what type of user, e.g., active or admin?).

    • Can we move this effort into the documentation? Check the documentation for the test being conducted: is this information there? If so, link to the documentation in the setup column. If not, update the documentation with this information and link to it in the setup column.

  • We need to make it clear to testers when multiple tests are linked (or at least easy to do one after another in one sitting). Antonella suggested linking these via numbers: Test 2.1, Test 2.2, Test 2.3.

  • Currently, adding a new test requires a test case ID to be assigned by hand, which is a manual, error-prone, and inefficient process. Can we automate this in some way?

    • Can we archive the Teak Testing Sheet so that historic case ID information is preserved for any still-open test failures that reference historic test case IDs?

    • Moving forward, we could adjust the Testing Spreadsheet to use a new test case ID assignment framework: the row number IS the test case ID. (To deprecate a test, we strike through the row's text, highlight the row red, and hide it from the sheet, so the row is never reused for a new test.)

  • Any update to a page can make some or all tests that touch that page out of date. We need to think through how to get product to review, flag, and update existing tests ahead of each release. (This will require making it clear which pages of the platform each test corresponds to.)

  • We need written instructions for authoring a good test case and for adding/updating test cases, aimed at those reluctant to update the test sheet. I've drafted the following two documents to add to the next release's testing sheet, but I welcome any BTR feedback/input on both of these documents:

  • Is Google Sheets the right tool for the testing spreadsheet, or could we use a table plus a script that turns each row into a ticket on a kanban board for each release?

    • Sheets might make the most sense for non-technical testers. We should do some outreach with testers from previous releases to get feedback on the testing process and tools.

  • If we continue using spreadsheets, we need to make it clear to testers how to create a filter view (rather than filtering the whole testing spreadsheet for every user), with steps to filter only for "me" so that no one else's view is affected.

    • Are there permissions that bar this?

    • Make sure the testing instructions explicitly warn against filtering the whole sheet.
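
The row-number-as-ID framework proposed above can be sketched as a small model. This is a hypothetical illustration of the bookkeeping rule only (the class and method names are invented; a real implementation would operate on the Google Sheet itself):

```python
# Sketch of the proposed test-case-ID framework: the spreadsheet row number
# IS the test case ID, and deprecated rows are kept (struck through, red,
# and hidden in the real sheet) so their IDs are never reused.
# All names here are hypothetical, for illustration only.

class TestSheet:
    def __init__(self):
        # Row 1 is the header, so test rows (and case IDs) start at 2.
        self.rows = {}           # row number -> test title
        self.deprecated = set()  # row numbers that must never be reused
        self.next_row = 2

    def add_test(self, title):
        """Append a new test; its row number becomes its permanent case ID."""
        case_id = self.next_row
        self.rows[case_id] = title
        self.next_row += 1
        return case_id

    def deprecate(self, case_id):
        """Mark a test deprecated; the row stays so the ID is not recycled."""
        if case_id not in self.rows:
            raise KeyError(f"No test case with ID {case_id}")
        self.deprecated.add(case_id)

    def active_tests(self):
        return {i: t for i, t in self.rows.items() if i not in self.deprecated}


sheet = TestSheet()
login_id = sheet.add_test("LMS login works")        # case ID 2
course_id = sheet.add_test("Create course in CMS")  # case ID 3
sheet.deprecate(login_id)
replacement = sheet.add_test("LMS login works (rewritten)")  # gets ID 4, not 2
```

The key property is that deprecating a test never frees its row, so a case ID recorded in an old test-failure issue can never silently point at a different test in a later release.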

Bug triage and fixing

  • We should always triage high-priority issues carefully, checking if they are regressions so we can categorize them with the full context.

  • We should close issues that can't be reproduced after some time, so the board stays clean and easy to follow.

  • We need more core contributors from the authoring side to support maintainers. Internally, we don't have much front-end capacity, and most of the issues are front-end related.

Security

  •

Release notes

  • This was the smoothest this process has ever gone, and I thank @Sarina Canelake for automating the waffle/toggle flag add/remove list. This used to be by far the most painful part of the process!

Backports

  •

V. Action items for the Ulmo release 📈

Release

  • Invest time in maintaining and improving the release scripts. We could also look into improving CI.

Product

  • Categorize issues (high, medium, low) weekly during the release cycle, and increase the frequency as we get closer to the cut.

Testing

  • We hosted two virtual Test-a-thons for the community and one in-person Test-a-thon during Axim's in-person week. The in-person event plus the first virtual Test-a-thon accounted for 36% of tests completed for Teak; the final virtual Test-a-thon had almost no attendance. In the future, one in-person Test-a-thon (if possible) plus one virtual feels like the sweet spot. I added instructions/artifacts for hosting Test-a-thons to our wiki space here: https://openedx.atlassian.net/wiki/x/AwBJLwE

  • Finalize the Cypress basic/smoke testing for the release and make it part of the release testing process: cypress-e2e-tests/cypress/e2e/lms at restructuring · raccoongang/cypress-e2e-tests

Bug triage and fixing

  • Prioritize fixing key issues for the next point release.

  • Add the release testing label automation to more repos. The automation detects when the label is added to an issue and automatically links the issue to the BTR project. This also raises a question: what exactly counts as a release testing issue?
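
For repos that don't have it yet, the label-to-project automation described above could look something like the following GitHub Actions workflow. This is a sketch, not the actual BTR automation: the filename, project URL, label name, and secret name are placeholders, and `actions/add-to-project` is one off-the-shelf action that implements this pattern.

```yaml
# .github/workflows/add-to-btr-project.yml (hypothetical filename)
name: Add release-testing issues to the BTR project

on:
  issues:
    types: [labeled]

jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v1.0.2
        with:
          # Placeholder URL: substitute the real BTR project board.
          project-url: https://github.com/orgs/openedx/projects/NN
          # A token with project scope, stored as a repo/org secret.
          github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}
          # Only react to this label (the exact label name may differ).
          labeled: release testing
```

Rolling this out is then just a matter of copying one workflow file into each repo, which also forces the "what counts as a release testing issue?" question to be answered per label, per repo.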

Security

  •

Backports

  •

Release notes

  •
