Goal: improve the efficacy and efficiency of inter-team code reviews while minimizing the impact on reviewers.
What's Gone Well
- TNL works differently with different teams
- For Solutions, there's a 24 hr turnaround (often just 2 hrs) and a limit of one review round. This minimizes the impact on TNL and keeps reviews moving quickly.
- It's on Solutions to give the thumbs up in the context of TNL's feedback
- For Lahore, also a 24 hr turnaround, but with more detailed reviews
- ECOM: good to have the context of the feature
- Links to sandbox can be very helpful!
- The DOC team gets tagged for text strings; sandbox links and screenshots are super helpful here
- Working on a template for PRs (a sketch of what it might include follows this list)
- Code review does go faster when there's a design session before the PR goes out.
- Also goes faster when you distinguish between FYIs and actual reviewers.
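A hypothetical sketch of what such a PR template might cover, pulling together items mentioned elsewhere in these notes (ticket links, sandbox links, screenshots, the FYI/reviewer distinction, and time estimates). The real template is still being drafted, so the field names here are illustrative only:

```markdown
<!-- Illustrative only; not the final template -->
## What does this PR do?
One or two sentences on the intended behavior and why.

## Context
- JIRA ticket: <link>
- Sandbox: <link, if available>
- Screenshots: <attach, especially for text-string changes that need DOC review>

## Reviewers
- Needs review from: @reviewer-a
- FYI only, no action needed: @observer

## Estimated review effort
e.g. "small, ~15 min" or "architectural, please budget an hour"
```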
What Hasn't Gone Well
- For the a11y team, it can be unclear whom to ask for code review — also an issue Escalation faces
- For TNL, tagging specific developers isn't always the right move, since certain people get overloaded
- ECOM: bad not to have the context of the feature
- Context: things like screenshots, links to tickets, descriptions of what the PR is meant to do
- How much would this be solved by a good JIRA ticket?
- For features, no. For bugs, yes.
- The open source team already asks for these things on open source PRs
- Should we have people sit down with reviewers, or write descriptions? (Descriptions seem better...)
- Although there's also value in having in-person explanations...
- It's hard to figure out where inter-team code reviews fit in the priorities of things we're already working on.
- Implicit contract: "You can respond quickly when you can respond properly"
- It would be helpful to have a time estimate for each review
- Size of PRs can be an issue
- Teams can be blocked by other teams, and it's unclear when a reviewing team will get to a PR. By the time they do, we've sometimes moved on to other things.
- It isn't always clear when we should do a cross-team review
- Gut feeling is to do one when the change is architectural
- There's a balance between getting a cross-team review and getting the other team up to speed on what you're doing
- Are screenshots enough?
- How much "review" should happen even before the code is written?
- Good to have general agreement on outcome of PR
- There's not always consistency about what each team requires for a PR.
- Worth documenting PR requirements by team?
- A consistent "cross-team" policy could be useful
- It might be worthwhile to pin down what we want to get out of a code review
- Maturity of platform figures in here
- Issues are staying in "code review" longer than in "in progress" (4-5x longer for TNL/Mobile)
What do we want to get out of code review?
- Each team will go through its backlog and give examples of things that went well and things that didn't