  • Experiment for this hour: “Strong Opinions Loosely Held Might Be the Worst Idea in Tech”

    • Hypothesis: Qualifying statements with an uncertainty percentage allows for more inclusive conversations.

    • Q: From experience, how much of this applies to edX meetings?

    • Note: some of us are training ourselves to be more assertive in order to succeed in this industry.

    • Note: sometimes there is an asymmetric demand for proof (this may especially apply to newer or junior engineers).

      • Sometimes this is also a matter of perception: people feel they need a rock-solid proposal before they can push an idea.

    • Note: Riff Analytics is a video conferencing tool that provides immediate feedback on the inclusivity of the meeting.

    • Other Ideas:

      • Bringing up alternatives

      • Prefacing statements with “I don’t feel strongly about this” or “There may be other ideas. Here’s one.”

      • Pausing for input

        • Not all pauses need to be awkward 😃

      • Avoid forcing people to “stand and deliver”

        • Reacting in real time isn’t how all people operate, or prefer to operate.

      • Give people individual thinking time before opening up to group discussion.

        • This lets people contribute in their own way before moving into the larger group-format discussion.

      • Avoid asking people to specify whether they “don’t know X or Y” in a group setting; instead, give a high-level summary.

        • e.g. “Who here doesn’t know about Containerized Private Minions?”

        • Let's stop asking "does everyone know what XYZ is?" and instead explain acronyms and the like up front.

  • +4 Rebranding edX.org and its i18n implications - carryover from last week

    • edX is going to go through a re-branding effort; what implications, if any, will this have for how we manage i18n?

    • Targeting January: new color, new type, new logo, etc.

    • This would also require copy changes → existing translated strings would need to be re-translated.

    • Note that for MFEs, we separate out the translatable strings from the constants used in the code (see the sketch at the end of this topic).

      • This may allow us more flexibility to control when we change and deploy things. (60% certain)

    • Note: Add brand strings well before we need to go live, to give us time to translate.

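    • As a hedged illustration of the string/constant separation, in Django-style Python (MFEs do the analogous thing in their message files); PLATFORM_NAME, WELCOME_MESSAGE, and render_welcome are hypothetical names, not actual edX code:

      ```python
      from django.utils.translation import gettext_lazy as _

      # Constant: swapped once at rebrand time, never sent to translators.
      PLATFORM_NAME = "edX"

      # Translatable copy references the constant via a placeholder, so a
      # rebrand only forces re-translation when the surrounding copy changes.
      WELCOME_MESSAGE = _("Welcome to {platform_name}!")

      def render_welcome():
          return str(WELCOME_MESSAGE).format(platform_name=PLATFORM_NAME)
      ```
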
  • +4 FYI: Paragon Proposed Changes

  • FYI: Signals as Hooks for extensions

  • FYI: Core Committers Phase 1 Goals

    • David Joy’s summary of the Paragon proposal document:

      • Component changes in Paragon are breaking changes that apps need to understand and process before they upgrade.

      • With theme + brand also being incorporated into Paragon, this update process becomes doubly hard.

      • The proposal is to move edX-brand out of Paragon.

      • Some concerns: versioning and dependency management, and flexibility versus consistency.

    • Discussion / Notes on this:

      • 80% confident in the proposal - David

      • 70% confident this would help Open edX as well! - Nimisha

      • More feedback to come in Paragon working group / FEDX meetings!

  • +4 Arch Roadmap update

  • What does a good set of unit tests for a Django model look like? - asked by JJ, who will not be able to attend but will read the notes 🙂

    • Specific example that I’m working on (also applies to the question on validations below): https://github.com/edx/ecommerce/pull/3087/files#diff-6aa12e1c48e74f5bcc0bd29fabdf3185R126-R173

      • Background info: I’m attempting to test the swap_states(cls) class method defined in the model. There are 3 possible rows in the table: “New”, “Current”, and “Discard”. There can only be one row in each state, so this method would be run (before deleting the existing “Discard” row and creating a “New” row) in order to move the existing row(s) along. I’m looking to set up a few different test scenarios, e.g. a test case with 0 existing rows in the db and a test case with 3 existing rows in the db. I’d prefer not to do the setup inside the test itself: if the test fails, I want to know that the logic somewhere has changed, and not that my database state setup is broken, which is what would happen in that PR today. (See the sketch after this item.)

      • Additional background: This is really easy to do in RoR

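      • One possible shape, as a hedged sketch: keep per-scenario setup in small named helpers (or fixtures) so each test declares its starting database state up front. ConfigRow, its state field, and the import path below are hypothetical stand-ins for the actual model in the PR:

        ```python
        from django.test import TestCase

        from myapp.models import ConfigRow  # hypothetical model and import path


        def create_rows(*states):
            """Create one row per state; a named helper keeps fixture setup out
            of the test body, so a failure points at the logic under test
            rather than at broken setup."""
            return [ConfigRow.objects.create(state=state) for state in states]


        class SwapStatesTests(TestCase):
            def test_swap_states_with_empty_table(self):
                # 0 existing rows: swap_states should be a no-op and not raise.
                ConfigRow.swap_states()
                self.assertEqual(ConfigRow.objects.count(), 0)

            def test_swap_states_with_all_three_rows(self):
                create_rows("New", "Current", "Discard")
                ConfigRow.swap_states()
                # Invariant from the description: at most one row per state.
                states = list(ConfigRow.objects.values_list("state", flat=True))
                self.assertEqual(len(states), len(set(states)))
        ```
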
  • What kinds of validations/validation patterns do we use on models (if any)? It seems like Django models will let you save invalid data, so what do we do to protect against bad data in the DB if we’re not going through UI form fields to save models (e.g. we’re invoking methods to save things purely in the backend, like from a cron job) and we can’t add validations in the DB itself? (See the sketch below.) - also asked by JJ, who will not be able to attend this tea time but will read the notes

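    • One widely used pattern (a sketch, not necessarily what edX does): put invariants in the model’s clean() method and override save() to call full_clean(), so validation runs even when saves don’t go through a form. The Coupon model below is hypothetical:

      ```python
      from django.core.exceptions import ValidationError
      from django.db import models


      class Coupon(models.Model):  # hypothetical example model
          code = models.CharField(max_length=32)
          percent_off = models.PositiveIntegerField()

          def clean(self):
              # Runs via full_clean(); enforces invariants the DB can't express.
              if self.percent_off > 100:
                  raise ValidationError({"percent_off": "Cannot exceed 100."})

          def save(self, *args, **kwargs):
              # Force validation on every save path (cron jobs, shell, backend
              # code), not just ModelForm-driven saves.
              self.full_clean()
              super().save(*args, **kwargs)
      ```

    • Tradeoff to note: full_clean() on every save adds queries (e.g. for unique checks), which is why some teams instead validate at service-layer boundaries.
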
...