Content Libraries Sync Notes
@Kyle McCormick @Dave Ormsbee (Axim) @Braden MacDonald @Jenna Makowski @Marco Morales @Sam Daitzman @Maksim Sokolskiy @Scott Dunn
Evergreen topics:
Any new decisions made in the past week?
Any big decisions that need to be made?
Check figma files - new decisions
Add any ADRs - new decisions
Roadmap status updates: https://github.com/orgs/openedx/projects/66/views/6
Jun 24, 2025
Settings decisions: Content Libraries Decision Log | Settings on imported blocks: What should be preserved in libraries, what should...
Import status
REST API PR just merged by Kyle
Max to resolve comments on unit tests and have Andrei move the PR out of draft; Kyle will review (tests: cover new changes with tests by andrii-hantkovskyi · Pull Request #33 · kdmccormick/edx-platform )
Also finish reviews for unit testing
Kyle - Found a few bugs in the main PR; fixes need to be backported from the prototype into the main PR
Section/Subsection backend work
Braden: It’s done. That team needs more work now.
v1 Library Migration SOW under review at Axim
Kyle: There are some changes that need to land and a writeup that needs to be written before doing legacy library migration work.
Jenna: This can get queued up for WGU in August. Not immediate blocker for frontend work.
Kyle: While working on courses in Learning Core, we found that some parts of the libraries code use Learning Core APIs marked unstable. When would be a good time to slot in that refactoring?
Let’s make a top level epic on API stabilization
@Jenna Makowski will make new epics for API stabilization, documentation,
Performance:
Delay on renames – doesn’t need to block.
UX feedback can go as new tickets in the epic or by messaging.
Eddie: Settings
Eddie: On import, settings would be preserved, but would not be editable. Might be a settings tab, but would be shown as empty. Might be some indication that it’s coming soon. But within libraries, there’s no UI.
What to do when it goes back into another course?
Marco: Passthrough of the settings for the most part.
Braden: Studio has a UX for flagging incompatible states for XBlocks
Braden: We can bring the settings in but not copy them over when using it in a course.
Scott: Feels a bit weird to import a course into a library and preserve the grading info, since that's so course-centric. Maybe drop those fields?
Challenge - 2 different use cases and workflows for creating content
Marco:
While in libraries, what is shown?
When re-using in a course, extra step to communicate validation/configuration
What settings are viewable/not editable in libraries
Braden: See screenshots in this PR for example of existing course-level validation Fix logic of best practices checklist and remove mobile friendly check by diana-villalvazo-wgu · Pull Request #2038 · openedx/frontend-app-authoring
Dave: Author complaint - no good way to do bulk settings across content at scale
Marco: Find/replace 1.0
Everything is preserved on import.
Settings are viewable in libraries. UI question as to how.
Messaging for re-use use case.
Jun 17, 2025
Reviewing doc: https://docs.google.com/document/d/1H25c1_Wk7tlYO7FRwkvGywgl_4QHNlgCdbtVaNCBUfE/edit?tab=t.0
Solutions explored:
Will pursue:
Two author roles, one with write permissions and one with write+publish permissions.
A fully featured versioning/history log, possibly with the ability to do rollbacks.
Metadata fields for copyright on structural blocks
Will not pursue:
Collection-level permissions
Too many complexities around content existing in two or more places with different permissions
Too many complexities with permissions percolating up and down in structural blocks
Pending decisions about whether/how to pursue:
Setting permissions boundary at the library level.
Trade-off - Introduces some complexity as organizations may need to opt into creating multiple libraries rather than using collections as a management tool, but gets around the architectural and user complexities around content having different permissions in different places within the same library.
Lowest technical complexity
Add a new structure within libraries to contain permissions, tentatively called "stacks"
Trade-off - Introduces complexity via a new feature users have to learn, in addition to collections. However, this may not be an issue if it is only needed by a small number of large organizations with already-complex publishing workflows, and would not be needed by others.
Medium technical complexity
Content-owner role, similar to a course role, where administrators can limit who is able to create new content, and content creators can be limited to editing only their own content.
Trade-off: Introduces more complexity in role management on large teams, issues when people off-board, transfer of content
High technical complexity
Limit structural block permissions such that when content is created in the context of a larger block, it must share the same permissions.
Trade-off- Adds much more complexity and possible barriers to reusing atomic content
Medium technical complexity
Product follow ups:
Include requirements for two library author roles in MVP for RBAC
Write requirements for an MVP for version logs
Explore 2-3 real-world content management use cases against 1) setting permission boundaries at the library level vs 2) stacks within libraries
Questions
Do they have these issues with courses as well?
Marco: Have gotten feedback that having author/admin is a limitation. People want viewer role. Course publishing differentiation as well. (viewer/author/publisher/admin)
Jenna: There’s more recent support for this as well.
How do they want to organize libs?
MIT: Very large library, possibly at site level, many collections
Penn State:
Marco: Licensing data is captured at the course level, but we don’t do anything with it. (Also technically on a per-video level.)
Braden: We do have tags and taxonomy systems if folks like MIT want to use that.
Sam: Not restrictive, just informative.
Braden: Would be great to make this pluggable.
(more notes will come from the recording)
Jun 13, 2025
Migration backend work:
Kyle will work on grooming a ticket to pass off to WGU by the conference
Requirements for library-library export/import: Requirements - Libraries support Library-Library export and import (backup and restore)
Tapping WGU to build this - next steps
Story for backup/restore is part of it
Starting point - backend management command
“kick off” - try to get something on the calendar for next week or following (before conference)
Import behavior | Import / Export behavior (option for Ulmo: import to new library only) | Migration behavior (recommended)
---|---|---
Replace full library (destructive) | Links can be preserved if it's the same library and the same content exists. Links can be moved to a new library, but it will break old links. | N/A
Replace individual content by ID (potentially destructive) | Links can be preserved for content linked from the same library. If a different library, links can be moved, but it will break old links. | N/A
Add content to target library (preserves existing content in target library) | Links can be moved, but it will break old links. | Links are redirected on first migrate.
Add content to new (otherwise empty) library | Linking will move all links to the new library, breaking old links. | Links are redirected on first migrate.
Kyle: de-duping is already being dealt with in the import work, so we could leverage it
Dave/Braden - partial import should be the default; aligns with vision for larger libraries
backup/restore will come first
partial import could also come with bulk editing tools
Possible phased rollout - Do designs to include partial import + editing
Open questions around subsection/section requirements/technical input: assumption-checking from product/Schema
Syncing at the structural block level will sync all changes except titles and Text Component edits, where local overrides are always preserved (for Ulmo)
Text edits would be a net-new capability
Likely need to flag local text override
Possible alternative - allow replacing of texts instead of partial text edits/overrides
Decision: Seems like a feasible incremental addition for Ulmo. Local overrides only at the text block level within structural blocks. Reference current state of stand-alone component local overrides. Need UI thought for messaging/warnings. If it becomes too complex, revert to just title overrides for Ulmo.
Can local overrides be re-applied after a sync?
No, better to stay ignored rather than re-apply.
All other edits to structural blocks still disallowed (for Ulmo)
Yes.
We add “Unlink from library” structural block action in outline and at individual component level to allow lower level changes if needed
yes.
Sync workflow adds an option to unlink instead of applying sync
Accept all changes, or unlink and review syncable content at one level down
Preserves links at one level down, to allow edits at the level you have unlinked
“Unlink from library” action at lower level flows upward
For Ulmo: only allow unlink at top-level (unlink at lower-levels is a manual sequence of operations)
In other words, “unlink from library” action on a unit inside a library-linked section will either not be allowed, or will confirm before unlinking the section and subsection levels above, preserving all possible links that are not a parent of that unit
This option does introduce some complexity with multiple parents at a course level
Will not unlink others - independent by default
Sync continues to be supported for all content at the level it is still linked
Permissions conversation
Section/subsection reuse designs (stories on GitHub by early next week)
Jun 6, 2025
Figma show and tell - migration workflow
Decision - Ulmo is manual migration only
Verawood - options for forced migration
org-level library to house all v1 libraries
one v2 library per one v1 library
one v2 library per set of read access
never auto-migrate anything, frontend for v1 libraries is gone but we keep the backend so migration code can be run
Legacy library block - need to add messaging to indicate migration needs to be done, and move the legacy block to off by default
per content block (RCB) + in studio home
both might be useful to consider
at moment of authoring + at content review / access moment
Import/export options of libraries:
Export a legacy library to a tar file and import it; then we can eventually implement an import/export with bigger capabilities
Different use cases
V1 import/export: Get v1 data into the system - this would be a Verawood goal
Extra metadata in V2 that doesn’t exist in V1
Archive and backup that preserves ownership, history at the library level
Can we leverage the course export infrastructure to do this?
Parity (for Ulmo?) - can be a simple variant on the existing library/data formats
Would need light designs replicating the course export button
Single workflow to download a tar file
Need light written requirements - what does parity look like and what are the expectations for use cases
“Backup / restore” vs. “import / export” features and framing differ
Restore and replace full library / create new library
Discussion: Content level permissioning: [WIP - DRAFT] Content-level permission requirements in Libraries
Aligning on behavior expectations/spec around syncing within structural blocks
May 30, 2025
Retro feedback updates:
New decision log spaces
Content Libraries Concept Decision Log (archived) - will archive this and combine into one decision log
Enhancements to board for better planning:
Adding both frontend and backend work https://github.com/orgs/openedx/projects/66/views/1
Using roadmap view https://github.com/orgs/openedx/projects/66/views/6
Ulmo planning session (30 mins)
Roadmap projection exercise
Follow-up retro items for discussion:
Braden: I’m interested in more integration tests or something like release testing on an ongoing basis. e.g. seems like some major bugs were on master / on the sandbox for 1 month + and nobody noticed.
Kyle: out of scope for this group, probably… but releasing Open edX more frequently so that when a feature misses the cutoff, it’s not such a big deal.
Who are the stakeholders?
BTR is the forum for discussion, but needs a champion
Do a feature freeze as part of release process so we can focus on bugs/testing
Automated integration tests is the prereq that would open door to more frequent releases later
To do: Put a story on the Board
Sam: clarity on timeline and state of draft product plans, expected dates they will be finalized / handed off to devs / move to AC testing.
How can we integrate more agile approaches/workflows into the 6-month release cycles?
May 23, 2025
Teak retro:
(Voting emojis: copy one of these and paste it in front of the items you vote for)
What went well?
Dev team pace in the final pre-cut sprint
Sam: Major improvements to content libraries at a product level, unlocked unit-level authoring and built toward future section/subsection authoring
Sam: More clear point-of-contact/tagging/followup after design handoff (still some room to improve, but seemed like fewer floating issues)
Braden: product team prepared a good plan to match scope of Teak Units work to available time (limited MVP)
Marco: Major step toward modular content, measurable learning at units smaller than a course
Ana: good organization from Jenna and the Axim team in keeping track of and driving community progress
Marco: Major platform capability delivered with several Open edX groups contributing! (much larger team / coordination / scope)
Max: great job tracking the streams in general
fast live dev conversations when needed
Ivan: ability to quickly and efficiently discuss tech details.
What didn’t go so well?
Eddie: Too many tickets in AC testing where OC team had to follow up for a response
Eddie: Wanted to note that Braden brought this up on the syncs and we got better at it. Retros are really useful, but in-process improvements are also essential.
Sam: Some tickets covered multiple concerns, leading to lack of clarity on followup / state, and difficulty in AC testing
Sam: Largest issue was bug-tracker tickets with multiple tasks in one ticket; when it came to AC testing, it was difficult to know the current status. Was the spec in this ticket actually implemented, 90% implemented, etc.? Some tickets were closed with follow-up tickets immediately opened for them. Mix of high- and low-priority issues in the same tickets. Now trying to use sub-issues more consistently/clearly: when there's a story that describes the screen and a high-detail area of that screen, we use a sub-issue to track that spec. Is this going to cause problems with accounting for these in sheets? Hoping that treating GH as the source of truth will help make this more clear.
Sam: Would like to get feedback on whether there’s any difficulty in tracking sub-issues, or if the sub-issues and level of broken out detail is helpful for development.
Braden: I think it’s a good thing to try, and we’re trying it out on the subsections/sections project. Will have feedback.
Sam: Sometimes hard to coordinate feature changes/understand current state of decisions (need for a source of truth)
Braden: felt like more bugs than usual especially in post-cut testing
Braden: merging features right up to the cut meant little time for testing, more bug fixes had to be backported (which creates more work than fixing them before the cut).
Kyle: Hard to keep track of what was planned/inflight between various spreadsheets and tickets.
Braden: (related) some GitHub issues/boards/etc were out of date
Sam: (related) multiple files/boards/internal documents tracking plans simultaneously, with some drift
Discussion notes:
GH vs spreadsheets: scoping decisions were often going into a spreadsheet, but some devs were looking at GitHub tickets as the source of truth
Jenna: Adopt GH as source of truth, even if we just use sheets for SOWs?
Braden: Can also use Project boards in GH, so it stays updated and in sync.
Sam: It would help to know the current status of each of the epics/plans/decisions. We could configure it so we can see which epics have SOWs signed. Schema also had times when they changed the product plans without noticing that there were conflicts in other places. GH as whole source of truth could help with that.
Jenna: Can appoint one person to be in charge of moving stuff to the backlog.
Kyle: There’s often engineering work that we will either keep in our heads or make one-off GitHub tickets around. Would be good to get it into the workstream explicitly.
Max: not enough time for architectural decisions
Dave: It was easy to lose track of decisions around concepts/conceptual framing of things in the system. We have it written, but it’s more long-form and scattered at times.
Sam: (related) Some conversations around core product concepts ended up repeating; perhaps some of this documentation was either not shared widely enough, not kept up-to-date, or was lost (or, like you say, got too long-form)
Discussion notes:
Dave: Lots of decisions being made, some tech focused, but also stuff like edge-case behavior of containers and published states, etc
Dave: Often finds himself pointing to a particular GH issue, or google doc, or slack thread, when referencing decisions that we have made
Dave: Could be good to have a centralized list of concepts and decisions around them
Marco: Could be good to enumerate pending product/arch decisions. Lots of slack threads flying by with big decisions. Doesn’t want to necessarily suggest that we have bunch of formal product proposals for each decision, but it could be good to make more use of the product proposal process somehow, maybe by using the decisions as collateral.
Jenna: A lot also gets hashed out in Slack. Going back to product processes, maybe we could evolve the PRD to have an ever-evolving section where we track key decisions every week. Use PRD space in wiki to keep this up to date, keep 5-10 mins in sync touch points to keep this up to date.
Braden: Found it very helpful to write test cases as a way of codifying decisions. TDD. Obviously this only applies to certain decisions.
Kyle: Even when we can’t write a unit test, we can write it as a user story. +1 in using the wiki, e.g. a sub-page of the PRD. These decisions evolve/change quickly. ADRs aren’t as useful for this, too slow. Wiki would be a better place to capture these decisions that span dev and product/UX.
Jenna: Central space to report on this, and we can sync up on a regular basis.
Braden: ADRs in Learning Core are out of date, but code comments have been good, e.g. explanatory comments in the models.
Dave: Action Item: Create these docs to start with.
Ed: At the issue-by-issue level, there are a lot of tickets where issues come up and a resolution happens in the GH ticket. It seems like something that should be kept on GH. Want to avoid devs having to go to multiple places.
Kyle: esp. since we have GH issues, Figma, new wiki page
Sam: example: for deletion dialogs, there are multiple GH issues where bits and pieces were added to requirements. Would be good to have single issue to track the spec and links out, and the spec gets updated.
Kyle: Would it be reasonable to consider Figma the source of truth?
Sam: If everyone is comfortable with Figma, we could do that. We would have to be comfortable with Figma as a source of current state and not just future exploration.
Marco: Good for Figma to have a bit more reference material, but with these potential mini-tickets that connect to several tickets, a lot of times the nuance becomes requirement detail. Related to one of the earlier notes about the number of bugs: there seemed to be a bunch of cases and states that were different on different pages, worked on at different times, and there wasn't time to think it all through.
Braden: Esp. this project we were implementing a reduced scope MVP so we couldn’t necessarily use Figma directly. Also it’s helpful to have conversation in GH where we can be very explicit about what’s in scope and not.
Sam: To summarize: will still keep trying to keep screenshots in GitHub to show what’s in scope and out of scope, but link out to Figma for long term direction. Not necessarily an expectation or added work for developers.
Dave: Concern with Figma - difficult to have in-the-weeds conversations about edge case handling due to its comment UI. Better to have those conversations in the wiki.
In Summary:
Github issues as source of truth for work to-do
Figma as source of truth for intended UI end state
Wiki as a way to capture decisions and source-of-truth for describing complex behavior or data interactions
Ana: a few RG PRs (import API / REST API) were not approved before release; what does RG need to do better in the future?
Ivan: import flow requirements changed after the demo, with a large refactoring after it.
Dave: We once again had a big pre-release crunch.
There are many technical expectations that CCs/Axim have, which are not well communicated or documented. This led to lots of churn in PR review.
Kyle: In PRs, a lot of comments were for undocumented assumptions that we have. Documenting ahead of time would help people write code to spec the first time. Action item that I’m taking to my manager, particular around REST APIs. Open call for any other things people are getting tripped up on.
Braden: I get this for frontend-app- PRs, e.g. expectation of TypeScript.
Marco: Are there contributing best practices? Some repos are definitely “you must read these three OEPs, etc.” Some are general and should be in all repos, some are specific enough to one area of the code. Many OEPs, hard to know what’s more important.
Marco: Crossed my mind that AI tools might help identify patterns/PR reviews.
We have a greenlight to use Copilot AI PR review on this project if we want to
What did you learn?
Sam: Specificity and clear separation of concerns in tickets helped development effort & AC testing (can continue to improve in this area)
Marco: continuing to shift design collateral format incrementally to better document patterns that span multiple pages / areas, hopefully reducing case / page-specific type bugs
Kyle: Need to treat REST APIs more like a part of the product – design and review before building them.
Sam: Also URL structures (this has come up in-flight a few times, and I think is worth planning ahead of time more)
Marco: We should consider whether future milestones might echo the pattern of this release, with lots of work upfront to determine core APIs and infrastructure decisions, and lots of the integration testing, UI, etc coming together in the final few weeks. If so, anything that can be done to improve?
Marco: will we be gated on a bunch of backend stuff landing before UI stuff can follow?
Kyle: We try to stay ahead on the backend and what we can build now before product decisions are necessarily made. Can share more on the backend on what we’re thinking about and building.
Marco: Would be good to know what things are likely to affect the technical side the most in the coming weeks.
Braden: These weekly meetings have been useful.
What should we change?
Eddie: Clarity of source of truth for requirements after discussion in tickets
Sam: Potentially more use of sub-issues in project board to keep issue scope clear
Braden: I’m interested in more integration tests or something like release testing on an ongoing basis. e.g. seems like some major bugs were on master / on the sandbox for 1 month + and nobody noticed.
Kyle: out of scope for this group, probably… but releasing Open edX more frequently so that when a feature misses the cutoff, it’s not such a big deal.
Sam: clarity on timeline and state of draft product plans, expected dates they will be finalized / handed off to devs / move to AC testing.
May 21, 2025
Retro of Teak release for Content Libraries
May 16, 2025
Kyle: Mostly focused on Import API review and Learning Core work for the conference, which feeds into pulling courses into learning core and …?
OpenCraft is still doing bugfixing. Team is unclear on priorities at the moment. Need to find a way to communicate which bugs are higher priority.
Sam: Current list of bugs is not ordered by priority, but we could order it. Current prioritization of work overall is Ulmo-oriented.
Sam: Selection state clarity – getting clarity on this seems high priority
Braden: Would these need to be backported to Teak?
Kyle: Where to see all bugs for libraries?
Project link: https://github.com/orgs/openedx/projects/66
Last round of bugs: https://github.com/openedx/frontend-app-authoring/issues/1834
Braden has been working with Jenna on SOW w.r.t. Sections and Subsections UI. Also working with Kyle and Dave on LC work for conference.
Sam: Section & Subsection support epic is up, pending some additional detail
Anything to figure out with Meili for Ulmo, since we want content libraries to be a core feature rather than a beta feature?
We need to decide whether we're going with Meili, Typesense, or both
Then we can remove “experimental” flags around content library content indexing
Dave will put something on the board just to make sure we remember this
Already done
Visual bug: double edit buttons on tagging sandbox (just flagging since this doesn’t quite fall under libraries, and I wasn’t sure if it should fall under the general issue tracker or Teak testing?)
Definitely an issue on master
Sam will open a bug
Need to confirm whether or not in Teak
May 9, 2025
High level targets:
By May 15 - Aim to have groomed stories for creating new sections/subsections in libraries
By June 30 - Syncing and reuse in courses
By July 31 - Phase 1 of Studio sidebars (add new/use existing; metrics/analytics; help)
By Aug 31 - Import to enable reuse of any set of course content
By Sept 15 - migrations
By Oct 15 - MVP for unit templates
Open questions
- CC import into libraries (and support of external updates)
- export?
- LTI support in libraries
- localization in libraries
Apr 18, 2025
Any blockers?
Import
Next PR will be python APIs, coming from RG, needs review
This is refinement from the large PR last week that was broken up
No concerns on import
Migration - currently in progress
Currently in yellow simply based on timeline
Could be low complexity to backport
Note: Ivan will be on vacation next week, Max will be stepping in
Backend support for subsections and sections: On track
Subsections PR - close to merging, approved
Section PR - open but needs fixes, draft PR ready for review
Apr 11, 2025
Teak code cut pushed to April 24.
Any blockers?
Import
Code review in progress
Break into 2 PRs - models + APIs, then the REST API review
Unit FE implementation
No blockers, moving ahead
Migration
Blocked on the import PRs - should be unblocked by breaking into 2 parts per above
Subsection/section support - backend
In progress, should be complete early next week
PRs tagged for review
Apr 4, 2025
Teak code cut pushed to April 24.
Any blockers?
Import
Section/subsection support - needs backend implementation first
OC will not be doing this by Teak
No discovery needed, just a matter of adding configuration
Open questions:
Is there space left on the Pearson contract?
Anastasiia will check on the Pearson contract to see if there is space left for RG to implement the subsection/section backend
If yes - next step is to get a handoff meeting with Braden, Dave and Max
Does RG have the background/context needed to implement the backend?
Braden suspects this is just a copy/paste of the unit code with some tweaks
Migration
Just waiting on PR review (link)
Mar 28, 2025
Teak timeline on the agenda to be discussed at BTR on Monday.
Any blockers?
No blockers from OpenCraft, no demos, mostly backend work.
Max: Section/subsection support for libraries, blocking one feature of course import. Blocking on technical discussions in the PR (Kyle and Dave)
In the current copy-paste functionality, if blocks from two or more courses have the same ID, there's a name conflict. Need to decide what to do with this. (Follow up about static assets.)
--
Unit support (epic 12)
First PR to be merged today or Monday, already have adding components in units reviewed, should be merged shortly.
Import API
Ivan giving demo for custom admin action to import from a course into a library.
V1 migration
Blocked on the models landing for course import, assuming we're still sharing the models between the two. If the cutoff is on 4/9, we won't have the migration; if it's on the 23rd, we should be able to finish it.
Eddie: There’s a bug in the sandbox that’s blocking some testing. Has to do with using library content in a course and having errors when saving changes.
Copy-paste bugs:
Ivan: When we copy-paste one container block from course A to course B and both courses are imported into one library, their block IDs are the same.
Resulting models gist: https://gist.github.com/kdmccormick/49c33c6bdc1476ed4ecff449cd3a4675/revisions
Mar 26, 2025
Dave/Kyle data modelling sync
PR for context: Support for grouped changesets of draft modifications by ormsbee · Pull Request #290 · openedx/openedx-learning
DraftChangeSideEffect, and using the old_version==new_version convention to denote "something about this has changed, even if the version for this thing hasn't", and how we talk about that consistently.
distinction between “version” and “change”
other possible side-effects: inheritance, centrally configured things like LTI where there is no explicit parent-child relationship.
Current Names (and their publish/draft equivalents)
PublishLog → DraftChangeSet
PublishLogRecord → DraftChange
Does old_version == new_version manifest in the publish log as well?
Kyle: Another way to think of this is marking something transitively “dirty”.
Kyle: Strawman: DraftChangeEffect without
Chained side-effects, i.e. if component C1 changes, and it affects Unit U1 and Sequence S1, are the side effect entries [(C1, U1), (C1, S1)] or are the entries [(C1, U1), (U1, S1)]?
Kyle: Can also capture everything, e.g. [(C1, U1, U1), (C1, U1, SS1), (C1, SS1, S1), (C1, S1, K1)]
(OriginalCause, ImmediateCause, Effect)
Kyle: End user probably cares about what the original change was
Conclusion: do the simple collapsed version for now: [(C1, U1), (U1, S1)].
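As a rough illustration of the collapsed convention (a sketch only; the names and structures below are assumptions for illustration, not the actual openedx-learning models), a change chain C1 → U1 → S1 could be recorded as per-link (cause, effect) pairs, with the transitively affected containers marked via old_version == new_version:

```python
# Illustrative sketch only -- not the actual openedx-learning models/API.
# Shows the "collapsed" side-effect representation agreed above: a change
# to component C1 that affects unit U1 and sequence S1 is stored as the
# per-link pairs [(C1, U1), (U1, S1)], and the transitively affected
# containers keep old_version == new_version ("dirty but not re-versioned").

from dataclasses import dataclass

@dataclass
class DraftChange:
    entity: str        # hypothetical identifier for the PublishableEntity
    old_version: int
    new_version: int

    @property
    def is_side_effect_only(self) -> bool:
        # old_version == new_version means "something about this changed,
        # even though its own version did not".
        return self.old_version == self.new_version

def collapse_side_effects(direct_change: str, ancestor_chain: list[str]) -> list[tuple[str, str]]:
    """Return per-link (cause, effect) pairs for a chain of containers.

    e.g. collapse_side_effects("C1", ["U1", "S1"]) -> [("C1", "U1"), ("U1", "S1")]
    """
    causes = [direct_change] + ancestor_chain[:-1]
    return list(zip(causes, ancestor_chain))

# Example: editing component C1 inside unit U1 inside sequence S1.
changes = [
    DraftChange("C1", old_version=4, new_version=5),   # the actual edit
    DraftChange("U1", old_version=2, new_version=2),   # transitively dirty
    DraftChange("S1", old_version=7, new_version=7),   # transitively dirty
]
assert collapse_side_effects("C1", ["U1", "S1"]) == [("C1", "U1"), ("U1", "S1")]
```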
Possible implications for other kinds of side-effects (e.g. inheritance :disappointed:, centralized LTI configuration, etc.)
Allow multiple DraftChanges for the same PublishableEntity in the same DraftChangeSet? (We discussed this in Slack, but I'm still on the fence, particularly as I add other things.)
Yes to collapsing
Try to bring the naming convention in line with PublishLog/PublishLogRecord
Detour on potential race-condition issues around using the publishing equivalent of these as cache keys, e.g. {unit_identifier}@{publish_log_key}
A nice thing about the old_version=new_version convention for transitively-changed containers is that the PublishLog row becomes a cache key
Readthrough case: Just don't update the cache result if it's not the most recent (see the sketch after this detour).
Long celery task: Re-run?
Feanil: Shim the entire course structure in a fixed way tied to the PublishLog?
Dave: is that the basis of read-only shim of modulestore?
Kyle: course blocks API could run from this new structure, instead of running block transformers on the modulestore.
Dave: Can go straight to learning core as well.
Kyle: Seems like we need a way to pre-empt or manage race conditions, if the first step is to create this structure document and everything blocks on that.
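To make the cache-key idea from this detour concrete, here is a hypothetical read-through sketch (the helpers get_latest_publish_log_key and render and the key format are assumptions; only Django's standard cache API is real) that skips the cache write when the computed result is no longer against the most recent publish:

```python
# Hypothetical sketch of the publish-log cache-key idea discussed above.
# Helper names are placeholders, not real openedx-learning/edx-platform APIs.

from django.core.cache import cache

def unit_cache_key(unit_identifier: str, publish_log_key: str) -> str:
    # "{unit_identifier}@{publish_log_key}": because every publish creates a
    # new PublishLog row (even for transitively-changed containers, via the
    # old_version == new_version convention), the key changes on each publish
    # and stale entries are simply never read again.
    return f"{unit_identifier}@{publish_log_key}"

def get_rendered_unit(unit_identifier: str, get_latest_publish_log_key, render):
    """Read-through cache that never records a stale result as current."""
    publish_log_key = get_latest_publish_log_key(unit_identifier)
    key = unit_cache_key(unit_identifier, publish_log_key)
    result = cache.get(key)
    if result is None:
        result = render(unit_identifier)
        # Only write if this is still the most recent publish; a long-running
        # task computing against an older PublishLog entry skips the write
        # (or re-runs), matching the "don't update if not most recent" note.
        if get_latest_publish_log_key(unit_identifier) == publish_log_key:
            cache.set(key, result)
    return result
```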
Exciting implication: We can store course content in Learning Core, have it write out/compile a very SplitModuleStore-like set of structure + definition documents keyed on the PublishLog entry, and use that as the starting point for running Learning Core courses. This would make a transition to Learning Core much faster for the LMS.
This would be a read-only modulestore for LMS, and we’d have Studio actually write to Learning Core-based APIs.
We’d store the structure and definition documents in MySQL, but in a way that we can just cascade delete them when new publishes happen.
This should help minimize incompatibilities because only the lowest layer of ModuleStore has to change (the KV store level)
We would incrementally port over things to use Learning Core in a piecemeal fashion, e.g. Unit rendering, course outlines, parts of the course blocks API. Each piece would see a performance uplift from the shift.
We would then sunset the ModuleStore read-shim over some period of time.
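To sketch the shape of that idea (purely illustrative; all model and field names below are assumptions, not actual openedx-learning or edx-platform code), the compiled structure/definition documents could be keyed on the publish-log entry so each publish produces a fresh snapshot and superseded ones can simply be deleted:

```python
# Purely illustrative Django sketch -- names are assumptions for illustration.
from django.db import models

class PublishLog(models.Model):
    """Stand-in for openedx-learning's PublishLog (one row per publish)."""
    published_at = models.DateTimeField(auto_now_add=True)

class CoursePublishSnapshot(models.Model):
    """A compiled, SplitModuleStore-like structure document for one publish.

    Keyed on the PublishLog entry, so each publish produces a new snapshot and
    old ones can be removed (via the FK cascade or a cleanup step when a new
    publish lands), giving the LMS read-shim a stable document to serve.
    """
    publish_log = models.OneToOneField(
        PublishLog, on_delete=models.CASCADE, related_name="snapshot",
    )
    course_key = models.CharField(max_length=255, db_index=True)
    # JSON blobs standing in for the "structure" and "definition" documents.
    structure = models.JSONField()
    definitions = models.JSONField()
```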
Mar 21, 2025
Any blockers?
Unit support implementation
Needs to be merged: https://github.com/openedx/opaque-keys/pull/369/files
Import APIs
All in progress except for subsection and section import