Randomized Content - Idealized Feature Ideation
There are a few potential versions of this feature that could exist in the future. For the purposes of this section, I’m going to assume that the feature we’re designing for has the sole intention of providing random content from content libraries, with random problems as the primary use case.
With that purpose in mind, the ideal version of this feature would conceivably cover the following areas of functionality:
Content Discoverability
It should be possible for a course author to discover the content they need in content libraries without prior knowledge of its existence, and to quickly find content that they already know exists. This means that, however this feature is implemented, it would need the ability to:
- Browse and view the contents of libraries
- Search the titles and contents of libraries
- Sort and filter the list of available libraries by common properties (a filtering sketch follows this list), such as:
  - Contents (such as problems, HTML, or video)
  - Categories (organisation-defined, representing groups of libraries, such as by subject)
  - Tags (author-defined, representing properties of the library, such as difficulty or original course)
  - Author(s)
  - Date last updated
  - Date last live (end date of the most recent course to use the library, or “Live until [end date]” if a course using this content is currently live)
  - Number of courses the library has been used in
- Recommendations of content libraries based on the above properties and the content of the course (some form of pattern matching would be required; this is more speculative)
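To make the filter and sort surface concrete, here is a minimal Python sketch. `LibraryMeta` and all of its fields are hypothetical stand-ins for whatever metadata index the implementation would expose, not an existing Open edX model.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical metadata record for one content library; field names are
# illustrative, not taken from the platform.
@dataclass
class LibraryMeta:
    title: str
    component_types: set          # e.g. {"problem", "html", "video"}
    categories: set               # organisation-defined groupings, e.g. {"Maths"}
    tags: set                     # author-defined, e.g. {"difficulty:hard"}
    authors: set
    last_updated: date
    last_live: Optional[date]     # None if never used in a live course
    course_usage_count: int = 0

def filter_libraries(libraries, *, component_type=None, category=None,
                     tag=None, author=None, sort_key="last_updated"):
    """Filter by any combination of the properties above, then sort
    so the most recently updated (or most used) libraries come first."""
    results = [
        lib for lib in libraries
        if (component_type is None or component_type in lib.component_types)
        and (category is None or category in lib.categories)
        and (tag is None or tag in lib.tags)
        and (author is None or author in lib.authors)
    ]
    results.sort(key=lambda lib: getattr(lib, sort_key), reverse=True)
    return results
```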
Content Completion Persistence
Where content is reused, it should be possible to track that a learner has previously accessed or completed that content, and use that information to improve their learning experience. This means:
- Providing alternative content from the library where possible, if the same library content has already been assigned to that learner, whether in the same course or in other courses.
  - For example, if a content library with 4 problems is inserted at one point in a course offering 2 problems, and then inserted at a later point or in a different course offering a further 2 problems, the learner should be guaranteed to see two disjoint random groups of 2 problems, drawn from the whole library, with no overlap (see the sketch after this list).
- Allowing the learner’s completion status to be automatically pulled through between instances of the same problem from a library, so that the learner does not need to complete the same content twice if it is assigned due to a lack of other available content.
  - This should be optional for the course author, to allow for revision of content.
  - As an option, completion could also carry across only if the learner previously answered the problem correctly. This would likely need to be definable as a policy at the site, org, library, and component levels.
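One way to guarantee disjoint selections is to give each learner a single deterministic permutation of the library, seeded per (learner, library) pair, and have each block reference consume the next unused slice of it. A minimal sketch, with entirely hypothetical identifiers (nothing here is an existing API):

```python
import hashlib
import random

def learner_library_order(learner_id, library_id, component_ids):
    """Deterministic per-(learner, library) shuffle of a library's components.

    Because the seed depends only on the learner and the library, every
    randomised block drawing on this library sees the same permutation for a
    given learner, so consecutive slices of it are guaranteed to be disjoint.
    """
    seed = hashlib.sha256(f"{learner_id}:{library_id}".encode()).hexdigest()
    rng = random.Random(seed)
    order = sorted(component_ids)  # canonical base order before shuffling
    rng.shuffle(order)
    return order

def assign_components(learner_id, library_id, component_ids, count, already_assigned):
    """Take the next `count` components this learner has not yet been assigned."""
    order = learner_library_order(learner_id, library_id, component_ids)
    fresh = [c for c in order if c not in already_assigned]
    return fresh[:count]
```

In the 4-problem example above, the first insertion takes the first two positions of the permutation and the second takes the next two, so the two groups of 2 cover the whole library with no overlap.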
Content Selection and Randomisation
As would be expected, the content should be randomised, including:
- Semi-randomisation by component type, allowing multiple types to be selected (such as Multiple Choice, Video, and Checkboxes)
  - This should be based on the component types found within the library, not all possible component types
- Semi-randomisation by manually tagged content within the library (such as “mastery questions”, “level 1 questions”, or even “questions for Bob’s group”)
- Granular control over problem weight (maintain library weight or override weighting)
- Granular control over the number of each component type randomly selected from the bank (select 3 multiple choice problems, 2 Poll XBlocks, and 1 component tagged “complex”; see the sketch after this list)
- Multi-selection of libraries (select content from these three content libraries)
- Semi-randomisation by weight (provide 10 points’ worth of problems)
  - Re-weight as a group (divide 10 points among all assigned problems)
- Specific content exclusions (never include this specific component from this library) and inclusions (always pick this problem, plus 3 others at random)
- Randomised block-level variables (where a variable X is used across these problems, each learner sees the same value of X in all of them)
- Order preservation (always display the selected components in the order in which they exist in the library, even if not all components are used)
- Granular editing of the library content (at the cost of disabling automatic centralised updates; see Administrative Tools below)
  - This could also be implemented by duplicating the library to a new library from within the same interface, with editing taking place in the new library
- Option of adaptive randomisation (ensuring an even distribution of the available content across learners) vs. pure randomisation (assigning each learner a truly random selection of content)
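To make the quota, inclusion/exclusion, and adaptive-vs-pure ideas concrete, here is a hedged sketch of what a per-block selection rule might look like. `SelectionSpec` and every field on it are invented for illustration and do not correspond to existing XBlock settings.

```python
import random
from dataclasses import dataclass, field

# Illustrative selection rule for one randomised block.
@dataclass
class SelectionSpec:
    type_quotas: dict = field(default_factory=dict)   # e.g. {"multiplechoice": 3, "poll": 2}
    tag_quotas: dict = field(default_factory=dict)    # e.g. {"complex": 1}
    always_include: set = field(default_factory=set)  # component ids to force in
    never_include: set = field(default_factory=set)   # component ids to exclude
    adaptive: bool = False                            # even out usage when True

def select(components, spec, rng, assignment_counts=None):
    """components: list of (component_id, component_type, tags) tuples.
    assignment_counts: component_id -> number of learners already assigned it
    (used only for adaptive randomisation)."""
    pool = [c for c in components if c[0] not in spec.never_include]
    chosen = [c for c in pool if c[0] in spec.always_include]
    remaining = [c for c in pool if c[0] not in spec.always_include]
    if spec.adaptive and assignment_counts is not None:
        # Adaptive: prefer the components assigned to the fewest learners so far.
        remaining.sort(key=lambda c: assignment_counts.get(c[0], 0))
    else:
        rng.shuffle(remaining)  # pure randomisation
    for ctype, quota in spec.type_quotas.items():
        picks = [c for c in remaining if c[1] == ctype][:quota]
        chosen.extend(picks)
        remaining = [c for c in remaining if c not in picks]
    for tag, quota in spec.tag_quotas.items():
        picks = [c for c in remaining if tag in c[2]][:quota]
        chosen.extend(picks)
        remaining = [c for c in remaining if c not in picks]
    return chosen
```

Order preservation would then be a final pass that sorts the chosen components back into library order, and weight-based selection (10 points’ worth of problems) would replace the quota loops with a pass that accumulates component weights until the target is reached.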
Administrative Tools
In addition, to keep randomised content from causing significant administrative overhead for course admins and support staff, it would need tools to:
- Reset progress and/or force a single new problem to be delivered to a learner
- Reset progress and force new problems to be delivered to a learner for the entire randomised block
- View which randomised content has been delivered to a learner
  - The current ‘view as’ functionality is fine for this
- Override grading for randomised components from the gradebook
- Support automatic centralised updating
  - Settings to control how a problem behaves if a source library is updated after content is pulled from it (sketched after this list), such as:
    - Preserve current state (do nothing)
    - Update, purge (automatically update content, completely resetting all learners who have answered any problems or recorded a completion event; the nuclear option)
    - Update, preserve completion (automatically update problem content, preserving any grades, attempts, and progress from learners who previously answered)
    - Update, preserve correctness (automatically update content, preserving only the grades of learners who previously answered the problem correctly; for non-problem content, this would be functionally the same as preserving completion)
  - It should be possible, at the library level, to easily see where problems were used in courses but not updated, as well as where problems were edited.
  - It should also be possible to force a single one-off update, without enabling automatic updates, with the same granularity of progress preservation as above.
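The four update behaviours map naturally onto a policy enum. A minimal sketch, assuming a hypothetical per-learner state object with `reset`, `completed`, and `correct` members; no existing platform API is implied.

```python
from enum import Enum

class UpdatePolicy(Enum):
    PRESERVE_STATE = "preserve_state"                          # do nothing
    UPDATE_PURGE = "update_purge"                              # the nuclear option
    UPDATE_PRESERVE_COMPLETION = "update_preserve_completion"
    UPDATE_PRESERVE_CORRECTNESS = "update_preserve_correctness"

def apply_library_update(block, new_content, policy, learner_states):
    """Sketch of how a randomised block might react to an upstream library edit.

    `block`, `new_content`, and `learner_states` (learner_id -> state object)
    are all illustrative names invented for this example.
    """
    if policy is UpdatePolicy.PRESERVE_STATE:
        return  # keep the stale copy; surface it in the library-level report instead
    block.content = new_content
    for state in learner_states.values():
        if policy is UpdatePolicy.UPDATE_PURGE:
            state.reset()  # wipe grades, attempts, and completion for everyone
        elif policy is UpdatePolicy.UPDATE_PRESERVE_COMPLETION:
            state.reset(keep=("grade", "attempts", "completed"))
        elif policy is UpdatePolicy.UPDATE_PRESERVE_CORRECTNESS:
            # Keep the grade only for learners who answered correctly; reset the rest.
            state.reset(keep=("grade",) if state.correct else ())
```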
Reporting
It should be possible to report on the following areas, either directly or by extrapolating from other data:
- Overall grades by content assigned
  - To identify trends caused by groups of assigned content (for example, where learners viewed video B, they were more likely to pass)
- Grades by problem component (see the sketch below)
  - To identify issues with specific problems (for example, Problem 1 has a 90% success rate while Problem 2 has a 2% success rate), to ensure fairness between learners and to improve content libraries
- Responses by other content assigned
  - To identify trends in responses (for example, where learners saw video A, they were more likely to respond with option B)
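Both the per-problem and per-assignment views fall out of simple aggregations over tracking data. A rough sketch, assuming event and assignment records in an illustrative shape (real tracking-log events would need parsing first):

```python
from collections import defaultdict

def success_rate_by_problem(events):
    """events: iterable of (learner_id, problem_id, was_correct) tuples.

    Returns problem_id -> fraction of attempts answered correctly, which
    surfaces outliers like the 90% vs. 2% example above.
    """
    attempts = defaultdict(int)
    correct = defaultdict(int)
    for _learner, problem, was_correct in events:
        attempts[problem] += 1
        if was_correct:
            correct[problem] += 1
    return {p: correct[p] / attempts[p] for p in attempts}

def pass_rate_by_assigned_component(assignments, passed):
    """assignments: learner_id -> set of component ids assigned to that learner.
    passed: learner_id -> bool, whether the learner passed the course.

    Returns component_id -> pass rate among learners assigned it, i.e. the
    “learners who saw video B were more likely to pass” view.
    """
    seen = defaultdict(int)
    seen_and_passed = defaultdict(int)
    for learner, components in assignments.items():
        for comp in components:
            seen[comp] += 1
            if passed.get(learner, False):
                seen_and_passed[comp] += 1
    return {c: seen_and_passed[c] / seen[c] for c in seen}
```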