[Draft] [Proposal] Adaptive Learning
Overview
This document proposes a means of adding native support for adaptive learning to the platform. Adaptive learning is the process of dynamically presenting content to the user based on their performance on previous problems. This is distinct from ‘choose-your-own-adventure’ style blocks, which have deterministic pathways, and from standard problem banks, which do not change their selection method based on previous answers.
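To make the distinction concrete, performance-based selection can be sketched as a simple difficulty staircase: raise the level after a correct answer, lower it after an incorrect one, then draw from the pool at the new level. This is a minimal illustration only, not the proposed design; the Problem type, the 1–5 difficulty scale, and the select_next function are all hypothetical names, not platform APIs.

```python
from dataclasses import dataclass
import random

@dataclass
class Problem:
    id: str
    difficulty: int  # hypothetical scale: 1 (easiest) to 5 (hardest)

def select_next(pool, current_difficulty, last_answer_correct):
    """Adjust difficulty based on the learner's last answer, then
    pick a problem at the adjusted level from the pool."""
    if last_answer_correct:
        current_difficulty = min(current_difficulty + 1, 5)
    else:
        current_difficulty = max(current_difficulty - 1, 1)
    candidates = [p for p in pool if p.difficulty == current_difficulty]
    return random.choice(candidates), current_difficulty
```

A deterministic pathway, by contrast, would map each block to a fixed successor regardless of performance, and a standard problem bank would draw at a fixed difficulty on every attempt.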
Problem
Current course content features allow instructional designers to guide a learner through a concept via problems that progressively increase in difficulty, or contextual depth, as the learner progresses. However, these problems are either explicitly predefined, or are selected from problem banks of similar difficulty.
The result is that the course must be calibrated to the learning rate of an imagined learner. While experienced instructors can and do estimate the effective learning rate of the median learner for their courses, there are several limitations to this method:
The ability to predict the learning rate of the median learner, while possible, is not guaranteed. Instructors may overestimate or underestimate the difficulty of acquisition, especially as the curse of knowledge interferes with experts' ability to fully grasp the limitations of the uninitiated.
Newer instructors may have more difficulty making these estimations, which can impact learners.
Even experienced instructors may have difficulty estimating if they are preparing content for a topic they have not previously taught.
Even if the median learning rate is accurately predicted overall, it may vary considerably for specific concepts.
Learners far below the median may become discouraged and drop out. While some attrition is always expected, it is preferable to reduce it where we can do so without unduly impacting standards.
Learners far above the median may become bored and have difficulty keeping attentive.
The median itself may be skewed by historical estimates of what the median should be, as prospective learners who have struggled with online learning before, or who hold preconceptions about its challenges, may self-select out.
Some means exist to offset these problems, but each has its own limitations:
Instructors may use analytics to determine at-risk learners and intervene early.
This can be very effective, and should always be done regardless, but requires disproportionate effort. The smaller this group can be made, the lower the impact to the rest of the learners, and the lower the cost to administrate the course.
Entrance exams can determine the knowledge level of learners ahead of the course.
While current knowledge may be somewhat correlated with learning rate, it is not perfectly correlated.
Entrance exams used to gate entry may prevent the unprepared from entering a course, but may also exclude those with the capacity to make up the difference.
If the entrance exam is not used for gating entry, but is used to determine current knowledge, it may indicate that the course is not calibrated to the learners. Learning this too close to the course start time could result in significant extra time and resources spent to re-adapt the course or class schedule.
The course author may refine the course over time using continued analytics to better serve a larger audience and calibrate the median learner.
Like determining at-risk students, this data-driven decision making should be encouraged, but the general issues with designing for a median learner are not fully resolved.
It can take several runs of a course to dial in these optimizations.
These issues point to a consistent problem: The course itself cannot adapt to the needs of the learners. Significant effort must be spent by instructors to remediate this limitation. For courses which are self-paced and without an active instructor, little can be done other than to better chase the median.
One further limitation of static content is that it cannot fully prepare learners for exams which use adaptive testing methods to select questions. These include major exams such as the SAT and PSAT, which are now digitally administered to allow this functionality. Likewise, instructors cannot apply similar adaptive testing methodologies in their own course assessments.
Use Cases
As a learner:
I want problems which adapt to my level of understanding, so that my grasp of a concept can continue to build.
I want a course whose exams mirror the functionality of exams which may have a large impact on my acceptance into academic programs, so that I am better prepared for them.
I want to continue working on a topic until I understand it, lest I move forward without understanding previous concepts.
As an instructor:
I want to provide adequate levels of challenge to my learners, so that they are neither overwhelmed nor under-challenged.
As a course creator:
I want auto-adapting problem selection so that my self-paced courses can serve a wider range of learners with a wider range of abilities more autonomously.
Proposed Solution: High-Level