Simple Problem Editor Comparison Research
In this section, I’ll take a more in-depth look at how different platforms conduct assessments and incorporate problem-style content. This will inevitably range beyond the problem editor itself, but that wider view is important for the comparison and analysis work later. It ties directly to https://docs.google.com/spreadsheets/d/1zGnXBl9zMfVZOzwE72-Lv0wEAdN4f7Bm8hsfIaJ1_0k/edit#gid=0, which gives an overview of the problem types available on each platform. I’ve chosen to include any question type listed as supported under a platform’s basic, core question configuration options, i.e. those that do not require an extension, plug-in, custom scripting, or any other work an author cannot perform within the platform’s own editing environment, as that is the closest equivalent to the “common” problems of Open edX.
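For reference, the “common” problems of Open edX are the problem types an author can create directly in Studio, each stored as OLX markup. As a minimal sketch of what that baseline looks like (the question text below is invented for illustration; the element names follow the OLX format), a multiple choice problem is expressed as:

    <problem>
      <multiplechoiceresponse>
        <label>Which question type is supported by every platform surveyed?</label>
        <choicegroup type="MultipleChoice">
          <!-- exactly one choice is marked correct in a multiple choice problem -->
          <choice correct="true">Multiple choice</choice>
          <choice correct="false">Staff-graded essay</choice>
          <choice correct="false">Drag and drop</choice>
        </choicegroup>
      </multiplechoiceresponse>
    </problem>

A checkbox problem uses the analogous choiceresponse and checkboxgroup elements, with more than one choice allowed to be marked correct.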
General Overview
Generally, a few key patterns and lessons emerge from the problem-editing functionality of most LMS platforms:
Multiple choice and checkbox problems are ubiquitous and are a staple of basic assessment tools.
Problems and tasks that require staff input (such as staff grading), or activities that require moderation or synchronous interaction between staff and learners, see less use and less investment on platforms designed for mass, asynchronous usage; those platforms favour automated tools instead.
Where such platforms do offer these features, it is usually only because a small portion of their customer base (mis)uses the platform for small-scale, high-touch courses.
“Sorting”-style activities (such as categorization, drag and drop, and matching exercises) seem popular in low-stakes contexts such as introductory MOOCs and low-complexity content, but are increasingly absent on platforms aimed at higher-level content, presumably because they are pedagogically unsuited to complex subject matter.
Where the problem-editing experience is simple, that simplicity almost always comes at the cost of flexibility.
Most platforms cannot mix assimilative learning content with problems and interactive content. At best, doing so requires a separate webpage; at worst, quizzes are kept entirely separate and isolated from learning content in a “Quizzes” tab or similar.
The ability to interleave learning and practice (learn-practice-learn-practice) is both desirable and foundational to the design of Open edX, and should be emphasised and retained as a unique selling point of the platform.
Separating questions into quizzes limits other platforms’ ability to do interpolated testing, but it does provide significant advantages: answers can be reset at the quiz level, scores can be totalled per quiz, and many other administrative and learner-management tasks become straightforward. These are all areas in which Open edX falls short.