
Assessment Content Creation

Open edX

Within graded subsections, any type of component can be created; those that return a grade to the LMS contribute that grade to the associated assignment type. This includes (but is not limited to):

  • CAPA problem components

  • Graded XBlocks installed by default, such as ORA

  • Third-party XBlocks installed by the site administrator, such as the SCORM XBlock

This means that configuration options for graded content in Open edX can be incredibly bespoke, as specific XBlocks have their own settings related to grading and the behaviour of those components. The following graded components are the ones currently listed as fully supported in edX.org’s documentation:

  • CAPA problems

    • Single-select (multiple choice, radio buttons)

    • Multi-select (checkboxes)

    • Dropdown

    • Text input

    • Numerical input

  • Advanced problems

    • Custom JavaScript Display and Grading

    • Math Expression Input Problems

  • Other XBlocks

    • Open Response Assessments (ORA)

      • Self-assessment

      • Staff assessment

      • Peer assessment

    • Drag and Drop Problem

    • LTI Component

    • Peer Instruction

There are many other graded content types that can technically be included, but as this list is effectively infinitely extensible, I’ll draw the line at those listed as fully supported in the edX.org documentation, as they should theoretically be the most complete and best reflect the ideal standard of gradable components for the platform. XBlocks in general can have entirely arbitrary settings, but for a full breakdown of configuration for ORA, see the ORA Feature Analysis.

Problems

CAPA problems all share the same authoring interface, as they are all, fundamentally, different implementations of the same component type. A full analysis of the simple problem editor as of Olive can be found in the Problem Editor Feature Analysis document, along with a look at the new problem editor being worked on by 2U, which is optionally available.

[Screenshot: the basic problem editor]

[Screenshot: the advanced problem editor]

General settings

As all problems are created in the same component type, the same settings generally apply to problems created in both the simple editor and the advanced editor, though the advanced editor can override some settings from within the problem itself (such as awarding a different number of points than the correct answer’s weight). The settings, sketched in OLX after this list, are:

  • Display Name

    • The title of the problem. Display Name is a field common to all components, but whether it is shown to learners varies almost arbitrarily depending on how the component was built: problems display their display name, whereas HTML components do not.

  • MATLAB API key

    • A long-deprecated and no longer supported setting that is still present in Palm because it has only been removed from the newer 2U editor

  • Maximum Attempts

    • How many attempts a learner has to submit the problem before the Submit button is no longer available

  • Problem Weight

    • How many points correctly answering the question grants the learner

  • Randomization

    • Despite the name, this setting only affects problems that use randomised variables defined in a Python script; it does not shuffle answer options.

  • Show Answer

    • Controls when the learner is allowed to view the correct answer, chosen from a list of different problem states that are only explained in the (unlinked) documentation

  • Show Answer: Number of Attempts

    • How many attempts to allow before the Show Answer button appears, provided the previous option is set to “After some number of attempts”. This field is shown regardless of whether that option is selected.

  • Show Reset Button

    • Whether a button should be shown to allow learners to clear their currently selected or typed responses (this does not reset attempts for obvious reasons).

  • Timer Between Attempts

    • A number of seconds that a learner must wait before reattempting the problem. This is always seconds, so a learner who must wait 3 days must be set to wait 259200 seconds.
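
For reference, most of these settings correspond to attributes on the <problem> element in OLX. The sketch below is illustrative only; the attribute names are taken from the OLX documentation as I understand it, and should be verified in the advanced editor before being relied upon (note that the three-day wait mentioned above appears as 259200 seconds):

    <problem display_name="Example graded problem"
             max_attempts="3"
             weight="2"
             rerandomize="never"
             showanswer="finished"
             attempts_before_showanswer_button="2"
             show_reset_button="true"
             submission_wait_seconds="259200">
      <!-- The problem body (responses, choices, hints, etc.) goes here -->
    </problem>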

Basic Problem Editor

The following standard problem types are supported in the basic editor:

  • Checkboxes

  • Dropdown

  • Multiple Choice

  • Numerical Input

  • Text Input

In the default/simple editor, these all essentially insert the same editor with different templates for each problem type, and multiple input types can be used within the same problem for multi-stage questions (for example, a multiple choice question asking “Which of these is correct?” followed by a checkbox question asking “Which of these statements are true about the correct answer?”, all within a single problem; a sketch of such a problem follows the markup list below). The list for inserting these problems also includes versions that include the markup for hints and feedback.

CAPA problems support the following elements:

  • Context text that is not itself part of the question

  • A question, >>marked with chevrons<<

  • An explanation, wrapped in [explanation] … [explanation] markers, which appears once the problem has been answered

  • Basic markdown syntax for some elements of formatting like #Headings

  • Limited HTML insertion for other formatting and image insertion

  • || Hints ||

  • {{s: Feedback for selected answers, u: Feedback for unselected answers}}

  • Various symbol-based markup options for options, such as (x) for a correct multiple-choice question option or [ ] for an incorrect checkbox.

  • Option randomization and fixing by including @ and ! in the option markup

As demonstrated above, most configuration of basic problems happens in the text of the problem, using various glyphs to denote what each element is; this markup is then converted to OLX and rendered by the LMS as the relevant inputs, such as a dropdown menu.
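
To make the glyph-based markup more concrete, below is a small sketch of a two-part problem (a multiple choice question followed by a checkbox question) written in the simple editor’s markdown, based on the cheat sheet as I recall it. The question text, options, feedback, hint, and explanation are all placeholders, and the exact feedback syntax should be double-checked against the editor’s own cheat sheet:

    >>Which of these is a fruit?<<

    ( ) Carrot
    (x) Apple {{An apple develops from a flower and contains seeds.}}
    ( ) Potato

    >>Which of these statements are true about the correct answer?<<

    [x] It contains seeds
    [ ] It grows underground
    [x] It develops from a flower

    || Think about which option contains seeds. ||

    [explanation]
    Apples are fruits because they develop from the flower of the plant and contain its seeds.
    [explanation]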

This has a fairly steep learning curve, especially as the included cheat sheet doesn’t actually list all of the available options, and the approach is far from ideal. The shortcomings of this approach are discussed in far more depth in the Problem Editor Feature Analysis, so I won’t spend too much longer on it here.

Advanced Problems

Custom JavaScript Grading problems and Math Expression Input Problems (as well as Custom Python-evaluated Input Problems, which are also present but only ‘partially supported’) are effectively different implementations of advanced OLX problems, making use of <script> tags to inject arbitrary code into problem components. This is how tools like the chemical equation problem exist in their under-maintained state, with custom tags like <chemicalequationinput> being defined to provide input fields that feed a custom grader. Somewhat entertainingly, custom JavaScript grading problems appear to make use of Python grading (the example uses Python to import the grading from the JS), so JavaScript grading effectively requires Python grading problems to exist, despite the former being ‘fully supported’ and the latter being ‘partially supported’.
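
As an illustration of the Python-evaluated pattern these advanced problems lean on, a minimal custom Python grader in OLX might look something like the sketch below. The function name, question, and expected answer are placeholders, and while <customresponse>, cfn, and the loncapa/python script type are described in the OLX documentation, the details here should be treated as indicative rather than definitive:

    <problem>
      <script type="loncapa/python">
    def check_answer(expect, answer):
        # Placeholder grading logic: accept "42", ignoring surrounding whitespace
        return answer.strip() == "42"
      </script>
      <customresponse cfn="check_answer">
        <label>What is the answer to the ultimate question?</label>
        <textline size="20"/>
      </customresponse>
    </problem>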

OLX is deep and often poorly documented, with much of the only definitive guide to OLX not having been significantly updated for up to eight years at the time of writing. It’s possible (likely) that the majority of OLX hasn’t significantly changed in this time, but that is far from guaranteed.

OLX provides vastly higher flexibility than the standard problem editor, and I won’t pretend to be an expert in all that is technically possible using it, or to be aware of its shortcomings.

In addition to these, there are optional settings related to standard problems that can only be configured through the advanced editor using OLX, including:

  • Partial Credit

    • Partial credit awards a lower point value for a partially correct answer, but is not fully implemented in reporting: reports state that the learner answered the question correctly and received 1 point, even when that is not the case.

  • Tooltips

  • Randomised answer pools for multiple choice problems (a brief OLX sketch follows this list)
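
As an example of the last of these, answer pools are controlled by an attribute on the choice group in OLX. A rough sketch (attribute name per the OLX documentation; the question and choices are placeholders) follows:

    <problem>
      <multiplechoiceresponse>
        <label>Which of the following is a prime number?</label>
        <choicegroup answer-pool="4">
          <choice correct="false">6</choice>
          <choice correct="false">8</choice>
          <choice correct="false">9</choice>
          <choice correct="false">10</choice>
          <choice correct="true">7</choice>
        </choicegroup>
      </multiplechoiceresponse>
    </problem>

As I understand the documentation, with answer-pool set to 4 each learner is shown four of these choices: the correct answer plus three of the incorrect options chosen at random.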

Content Libraries

The task of problem banking in Open edX is performed by Content Libraries, which also support non-problem content. A full analysis of this feature, focused purely on randomised content, can be found in the Randomized Content Feature Analysis. The short version is that although content libraries have been extended to support non-problem content, the XBlock for pulling content out of them is clunky and not fit for purpose, and needs a complete replacement that both performs the same function and enables the other uses of content libraries.

LTI

LTI content can return a grade, and is handled by the LTI Consumer XBlock. While many graded XBlocks exist, most LMSs include an external tool feature to pull grades from an LTI assessment, so LTI deserves a quick mention.

Open edX has historically supported LTI 1.1 and 1.2, and has recently made significant steps towards LTI 1.3 compliance, though LTI 1.3 still has notable issues for the wider community, such as multi-tenancy support. While Open edX was certified as LTI Advantage Complete as of Lilac, grading support was not implemented until Maple.

As noted above, this is handled by the LTI Consumer XBlock. LTI passports are defined at the course level, and the exact configuration steps depend on the needs of the LTI tool in question. LTI content can launch inline, in a modal window, or in a new browser tab; it is assigned a weight where relevant, and has a setting to ignore deadlines, for unknown reasons.
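
For reference, a passport for LTI 1.1/1.2 is a string of the form id:client_key:client_secret added to the lti_passports list in the course’s Advanced Settings; the values below are placeholders:

    "lti_passports": [
        "my_lti_tool:client_key_from_provider:client_secret_from_provider"
    ]

The id portion (my_lti_tool here) is then referenced from the LTI Consumer component’s LTI ID setting so the component knows which credentials to use.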

Open edX’s implementation of LTI is, as is typical, tailored to the edX.org use-case, with most settings existing at the course level or in the component itself, rather than having any site-level settings.

Proctored exams are technically an LTI integration with the supported proctoring services, but they are not advertised as such, and require significant additional configuration.

At the time of writing, LTI 1.3 does not yet have a UI, and is not productised as a feature.
