...

Based on the above use cases, we break down the high-level scope as follows:

In Scope

  • Evaluating student responses using an out-of-the-box LLM and a pre-prompt from the course designer.
  • Creating a new XBlock for optional integration by app providers.
  • Evaluating student responses with a single criterion statement.
  • Auto-grading student responses via LLM.

Out of Scope

  • Training a model grader with human-graded student responses or fine-tuning an LLM.
  • Integrating this work into the existing ORA2 project.
  • Incorporating rubrics and multiple criteria. (Stage 10?)
  • Self, peer, and instructor grading of student responses.

MVP Specs

Features & Requirements

To realize this MVP, we believe the following features will be required. Refer to the following flow chart for more details:

Stage 0: New XBlock Foundations

Create a new XBlock that is available for custom addition to an edX site instance, leveraging the new React-based editor components for the configuration and Studio experience.
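As a point of reference, a minimal skeleton along these lines might look like the sketch below. It assumes the standard xblock package; the class name, field, and markup are illustrative rather than final.

```python
# A minimal Stage 0 skeleton, assuming the standard xblock package.
# The class name, field, and markup are illustrative, not final.
from web_fragments.fragment import Fragment
from xblock.core import XBlock
from xblock.fields import Scope, String


class ShortAnswerXBlock(XBlock):
    """Foundation for the proposed short-answer block."""

    display_name = String(
        display_name="Display Name",
        default="Short Answer",
        scope=Scope.settings,
    )

    def student_view(self, context=None):
        # Learner-facing view; real markup would come from a template
        # rendered alongside the new React-based editor components.
        return Fragment("<div class='short-answer'>...</div>")
```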

Stage 1: Single entry, basic feedback response

The MVP XBlock echoes the basic text problem type: it requires a prompt from the course designer, collects student input in a text box via a simple submission, and optionally shows feedback (a constant, author-written response, not AI-generated).

The Studio editor should allow entry of the prompt text and of feedback text to be optionally displayed to all students after submission. Grading is based on submission / completion rather than any other criteria for now. Collected responses must be stored in a way that allows later retrieval from other pages using the same XBlock in the same course.

This initial MVP echoes the basic text problem type, though much more will be added in future stages to situate this XBlock between the simplicity of a text input block and the complexity of an ORA2 block, with unique teaching and learning opportunities provided by future LLM / generative AI assessment and engagement modes.
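A hedged sketch of how those Stage 1 requirements could map onto XBlock fields and a submit handler follows. The field and handler names are illustrative, and cross-page retrieval of responses in the same course would likely need a course-level store rather than the per-block user state shown here (an open technical question).

```python
# Stage 1 sketch: author-configured prompt/feedback plus completion-based
# grading. Field and handler names are illustrative, not final.
from xblock.core import XBlock
from xblock.fields import Scope, String


class ShortAnswerXBlock(XBlock):  # extending the Stage 0 skeleton
    prompt = String(
        help="Prompt text entered by the course designer in Studio.",
        default="", scope=Scope.settings,
    )
    feedback = String(
        help="Optional constant feedback shown to all students after submission.",
        default="", scope=Scope.settings,
    )
    # Per-student answer; retrieval from other pages in the same course
    # would likely require a shared store rather than Scope.user_state.
    submission = String(default="", scope=Scope.user_state)

    @XBlock.json_handler
    def submit(self, data, suffix=""):
        # Completion-based grading: any non-empty answer counts as done.
        self.submission = data.get("response", "").strip()
        completed = bool(self.submission)
        return {"completed": completed, "feedback": self.feedback if completed else ""}
```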

Stage 2: Multiple linked blocks, and gated access

Building on the initial basic entry experience, authors would be able to link multiple short-response blocks in a course. Student submissions from previous blocks could then be shown to students in other linked course blocks, and access to subsequent blocks could be gated to ensure students have completed earlier blocks or specific assignments if needed.
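To make the gating behavior concrete, here is a minimal sketch. The get_submission helper and the idea of a shared, course-level store of submissions keyed by linked block id are assumptions for illustration, not an existing Open edX API.

```python
# Stage 2 sketch: show earlier linked entries and gate later blocks.
# `get_submission` is a hypothetical lookup into a shared course-level
# store of submissions keyed by linked block id.
def previous_entries(linked_block_ids, get_submission):
    """Collect earlier submissions for display in a later linked block."""
    return [
        (block_id, get_submission(block_id))
        for block_id in linked_block_ids
        if get_submission(block_id)
    ]


def is_unlocked(prerequisite_block_ids, get_submission):
    """Gate access: unlock only once every prerequisite has a submission."""
    return all(get_submission(block_id) for block_id in prerequisite_block_ids)
```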

Stage 3: Prompt + feedback vs. reflection modes, plus response length suggestions

In prompt + feedback mode, the supporting text of the block tells the learner that they are answering the prompt for submission, as in the Stage 1 and 2 iterations of the block.

In reflection mode, the supporting text frames the entry as a personal reflection, and previous personal reflections are made visible throughout the course if the block is set up with multiple course entries.
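One way this could surface in configuration is a simple block-level mode setting, sketched below; the field name and values are assumptions for illustration.

```python
# Stage 3 sketch: a block-level mode setting. Name and values are
# illustrative, not final.
from xblock.fields import Scope, String

mode = String(
    help="Either 'prompt_feedback' (answer for submission) or 'reflection'.",
    values=["prompt_feedback", "reflection"],
    default="prompt_feedback",
    scope=Scope.settings,
)
```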

Stage 4: Multiple submissions and submission history

Each block would allow resubmission of student answers, perhaps with the ability to show grades associated with previous answers. This would be controlled by an attempts setting similar to other problem blocks, enabling mastery learning and improving on ORA2's single-submission model.
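A brief sketch of that attempts setting, modeled on the max_attempts pattern used by other problem blocks; the history field is one assumption about how previous answers and grades might be retained.

```python
# Stage 4 sketch: attempts setting plus a per-student submission history,
# added to the Stage 1 class above. Field names are illustrative.
from xblock.core import XBlock
from xblock.fields import Integer, List, Scope


class ShortAnswerXBlock(XBlock):  # continuing the earlier sketch
    max_attempts = Integer(
        help="Maximum number of submissions allowed; 0 means unlimited.",
        default=0, scope=Scope.settings,
    )
    attempts = Integer(default=0, scope=Scope.user_state)
    # Prior answers (and any grades) retained so they can be shown again.
    history = List(default=[], scope=Scope.user_state)

    def can_resubmit(self):
        """Allow another attempt while under the configured limit."""
        return self.max_attempts == 0 or self.attempts < self.max_attempts
```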

Stage 5: LLM Response Feedback & Grading

LLM configuration would be something that could be determined at the site, organization, or potentially course level. If multiple LLM configurations exist, the block level might allow selection of a model based on cost, model fit, or other reasons determined by the course author. All LLM options would be linked to this XBlock via API (with ChatGPT-3.5 initially configured as a supported LLM option).

LLM-based grading would be contingent on an LLM prompt that guides the LLM in determining whether a student has completed or mastered a given prompt. Separate from the LLM prompt, authors can provide a scoring guide visible to students that describes how they will be assessed (especially useful if the LLM prompt will actually score students on a scale or range, not just for completion of the requirements).
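As one illustration of the API linkage, a grading call against the initially supported option might look like the sketch below, using the current OpenAI Python client. The function name, the split into a system grading prompt and a user message, and the free-text verdict are assumptions rather than a settled design.

```python
# Stage 5 sketch: grade a student response with the initially supported
# LLM option. Uses the OpenAI Python client (>= 1.0); the grading-prompt
# structure and return format are assumptions, not a final design.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def grade_response(grading_prompt: str, student_response: str) -> str:
    """Ask the model whether the student completed or mastered the prompt."""
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # Author-written LLM prompt that guides the grading decision.
            {"role": "system", "content": grading_prompt},
            {"role": "user", "content": student_response},
        ],
    )
    return completion.choices[0].message.content
```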

Additional Details / Definition Coming Soon

Stage 6: Supportive LLM Response Feedback

Beyond basic LLM response feedback, this update would support multiple rounds of LLM feedback, giving students guidance from the LLM on how to improve their submissions before resubmitting.
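A minimal sketch of one such feedback round, building on the Stage 5 sketch: the running conversation is replayed so the model can coach successive drafts. The coaching prompt and helper name are illustrative.

```python
# Stage 6 sketch: one round of supportive feedback. Replays the running
# conversation so the model can coach successive drafts; builds on the
# Stage 5 client. Names are illustrative.
def feedback_round(client, coaching_prompt, history, draft, model="gpt-3.5-turbo"):
    """Append the latest draft and return the model's improvement guidance."""
    history.append({"role": "user", "content": draft})
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "system", "content": coaching_prompt}] + history,
    )
    guidance = completion.choices[0].message.content
    history.append({"role": "assistant", "content": guidance})
    return guidance
```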

Technical Open Questions

We anticipate the following to be some of the key questions that will need to be answered during technical discovery.

...

  •  Marco Morales create page and share in #wg-educators
  •  Marco Morales share this with #wg-product-core as part of the proposal process for open source product contributions
  •  Marco Morales link to video for early development proof of concept for input / visibility.

Product & UX/UI FAQ

The following represent our Product view of key questions. However, we look to the UX/UI and technical teams to validate these as needed.

...