Proposed XBlock - Short Response XBlock (Static + LLM Feedback)
Status | PROPOSAL / DRAFT |
---|---|
Contributing Team | @Marco Morales - Schema Education |
Earlier Discovery | (ORA v3 - Schema Discovery not yet published!) |
Linked Initiatives | N/A |
Overview | This proposed XBlock offers a lightweight solution for short-answer text responses with static and LLM-based student feedback options. It fills the gap between pattern-matching short text input problems and multi-step, rubric-graded ORAs. It integrates AI-powered instant feedback and coaching, and allows student responses to be re-displayed across multiple pages in a course. It is an ideal solution for student predictions, reflections, and section summaries, all of which can later be built up into a larger project or essay. |
Overview
This proposed XBlock offers a lightweight solution for short-answer text responses with static and LLM-based student feedback options. It fills the gap between pattern-matching short text input problems and multi-step, rubric-graded ORAs. It integrates AI-powered instant feedback and coaching, and allows student responses to be re-displayed across multiple pages in a course. It is an ideal solution for student predictions, reflections, and section summaries, all of which can later be built up into a larger project or essay.
Key Use Cases
Instant Feedback
As a course designer, I can quickly create short response questions and add an LLM pre-prompt for instant feedback, without requiring time-consuming grading by instructors.
As a learner, I can receive instant feedback on short responses throughout the learning process rather than waiting for delayed feedback on my writing after I have moved on.
Open-ended Predictions
As a course designer, I can ask learners to make open-ended predictions and later explain how their understanding has changed to help them expose and overcome misconceptions.
As a learner, I can record my open-ended predictions at a snapshot in time and later review and write how my understanding has changed to clearly demonstrate my growth.
Built-up Reflections
As a course designer, I can ask learners to reflect frequently throughout the course, and then have them review, synthesize, and build on these in broader reflections.
As a learner, I can see my short reflections, with feedback, when I’m reflecting on larger timescales of learning.
Iteration
As a course designer, I can set checkpoint questions in a course that provide sufficient time and feedback cycles so that most learners can reach a specific form of proficiency before moving on.
As a learner, I can use writing as a tool for formulating and building on thoughts with several cycles of feedback before moving past a question, allowing me to advance quickly through topics I understand and slow down through parts that I do not.
Deliverables
Stage 1 - Single entry, basic feedback response
The MVP XBlock echoes the basic text problem type, requiring a prompt from the course designer, collecting student input in a text box via a simple submission, and optionally showing feedback (a constant response, not AI-generated).
This is valuable for non-graded text responses (significantly less complicated than ORA2).
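To make the Stage 1 scope concrete, the sketch below shows a minimal XBlock with an author prompt, a single student response field, and optional static feedback. All class, field, and handler names here are illustrative placeholders rather than a settled design.

```python
from xblock.core import XBlock
from xblock.fields import Scope, String
from web_fragments.fragment import Fragment


class ShortResponseXBlock(XBlock):
    """Sketch: collects one short text response and shows author-defined feedback."""

    prompt = String(
        help="Question shown to the learner.",
        default="", scope=Scope.settings,
    )
    static_feedback = String(
        help="Optional constant feedback shown after submission (not AI-generated).",
        default="", scope=Scope.settings,
    )
    student_answer = String(
        help="The learner's submitted response.",
        default="", scope=Scope.user_state,
    )

    def student_view(self, context=None):
        # A real implementation would render a template with a textarea and
        # wire a submit button to the handler below; this is a bare placeholder.
        return Fragment(f"<div class='short-response'><p>{self.prompt}</p></div>")

    @XBlock.json_handler
    def submit_response(self, data, suffix=""):
        # Persist the submission and echo back any static feedback text.
        self.student_answer = data.get("response", "").strip()
        return {"feedback": self.static_feedback}
```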
Stage 2 - Multiple linked blocks, and gated access
Building on the initial basic entry experience, authors would be able to link multiple short response blocks in a course. This means that student submissions from previous blocks could be shown to students in other linked course blocks, and access to subsequent blocks could be gated to ensure students have completed earlier blocks or specific assignments if needed.
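A minimal sketch of how linked blocks and gating might work, assuming the runtime can load a sibling block with the current learner's field data via runtime.get_block(); whether cross-block user state is accessible this way is one of the Technical Open Questions below, and the field names are placeholders.

```python
from xblock.core import XBlockMixin
from xblock.fields import List, Scope


class LinkedResponsesMixin(XBlockMixin):
    """Sketch of Stage 2 additions layered onto the Stage 1 block above."""

    linked_block_ids = List(
        help="Usage IDs of earlier short-response blocks to surface here.",
        default=[], scope=Scope.settings,
    )

    def prior_submissions(self):
        """Collect the current learner's answers from the linked blocks."""
        answers = []
        for usage_id in self.linked_block_ids:
            # Assumes the returned block carries this learner's user state.
            block = self.runtime.get_block(usage_id)
            if getattr(block, "student_answer", ""):
                answers.append({"prompt": block.prompt, "answer": block.student_answer})
        return answers

    def is_unlocked(self):
        """Gate this block until every linked block has a submission."""
        return len(self.prior_submissions()) == len(self.linked_block_ids)
```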
Stage 3 - Prompt feedback vs. reflection modes, plus response length suggestions
Building on the prior stages, this would add the option to select a block mode, which affects XBlock setting defaults, student-facing descriptions, and more. The two initial modes are a prompt feedback mode and a reflection mode.
A suggested or required submission length would also be added as an option, supporting full or partial credit based on that suggestion or requirement (participation grading by length of submission).
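A sketch of what the mode and length settings could look like, assuming word count as the length measure; the field names and scoring heuristic below are assumptions for discussion, not a grading policy.

```python
from xblock.core import XBlockMixin
from xblock.fields import Integer, Scope, String


class ResponseModeMixin(XBlockMixin):
    """Sketch of Stage 3 mode and length settings for the short-response block."""

    mode = String(
        help="Either 'prompt_feedback' or 'reflection'; changes defaults and copy.",
        default="prompt_feedback",
        values=["prompt_feedback", "reflection"],
        scope=Scope.settings,
    )
    suggested_words = Integer(
        help="Suggested response length in words (0 disables the hint).",
        default=0, scope=Scope.settings,
    )
    required_words = Integer(
        help="Words required for full participation credit (0 disables).",
        default=0, scope=Scope.settings,
    )

    def participation_score(self, response):
        """Return 0.0-1.0 credit based on response length alone."""
        if not self.required_words:
            return 1.0
        word_count = len(response.split())
        return min(word_count / self.required_words, 1.0)
```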
Stage 4 - Multiple submissions and submission history
Each block would be able to allow resubmission of student answers, perhaps with the ability to show the feedback / grades associated with previous answers. This would be controlled by a 'maximum number of attempts' setting similar to other problem blocks, enabling mastery learning and an improvement over ORA2's single-submission model.
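A sketch of how resubmission and history could be tracked; the field names and the shape of each history entry are assumptions for discussion.

```python
from xblock.core import XBlockMixin
from xblock.fields import Integer, List, Scope


class SubmissionHistoryMixin(XBlockMixin):
    """Sketch of Stage 4: prior answers plus a maximum-attempts limit."""

    max_attempts = Integer(
        help="Maximum submissions allowed (0 means unlimited).",
        default=0, scope=Scope.settings,
    )
    submission_history = List(
        help="Prior submissions with any feedback or score they received.",
        default=[], scope=Scope.user_state,
    )

    def attempts_remaining(self):
        if not self.max_attempts:
            return None  # unlimited attempts
        return max(self.max_attempts - len(self.submission_history), 0)

    def record_submission(self, answer, feedback=None, score=None):
        """Append the submission; callers should check attempts_remaining() first."""
        self.submission_history = self.submission_history + [
            {"answer": answer, "feedback": feedback, "score": score}
        ]
```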
Stage 5 - LLM Response Feedback
This stage would introduce the ability to have LLM-based feedback, beyond the basic submission- or length-based grading of previous stages. Multiple LLMs could be configured at the site / org / course levels, and content blocks might allow for selection of the model. Our initial API / configuration option would link to ChatGPT-3.5 and possibly Google Bard, but many others could be added.
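As one possible shape for the feedback call, the sketch below assumes the OpenAI Python client and a gpt-3.5-turbo model; the provider, prompt structure, and where credentials are configured are all open design questions.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def llm_feedback(pre_prompt: str, question: str, student_answer: str) -> str:
    """Ask the configured model for feedback on a single submission."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            # The course author's pre-prompt steers tone and evaluation criteria.
            {"role": "system", "content": pre_prompt},
            {
                "role": "user",
                "content": f"Question: {question}\n\nStudent answer: {student_answer}",
            },
        ],
    )
    return response.choices[0].message.content
```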
Stage 6 - Supportive LLM Response Feedback
Beyond basic LLM response feedback, this update would support multiple rounds of LLM feedback, with students receiving guidance from the LLM on how to improve their submissions.
Stage 7 - Coaching Engagement Mode
In addition to the prompt feedback and reflection modes, a new coaching mode takes supportive LLM response feedback and packages it into an ongoing conversation between the student and a 'coach'. Completion and grading requirements might be met only as students continue to engage with the LLM until they provide a complete and satisfactory answer.
The LLM pre-prompt is a combination of default text created by the XBlock designers (to guide the system to keep asking the student for clarifications, improvements, and additional information) and course author text specifying the criteria that the chat dialogue should focus on and require.
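A sketch of how the combined pre-prompt and ongoing conversation might be assembled; the default coaching text, function name, and message structure are placeholders rather than final wording.

```python
# Text baked into the XBlock by its designers: keep asking for clarification,
# improvements, and additional information until the criteria are met.
DEFAULT_COACHING_PROMPT = (
    "You are a writing coach. Ask the student for clarifications, "
    "improvements, and additional information until their answer "
    "satisfies the criteria below. Do not write the answer for them."
)


def build_coaching_messages(author_criteria, conversation, latest_student_turn):
    """Combine the default pre-prompt, author criteria, and chat history."""
    system_prompt = f"{DEFAULT_COACHING_PROMPT}\n\nCriteria:\n{author_criteria}"
    messages = [{"role": "system", "content": system_prompt}]
    messages.extend(conversation)  # prior student/coach turns, oldest first
    messages.append({"role": "user", "content": latest_student_turn})
    return messages
```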
Stage 8 - Instructor Feedback & Grading Overrides
While this XBlock operates using either pre-defined feedback text or LLM feedback, we are interested in supporting instructor grading overrides and additional instructor feedback on student responses.
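One possible shape for the override data, assuming the field names below; how the override is exposed (instructor dashboard vs. a staff-only handler) is left open.

```python
from xblock.core import XBlockMixin
from xblock.fields import Float, Scope, String


class InstructorOverrideMixin(XBlockMixin):
    """Sketch of Stage 8: optional instructor score and feedback per learner."""

    override_score = Float(
        help="Instructor-assigned score that replaces the automatic one.",
        default=None, scope=Scope.user_state,
    )
    instructor_feedback = String(
        help="Additional feedback written by the instructor.",
        default="", scope=Scope.user_state,
    )

    def effective_score(self, automatic_score):
        """Prefer the instructor override when one has been recorded."""
        return automatic_score if self.override_score is None else self.override_score
```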
Stage 9 - Student Review Request / Challenge
This update would provide a way for students to challenge a grade or request a secondary review (perhaps limited to N times per course) to surface edge cases in LLM scoring quality and increase confidence in the system for both students and instructors.
Stage 10 - Criteria Based Feedback and Grading
… Additional Details / Definition Coming Soon …
Users
Authors: any author who currently uses ORA2, JS- or Python-evaluated input problems, or basic text problems
Learners: any learner on an Open edX instance whose learning site uses this XBlock
In Scope / Out of Scope
Based on the above use cases, we break down the high-level scope as follows:
In Scope | Out of Scope |
---|---|
Evaluating student responses using an out-of-the-box LLM and a pre-prompt from the course designer. | Training a model grader with human-graded student responses or fine-tuning an LLM. |
Creating a new XBlock for optional integration by app providers. | Integrating this work into the existing ORA2 project. |
Evaluating student responses with a single criterion statement. | Incorporating rubrics and multiple criteria. (Stage 10?) |
Auto-grading student responses via LLM. | Self, peer, and instructor grading of student responses. |
MVP Specs
Features & Requirements
In order to realize this MVP, we believe the following features will be required. Refer to the following flow chart for more details:
Feature | Requirements |
---|---|
Stage 0: New XBlock Foundations | Create a new XBlock that is available for custom addition to an Open edX site instance, leveraging the new React-based editor components for the configuration and Studio experience. |
Stage 1: Single entry, basic feedback response | The MVP XBlock echoes the basic text problem type, requiring a prompt from the course designer, collecting student input in a text box via a simple submission, and optionally showing feedback (a constant response, not AI-generated). |
Stage 2: Multiple linked blocks, and gated access | Building on the initial basic entry experience, authors would be able to link multiple short response blocks in a course. This means that student submissions from previous blocks could be shown to students in other linked course blocks, and access to subsequent blocks could be gated to ensure students have completed earlier blocks or specific assignments if needed. |
Stage 3: Prompt + feedback vs. reflection modes, plus response length suggestions | In prompt + feedback mode, the block's supporting text tells the learner that they are answering the prompt for submission, as in the Stage 1 + 2 iterations of the block. In reflection mode, the supporting text frames the entry as a reflection and, if the block is set up to have multiple course entries, makes previous personal reflections visible throughout the course. |
Stage 4: Multiple submissions and submission history | Each block would be able to allow resubmission of student answers, perhaps with the ability to show the grades associated with previous answers. This would be controlled by an attempts setting similar to other problem blocks, enabling mastery learning and an improvement over ORA2's single-submission model. |
Stage 5: LLM Response Feedback & Grading | LLM configuration would be something that could be determined at the site, organization, or potentially course level (see the configuration lookup sketch after this table). If multiple LLM configurations exist, the block level might allow for selection of a model based on cost, model fit, or other reasons determined by the course author. All LLM options would be linked via API to this XBlock (with an initial configuration of ChatGPT-3.5 as a supported LLM option). LLM-based grading would be contingent on an LLM prompt that guides the LLM in determining whether a student has completed or mastered a given prompt. Separate from the LLM prompt, authors can provide a scoring guide visible to students that describes how they will be assessed (especially useful if the LLM prompt will actually score students on a scale or range, not just for completion of the requirements). |
Additional Details / Definition Coming Soon | |
Stage 6: Supportive LLM Response Feedback | Beyond basic LLM response feedback, this update would support multiple rounds of LLM feedback, with students receiving guidance from the LLM on how to improve their submissions. |
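As referenced in the Stage 5 row above, a sketch of how an LLM configuration might be resolved across the site / org / course levels; the configuration shape and storage mechanism (e.g. site configuration vs. Django settings) are assumptions, not decisions.

```python
def resolve_llm_config(block_override, course_config, org_config, site_config):
    """Return the most specific LLM configuration available for a block."""
    # Cascade from a block-level override down to the site-wide default.
    for config in (block_override, course_config, org_config, site_config):
        if config:  # e.g. {"provider": "openai", "model": "gpt-3.5-turbo"}
            return config
    raise ValueError("No LLM configuration found at any level.")
```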
Technical Open Questions
We anticipate the following to be some of the key questions that we will need answered during technical discovery.
Are there permission issues that will prevent data in one XBlock instance from being accessed by another instance in the same course?
How effective will the AI-generated feedback be for learners? How well can course designers be guided into writing effective per-question pre-prompts for the LLM’s feedback?
Is GPT-3.5 sufficient for this use case? Do other LLMs offer a better solution, and are there financial considerations with alternatives?
Open Tasks
Product & UX/UI FAQ
The following represent our Product view of key questions. However, we look to the UX/UI and technical teams to validate these as needed.
Q: No questions yet!
A:
Future Direction
No notes included for this project.
UI Examples