Adaptive Learning Tools and Engines

Open edX Adaptive Tools 


  Tool                                      | Type (per Pearson's "Decoding Adaptive") | Adaptivity
  MS/Harvard VPAL using TutorGen's SCALE    | Adaptive Assessment                      | Problems presented according to difficulty level, learning objectives, and student mastery
  Dillon's research project (Review xBlock) | Adaptive Content                         | Spaced repetition based on failed attempts
  Domoscio's integration for FUN            | Adaptive Content                         | Spaced repetition, using Domoscio's engine
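The "spaced repetition based on failed attempts" behaviour listed above can be illustrated with a minimal Leitner-style scheduler. This is a generic sketch, not Dillon's Review xBlock or Domoscio's actual code, and the interval values are assumed.

```python
from datetime import date, timedelta

# Review intervals per Leitner box (days). Failed items drop back to
# box 0 and reappear soon; repeatedly passed items wait longer.
INTERVALS = [1, 3, 7, 14, 30]  # assumed values for illustration

def review(item, passed, today):
    """Update an item's box and next due date after one attempt."""
    if passed:
        item["box"] = min(item["box"] + 1, len(INTERVALS) - 1)
    else:
        item["box"] = 0  # failed attempt: back to the shortest interval
    item["due"] = today + timedelta(days=INTERVALS[item["box"]])
    return item

item = {"box": 0, "due": date(2024, 1, 1)}
item = review(item, passed=True, today=date(2024, 1, 1))
# one correct attempt: box 1, next review three days later
```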

MS/Harvard VPAL 

  • Designing Adaptive Learning and Assessment
    • Adaptive = dynamically changing in response to student interactions within the MOOC, rather than on the basis of preexisting information such as a learner’s gender, age, or achievement test score.
    • The order of problems in a sequence is determined by a personalized learning progression, using learners’ real-time performance and statistical inferences on sub-topics they have mastered.
    • All problems in the course were manually tagged with one or several learning objectives.
    • Uses TutorGen's adaptive engine, SCALE® (Student Centered Adaptive Learning Engine)
      • Provides knowledge tracing, skill modeling, student modeling, adaptive problem selection, and automated hint generation for multi-step problems.
      • Knowledge components / skills (KCs) are tagged at the right level of granularity. SCALE refines the tagging of these KCs after data has been collected from actual student interactions.
      • TutorGen extended SCALE algorithms to consider not only individual learning objectives (KCs), but also problem difficulty and problem selection within modules that group together various concepts and problems.
  • The Adaptive Experiment : Implementation
    • VPAL LTI tool
      1. receives learner activity data from edX
      2. passes a sanitized version to SCALE
      3. receives updates from SCALE
      4. provides appropriate next activity to learners
    • The LTI tool provides a pass-through frame containing an "activity sequence" (a sequence of problems) and embeds the XBlock URLs in iframes.
    • Hiding assessments
      • Relies on XBlock URLs not enforcing content experiment groups.
      • All assessments must be available to the control group.
      • Experiment group sees ONLY the LTI tool.
    • Passing grades and data
    • No one noticed: "Invisible implementation is a definite win."
  • The Bridge for Adaptivity
    • 2 endpoints on SCALE
      • Transaction: submit student problem attempts with student, activity, and grade data.
      • Activity: get ID representing the next activity recommended by the engine for the student.
  • Analyzing Data from an Adaptive MOOC
    • Performance (effectiveness)
    • Speed (efficiency)
    • Engagement (engaging)
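The Bridge for Adaptivity's two SCALE endpoints described above can be sketched as a thin HTTP client. The base URL, endpoint paths, and field names below are invented for illustration; they are not TutorGen's published API.

```python
import json
from urllib import request

BASE = "https://scale.example.com/api"  # hypothetical base URL

def transaction_payload(student_id, activity_id, grade):
    """Build the body for the Transaction endpoint: one problem attempt
    with student, activity, and grade data (field names assumed)."""
    return {"student": student_id, "activity": activity_id, "grade": grade}

def submit_transaction(payload):
    # POST the attempt so the engine can update its student model
    req = request.Request(
        BASE + "/transaction",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

def next_activity(student_id):
    # GET the ID of the next activity the engine recommends
    with request.urlopen(f"{BASE}/activity?student={student_id}") as resp:
        return json.load(resp)["activity_id"]
```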

Adaptive Learning Engines / Algorithms

SCALE by TutorGen

  • SCALE: Student Centered Adaptive Learning Engine
    • Unlike a pure machine learning solution:
      • SCALE is able to report to the developers exactly why the system behaves as it does.
      • Allows for human input to maximize improvements through refinement over time.
    • Does not require a priori expert-generated "student models".
    • Generates student models that build and improve as more data is collected.
    • Dynamically selects the students’ next problems to maximize student learning and minimize time needed to master a set of skills.
    • Knowledge Tracing and problem selection mechanisms use knowledge component (KC) modeling.
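SCALE's internals are proprietary, but the knowledge tracing it performs per KC is commonly implemented as Bayesian Knowledge Tracing. A minimal update step, with assumed parameter values, might look like:

```python
def bkt_update(p_known, correct, slip=0.1, guess=0.2, learn=0.3):
    """One Bayesian Knowledge Tracing step for a single KC.

    p_known: prior probability the student has mastered the KC.
    correct: whether the observed attempt was correct.
    slip/guess/learn: standard BKT parameters (values here are assumed).
    """
    if correct:
        # P(known | correct) via Bayes' rule
        num = p_known * (1 - slip)
        den = num + (1 - p_known) * guess
    else:
        # P(known | incorrect): a known student can still slip
        num = p_known * slip
        den = num + (1 - p_known) * (1 - guess)
    posterior = num / den
    # Account for the chance of learning on this opportunity
    return posterior + (1 - posterior) * learn

p = bkt_update(0.4, correct=True)  # mastery estimate rises after a correct answer
```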

Hint generation algorithms used by TutorGen

  • Toward Automatic Hint Generation for Logic Proof Tutoring Using Historical Student Data
    • Primarily targets logic proofs in CS and philosophy.
    • Generates Markov Decision Processes that represent all student approaches to a particular problem, and uses the MDPs directly to automatically generate hints.
      • Reward for goal state (100)
      • Penalties for incorrect states (10)
      • Cost for taking an action (1) (to slightly favor shorter steps)
    • Comparison by both ordered and unordered matches with other students' responses.
    • One semester of data is sufficient to generate a significant number of hints.
    • Alternatives
      • Constraint-based tutors can only provide condition violation feedback, not goal-oriented feedback.
      • Example-based authoring tools predict frequent correct and incorrect approaches.
      • Bootstrapping Novice Data also requires considerable authoring time.
      • ADVISOR predicts how long a student will take and provides (further) instructions accordingly.
      • Logic-ITA tutor warns students when they are likely to make mistakes.
  • Automating the Generation of Production Rules for Intelligent Tutoring Systems
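The MDP-based hint generation above can be sketched with value iteration over a toy state graph, using the paper's reward scheme (+100 at the goal, -10 for incorrect states, -1 per action). The states and edges here are invented placeholders, not real proof data.

```python
# Toy deterministic MDP over student proof states.
GOAL = "QED"
EDGES = {                      # state -> states reachable in one step
    "start": ["s1", "bad"],
    "s1": ["QED"],
    "bad": ["s1"],
}
PENALTY = {"bad": -10}         # incorrect intermediate states

def value_iteration(edges, goal, sweeps=50):
    V = {s: 0.0 for s in edges}
    V[goal] = 100.0            # reward for reaching the goal state
    for _ in range(sweeps):
        for s, succs in edges.items():
            # deterministic transitions: state penalty, -1 action cost,
            # plus the value of the best successor
            V[s] = max(PENALTY.get(s, 0) - 1 + V[t] for t in succs)
    return V

def hint(state, edges, V):
    """Suggest the next step on the highest-value path (the 'hint')."""
    return max(edges[state], key=lambda t: V[t])

V = value_iteration(EDGES, GOAL)
```

From `"start"`, the hint points toward `"s1"` rather than the penalized `"bad"` state, mirroring how the MDP steers students along successful historical paths.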

Student Models

  • DataShop repository at Pittsburgh Science of Learning Center
    • Resource for educators and researchers to create, modify, and evaluate student models.
    • Data from thousands of students derived from interactions with on-line course materials and intelligent tutoring systems.
    • Categorized in terms of the hypothesized competencies or Knowledge Components (KCs), representing pieces of knowledge, concepts or skills that students need to solve problems.
  • Managing the Educational Dataset Lifecycle with DataShop
    • DataShop is focused on becoming the premier repository for educational data.
    • Data logging API
    • Data import via text or XML
    • Custom fields for student logs
  • Automated Student Model Improvement
    • CTA (cognitive task analysis) typically produces a symbolic representation of a student model, for instance, a rule-based production system of the skills in a domain.
    • An alternative is to use data and statistical inference to create a student model involving continuous parameters over latent variables with links to observed student performance variables.
    • When a specific set of KCs is mapped to a set of instructional tasks (usually steps in problems), they form a KC model. A KC model is a specific kind of student model.
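The statistical alternative to symbolic models mentioned above is often the Additive Factors Model, a logistic regression over KC parameters. The sketch below is illustrative; all parameter values are assumed.

```python
import math

def afm_p_correct(theta, kcs, opportunities, beta, gamma):
    """Additive Factors Model: P(correct) for one student on one step.

    theta: student proficiency; kcs: KCs tagged to this step (the KC model);
    beta[k]: easiness of KC k; gamma[k]: learning rate per practice
    opportunity; opportunities[k]: prior practice count for KC k.
    """
    logit = theta + sum(beta[k] + gamma[k] * opportunities[k] for k in kcs)
    return 1 / (1 + math.exp(-logit))

# Hypothetical student and KC parameters:
p = afm_p_correct(
    theta=0.2,
    kcs=["slope"],
    opportunities={"slope": 3},
    beta={"slope": -0.5},
    gamma={"slope": 0.3},
)
```

Fitting the beta and gamma parameters against DataShop logs, then comparing model fit across alternative KC-to-step mappings, is how such KC models are evaluated and improved.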

Sana Labs

Knewton

ALEKS

Cerego