AL: Meeting with Harvard/VPAL

Harvard/VPAL, Mar 19th
Attendees from Harvard
Igal
Andrew - architecture and software development
Ilia - algorithms, knowledge tracing, analytics
Goal as a platform: enable course teams and the research community.
2 years ago
started a major project on adaptive learning
create a more open-ended architecture
that can bring in multiple adaptive engines
Tutorgen's SCALE
- funding from NSF
- wasn't in the MOOC space
initial pilot with HarvardX's Super Earth
LTI - Reach for Adaptivity
focus on feasibility
created a workflow with HarvardX
Content Tagging (led by Ilia)
determined what the engine will need
created the spreadsheet and the workflow with course teams
content tagged with knowledge components (KCs)
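A minimal sketch of what one tagged item might look like as data once a spreadsheet row is loaded - the field names and the example usage key are illustrative assumptions, not the actual HarvardX template:

```python
# Hypothetical shape of one row of the content-tagging spreadsheet.
# Field names and the example problem id are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TaggedItem:
    problem_id: str                 # platform id of the assessment item
    knowledge_components: List[str] = field(default_factory=list)  # KC tags
    difficulty: float = 0.5         # optional prior difficulty in [0, 1]

row = TaggedItem(
    problem_id="block-v1:SomeOrg+SomeCourse+2018+type@problem+block@0001",
    knowledge_components=["transit-method", "orbital-period"],
    difficulty=0.4,
)
```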
LTI - Reach for Adaptivity
applicable for on-campus use with Canvas
25% of assessments were powered by adaptivity
evidence of learning gains
- limited, since only a quarter of the assessments were adaptive
- also limited users - needed to prime the engine
Next Steps
Improve LTI end
Raccoon Gang is helping
Visibility into the optimization of the engine
Engine flexible enough to accommodate different use cases
HarvardX
On-campus
Vision of knowledge tracing
Adaptive engine optimizations
May not be as scalable right now
compared to Area9, Pearson, Tutorgen, etc.
Planning to present at Learning @ Scale
Need a large number of assessments tagged by course teams - triple the current number
No tagging tool on edX - so teams need to use spreadsheets, etc.
Residential use cases
Harvard will run a matriculation course over the summer
with all-adaptive content
on an Open edX instance
In Sept 2018, in a Chinese language learning course
on Canvas
covering assessments and grammar
Igal is teaching a course on assessment design
lower profile
LTI provider not enabled on prod - so they had to host their own Open edX instance
Tagging
a collection of tagged components is created
the LTI integration indicates which collection to use
did look into automating or guiding the tagging
can do natural language processing on the items
can cluster them based on that to prime the tags (see the sketch after this list)
have some good initial data
helps the subject matter expert
would be ideal if the tags were shared across courses
multiple KCs within a section
not just difficulty levels
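A minimal sketch of the clustering idea mentioned above, assuming a TF-IDF + k-means pipeline with scikit-learn - the actual method and data were not specified in the meeting:

```python
# Illustrative only: cluster assessment item text to suggest candidate KC
# groupings for the subject matter expert to review. Not the actual pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

item_texts = [
    "Compute the orbital period of a planet given its semi-major axis.",
    "Derive Kepler's third law from Newtonian gravity.",
    "Identify the transit depth in the light curve below.",
    "Estimate planet radius from transit depth.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(item_texts)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for text, label in zip(item_texts, labels):
    print(label, text)  # items sharing a label are candidates for the same KC
```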
Concern about verifying the adaptive engine's algorithm
how can we verify it if they don't share the data?
Grading policy also needs to be adaptive
Different users will see different sequences
score = # points / max(# problems the student solved, Q), where Q = optimal number of problems
Engine-graded - knowledge tracing will say whether you actually know the concept
but the knowledge tracing is a black box, so the learner can't tell what algorithm is used
the engine just emits a mastery level
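The grading formula above, transcribed directly as code (the function and variable names are mine; the reading of Q follows the notes):

```python
def adaptive_score(points: float, problems_solved: int, q_optimal: int) -> float:
    """score = # points / max(# problems the student solved, Q).

    Q is the engine's notion of the optimal number of problems, so a
    student routed through fewer than Q problems is still scored out of Q.
    """
    return points / max(problems_solved, q_optimal)

# Example: 6 points over 8 problems solved, with Q = 10 -> 6 / 10 = 0.6
print(adaptive_score(6, 8, 10))
```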
Engine API
Knowledge Tracing
Recommendation Engine
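A hedged sketch of what a two-part engine API could look like, with classic Bayesian Knowledge Tracing (BKT) standing in for the engine's black-box tracing algorithm - all names, parameter values, and the choice of BKT are assumptions, not the actual SCALE API:

```python
# Hypothetical two-part adaptive engine: knowledge tracing + recommendation.
# BKT is used here purely as a stand-in; the real engine's algorithm is a
# black box per the notes, and every name and parameter below is assumed.
from typing import Dict, List

class ToyAdaptiveEngine:
    # Classic BKT parameters: prior mastery, learn, slip, and guess rates.
    P_INIT, P_LEARN, P_SLIP, P_GUESS = 0.2, 0.15, 0.1, 0.25

    def __init__(self) -> None:
        self.mastery: Dict[str, float] = {}  # KC id -> P(mastered)

    def update_mastery(self, kc: str, correct: bool) -> float:
        """Knowledge tracing: Bayesian update of P(mastered) after one attempt."""
        p = self.mastery.get(kc, self.P_INIT)
        if correct:
            evidence = p * (1 - self.P_SLIP) / (
                p * (1 - self.P_SLIP) + (1 - p) * self.P_GUESS)
        else:
            evidence = p * self.P_SLIP / (
                p * self.P_SLIP + (1 - p) * (1 - self.P_GUESS))
        p = evidence + (1 - evidence) * self.P_LEARN  # chance of learning now
        self.mastery[kc] = p
        return p

    def recommend_next(self, tagged_items: Dict[str, List[str]]) -> str:
        """Recommendation: serve the item whose KCs have the lowest mean mastery."""
        def mean_mastery(kcs: List[str]) -> float:
            return sum(self.mastery.get(k, self.P_INIT) for k in kcs) / len(kcs)
        return min(tagged_items, key=lambda item: mean_mastery(tagged_items[item]))

# Usage: one correct attempt raises mastery of that KC, so the weaker KC's
# item is recommended next.
engine = ToyAdaptiveEngine()
engine.update_mastery("orbital-period", correct=True)
print(engine.recommend_next({"p1": ["orbital-period"], "p2": ["transit-method"]}))
```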