
The current calculation method for peer assessments is the median. This project aims to allow additional calculation methods such as the average.

Problem

The peer assessment step currently calculates the step score for each criterion by taking the median of the scores given by each peer. This choice was likely intended to prevent outliers (extremely low scores in particular) from dragging the score down.

Alternative strategies such as “Average all” or “Average drop low” may be better suited for some educators or specific contexts.
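
To make the alternatives concrete, here is a minimal Python sketch of the three candidate strategies. The function names and the handling of the single-score edge case are illustrative assumptions, not the actual ORA implementation.

```python
import statistics

def score_median(scores):
    """Current behavior: the median resists outliers."""
    return statistics.median(scores)

def score_average_all(scores):
    """Proposed "Average all": plain arithmetic mean of peer scores."""
    return sum(scores) / len(scores)

def score_average_drop_low(scores):
    """Proposed "Average drop low": drop the single lowest score, then
    average the rest. With one score there is nothing to drop (this
    edge-case handling is an assumption)."""
    if len(scores) <= 1:
        return scores[0] if scores else 0
    trimmed = sorted(scores)[1:]
    return sum(trimmed) / len(trimmed)

peer_scores = [4, 4, 5, 0]  # one outlier dragging the average down
print(score_median(peer_scores))            # 4.0
print(score_average_all(peer_scores))       # 3.25
print(score_average_drop_low(peer_scores))  # 4.33...
```

The example shows the trade-off: the plain average is pulled down by the outlier, while “Average drop low” discards it and lands close to the median.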

Use cases:

  • As an instructor/learner, I need the peer step grade to be calculated as the average of all peer scores received, instead of the median.

    • Supporting market data: Please share any relevant user data, interviews, or survey results that support the need for this proposal.

Proposed solution:

How will you solve this problem?

  • Include any UX/UI designs

TBD

Other approaches considered:

  • What other approaches did you consider and why won’t they work?

TBD

Competitive research:

  • How do Canvas/Moodle/Coursera solve this problem?

Moodle offers a lot of flexibility in how final scores are calculated in a “Workshop” activity.

The final grade is split into two main weighted components:

  1. Grade for submission. The score a student gets for their submitted work

  2. Grade for assessment. The score a student gets for having reviewed others’ submitted work

Grade for submission

The final grade for every submission is calculated as a weighted mean of the assessment grades given by all reviewers of that submission. This includes the assessments given by peers and, if allowed, the self-assessment given by the submitter. The value is rounded to the number of decimal places set in the Workshop settings form.

The teacher can influence the grade in two ways:

  • by providing their own assessment, possibly with a higher weight than regular peer reviewers have

  • by overriding the grade to a fixed value
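
As a rough illustration of this scheme (not Moodle’s actual code), the sketch below computes a weighted mean over peer and teacher assessments. The function name, weight values, rounding, and override handling are assumptions based on the description above.

```python
def submission_grade(assessments, decimals=2, override=None):
    """Weighted mean of assessment grades for one submission.

    `assessments` is a list of (grade, weight) pairs; a teacher's review
    simply carries a higher weight than peer reviews. `override`, if set,
    replaces the computed value entirely (the teacher's second lever).
    All names here are illustrative, not Moodle API.
    """
    if override is not None:
        return override
    total_weight = sum(w for _, w in assessments)
    weighted = sum(g * w for g, w in assessments) / total_weight
    return round(weighted, decimals)

# Three peers at weight 1 plus a teacher assessment at weight 4:
grades = [(70, 1), (80, 1), (75, 1), (90, 4)]
print(submission_grade(grades))  # (225 + 360) / 7 = 83.57
```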

Grade for assessment

The grade for assessment tries to estimate the quality of the assessments that the participant gave to their peers. This grade (also known as the grading grade) is calculated automatically by the Workshop module, which attempts to do what a teacher would otherwise do by hand.
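
Moodle’s default evaluation method (“Comparison with the best assessment”) grades each review by how far it falls from a reference assessment. The sketch below is a loose approximation under stated assumptions: the reference is taken as the per-criterion mean of all assessments, and the distance-to-grade mapping is a simple linear falloff. Moodle’s real algorithm and its “comparison strictness” setting differ in detail.

```python
def grading_grade(all_assessments, strictness=1.0):
    """Grade each reviewer by distance from a reference assessment.

    `all_assessments` maps reviewer -> list of per-criterion scores
    (0-100). Using the per-criterion mean as the reference and a linear
    falloff scaled by `strictness` are illustrative assumptions.
    """
    n_criteria = len(next(iter(all_assessments.values())))
    reference = [
        sum(scores[i] for scores in all_assessments.values()) / len(all_assessments)
        for i in range(n_criteria)
    ]
    grades = {}
    for reviewer, scores in all_assessments.items():
        distance = sum(abs(s - r) for s, r in zip(scores, reference)) / n_criteria
        grades[reviewer] = max(0.0, 100.0 - strictness * distance)
    return grades

reviews = {"ann": [80, 70], "bob": [85, 75], "cyd": [30, 95]}
print(grading_grade(reviews))  # cyd's outlier review earns the lowest grading grade
```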

Proposed plan for any relevant usability/UX testing

TBD

Plan for long-term ownership/maintainership

TBD

Open questions for rollout/releases

TBD
