Enhancing Open Response Assessment

One of OpenCraft's clients would like to add the following functionalities to the Open Response Assessment:

1. Allow learners to re-submit PDF attachments before final submission.
2. Allow instructors to count the self-evaluation score in the final grade.
3. Detect unusual deviation in results, per criterion.
4. Detect and flag learners who are too generous or too severe in their grading compared to others (and apply a penalty to them).

OpenCraft was thinking of the following ways of implementing these functionalities:

1. Allow learners to re-submit PDF attachments before final submission

Currently, if the user needs to change PDF attachment(s), they need to click the Choose file button again and choose new file(s), which replace the old one(s) (this all happens before submitting the answer). Here's what it currently looks like:



We could move the Choose file button (together with the Upload files button) below the chosen files and add a trash bin button next to the description field of each uploaded file, so that the user can remove a file from the submission by clicking its trash bin button.


We would also like to slightly improve the UX of the PDF preview. Currently, once the user uploads the files, they can preview them in the browser by clicking the file name (this has been checked in Chrome and Safari). We would like to add a "Click to preview" label right beside the uploaded file's description in this line of the template.


2. Allow instructors to count the self-evaluation score in the final grade.


Right now, with ORA/peer review it is possible to add a self-evaluation step to the process, but this self-evaluation is ignored (it is taken into account only if it's the only grade given). Our client would like to be able to choose/customize the weighting of the staff score vs. peer score vs. self score for each assignment, making the final score the weighted mean of these three numbers.
In order to do so, we could:
- extend the ORA XBlock settings to include weights for the self, peer and staff scores
- change the get_score method and update the Assessment API to support the new behavior (a sketch of the weighted-mean calculation follows below)
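
To illustrate the intended behavior, here is a minimal sketch of how the weighted mean could be computed once the weights are available. The function name, arguments and re-normalization rule are illustrative assumptions, not existing ORA settings or APIs:

```python
# Minimal sketch of the weighted-mean calculation. The function name and
# arguments are illustrative; they are not existing ORA settings or APIs.

def weighted_final_score(staff_score, peer_score, self_score,
                         staff_weight, peer_weight, self_weight):
    """Combine the three scores using the per-assignment weights."""
    parts = [
        (staff_score, staff_weight),
        (peer_score, peer_weight),
        (self_score, self_weight),
    ]
    # Steps that produced no score (e.g. no staff assessment) are dropped
    # and the remaining weights are re-normalized.
    graded = [(score, weight) for score, weight in parts if score is not None]
    total_weight = sum(weight for _, weight in graded)
    if not graded or total_weight == 0:
        return None
    return sum(score * weight for score, weight in graded) / total_weight


# Example: staff 80, peers 60, self 100 with weights 0.5/0.3/0.2 -> 78.0
print(weighted_final_score(80, 60, 100, 0.5, 0.3, 0.2))
```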

3. Detect unusual deviation in results, per criterion.


Our client would like to have the standard deviation calculated for each criterion. Based on that, if the standard deviation is too high, the essay would be flagged as "no consensual evaluation on criterion XX". This would automatically allow the assignment to be graded by an instructor. Regarding the implementation:
- there's a scores_by_criterion() method which returns a dict of lists (of the form {criterion: all peer scores for this criterion}), so it should be straightforward to calculate the standard deviation per criterion (see the sketch after this list)
- we would need to extend the AssessmentWorkflow so that it automatically requires a staff evaluation when the standard deviation of peer scores is too high
- the UI should be updated to display information about the standard deviation of peer scores being too high
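
As an illustration, the per-criterion check could look roughly like the sketch below. The input is assumed to have the shape returned by scores_by_criterion(), and the threshold would come from a new (hypothetical) XBlock setting:

```python
# Sketch of the per-criterion deviation check. The threshold would come from
# a new (hypothetical) XBlock setting; the input is assumed to have the shape
# returned by scores_by_criterion(), i.e. {criterion: [peer scores]}.
from statistics import pstdev

def criteria_without_consensus(scores_by_criterion, max_std_dev):
    """Return the criteria whose peer scores deviate too much from each other."""
    flagged = []
    for criterion, scores in scores_by_criterion.items():
        if len(scores) < 2:
            continue  # not enough peer assessments to measure spread
        if pstdev(scores) > max_std_dev:
            flagged.append(criterion)
    return flagged


scores = {"Ideas": [3, 3, 4], "Content": [0, 5, 1]}
# With a threshold of 1.5, only "Content" is flagged.
print(criteria_without_consensus(scores, max_std_dev=1.5))  # ['Content']
```

The AssessmentWorkflow could then require a staff assessment for any submission where this list is non-empty.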

4. Detect and flag learners who are too generous or too severe in their grading compared to others (and apply a penalty to them)

We could wait until the ORA reaches its deadline and then run a Celery task that figures out which learners are too generous/severe and applies a penalty to their score (the penalty would be pre-determined and could be enabled/disabled prior to publishing the ORA). Regarding the implementation:

- we'd need to extend the ORA XBlock settings to include the value of the penalty for being too generous/severe

- write the Celery task (a rough sketch follows after this list)

- extend the UI so that it displays a message to the learners flagged as too generous/severe
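
For illustration only, the post-deadline task could look roughly like this. The helpers for fetching scores and applying the penalty are hypothetical, and the bias rule (comparing each grader's average given score to the overall average) is just one possible definition of "too generous/severe"; only the @shared_task decorator is standard Celery:

```python
# Rough sketch of the post-deadline task. collect_scores_given_by_grader()
# and apply_penalty() are hypothetical helpers, not existing ORA APIs.
from statistics import mean
from celery import shared_task


def biased_graders(scores_given_by_grader, tolerance):
    """scores_given_by_grader: {grader_id: [scores that grader gave to peers]}.

    Returns {grader_id: bias} for graders whose average given score differs
    from the overall average by more than `tolerance` (positive = generous,
    negative = severe).
    """
    averages = {g: mean(s) for g, s in scores_given_by_grader.items() if s}
    if not averages:
        return {}
    overall = mean(averages.values())
    return {
        grader: avg - overall
        for grader, avg in averages.items()
        if abs(avg - overall) > tolerance
    }


@shared_task
def apply_grading_bias_penalties(ora_location, tolerance, penalty):
    """Meant to run once the ORA deadline has passed (scheduling not shown)."""
    scores = collect_scores_given_by_grader(ora_location)  # hypothetical helper
    for grader_id, bias in biased_graders(scores, tolerance).items():
        apply_penalty(grader_id, ora_location, penalty, bias)  # hypothetical helper
```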

We would appreciate it a lot if you could let us know:
  • Does the proposal sound good?
  • If so, would it be accepted upstream, or would you like us to provide some more information?