
Recording:

Video: https://drive.google.com/file/d/1bxvvoDNgHkqxaaaua6RXhJd19Ldcr5Bd/view?usp=drive_link

Transcript: https://drive.google.com/file/d/1bx4bEE7rRG-vpcotOPM9vQUXcmx17vGh/view?usp=drive_link

Time:

Monday, May 20, 2024

1pm EDT | 5pm UTC

Agenda:

1:00 pm - 1:30 pm EDT: Presentation of 5 Lessons Learned Building AI Assessments (Presentation)

1:30 pm - 1:45 pm EDT: Q&A

1:45 pm - 2:00 pm EDT: Open Agenda

Assets:

  1. 5 Lessons Learned Building AI Assessments (Presentation)

  2. Open Source Templates

    1. Assistant Template - For phased interactions with an AI that involve user input, AI feedback, and optional AI scoring. It can be used to build things like case study reviews, writing feedback, AI debates, etc.

    2. Completion Template - For one-off interactions with an AI, like generating MCQs. (A minimal sketch of both interaction styles follows this list.)

  3. Demo Apps

    1. Guided Case Study: Students review a case study, critically reflect on it, and receive AI-generated feedback guided by what the faculty thinks is important about the case study.

    2. Writing Feedback: A student drafts an introduction paragraph for a grant application and receives AI-generated feedback guided by what the faculty thinks is important to include in the writing. The student then has a chance to revise the original draft.

    3. AI Debate: A student engages in a debate with the AI (in this case, on Digital Health in Medicine). The AI is guided to debate the student for two rounds and then summarize lessons learned and good points the student made.

    4. MCQ Wizard: Generates MCQs with optional feedback and hints based on faculty requirements.
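The two templates map onto the two OpenAI interaction styles used across these demos. As a rough illustration only (a minimal sketch, not the open source templates themselves), the Python below shows both patterns with the OpenAI SDK; the model name, prompts, and rubric text are placeholder assumptions.

  # Minimal sketch of the two interaction styles behind the templates.
  # Assumes the OpenAI Python SDK (openai>=1.x) and OPENAI_API_KEY in the environment.
  from openai import OpenAI

  client = OpenAI()

  # Completion-style (one-off), e.g. the MCQ Wizard: a single stateless call.
  mcq = client.chat.completions.create(
      model="gpt-4o",  # placeholder model name
      messages=[
          {"role": "system", "content": "Write MCQs with optional feedback "
           "and hints, following the faculty requirements provided."},
          {"role": "user", "content": "Topic: digital health. Write 3 MCQs."},
      ],
  )
  print(mcq.choices[0].message.content)

  # Assistant-style (phased), e.g. Writing Feedback: a persistent thread
  # carries the conversation, so later phases (revision, scoring) see earlier ones.
  assistant = client.beta.assistants.create(
      model="gpt-4o",
      instructions="Give feedback on grant-application introductions, "
                   "guided by this faculty rubric: <rubric here>.",  # placeholder
  )
  thread = client.beta.threads.create()
  client.beta.threads.messages.create(
      thread_id=thread.id, role="user", content="<student draft here>"
  )
  run = client.beta.threads.runs.create_and_poll(
      thread_id=thread.id, assistant_id=assistant.id
  )
  reply = client.beta.threads.messages.list(thread_id=thread.id)

A revision phase is just another messages.create plus another run on the same thread; the thread keeps the history, which is also why its costs grow (see the cost discussion in the Q&A below).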

Minutes:

00:15 - 02:07 Welcome and Introduction

John Swope, Education Technology Specialist, Chair of the AI in Higher Education committee at St. George's University, and author of Micro AI Apps in Online Education: Impacts on Efficiency, Quality and Future Directions, presented 5 Lessons Learned Building AI Assessments.

02:07 - 40:35 Presentation

  1. AI-Powered Assessment

    1. Sample video case study

5 Lessons:

  1. AI costs vary with each AI model version

  2. AI strengths and weaknesses

  3. AI Scoring

  4. Prompt sequencing

  5. AI's rapid evolution

Other Thoughts

  • AI Assessments should be faculty guided

  • AI Assessments should be clearly disclosed

  • AI Assessments should be elective

  • Students need AI literacy for when the AI gets confused

40:36 - 56:41 Q&A

  1. Do you have any sites for demos?

    1. Yes (John Swope will share).

  2. If people are developing a curriculum with copyright ownership or privacy concerns, how might they go about using AI?

    1. My personal opinion is that AI is too big to fail. No legislation is going to come in and kill the momentum of AI at this point; it's too big. At worst, providers may have to pay some money.

    2. Everyone who uses AI is responsible for the outputs. Anyone who takes AI's output verbatim is going to run into problems and our students should learn that as well.

  3. What do you think of students using some sort of AI themselves, such as Copilot, to get assessment feedback? What are the pros and cons when the rubric is not built by the instructor?

    1. Pros - Students' AI literacy is an important skill going forward.

    2. Cons - Whether students are getting accurate information from the AI, and whether they are using the AI in the right manner.

  4. Have you looked at any published literature comparing results from the kinds of approaches you shared versus a multi-agent workflow?

    1. No. I haven't found anything that aligns with what I want to work on, but hopefully I can get there.

  5. Do you have lessons learned in video-based assessments?

    1. If you are talking about a scenario where you record some sort of interaction, no. That's part of the promise of GPT-4o ("omni"), which can actually understand video, so that is theoretically something we're getting closer to.

  6. Cost comparison between using the Assistants API versus the Completions API.

    1. The Assistants API has the potential for cost overruns and can get out of hand more quickly, because you have an ongoing conversation and the AI needs to keep track of the context of that conversation. You can feed data into the Assistants API. The Completions API is a lot simpler, and the costs are a little bit easier to understand and manage.
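To make that concrete, here is a back-of-the-envelope sketch of how a threaded conversation's cost compounds because each run resends the accumulated history. All prices and token counts below are illustrative assumptions, not figures from the talk.

  # Illustrative cost growth for a threaded (Assistants-style) conversation.
  # The PRICE_* values and token counts are assumed, not real rates.
  PRICE_PER_1K_INPUT = 0.005    # assumed $ per 1K input tokens
  PRICE_PER_1K_OUTPUT = 0.015   # assumed $ per 1K output tokens
  TURN_IN, TURN_OUT = 300, 400  # assumed tokens per user turn / AI reply

  history = 0
  total = 0.0
  for turn in range(1, 6):
      prompt_tokens = history + TURN_IN  # each run resends the whole history
      total += (prompt_tokens / 1000) * PRICE_PER_1K_INPUT
      total += (TURN_OUT / 1000) * PRICE_PER_1K_OUTPUT
      history = prompt_tokens + TURN_OUT  # the reply joins the context
      print(f"turn {turn}: prompt={prompt_tokens} tokens, cost so far=${total:.4f}")

  # A one-off Completions call, by contrast, costs a fixed
  # (TURN_IN * input rate + TURN_OUT * output rate) every time.

By turn 5 the prompt is roughly ten times the size of a single turn, which is the cost-overrun risk with the Assistants API that the answer describes.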
