Automatic conversational assessment using large language model technology

Jan Bergerhoff*, Johannes Bendler, Stefan Stefanov, Enrico Cavinato, Leonard Esser, Tommy Tran, Aki Härmä

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

Student evaluation is an important, yet costly, part of instruction. Traditional exams are a burden for teachers and stressful for students. This paper uses large language model (LLM) technology to create a system for Automated Conversational Assessment (ACA), in which a dialog system, grounded in course content and intended learning outcomes, interviews the student to determine the level of learning. In a pilot experiment in a university course, we found that ACA scores correlate with grades given by a human and also correlate positively with the same students' results on a conventional exam. In a questionnaire study, students reported perceiving the assessment as fair and acceptable.
Original language: English
Title of host publication: Proceedings of the 2024 the 16th International Conference on Education Technology and Computers, ICETC 2024
Place of publication: New York, NY, USA
Publisher: Association for Computing Machinery
Pages: 39-45
Number of pages: 7
ISBN (Print): 9798400717819
DOIs
Publication status: Published - 21 Jan 2025

Publication series

Series: Proceedings of the International Conference on Education Technology and Computers

