Personalized Stopping Rules in Bayesian Adaptive Mastery Assessment

Anni Sapountzi, Sandjai Bhulai, Ilja Cornelisz, Chris van Klaveren

Research output: Working paper / Preprint

Abstract

We propose a new model for efficiently assessing the mastery level of a given skill. The model, called Bayesian Adaptive Mastery Assessment (BAMA), uses information on the accuracy and response time of the answers given and infers mastery at every step of the assessment. BAMA balances the length of the assessment against the certainty of the mastery inference by employing a Bayesian decision-theoretic framework adapted to each student. Together, these properties constitute a novel approach to assessment models for intelligent learning systems. The purpose of this research is to explore the properties of BAMA and to evaluate its performance with respect to the number of questions administered and the accuracy of the final mastery estimates across different students. We simulate student performance and establish that the model converges with low variance and high efficiency, leading to shorter assessment durations for all students. Given the experimental results, we expect our approach to avoid the issues of over-practicing and under-practicing and to facilitate the development of Learning Analytics tools that support tutors in evaluating learning effects and making instructional decisions.
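To make the sequential-stopping idea concrete, below is a minimal sketch of a Bayesian adaptive stopping rule, not the authors' BAMA implementation. It tracks a Beta posterior over a student's probability of answering correctly and stops once the posterior is sufficiently certain on either side of a mastery cutoff, or a question budget runs out. The cutoff, decision level, budget, and all function names are illustrative assumptions; BAMA itself additionally models response times and frames stopping as a decision-theoretic cost trade-off, which this sketch omits.

```python
import math
import random

MASTERY_CUTOFF = 0.8    # assumed: "mastery" means P(correct) > 0.8
DECISION_LEVEL = 0.95   # assumed: stop when the posterior is this certain
MAX_QUESTIONS = 30      # assumed question budget

def beta_tail(a: float, b: float, cutoff: float, steps: int = 2000) -> float:
    """P(p > cutoff) under a Beta(a, b) posterior, via midpoint integration."""
    log_norm = math.lgamma(a + b) - math.lgamma(a) - math.lgamma(b)
    dx = (1.0 - cutoff) / steps
    total = 0.0
    for i in range(steps):
        p = cutoff + (i + 0.5) * dx
        total += math.exp(log_norm
                          + (a - 1) * math.log(p)
                          + (b - 1) * math.log(1 - p))
    return total * dx

def assess(true_p: float, seed: int = 0) -> tuple:
    """Administer simulated questions until the mastery decision is confident."""
    rng = random.Random(seed)
    a, b = 1.0, 1.0  # uniform Beta prior over the success probability
    for n in range(1, MAX_QUESTIONS + 1):
        correct = rng.random() < true_p          # simulated student response
        a, b = (a + 1, b) if correct else (a, b + 1)
        tail = beta_tail(a, b, MASTERY_CUTOFF)   # posterior P(mastered)
        if tail > DECISION_LEVEL:
            return "mastered", n
        if 1 - tail > DECISION_LEVEL:
            return "not mastered", n
    return "undecided", MAX_QUESTIONS

if __name__ == "__main__":
    for true_p in (0.95, 0.5):
        verdict, n = assess(true_p, seed=42)
        print(f"true P(correct)={true_p}: {verdict} after {n} questions")
```

Because the rule stops as soon as the posterior is decisive for a given student, stronger and weaker students naturally receive shorter assessments than borderline ones, which is the mechanism behind the abstract's claim about avoiding over- and under-practicing.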
Original language: English
Publisher: Cornell University - arXiv
Number of pages: 12
Publication status: Published - 5 Mar 2021

Publication series

Series: arXiv.org
Number: 2103.03766
ISSN: 2331-8422

Keywords

  • adaptive assessment
  • performance model
  • mastery criteria
  • optimal stopping policy
