Verifying the Stability and Sensitivity of Learning Analytics Based Prediction Models: An Extended Case Study

Dirk T. Tempelaar*, Bart Rienties, Bas Giesbers

*Corresponding author for this work

Research output: Chapter in Book/Report/Conference proceeding › Chapter › Academic

Abstract

In this empirical contribution, a follow-up study of previous research [1], we focus on the issues of stability and sensitivity of Learning Analytics based prediction models. Do prediction models stay intact when the instructional context is repeated in a new cohort of students, and do prediction models indeed change when relevant aspects of the instructional context are adapted? Applying Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics, combined with formative assessments and learning management systems, we compare two cohorts of a large module introducing mathematics and statistics. Both module runs were based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials, and shared the same instructional design, except for an intervention in the design of the quizzes administered in the module. We analyse bivariate and multivariate relationships between module performance and both track and disposition data to provide evidence of both the stability and the sensitivity of the prediction models.
Original language: English
Title of host publication: Computer Supported Education
Subtitle of host publication: 7th International Conference, CSEDU 2015, Lisbon, Portugal, May 23-25, 2015, Revised Selected Papers
Editors: S. Zvacek, M. T. Restivo, J. Uhomoibhi, M. Helfert
Publisher: Springer Verlag
Pages: 256-273
Volume: 583
ISBN (Electronic): 978-3-319-29585-5
ISBN (Print): 978-3-319-29584-8
Publication status: Published - 11 Feb 2016

Publication series

Series: Communications in Computer and Information Science
Volume: 583
ISSN: 1865-0929
