A model for programmatic assessment fit for purpose

C. P. M. van der Vleuten*, L. W. T. Schuwirth, E. W. Driessen, J. Dijkstra, D. Tigelaar, L. K. J. Baartman, J. van Tartwijk

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

We propose a model for programmatic assessment in action, which simultaneously optimises assessment for learning and assessment for decision making about learner progress. This model is based on a set of assessment principles that are interpreted from empirical research. It specifies cycles of training, assessment and learner support activities that are complemented by intermediate and final moments of evaluation on aggregated assessment data points. A key principle is that individual data points are maximised for learning and feedback value, whereas high-stakes decisions are based on the aggregation of many data points. Expert judgement plays an important role in the programme. Fundamental is the notion of sampling and bias reduction to deal with the inevitable subjectivity of this type of judgement. Bias reduction is further sought in procedural assessment strategies derived from criteria for qualitative research. We discuss a number of challenges and opportunities around the proposed model. One of its prime virtues is that it enables assessment to move beyond the dominant psychometric discourse, with its focus on individual instruments, towards a systems approach to assessment design underpinned by empirically grounded theory.
Original language: English
Pages (from-to): 205-214
Journal: Medical Teacher
Volume: 34
Issue number: 3
DOIs
Publication status: Published - 2012
