Beyond standard checklist assessment: Question sequence may impact student performance

Jeff LaRochelle*, Steven Durning, John R. Boulet, Cornelis van der Vleuten, Jeroen van Merriënboer, Jeroen Donkers

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Introduction Clinical encounters are often assessed using a checklist. However, without direct faculty observation, the timing and sequence of questions are not captured. We theorized that the sequence of questions can be captured and measured using coherence scores, which may distinguish between low- and high-performing candidates.

Methods A logical sequence of key features was determined using the standard case checklist for an objective structured clinical examination (OSCE). An independent clinician educator reviewed each encounter to provide a global rating. Coherence scores were calculated based on question sequence. These scores were compared with global ratings and checklist scores.
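The abstract does not give the coherence-score formula, so the sketch below is purely illustrative: it shows one plausible way to score a question sequence against the checklist's logical order, as the fraction of question pairs asked in the same relative order as the ideal sequence (a normalized, Kendall-tau-style agreement). All names and the metric itself are assumptions, not the authors' actual method.

```python
# Illustrative only: NOT the paper's published metric. One hypothetical
# coherence measure -- the fraction of key-feature pairs asked in the same
# relative order as the logical checklist sequence.
from itertools import combinations

def coherence_score(ideal_order, observed_order):
    """Return the fraction of question pairs whose observed order matches
    the logical sequence defined by the case checklist (1.0 = fully coherent)."""
    rank = {q: i for i, q in enumerate(ideal_order)}
    # Ignore questions that are not on the checklist.
    asked = [q for q in observed_order if q in rank]
    pairs = list(combinations(asked, 2))
    if not pairs:
        return 0.0
    concordant = sum(1 for a, b in pairs if rank[a] < rank[b])
    return concordant / len(pairs)

# Hypothetical key features for a pain history, in logical order.
ideal = ["onset", "location", "quality", "severity", "associated_symptoms"]
print(coherence_score(ideal, ["onset", "location", "quality", "severity"]))  # 1.0
print(coherence_score(ideal, ["severity", "quality", "location", "onset"]))  # 0.0
```

Under this kind of measure, a candidate who works through the key features in the checklist's logical order scores near 1.0, while a scattered question sequence scores lower even if every checklist item is eventually covered, which matches the abstract's claim that sequence carries information the checklist total discards.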

Results Coherence scores were positively correlated with checklist scores and with global ratings, and these correlations increased as global ratings improved. Coherence scores explained more of the variance in student performance as global ratings improved.

Discussion A logically structured question sequence may indicate a higher-performing student, and this information is often lost when only overall checklist scores are used.

Conclusions The sequence in which test takers ask questions can be accurately recorded and is correlated with checklist scores and with global ratings. This sequence is not captured by traditional checklist scoring and may represent an important dimension of performance.

Original language: English
Pages (from-to): 95–102
Number of pages: 8
Journal: Perspectives on Medical Education
Volume: 5
Issue number: 2
DOIs
Publication status: Published - Apr 2016

Keywords

  • Clinical skills
  • Medical education
  • Assessment
