Exploring Validity Evidence Associated With Questionnaire-Based Tools for Assessing the Professional Performance of Physicians: A Systematic Review

Mirja W. van der Meulen*, Alina Smirnova, Sylvia Heeneman, Mirjam G. A. Oude Egbrink, Cees P. M. van der Vleuten, Kiki M. J. M. H. Lombarts

*Corresponding author for this work

Research output: Contribution to journal › (Systematic) review article › peer-review


Abstract

Purpose: To collect and examine, using an argument-based approach to validity, the validity evidence of questionnaire-based tools used to assess physicians' clinical, teaching, and research performance.

Method: In October 2016, the authors conducted a systematic search of the literature for articles, published from inception to October 2016, on questionnaire-based tools for assessing physicians' professional performance. They included studies reporting validity evidence of tools used to assess physicians' clinical, teaching, and research performance. Using Kane's validity framework, they extracted data on the four inferences in the validity argument: scoring, generalization, extrapolation, and implications.

Results: The authors included 46 articles on 15 tools assessing clinical performance and 72 articles on 38 tools assessing teaching performance. They found no studies on research performance tools. Only 12 of the 53 tools (23%) gathered evidence on all four components of Kane's validity argument. Validity evidence focused mostly on the generalization and extrapolation inferences. Scoring evidence showed mixed results, and evidence on implications was generally missing.

Conclusions: Based on the argument-based approach to validity, not all questionnaire-based tools seem to support their intended use. Evidence concerning the implications of these tools is mostly lacking, which weakens the argument for using them in formative and, especially, summative assessments of physicians' clinical and teaching performance. More research on implications is needed to strengthen the argument and to support decisions based on these tools, particularly high-stakes, summative decisions. To meaningfully assess academic physicians in their tripartite role as doctor, teacher, and researcher, additional assessment tools are needed.

Original language: English
Pages (from-to): 1384-1397
Number of pages: 14
Journal: Academic Medicine
Volume: 94
Issue number: 9
DOIs
Publication status: Published - Sept 2019

Keywords

  • MULTISOURCE FEEDBACK TOOL
  • RESIDENT EVALUATION
  • PEER ASSESSMENT
  • PSYCHOMETRIC PROPERTIES
  • TEACHING EFFECTIVENESS
  • CLINICAL FACULTY
  • ASSESSMENT INSTRUMENT
  • ANESTHESIA RESIDENTS
  • EMERGENCY PHYSICIANS
  • COMPETENCE FRAMEWORK
