TY - JOUR
T1 - A New Multisource Feedback Tool for Evaluating the Performance of Specialty-Specific Physician Groups
T2 - Validity of the Group Monitor Instrument
AU - Bindels, Elisa
AU - Boerebach, Benjamin
AU - van der Meulen, Mirja
AU - Donkers, Jeroen
AU - van den Goor, Myra
AU - Scherpbier, Albert
AU - Lombarts, Kiki
AU - Heeneman, Sylvia
N1 - Funding Information:
The authors thank the Group Monitor project group for their contribution and work during the developmental stage of the Group Monitor instrument: Michael Muller, MD, and Cita van Til, PhD, from the Rijnstate Medical Center Arnhem, the Netherlands; Astrid van het Bolscher and Rob Stevens, from Q3 Consult, Zeist, the Netherlands. The authors express their gratitude to Medox.nl for their efforts in designing the Group Monitor web-based application.
Publisher Copyright:
© 2019 The Alliance for Continuing Education in the Health Professions, the Association for Hospital Medical Education, and the Society for Academic Continuing Medical Education.
PY - 2019
Y1 - 2019
N2 - Introduction: Since clinical practice is a group-oriented process, it is crucial to evaluate performance on the group level. The Group Monitor (GM) is a multisource feedback tool that evaluates the performance of specialty-specific physician groups in hospital settings, as perceived by four different rater classes. In this study, we explored the validity of this tool. Methods: We explored three sources of validity evidence: (1) content, (2) response process, and (3) internal structure. Participants were 254 physicians, 407 staff, 621 peers, and 282 managers of 57 physician groups (in total 479 physicians) from 11 hospitals. Results: Content was supported by the fact that the items were based on a review of an existing instrument. Pilot rounds resulted in reformulation and reduction of items. Four subscales were identified for all rater classes: Medical practice, Organizational involvement, Professionalism, and Coordination. Physicians and staff had an extra subscale, Communication. However, the results of the generalizability analyses showed that variance in GM scores could mainly be explained by the specific hospital context and the physician group specialty. Optimization studies showed that for reliable GM scores, 3 to 15 evaluations were needed, depending on rater class, hospital context, and specialty. Discussion: The GM provides valid and reliable feedback on the performance of specialty-specific physician groups. When interpreting feedback, physician groups should be aware that rater classes' perceptions of their group performance are colored by the hospitals' professional culture and/or the specialty.
AB - Introduction: Since clinical practice is a group-oriented process, it is crucial to evaluate performance on the group level. The Group Monitor (GM) is a multisource feedback tool that evaluates the performance of specialty-specific physician groups in hospital settings, as perceived by four different rater classes. In this study, we explored the validity of this tool. Methods: We explored three sources of validity evidence: (1) content, (2) response process, and (3) internal structure. Participants were 254 physicians, 407 staff, 621 peers, and 282 managers of 57 physician groups (in total 479 physicians) from 11 hospitals. Results: Content was supported by the fact that the items were based on a review of an existing instrument. Pilot rounds resulted in reformulation and reduction of items. Four subscales were identified for all rater classes: Medical practice, Organizational involvement, Professionalism, and Coordination. Physicians and staff had an extra subscale, Communication. However, the results of the generalizability analyses showed that variance in GM scores could mainly be explained by the specific hospital context and the physician group specialty. Optimization studies showed that for reliable GM scores, 3 to 15 evaluations were needed, depending on rater class, hospital context, and specialty. Discussion: The GM provides valid and reliable feedback on the performance of specialty-specific physician groups. When interpreting feedback, physician groups should be aware that rater classes' perceptions of their group performance are colored by the hospitals' professional culture and/or the specialty.
KW - MSF
KW - validity
KW - HEALTH-CARE
KW - RELIABILITY
KW - ASSESSMENTS
U2 - 10.1097/CEH.0000000000000262
DO - 10.1097/CEH.0000000000000262
M3 - Article
C2 - 31306280
SN - 0894-1912
VL - 39
SP - 168
EP - 177
JO - Journal of Continuing Education in the Health Professions
JF - Journal of Continuing Education in the Health Professions
IS - 3
ER -