TY - JOUR
T1 - More Consensus Than Idiosyncrasy: Categorizing Social Judgments to Examine Variability in Mini-CEX Ratings
AU - Gingerich, Andrea
AU - van der Vleuten, Cees P. M.
AU - Eva, Kevin W.
AU - Regehr, Glenn
PY - 2014/11
Y1 - 2014/11
AB - Purpose: Social judgment research suggests that rater unreliability in performance assessments arises from raters' differing inferences about the performer and the underlying reasons for the performance observed. These varying social judgments are not entirely idiosyncratic but, rather, tend to partition into a finite number of distinct subgroups, suggesting some "signal" in the "noise" of interrater variability. The authors investigated the proportion of variance in Mini-CEX ratings attributable to such partitions of raters' social judgments about residents. Method: In 2012 and 2013, physicians reviewed video-recorded patient encounters for seven residents, completed a Mini-CEX, and described their social judgments of the residents. Additional participants sorted these descriptions, which were analyzed using latent partition analysis (LPA). The best-fitting set of partitions for each resident served as an independent variable in a one-way ANOVA to determine the proportion of variance explained in Mini-CEX ratings. Results: Forty-eight physicians rated at least one resident (34 assessed all seven). The seven sets of social judgments were sorted by 14 participants. Across residents, 2 to 5 partitions (mode: 4) provided a good LPA fit, suggesting that subgroups of raters were making similar social judgments, while different causal explanations for each resident's performance existed across subgroups. The partitions accounted for 9% to 57% of the variance in Mini-CEX ratings across residents (mean = 32%). Conclusions: These findings suggest that multiple "signals" do exist within the "noise" of interrater variability in performance-based assessment. It may be valuable to understand and exploit these multiple signals rather than try to eliminate them.
DO - 10.1097/ACM.0000000000000486
M3 - Article
SN - 1040-2446
VL - 89
SP - 1510
EP - 1519
JO - Academic Medicine
JF - Academic Medicine
IS - 11
ER -