Modeling agreement on categorical scales in the presence of random scorers

S. Vanbelle*, E. Lesaffre

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Kappa coefficients are often used to assess agreement between two fixed scorers on categorical scales. Cohen's version is popular for nominal scales and the weighted version for ordinal scales. In the present paper, similar agreement coefficients are defined for random scorers. A partial-Bayesian methodology is then developed to directly relate these agreement coefficients to predictors through a multilevel model. Statistical properties of the proposed approach are studied using simulations. Finally, the approach is applied to gynecological and medical imaging data.
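
For readers unfamiliar with the coefficients the abstract refers to, below is a minimal sketch (not code from the paper) of the classical agreement measures for two *fixed* scorers that the paper generalizes to random scorers: Cohen's kappa for nominal scales and the weighted kappa for ordinal scales. The contingency table is made up for illustration; the paper's random-scorer coefficients and partial-Bayesian multilevel model are not reproduced here.

```python
# Minimal sketch, assuming NumPy and an illustrative 3x3 contingency table.
import numpy as np

def kappa(table, weights=None):
    """Cohen's kappa (weights=None) or weighted kappa for a k x k table.

    table[i, j] = number of subjects placed in category i by scorer 1
    and category j by scorer 2; weights[i, j] = agreement weight in [0, 1].
    """
    p = table / table.sum()                       # joint proportions p_ij
    e = np.outer(p.sum(axis=1), p.sum(axis=0))    # chance expectation p_i. * p_.j
    if weights is None:
        weights = np.eye(p.shape[0])              # identity weights -> Cohen's kappa
    po = (weights * p).sum()                      # observed (weighted) agreement
    pe = (weights * e).sum()                      # chance (weighted) agreement
    return (po - pe) / (1.0 - pe)

def quadratic_weights(k):
    """Quadratic agreement weights, a common choice for ordinal scales."""
    i, j = np.indices((k, k))
    return 1.0 - (i - j) ** 2 / (k - 1) ** 2

# Hypothetical counts for two scorers rating the same subjects.
t = np.array([[20, 5, 1],
              [4, 15, 6],
              [2, 3, 19]])
print(kappa(t))                          # nominal scale: Cohen's kappa
print(kappa(t, quadratic_weights(3)))    # ordinal scale: weighted kappa
```

Quadratic weights penalize disagreements by squared category distance, so near-misses on an ordinal scale count as partial agreement; both coefficients equal 1 under perfect agreement and 0 under chance agreement.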

Original language: English
Pages (from-to): 79-93
Number of pages: 15
Journal: Biostatistics
Volume: 17
Issue number: 1
DOIs
Publication status: Published - 1 Jan 2016

Keywords

  • Markov chain Monte Carlo
  • Multilevel
  • Nested
  • Rater
  • Reliability
  • Weighted kappa coefficient
  • Parameters
