Comparing dependent kappa coefficients obtained on multilevel data

Sophie Vanbelle*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Reliability and agreement are two notions of paramount importance in the medical and behavioral sciences: they provide information about the quality of measurements. When the scale is categorical, reliability and agreement can be quantified through different kappa coefficients. The present paper provides two simple alternatives to more advanced modeling techniques, which are not always adequate when the number of subjects is very limited, for comparing several dependent kappa coefficients obtained on multilevel data. This situation frequently arises in the medical sciences, where multilevel data are common. Dependent kappa coefficients can result, for example, from the assessment of the same individuals on various occasions, or from comparing each member of a group to an expert. The method is based on simple matrix calculations and is available in the R package multiagree. Moreover, the statistical properties of the proposed method are studied using simulations. Although this paper focuses on kappa coefficients, the method easily extends to other statistical measures.
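For readers unfamiliar with kappa coefficients, the following is a minimal illustrative sketch (not the paper's method, and independent of the multiagree package) of the unweighted Cohen's kappa for two raters on a categorical scale, kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance:

```python
def cohens_kappa(ratings_a, ratings_b, categories):
    """Unweighted Cohen's kappa for two raters over the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed proportion
    of agreement and p_e the chance agreement from the marginal totals.
    """
    n = len(ratings_a)
    # Observed agreement: proportion of items both raters classify identically.
    p_o = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Chance agreement: product of the raters' marginal proportions, summed
    # over categories.
    p_e = sum((ratings_a.count(c) / n) * (ratings_b.count(c) / n)
              for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters classify 10 items as "pos" or "neg".
a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "pos", "neg"]
print(cohens_kappa(a, b, ["pos", "neg"]))  # p_o = 0.8, p_e = 0.5, kappa = 0.6
```

The paper's contribution concerns the harder problem of comparing several such coefficients when they are dependent and computed on multilevel (clustered) data, e.g. via the delta method or a clustered bootstrap.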

Original language: English
Pages (from-to): 1016-1034
Number of pages: 19
Journal: Biometrical Journal
Volume: 59
Issue number: 5
DOIs
Publication status: Published - Sept 2017

Keywords

  • Clustered bootstrap
  • Delta method
  • Hierarchical
  • Intraclass
  • Rater
  • WEIGHTED KAPPA
  • ESTIMATING EQUATIONS
  • CLUSTERED DATA
  • HIGH AGREEMENT
  • COHENS KAPPA
  • BINARY DATA
  • RELIABILITY
  • PREVALENCE
  • STATISTICS
  • PARADOXES