Review of sample size determination methods for the intraclass correlation coefficient in the one-way analysis of variance model

Dipro Mondal*, Sophie Vanbelle, Alberto Cassese, Math J. J. M. Candel

*Corresponding author for this work

Research output: Contribution to journal › (Systematic) Review article › peer-review

Abstract

Reliability of measurement instruments providing quantitative outcomes is usually assessed with an intraclass correlation coefficient. When participants are repeatedly measured by a single rater or device, or are each rated by a different group of raters, the intraclass correlation coefficient is based on a one-way analysis of variance model. When planning a reliability study, it is essential to determine the number of participants and the number of measurements per participant (i.e. the number of raters or repeated measurements). Three different sample size determination approaches under the one-way analysis of variance model were identified in the literature, all based on a confidence interval for the intraclass correlation coefficient. Although eight different confidence interval methods can be identified, the Wald confidence interval with Fisher's large-sample variance approximation remains the most commonly used, despite its well-known poor statistical properties. The first objective of this work is therefore to compare the statistical properties of all identified confidence interval methods, including those overlooked in previous studies. The second objective is to develop a general procedure to determine the sample size under all approaches, since a closed-form formula is not always available. This procedure is implemented in an R Shiny app. Finally, we provide advice for choosing an appropriate sample size determination method when planning a reliability study.
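To illustrate the kind of confidence-interval-based planning the abstract describes, the following R sketch (not the authors' implementation) computes a Wald confidence interval for the one-way ANOVA intraclass correlation coefficient using Fisher's large-sample variance approximation, and searches for the smallest number of participants whose anticipated interval width does not exceed a target. The planning value `rho0`, number of measurements per participant `k`, and target width `w` are hypothetical inputs chosen for illustration.

```r
# Wald CI for the one-way ANOVA ICC with Fisher's large-sample variance
# (variants of this approximation differ in using n or n - 1 in the denominator)
wald_ci_icc <- function(rho_hat, n, k, conf = 0.95) {
  v <- 2 * (1 - rho_hat)^2 * (1 + (k - 1) * rho_hat)^2 / (k * (k - 1) * (n - 1))
  z <- qnorm(1 - (1 - conf) / 2)
  c(lower = rho_hat - z * sqrt(v), upper = rho_hat + z * sqrt(v))
}

# Smallest number of participants n whose anticipated Wald CI width is <= w,
# assuming the ICC estimate equals the planning value rho0
n_for_width <- function(rho0, k, w, conf = 0.95, n_max = 10000) {
  for (n in 2:n_max) {
    if (diff(wald_ci_icc(rho0, n, k, conf)) <= w) return(n)
  }
  NA_integer_
}

# Example: planning value 0.8, 3 measurements per participant, target width 0.2
n_for_width(rho0 = 0.8, k = 3, w = 0.2)
```

This mirrors only the Wald-with-Fisher-variance approach singled out in the abstract; the other confidence interval methods compared in the paper would replace `wald_ci_icc` and generally require a numerical search, as no closed-form formula is always available.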
Original language: English
Pages (from-to): 532-553
Number of pages: 22
Journal: Statistical Methods in Medical Research
Volume: 33
Issue number: 3
Early online date: 1 Feb 2024
DOIs
Publication status: Published - Mar 2024

Keywords

  • Intrarater reliability
  • Interrater reliability
  • Measurement errors
  • Reproducibility (of results)
  • Observer variation
  • Confidence interval
  • Requirements
  • Reliability
  • Design
  • Agreement
  • Precision
  • Ratio