A Comparison of Procedures to Test for Moderators in Mixed-Effects Meta-Regression Models

Wolfgang Viechtbauer*, José Antonio López-López, Julio Sánchez-Meca, Fulgencio Marín-Martínez

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review


Abstract

Several alternative methods are available for testing moderators in mixed-effects meta-regression models. A simulation study was carried out to compare these methods in terms of their Type I error and statistical power rates. The simulation included the standard (Wald-type) test, the method proposed by Knapp and Hartung (2003) in 2 different versions, the Huber-White method, the likelihood ratio test, and the permutation test. These methods were combined with 7 estimators of the amount of residual heterogeneity in the effect sizes. Our results show that the standard method, applied in most meta-analyses to date, does not adequately control the Type I error rate, sometimes yielding overly conservative, but usually inflated, Type I error rates. Of the methods evaluated, only the Knapp and Hartung method and the permutation test provide adequate control of the Type I error rate across all conditions. Given its computational simplicity, the Knapp and Hartung method is recommended as a suitable option for most meta-analyses.
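To make the compared procedures concrete, the following is a minimal Python sketch, not the authors' simulation code, of a mixed-effects meta-regression with Knapp and Hartung (2003) adjusted moderator tests alongside the standard Wald-type tests. It assumes a simple method-of-moments estimator for the residual heterogeneity τ²; the paper itself compares 7 such estimators. The function name and the synthetic data at the end are illustrative assumptions only.

    import numpy as np
    from scipy import stats

    def kh_meta_regression(y, v, X):
        """Mixed-effects meta-regression with Knapp-Hartung adjusted moderator tests.

        y : (k,) observed effect sizes (e.g., standardized mean differences)
        v : (k,) sampling variances of the effect sizes
        X : (k, p) design matrix (column of ones plus moderator columns)
        """
        y, v, X = np.asarray(y, float), np.asarray(v, float), np.asarray(X, float)
        k, p = X.shape

        # Method-of-moments estimate of the residual heterogeneity tau^2
        # (one of several estimators that could be plugged in here).
        W = np.diag(1.0 / v)                                   # fixed-effects weights
        P = W - W @ X @ np.linalg.solve(X.T @ W @ X, X.T @ W)  # residual projection
        Q_E = float(y @ P @ y)                                 # residual Q statistic
        tau2 = max(0.0, (Q_E - (k - p)) / np.trace(P))

        # Weighted least squares under the mixed-effects model.
        w = 1.0 / (v + tau2)
        W_star = np.diag(w)
        V_wald = np.linalg.inv(X.T @ W_star @ X)   # standard (Wald-type) vcov matrix
        beta = V_wald @ X.T @ W_star @ y
        resid = y - X @ beta

        # Knapp-Hartung adjustment: rescale the vcov matrix and refer the
        # test statistics to a t-distribution with k - p degrees of freedom.
        s2 = float(resid @ (W_star @ resid)) / (k - p)
        se_kh = np.sqrt(s2 * np.diag(V_wald))
        t_stat = beta / se_kh
        p_knha = 2 * stats.t.sf(np.abs(t_stat), df=k - p)

        # Standard Wald-type z-tests for comparison.
        z_stat = beta / np.sqrt(np.diag(V_wald))
        p_wald = 2 * stats.norm.sf(np.abs(z_stat))
        return {"beta": beta, "tau2": tau2, "p_knha": p_knha, "p_wald": p_wald}

    # Small synthetic example: 10 studies, one moderator with no true effect.
    rng = np.random.default_rng(1)
    x = rng.normal(size=10)
    v = rng.uniform(0.02, 0.10, size=10)
    y = 0.3 + 0.0 * x + rng.normal(scale=np.sqrt(v + 0.05))
    X = np.column_stack([np.ones(10), x])
    print(kh_meta_regression(y, v, X))

Relative to the standard Wald-type test, the adjustment rescales the variance-covariance matrix by the factor s² and uses a t reference distribution with k − p degrees of freedom; a second version of the method truncates s² at 1 so that the adjusted standard errors are never smaller than the unadjusted ones, which may correspond to the "2 different versions" examined in the paper.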
Original language: English
Pages (from-to): 360-374
Journal: Psychological Methods
Volume: 20
Issue number: 3
DOIs
Publication status: Published - Sept 2015

Keywords

  • meta-analysis
  • meta-regression
  • moderator analysis
  • heterogeneity estimator
  • standardized mean difference
