Abstract
OBJECTIVE: To assess whether the completeness of reporting of COVID-19 prediction models improved after the peer review process.
STUDY DESIGN AND SETTING: Studies included in a living systematic review of COVID-19 prediction models, with both a pre-print and a peer-reviewed published version available, were assessed. The primary outcome was the change in percentage adherence to the TRIPOD (Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis) reporting guideline between the pre-print and published manuscripts.
RESULTS: Nineteen studies were identified, comprising seven (37%) model development studies, two (11%) external validations of existing models, and ten (53%) papers reporting both development and external validation of the same model. Median percentage adherence among pre-print versions was 33% (min-max: 10 to 68%). Percentage adherence to TRIPOD components increased from pre-print to publication in 11 of 19 studies (58%) and was unchanged in the remaining eight. The median change in adherence was just 3 percentage points (pp; min-max: 0 to 14 pp) across all studies. No association was observed between the change in percentage adherence and the pre-print score, journal impact factor, or time between journal submission and acceptance.
CONCLUSIONS: The reporting quality of pre-print COVID-19 prediction modelling studies was poor and improved only marginally after peer review, suggesting that peer review had a trivial effect on the completeness of reporting during the pandemic.
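
The sketch below illustrates, under hypothetical data, how the primary outcome described above could be computed: percentage adherence to TRIPOD items per study for the pre-print and published versions, the change in percentage points, its median and min-max, and a rank correlation with a study-level covariate such as journal impact factor. This is not the authors' code; the adherence matrices, the number of items (37), and the covariate values are all illustrative assumptions.

```python
# Minimal sketch of the primary outcome calculation, assuming hypothetical
# adherence data: rows = studies, columns = TRIPOD items, 1 = reported, 0 = not.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
preprint = rng.integers(0, 2, size=(19, 37))            # 19 studies, 37 items (illustrative)
improved = (rng.random(preprint.shape) < 0.05).astype(int)
published = np.clip(preprint + improved, 0, 1)           # published version can only gain items here

adherence_pre = preprint.mean(axis=1) * 100              # % adherence per pre-print
adherence_pub = published.mean(axis=1) * 100             # % adherence per published version
change_pp = adherence_pub - adherence_pre                # change in percentage points

print(f"median pre-print adherence: {np.median(adherence_pre):.0f}%")
print(f"median change: {np.median(change_pp):.0f} pp "
      f"(min-max: {change_pp.min():.0f} to {change_pp.max():.0f} pp)")

# Association between change in adherence and a covariate (e.g. impact factor),
# examined here with a Spearman rank correlation on hypothetical values.
impact_factor = rng.uniform(1, 10, size=19)
rho, p = spearmanr(change_pp, impact_factor)
print(f"Spearman rho = {rho:.2f}, p = {p:.2f}")
```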
| Original language | English |
|---|---|
| Journal | Journal of Clinical Epidemiology |
| DOIs | |
| Publication status | E-pub ahead of print - 14 Dec 2022 |