Risk of bias assessments in individual participant data meta-analyses of test accuracy and prediction models: a review shows improvements are needed

Brooke Levis*, Kym I E Snell, Johanna A A Damen, Miriam Hattle, Joie Ensor, Paula Dhiman, Constanza L Andaur Navarro, Yemisi Takwoingi, Penny F Whiting, Thomas P A Debray, Johannes B Reitsma, Karel G M Moons, Gary S Collins, Richard D Riley*

*Corresponding author for this work

Research output: Contribution to journal › Review article › peer-review


Abstract

OBJECTIVES: Risk of bias assessments are important in meta-analyses of both aggregate and individual participant data (IPD). There is limited evidence on whether and how risk of bias of included studies or datasets in IPD meta-analyses (IPDMAs) is assessed. We review how risk of bias is currently assessed, reported, and incorporated in IPDMAs of test accuracy and clinical prediction model studies and provide recommendations for improvement.

STUDY DESIGN AND SETTING: We searched PubMed (January 2018-May 2020) to identify IPDMAs of test accuracy and prediction models, then elicited whether each IPDMA assessed risk of bias of included studies and, if so, how assessments were reported and subsequently incorporated into the IPDMAs.

RESULTS: Forty-nine IPDMAs were included. Nineteen of 27 (70%) test accuracy IPDMAs assessed risk of bias, compared to 5 of 22 (23%) prediction model IPDMAs. Seventeen of 19 (89%) test accuracy IPDMAs used the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool, but no tool was used consistently among prediction model IPDMAs. Of IPDMAs assessing risk of bias, 7 (37%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided details on the information sources (e.g., the original manuscript, IPD, primary investigators) used to inform judgments, and 4 (21%) test accuracy IPDMAs and 1 (20%) prediction model IPDMA provided information on whether assessments were done before or after obtaining the IPD of the included studies or datasets. Of all included IPDMAs, only seven test accuracy IPDMAs (26%) and one prediction model IPDMA (5%) incorporated risk of bias assessments into their meta-analyses. For future IPDMA projects, we provide guidance on how to adapt tools such as the Prediction model Risk Of Bias ASsessment Tool (PROBAST; for prediction models) and QUADAS-2 (for test accuracy) to assess risk of bias of included primary studies and their IPD.

CONCLUSION: Risk of bias assessments and their reporting need to be improved in IPDMAs of test accuracy and, especially, prediction model studies. Using recommended tools, both before and after IPD are obtained, will address this.

Original language: English
Article number: 111206
Journal: Journal of Clinical Epidemiology
Volume: 165
Early online date: 2 Nov 2023
Publication status: Published - Jan 2024

Bibliographical note

Copyright © 2023 The Authors

Keywords

  • Risk of bias
  • Individual participant data meta-analysis
  • Test accuracy
  • Prediction models
  • Applicability
  • Quality
  • QUADAS-2
  • PROBAST

