TY - JOUR
T1 - Risk of bias assessment of test comparisons was uncommon in comparative accuracy systematic reviews
T2 - an overview of reviews
AU - Yang, Bada
AU - Vali, Yasaman
AU - Dehmoobad Sharifabadi, Anahita
AU - Harris, Isobel
AU - Beese, Sophie
AU - Davenport, Clare
AU - Hyde, Christopher
AU - Takwoingi, Yemisi
AU - Whiting, Penny
AU - Langendam, Miranda
AU - Leeflang, Mariska M G
N1 - Final Version of Record not yet available as of 30/09/2020
PY - 2020/8/12
Y1 - 2020/8/12
AB - Objectives: Comparative diagnostic test accuracy systematic reviews (DTA reviews) assess the accuracy of two or more tests and compare their diagnostic performance. We investigated how comparative DTA reviews assessed the risk of bias (RoB) in primary studies that compared multiple index tests. Study Design and Setting: This is an overview of comparative DTA reviews indexed in MEDLINE from January 1st to December 31st, 2017. Two assessors independently identified DTA reviews including at least two index tests and containing at least one statement in which the accuracy of the index tests was compared. Two assessors independently extracted data on the methods used to assess RoB in studies that directly compared the accuracy of multiple index tests. Results: We included 238 comparative DTA reviews. Only two reviews (0.8%, 95% confidence interval 0.1 to 3.0%) conducted RoB assessment of test comparisons undertaken in primary studies; neither used an RoB tool specifically designed to assess bias in test comparisons. Conclusion: Assessment of RoB in test comparisons undertaken in primary studies was uncommon in comparative DTA reviews, possibly due to lack of existing guidance on and awareness of potential sources of bias. Based on our findings, guidance on how to assess and incorporate RoB in comparative DTA reviews is needed.
KW - Bias
KW - Diagnostic accuracy
KW - Meta-analysis
KW - Systematic review
KW - Test comparison
UR - http://www.scopus.com/inward/record.url?scp=85091688732&partnerID=8YFLogxK
U2 - 10.1016/j.jclinepi.2020.08.007
DO - 10.1016/j.jclinepi.2020.08.007
M3 - Review article
SN - 0895-4356
VL - 127
SP - 167
EP - 174
JO - Journal of Clinical Epidemiology
JF - Journal of Clinical Epidemiology
ER -