Comparative reviews of diagnostic test accuracy in imaging research: evaluation of current practices

Research output: Contribution to journal › Article

Authors

  • Anahita Dehmoobad Sharifabadi
  • Mariska M G Leeflang
  • Lee Treanor
  • Noemie Kraaijpoel
  • Mostafa Alabousi
  • Nabil Asraoui
  • Jade Choo-Foo
  • Matthew DF McInnes

Abstract

Purpose: The purpose of this methodological review was to determine the extent to which comparative imaging systematic reviews of diagnostic test accuracy (DTA) use primary studies with comparative or non-comparative designs.

Methods: MEDLINE was used to identify DTA systematic reviews published in imaging journals between January 2000 and May 2018. Inclusion criteria: systematic reviews comparing at least two index tests, at least one of which was imaging-based. Review characteristics were extracted, and the study design and other characteristics of the primary studies included in the reviews were evaluated.

Results: One hundred three comparative imaging reviews were included; 11 (11%) included only comparative primary studies, 12 (12%) included only non-comparative primary studies, and 80 (78%) included both comparative and non-comparative primary studies. For reviews containing both study types, the median proportion of non-comparative primary studies was 81% (IQR 57–90%). Of the 92 reviews that included non-comparative primary studies, 86% did not acknowledge this as a limitation. Overall, of 4182 primary studies, 3438 (82%) were non-comparative and 744 (18%) were comparative in design.

Conclusion: Most primary studies included in comparative imaging reviews are non-comparative in design, and awareness of the associated risk of bias is low. This may lead to incorrect conclusions about the relative accuracy of diagnostic tests and be counter-productive for informing guidelines and funding decisions about imaging tests.

Key Points:

  • Few comparative accuracy imaging reviews include only primary studies with optimal comparative designs; among the rest, few recognize the risk of bias conferred by including primary studies with non-comparative designs.
  • The demand for accurate comparative accuracy data, combined with minimal awareness of valid comparative study designs, may lead to counter-productive research and inadequately supported clinical decisions about diagnostic tests.
  • Using comparative accuracy imaging reviews with a high risk of bias to inform guidelines and funding decisions may have detrimental impacts on patient care.

Details

Original language: English
Pages (from-to): 5386-5394
Number of pages: 9
Journal: European Radiology
Volume: 29
Issue number: 10
Early online date: 21 Mar 2019
Publication status: Published - Oct 2019

Keywords

  • Comparative effectiveness research
  • Diagnostic tests, routine
  • Sensitivity and specificity