Abstract
Objectives: (1) To identify and classify comparative diagnostic test accuracy (DTA) study designs; and (2) to describe study design labels used by authors of comparative DTA studies.
Methods: We performed a methodological review of 100 comparative DTA studies published between 2015 and 2017, randomly sampled from studies included in 238 comparative DTA systematic reviews indexed in MEDLINE in 2017. From each study report, we extracted six design elements characterizing participant flow and the labels used by authors.
Results: We identified a total of 46 unique combinations of study design features in our sample, based on six design elements characterizing participant flow. We classified the studies into five study design categories based on how participants were allocated to receive each index test: ‘fully paired’ (n=79), ‘partially paired, random subset’ (n=0), ‘partially paired, nonrandom subset’ (n=2), ‘unpaired randomized’ (n=1) and ‘unpaired nonrandomized’ (n=3). The allocation method used in 15 studies was unclear. Sixty-one studies reported a total of 29 unique study design labels, but only four labels referred to specific design features of comparative studies.
Conclusion: Our classification scheme can help systematic review authors define study eligibility criteria, assess risk of bias, and communicate the strength of the evidence. A standardized labelling scheme could be developed to facilitate communication of specific design features.
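To make the allocation-based categories above concrete, here is a minimal Python sketch of the classification logic. It is not from the paper: the function name `classify_design`, the field names, and the values "all"/"some"/"none" and "random"/"nonrandom"/"unclear" are illustrative assumptions, not the authors' operational definitions.

```python
def classify_design(paired: str, allocation: str) -> str:
    """Assign a study to one of the five design categories (or 'unclear').

    paired:     "all"  - every participant receives every index test
                "some" - only a subset receives more than one index test
                "none" - each participant receives exactly one index test
    allocation: "random", "nonrandom", or "unclear" - how the subset or
                test groups were formed (irrelevant when paired == "all")
    """
    if paired == "all":
        return "fully paired"
    if allocation == "unclear":
        return "unclear"
    if paired == "some":
        return f"partially paired, {allocation} subset"
    if paired == "none":
        return "unpaired randomized" if allocation == "random" else "unpaired nonrandomized"
    raise ValueError(f"unrecognized participant flow: {paired!r}, {allocation!r}")


# Example: only a nonrandom subset received the second index test
print(classify_design("some", "nonrandom"))  # -> 'partially paired, nonrandom subset'
```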
| Original language | English |
| --- | --- |
| Pages (from-to) | 128-138 |
| Number of pages | 11 |
| Journal | Journal of Clinical Epidemiology |
| Volume | 138 |
| Early online date | 26 Apr 2021 |
| Publication status | E-pub ahead of print - 26 Apr 2021 |
Bibliographical note
Funding Information:
Amsterdam UMC (The Netherlands) provided funding for this study. The funding organization had no role in the design, collection, analysis, and interpretation of the data or the decision to approve publication of the finished manuscript.
Funding Information:
We thank Pieter Zwanenburg, MSc (University of Amsterdam), for his comments and suggestions on improving a previous draft. Yemisi Takwoingi is funded by a UK National Institute for Health Research (NIHR) Postdoctoral Fellowship and is supported by the NIHR Birmingham Biomedical Research Centre. The views expressed are those of the authors and not necessarily those of the NHS, NIHR, or the Department of Health and Social Care.
Publisher Copyright:
© 2021
Keywords
- Diagnostic accuracy
- Test comparison
- Study design
- Comparative accuracy studies
- Bias