Research waste in diagnostic trials: a methods review evaluating the reporting of test-treatment interventions
Research output: Contribution to journal › Article › peer-review
- Biostatistics, Evidence Synthesis and Test Evaluation Research Group, Institute of Applied Health Research, University of Birmingham, Birmingham, B15 2TT, UK.
- University of Exeter
BACKGROUND: The most rigorous method for evaluating the effectiveness of diagnostic tests is the randomised controlled trial (RCT) comparing test-treatment interventions: complex interventions comprising episodes of testing, decision-making and treatment. The multi-staged nature of these interventions, combined with the need to convey diagnostic decision-making and treatment planning, has led researchers to hypothesise that test-treatment strategies may be very challenging to document. However, no reviews have yet examined the reporting quality of interventions used in test-treatment RCTs. In this study we evaluate the completeness of intervention descriptions in a systematically identified cohort of test-treatment RCTs.
METHODS: We ascertained all test-treatment RCTs published between 2004 and 2007 and indexed in CENTRAL. Included trials randomised patients to diagnostic tests and measured patient outcomes after treatment. Two raters examined the completeness of test-treatment intervention descriptions across four components: 1) the test, 2) diagnostic decision-making, 3) management decision-making, 4) treatments.
RESULTS: One hundred and three trials compared 105 control interventions with 119 experimental interventions, most commonly in cardiovascular medicine (35, 34%), obstetrics and gynecology (17%), gastroenterology (14%) or orthopedics (10%). A broad range of tests were evaluated, including imaging (50, 42%), biochemical assays (21%) and clinical assessment (12%). Only five (5%) trials detailed all four components of both experimental and control interventions, and none of these also provided a complete care pathway diagram. Experimental arms were missing descriptions of tests, diagnostic decision-making, management planning and treatments in 36%, 51%, 55% and 79% of trials respectively; control arms were missing the same details in 61%, 66%, 67% and 84% of trials.
CONCLUSION: Reporting of test-treatment interventions is very poor, inadequate for understanding the results of these trials, and for comparing or translating results into clinical practice. Reporting needs to improve, with greater emphasis on describing the decision-making components of care pathways in both pragmatic and explanatory trials. Please see the companion paper to this article: http://bmcmedresmethodol.biomedcentral.com/articles/10.1186/s12874-016-0287-z .
Journal: BMC Medical Research Methodology
Publication status: Published - 24 Feb 2017
- RCT, Test-treatment, Test Evaluation, Diagnostic accuracy, Patient outcomes, Reporting quality, Intervention reporting