Publication and related bias in quantitative health services & delivery research: systematic reviews, case studies, inception cohorts and informant interviews
Research output: Book/Report › Commissioned report
Colleges, Schools and Institutes
- Warwick CTU, University of Warwick
Background: Bias in the publication and reporting of research findings (referred to here as publication and related bias) poses a major threat to evidence synthesis and evidence-based decision making. While this bias has been well documented in clinical research, little is known about its occurrence and magnitude in health services and delivery research (HSDR).

Objectives: To obtain empirical evidence on publication and related bias in quantitative HSDR; to examine current practice in detecting/mitigating this bias in HSDR systematic reviews; and to explore stakeholders’ perceptions and experiences concerning such bias.

Methods: The project included five distinct but interrelated work packages (WPs). WP1 was a systematic review of empirical and methodological studies. WP2 involved a survey (meta-epidemiological study) of randomly selected systematic reviews of HSDR topics (n=200) to evaluate current practice in the assessment of publication and outcome reporting bias during evidence synthesis. WP3 included four case studies to explore the applicability of statistical methods for detecting such bias in HSDR. In WP4 we followed up four cohorts of HSDR studies (total n=300) to ascertain their publication status and examined whether publication status was associated with statistical significance or perceived ‘positivity’ of study findings. WP5 involved key informant interviews with diverse HSDR stakeholders (n=24) and a focus group discussion with patient and service user representatives (n=8).

Results: In WP1 we identified only four studies that set out to investigate publication and related bias in HSDR. Three of these focused on health informatics research and one concerned health economics. All four studies reported evidence of the existence of this bias but had methodological weaknesses. We also identified three HSDR systematic reviews in which findings were compared between published and grey/unpublished literature. These reviews found that the quality and volume of evidence and effect estimates sometimes differed significantly between published and unpublished literature. WP2 showed a low prevalence of considering/assessing publication bias (43%) and outcome reporting bias (17%) in HSDR systematic reviews. The prevalence was lower among reviews of associations than among reviews of interventions. Case studies in WP3 highlighted limitations in current methods for detecting this bias due to heterogeneity and potential confounders. Follow-up of HSDR cohorts in WP4 showed a positive association between publication status and having statistically significant or positive findings. The interviews with HSDR stakeholders and the focus group discussion conducted in WP5 uncovered diverse views concerning publication and related bias, along with insights into how features of HSDR might influence its occurrence.

Conclusions: This study provided prima facie evidence of publication and related bias in quantitative HSDR. This bias does appear to exist, but its prevalence and impact may vary depending on study characteristics such as study design and the motivation for conducting the evaluation. Emphasis on methodological novelty and a focus beyond summative assessments may mitigate the risk of such bias in HSDR. Methodological and epistemological diversity in HSDR and the changing landscape of research publication need to be considered when interpreting the evidence. Collection of further empirical evidence and exploration of optimal HSDR practice are required.

Funding details: UK NIHR HS&DR Programme 15/71/06.
Publisher: National Institute for Health Research
Number of pages: 186
Publication status: E-pub ahead of print - 2020