Do the methods used to analyse missing data really matter? An examination of data from an observational study of Intermediate Care patients

Billingsley Kaambwa, Stirling Bryan, Lucinda Billingham

Research output: Contribution to journal › Article › peer-review


Abstract

Missing data are a common statistical problem in healthcare datasets drawn from populations of older people. Some argue that arbitrarily assuming the mechanism responsible for the missingness, and hence the method for dealing with it, is not the best option, but is this always true? This paper explores what happens when extra information suggesting that a particular mechanism is responsible for the missing data is disregarded and methods for dealing with the missingness are chosen arbitrarily. Regression models based on 2,533 intermediate care (IC) patients from the largest evaluation of IC conducted and published in the UK to date were used to explain variation in costs, EQ-5D and the Barthel index. Three methods for dealing with missingness were used, each assuming a different mechanism responsible for the missing data: complete case analysis (assuming data are missing completely at random, MCAR), multiple imputation (assuming data are missing at random, MAR) and a Heckman selection model (assuming data are missing not at random, MNAR). Differences in results were gauged by examining the signs of coefficients as well as the sizes of both coefficients and their associated standard errors.
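To illustrate the kind of comparison described above, the following is a minimal sketch, on entirely invented data, of two of the three approaches: complete-case analysis (valid under MCAR) and multiple imputation with pooled coefficients (which assumes MAR). All variable names, the simulated MAR mechanism, and the regression setup are hypothetical and are not taken from the study; the Heckman selection model (MNAR) is omitted here.

```python
# Hypothetical sketch: complete-case analysis vs. multiple imputation.
# Data, variable names and the MAR missingness mechanism are all invented.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(80, 6, n)                              # covariate, fully observed
barthel = 60 - 0.5 * (age - 80) + rng.normal(0, 5, n)   # outcome, true slope -0.5

# MAR mechanism: older patients are more likely to have a missing outcome,
# so missingness depends only on the observed covariate.
miss = rng.random(n) < 1 / (1 + np.exp(-(age - 85) / 2))
barthel_obs = barthel.copy()
barthel_obs[miss] = np.nan

# 1. Complete-case analysis: simply drop rows with a missing outcome.
cc = ~np.isnan(barthel_obs)
beta_cc = LinearRegression().fit(age[cc].reshape(-1, 1), barthel_obs[cc]).coef_[0]

# 2. Multiple imputation: impute m completed datasets, fit the model on each,
#    then pool the coefficient estimates (Rubin's rules, point estimate only).
m = 5
betas = []
for seed in range(m):
    imp = IterativeImputer(sample_posterior=True, random_state=seed)
    filled = imp.fit_transform(np.column_stack([age, barthel_obs]))
    betas.append(LinearRegression().fit(filled[:, :1], filled[:, 1]).coef_[0])
beta_mi = float(np.mean(betas))

print(f"complete-case slope: {beta_cc:.2f}, MI pooled slope: {beta_mi:.2f}")
```

Because the simulated missingness depends only on a covariate that is in the model, both estimates should land near the true slope here; the paper's point is precisely that such agreement cannot be taken for granted when the mechanism is assumed rather than investigated.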
Original language: English
Article number: 330
Journal: BMC Research Notes
Volume: 5
DOI:
Publication status: Published - 27 Jun 2012

