External validation of prognostic models to predict stillbirth using the International Prediction of Pregnancy Complications (IPPIC) Network database: an individual participant data meta-analysis

IPPIC Collaborative Network

Research output: Contribution to journal › Article › peer-review


Abstract

OBJECTIVE: Stillbirth is a potentially preventable complication of pregnancy. Identifying women at risk can guide decisions on closer surveillance or timing of birth to prevent fetal death. Prognostic models have been developed to predict the risk of stillbirth, but none have yet been externally validated. We externally validated published prediction models for stillbirth using individual participant data (IPD) meta-analysis to assess their predictive performance.

METHODS: We searched Medline, EMBASE, DH-DATA and AMED databases from inception to December 2020 to identify stillbirth prediction models. We included studies that developed or updated prediction models for stillbirth for use at any time during pregnancy. IPD from cohorts within the International Prediction of Pregnancy Complications (IPPIC) Network were used to externally validate the identified prediction models whose individual variables were available in the IPD. We assessed the risk of bias of the models and IPD using PROBAST, and reported discrimination using the C-statistic and calibration using calibration plots, the calibration slope and calibration-in-the-large. We estimated performance measures separately in each study, and then summarised across studies using random-effects meta-analysis. Clinical utility was assessed using net benefit.
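For readers unfamiliar with the validation measures named above, the following is a minimal sketch, not the authors' code, of how a C-statistic, calibration slope, calibration-in-the-large, net benefit and a random-effects summary could be computed in Python. The function names, the synthetic data and the choice of a DerSimonian-Laird estimator for the pooling step are illustrative assumptions; scikit-learn and statsmodels are used for the underlying models.

```python
# Illustrative sketch of per-study external-validation metrics and random-effects pooling.
# Not the IPPIC authors' code; names and data below are hypothetical.
import numpy as np
from sklearn.metrics import roc_auc_score
import statsmodels.api as sm


def validation_metrics(y, p):
    """C-statistic, calibration slope and calibration-in-the-large for predicted risks p vs outcomes y (0/1)."""
    y = np.asarray(y)
    p = np.clip(np.asarray(p, dtype=float), 1e-6, 1 - 1e-6)
    lp = np.log(p / (1 - p))                                   # linear predictor (logit of predicted risk)

    c_stat = roc_auc_score(y, p)                               # discrimination

    # Calibration slope: logistic regression of the outcome on the linear predictor.
    slope_fit = sm.GLM(y, sm.add_constant(lp), family=sm.families.Binomial()).fit()
    cal_slope = slope_fit.params[1]

    # Calibration-in-the-large: intercept-only logistic model with the linear predictor as offset.
    citl_fit = sm.GLM(y, np.ones((len(lp), 1)), family=sm.families.Binomial(), offset=lp).fit()
    citl = citl_fit.params[0]

    return c_stat, cal_slope, citl


def net_benefit(y, p, threshold):
    """Net benefit of intervening at a given risk threshold (decision-curve analysis)."""
    y, p = np.asarray(y), np.asarray(p)
    treat = p >= threshold
    tp = np.sum(treat & (y == 1)) / len(y)
    fp = np.sum(treat & (y == 0)) / len(y)
    return tp - fp * threshold / (1 - threshold)


def dersimonian_laird(estimates, variances):
    """Random-effects summary of per-study estimates (DerSimonian-Laird tau^2)."""
    est, var = np.asarray(estimates, dtype=float), np.asarray(variances, dtype=float)
    w = 1 / var
    fixed = np.sum(w * est) / np.sum(w)
    q = np.sum(w * (est - fixed) ** 2)
    tau2 = max(0.0, (q - (len(est) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1 / (var + tau2)
    pooled = np.sum(w_re * est) / np.sum(w_re)
    return pooled, np.sqrt(1 / np.sum(w_re))                   # summary estimate and its standard error


# Illustrative use on synthetic data (not real IPPIC data):
rng = np.random.default_rng(0)
p = rng.uniform(0.001, 0.05, size=2000)                        # hypothetical predicted stillbirth risks
y = rng.binomial(1, p)                                         # outcomes simulated from those risks
print(validation_metrics(y, p), net_benefit(y, p, threshold=0.01))
```

Under this sketch, each cohort would yield its own estimates of these metrics (with variances), which would then be combined with dersimonian_laird, mirroring the random-effects meta-analysis described above.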

RESULTS: We identified 17 studies reporting the development of 40 prognostic models for stillbirth. None of the models had previously been externally validated, and only a fifth (20%, 8/40) reported the full model equation. We were able to validate three of these models using IPD from 19 cohort studies (491,201 pregnant women) within the IPPIC Network database. Based on evaluation of their development studies, all three models had an overall high risk of bias according to PROBAST. In our IPD meta-analysis, the models had summary C-statistics ranging from 0.53 to 0.65 and summary calibration slopes of 0.40 to 0.88, with predicted risks that were generally too extreme compared with observed risks, and little to no clinical utility as assessed by net benefit. However, performance remained uncertain for some models owing to the small available sample sizes.

CONCLUSION: The three validated models generally showed poor and uncertain predictive performance in new data, with limited evidence to support their clinical application. These findings suggest methodological shortcomings in their development, including overfitting. Further research is needed to validate these and other models, identify stronger prognostic factors, and develop more robust prediction models.

Original language: English
Journal: Ultrasound in Obstetrics and Gynecology
Early online date: 18 Aug 2021
DOIs
Publication status: E-pub ahead of print - 18 Aug 2021

Bibliographical note

This article is protected by copyright. All rights reserved.
