Cross-validation aggregation for combining autoregressive neural network forecasts

Devon K. Barrow*, Sven F. Crone

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

32 Citations (Scopus)

Abstract

This paper evaluates k-fold and Monte Carlo cross-validation and aggregation (crogging) for combining neural network autoregressive forecasts. We introduce Monte Carlo crogging, which combines bootstrapping and cross-validation (CV) in a single approach through repeated random splitting of the original time series into mutually exclusive datasets for training. Because the training/validation split is independent of the number of folds, the algorithm offers more flexibility in the size and number of training samples than k-fold cross-validation. For both crogging and bagging, the study also provides: (1) the first systematic evaluation across time series lengths and combination sizes, (2) a bias and variance decomposition of the forecast errors to explain the improvement gains, and (3) a comparison with established benchmarks of model averaging and selection. Crogging can easily be extended to other autoregressive models. Results on real and simulated series demonstrate significant improvements in forecasting accuracy, especially for short time series and long forecast horizons.
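The abstract describes the Monte Carlo crogging procedure only at a high level. The sketch below illustrates the general idea in Python: repeatedly draw random, mutually exclusive training/validation splits, fit an autoregressive neural network on each training subset with the validation subset used for early stopping, and average the resulting multi-step forecasts. The use of scikit-learn's MLPRegressor, the lag order p, the number of repeats B, the validation fraction, and the manual early-stopping loop are illustrative assumptions, not the authors' implementation.

```python
"""Minimal sketch of Monte Carlo crogging for an autoregressive neural network.
Library choice, network settings, and hyperparameters are assumptions made for
illustration only."""
import copy
import numpy as np
from sklearn.neural_network import MLPRegressor


def make_lagged(y, p):
    """Embed a univariate series into (lagged inputs, target) pairs."""
    X = np.column_stack([y[i:len(y) - p + i] for i in range(p)])
    return X, y[p:]


def monte_carlo_crogging(y, p=4, B=25, val_frac=0.2, horizon=12, seed=0):
    rng = np.random.default_rng(seed)
    y = np.asarray(y, dtype=float)
    X, t = make_lagged(y, p)
    n = len(t)
    forecast_paths = []

    for b in range(B):
        # Random, mutually exclusive training/validation split; independent of
        # any number of folds (the Monte Carlo element of crogging).
        idx = rng.permutation(n)
        n_val = max(1, int(val_frac * n))
        val_idx, tr_idx = idx[:n_val], idx[n_val:]

        net = MLPRegressor(hidden_layer_sizes=(8,), solver="adam",
                           learning_rate_init=0.01, random_state=b)

        # Early stopping on the held-out validation subset (details here are
        # an assumption): keep the network with the lowest validation MSE.
        best_net, best_val = None, np.inf
        for _ in range(100):
            net.partial_fit(X[tr_idx], t[tr_idx])
            val_mse = np.mean((net.predict(X[val_idx]) - t[val_idx]) ** 2)
            if val_mse < best_val:
                best_val, best_net = val_mse, copy.deepcopy(net)

        # Recursive multi-step forecast from the end of the series.
        window = list(y[-p:])
        path = []
        for _ in range(horizon):
            x_next = np.array(window[-p:], dtype=float)[None, :]
            yhat = float(best_net.predict(x_next)[0])
            path.append(yhat)
            window.append(yhat)
        forecast_paths.append(path)

    # Aggregate the B forecast paths by simple averaging.
    return np.mean(np.array(forecast_paths), axis=0)
```

Simple averaging of the B forecast paths is the most basic aggregation scheme; the paper's systematic evaluation across time series lengths and combination sizes is not reproduced by this sketch.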

Original language: English
Pages (from-to): 1120-1137
Number of pages: 18
Journal: International Journal of Forecasting
Volume: 32
Issue number: 4
Early online date: 1 Jun 2016
DOIs
Publication status: Published - 1 Oct 2016

Keywords

  • Bootstrapping
  • Cross-validation
  • Forecast combination
  • Monte Carlo
  • Time series

ASJC Scopus subject areas

  • Business and International Management
