Time-dependent series variance learning with recurrent mixture density networks

Nikolay Nikolaev*, Peter Tino, Evgueni Smirnov

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

3 Citations (Scopus)


This paper presents an improved nonlinear mixture density approach to modeling the time-dependent variance in time series. First, we develop a recurrent mixture density network that explicitly models the time-conditional mixing coefficients, as well as the means and variances of its Gaussian mixture components. Second, we derive training equations with which all the network weights are inferred in the maximum likelihood framework. Crucially, we calculate temporal derivatives through time for dynamic estimation of the variance network parameters. Experimental results show that, when compared with a traditional linear heteroskedastic model, as well as with a nonlinear mixture density network trained with static derivatives, our dynamic recurrent network converges to more accurate results with better statistical characteristics and economic performance.
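The abstract's core idea can be illustrated with a minimal sketch of a recurrent mixture density network. This is a hypothetical illustration, not the paper's architecture: the network sizes, the tanh hidden layer, and the softmax/exponential link functions (which keep the mixing coefficients on the simplex and the variances positive) are all assumptions. Weights are random here; in the paper they would be fitted by maximum likelihood, with derivatives propagated through time (e.g. by real-time recurrent learning) rather than the static gradients shown implicitly below.

```python
import numpy as np

rng = np.random.default_rng(0)
K, H = 3, 8  # mixture components, hidden units (illustrative sizes)

# Randomly initialised weights; maximum-likelihood training would fit these.
W_in  = rng.normal(scale=0.1, size=(H, 1))   # input -> hidden
W_rec = rng.normal(scale=0.1, size=(H, H))   # hidden -> hidden (recurrence)
W_pi  = rng.normal(scale=0.1, size=(K, H))   # hidden -> mixing-coefficient logits
W_mu  = rng.normal(scale=0.1, size=(K, H))   # hidden -> component means
W_s   = rng.normal(scale=0.1, size=(K, H))   # hidden -> component log-variances

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def step(x_t, h_prev):
    """One recurrent step: mixture parameters conditioned on the past."""
    h = np.tanh(W_in @ np.array([[x_t]]) + W_rec @ h_prev)
    pi  = softmax(W_pi @ h).ravel()   # time-conditional mixing coefficients
    mu  = (W_mu @ h).ravel()          # time-conditional component means
    var = np.exp(W_s @ h).ravel()     # time-conditional variances (positive)
    return pi, mu, var, h

def neg_log_lik(series):
    """Gaussian-mixture negative log-likelihood of a 1-D series."""
    h, nll, x_prev = np.zeros((H, 1)), 0.0, 0.0
    for x in series:
        pi, mu, var, h = step(x_prev, h)
        dens = pi * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        nll -= np.log(dens.sum())
        x_prev = x
    return nll

series = rng.normal(size=50)
print(neg_log_lik(series))
```

The mixture's predictive variance at each step combines the component variances with the spread of the component means, which is what lets such a network capture heteroskedasticity that a single-Gaussian model would miss.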

Original language: English
Pages (from-to): 501-512
Number of pages: 12
Publication status: Published - 25 Dec 2013


Keywords

  • GARCH models
  • Mixture density neural networks
  • Real-time recurrent learning algorithm

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Science Applications
  • Cognitive Neuroscience


