Evolving hybrid ensembles of learning machines for better generalisation

Arjun Chandra, Xin Yao

Research output: Contribution to journal › Article

107 Citations (Scopus)

Abstract

Ensembles of learning machines have been formally and empirically shown to outperform (generalise better than) single predictors in many cases. Evidence suggests that ensembles generalise better when their members form a diverse and accurate set. Additionally, there have been a multitude of theories on how one can enforce diversity within a combined predictor setup. We recently attempted to integrate these theories into a co-evolutionary framework with a view to synthesising new evolutionary ensemble learning algorithms, using the fact that multi-objective evolutionary optimisation is a formidable ensemble construction technique. This paper explicates the intricacies of the proposed framework, in addition to presenting detailed empirical results and comparisons with a wide range of algorithms in the machine learning literature. The framework treats diversity and accuracy as evolutionary pressures which are exerted at multiple levels of abstraction, and is shown to be effective. (c) 2006 Elsevier B.V. All rights reserved.
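
To make concrete the idea of treating accuracy and diversity as separate evolutionary pressures, the following is a minimal, hypothetical sketch of multi-objective evolutionary ensemble construction. It is not the framework presented in the paper: the tiny network architecture, mutation-only variation, ambiguity-style diversity measure, and Pareto-based survivor selection are all illustrative assumptions.

```python
# Hypothetical sketch: evolve a population of small networks under two
# objectives, training error (accuracy pressure) and negated ambiguity
# (diversity pressure), keeping non-dominated individuals each generation.
import numpy as np

rng = np.random.default_rng(0)

def predict(weights, X):
    """Tiny one-hidden-layer network; `weights` is a flat parameter vector."""
    n_in, n_hid = X.shape[1], 5
    W1 = weights[: n_in * n_hid].reshape(n_in, n_hid)
    b1 = weights[n_in * n_hid : n_in * n_hid + n_hid]
    W2 = weights[n_in * n_hid + n_hid : n_in * n_hid + 2 * n_hid]
    b2 = weights[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # probability of class 1

def objectives(pop, X, y):
    """Per individual: (training error, negated diversity); both minimised."""
    outs = np.array([predict(w, X) for w in pop])        # (pop_size, n_samples)
    errors = np.mean((outs - y) ** 2, axis=1)            # accuracy pressure
    mean_out = outs.mean(axis=0)
    diversity = np.mean((outs - mean_out) ** 2, axis=1)  # ambiguity-style term
    return errors, -diversity

def pareto_front(f1, f2):
    """Indices of non-dominated individuals (both objectives minimised)."""
    idx = []
    for i in range(len(f1)):
        dominated = np.any((f1 <= f1[i]) & (f2 <= f2[i]) &
                           ((f1 < f1[i]) | (f2 < f2[i])))
        if not dominated:
            idx.append(i)
    return idx

# Toy data and a mutation-only evolutionary loop, for brevity.
X = rng.normal(size=(200, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(float)        # XOR-like target
dim = 2 * 5 + 5 + 5 + 1                          # parameter count of predict()
pop = [rng.normal(scale=0.5, size=dim) for _ in range(20)]

for gen in range(100):
    children = [w + rng.normal(scale=0.1, size=dim) for w in pop]
    combined = pop + children
    err, neg_div = objectives(combined, X, y)
    front = pareto_front(err, neg_div)
    # Keep the Pareto front, then fill up with the lowest-error remainder.
    rest = sorted(set(range(len(combined))) - set(front), key=lambda i: err[i])
    keep = (front + rest)[:20]
    pop = [combined[i] for i in keep]

ensemble_pred = np.mean([predict(w, X) for w in pop], axis=0) > 0.5
print("ensemble training accuracy:", np.mean(ensemble_pred == y))
```

The design choice worth noting is that survivor selection preserves the whole Pareto front, so individuals that trade some accuracy for diversity survive alongside the most accurate ones, and the final ensemble simply averages all surviving members.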
Original language: English
Pages (from-to): 686-700
Number of pages: 15
Journal: Neurocomputing
Volume: 69
Issue number: 7-9
DOIs
Publication status: Published - 1 Mar 2006

Keywords

  • multi-objective optimisation
  • hybrid ensembles
  • evolutionary computation
  • ensemble learning
  • neuroevolution
