Artificial Bee Colony training of neural networks: comparison with back-propagation

Research output: Contribution to journal › Article › peer-review


Abstract

The Artificial Bee Colony (ABC) is a swarm intelligence algorithm for optimization that has previously been applied to the training of neural networks. This paper examines more carefully the performance of the ABC algorithm for optimizing the connection weights of feed-forward neural networks on classification tasks, and presents a more rigorous comparison with the traditional Back-Propagation (BP) training algorithm. The empirical results on benchmark problems demonstrate that using the standard early stopping approach with optimized learning parameters leads to improved BP performance over the previous comparative study, and that a simple variation of the ABC approach improves ABC performance too. With both improvements applied, the ABC approach performs very well on small problems, but its generalization performance is significantly better than standard BP on only one of the six datasets, and its training times increase rapidly as the size of the problem grows. If different, evolutionarily optimized, BP learning rates are allowed for the two layers of the neural network, BP is significantly better than the ABC on two of the six datasets, and not significantly different on the other four.
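The idea of optimizing connection weights with the ABC algorithm can be sketched as follows. This is a minimal illustration, not the paper's exact experimental setup: the XOR task, network size, colony size, `limit`, and cycle count are all assumptions chosen to keep the example small. Each "food source" is one flat vector of network weights, and the three bee phases (employed, onlooker, scout) perturb, exploit, and replace these candidate solutions.

```python
import numpy as np

# Hypothetical minimal setup: a one-hidden-layer network trained on XOR.
rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

N_HIDDEN = 4
DIM = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # W1, b1, W2, b2 flattened

def forward(w, X):
    """Feed-forward pass: unpack the flat weight vector into two layers."""
    i = 0
    W1 = w[i:i + 2 * N_HIDDEN].reshape(2, N_HIDDEN); i += 2 * N_HIDDEN
    b1 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    W2 = w[i:i + N_HIDDEN]; i += N_HIDDEN
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output unit

def loss(w):
    """Mean squared error of the network over the whole training set."""
    return np.mean((forward(w, X) - y) ** 2)

def abc_train(n_food=20, limit=30, cycles=300):
    """ABC: each food source is one candidate weight vector."""
    food = rng.uniform(-1, 1, size=(n_food, DIM))
    cost = np.array([loss(f) for f in food])
    trials = np.zeros(n_food, dtype=int)

    def try_neighbour(i):
        # Perturb one random dimension toward a random other food source.
        k = rng.integers(n_food)
        while k == i:
            k = rng.integers(n_food)
        j = rng.integers(DIM)
        cand = food[i].copy()
        cand[j] += rng.uniform(-1, 1) * (food[i, j] - food[k, j])
        c = loss(cand)
        if c < cost[i]:                # greedy selection
            food[i], cost[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(cycles):
        for i in range(n_food):        # employed bee phase
            try_neighbour(i)
        fit = 1.0 / (1.0 + cost)       # onlooker phase: fitness-proportional
        p = fit / fit.sum()
        for _ in range(n_food):
            try_neighbour(rng.choice(n_food, p=p))
        stale = np.argmax(trials)      # scout phase: abandon exhausted sources
        if trials[stale] > limit:
            food[stale] = rng.uniform(-1, 1, DIM)
            cost[stale] = loss(food[stale])
            trials[stale] = 0

    best = np.argmin(cost)
    return food[best], cost[best]

w_best, mse = abc_train()
```

Note that, unlike BP, the ABC loop uses only loss evaluations and no gradients, which is why its cost grows quickly with the number of weights: every candidate move requires a full forward pass over the training set.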

Details

Original language: English
Pages (from-to): 171-182
Number of pages: 12
Journal: Memetic Computing
Volume: 6
Issue number: 3
Early online date: 20 Jul 2014
Publication status: Published - Sep 2014