Artificial Bee Colony training of neural networks: comparison with back-propagation

John Bullinaria, Khulood Alyahya

Research output: Contribution to journal › Article › peer-review

21 Citations (Scopus)
307 Downloads (Pure)


The Artificial Bee Colony (ABC) is a swarm intelligence algorithm for optimization that has previously been applied to the training of neural networks. This paper examines more carefully the performance of the ABC algorithm for optimizing the connection weights of feed-forward neural networks for classification tasks, and presents a more rigorous comparison with the traditional Back-Propagation (BP) training algorithm. The empirical results for benchmark problems demonstrate that using the standard “stopping early” approach with optimized learning parameters leads to improved BP performance over the previous comparative study, and that a simple variation of the ABC approach provides improved ABC performance too. With both improvements applied, the ABC approach does perform very well on small problems, but the generalization performances achieved are only significantly better than standard BP on one out of six datasets, and the training times increase rapidly as the size of the problem grows. If different, evolutionary optimized, BP learning rates are allowed for the two layers of the neural network, BP is significantly better than the ABC on two of the six datasets, and not significantly different on the other four.
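The standard ABC algorithm the paper builds on treats each candidate weight vector as a "food source" and refines the population through employed-bee, onlooker-bee, and scout phases. The following is a minimal illustrative sketch of that standard algorithm applied to feed-forward network weights; the 2-4-1 network, the toy XOR task, and all parameter values are assumptions for illustration, not the paper's experimental setup.

```python
# Hedged sketch: standard ABC optimizing the weights of a tiny
# feed-forward classifier. The 2-4-1 network and XOR data are
# illustrative assumptions, not the benchmarks used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR (illustrative only)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)

N_IN, N_HID = 2, 4
DIM = N_IN * N_HID + N_HID + N_HID + 1  # weights + biases of a 2-4-1 net

def forward(w, X):
    i = 0
    W1 = w[i:i + N_IN * N_HID].reshape(N_IN, N_HID); i += N_IN * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1 / (1 + np.exp(-(h @ W2 + b2)))  # sigmoid output

def loss(w):
    return np.mean((forward(w, X) - y) ** 2)  # mean squared error

SN, LIMIT, CYCLES = 20, 50, 300  # colony size, scout limit, cycles (assumed)
food = rng.uniform(-1, 1, (SN, DIM))  # one food source = one weight vector
cost = np.array([loss(w) for w in food])
trial = np.zeros(SN, int)

def try_neighbour(i):
    """Perturb one dimension relative to a random other source; keep if better."""
    k = rng.choice([j for j in range(SN) if j != i])
    d = rng.integers(DIM)
    cand = food[i].copy()
    cand[d] += rng.uniform(-1, 1) * (food[i][d] - food[k][d])
    c = loss(cand)
    if c < cost[i]:
        food[i], cost[i], trial[i] = cand, c, 0
    else:
        trial[i] += 1

for _ in range(CYCLES):
    for i in range(SN):                  # employed-bee phase
        try_neighbour(i)
    fit = 1 / (1 + cost)                 # onlooker-bee phase: fitness-proportional
    p = fit / fit.sum()
    for i in rng.choice(SN, SN, p=p):
        try_neighbour(i)
    for i in range(SN):                  # scout phase: abandon stale sources
        if trial[i] > LIMIT:
            food[i] = rng.uniform(-1, 1, DIM)
            cost[i] = loss(food[i])
            trial[i] = 0

best = food[np.argmin(cost)]
print("best MSE:", loss(best))
```

Note that, as the abstract observes, the cost of this population-based search grows quickly with network size: every candidate evaluation requires a full forward pass over the training set, with no gradient information reused between candidates.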
Original language: English
Pages (from-to): 171-182
Number of pages: 12
Journal: Memetic Computing
Issue number: 3
Early online date: 20 Jul 2014
Publication status: Published - Sept 2014


