An Experimental Study on Competitive Coevolution of MLP Classifiers

Marco Castellani, Rahul Lalchandani

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


This paper investigates the effectiveness and efficiency of two competitive (predator-prey) evolutionary procedures for training multi-layer perceptron classifiers: Co-Adaptive Neural Network Training and a modified version of Co-Evolutionary Neural Network Training. The study focused on how the performance of the two procedures varies as the size of the training set increases, and on their ability to redress class imbalance problems of increasing severity. Compared with the customary backpropagation algorithm and a standard evolutionary algorithm, the two competitive procedures excelled in both solution quality and execution speed. Co-Adaptive Neural Network Training performed best on class imbalance problems and on classification problems with moderately large training sets, while Co-Evolutionary Neural Network Training performed best on the largest data sets. The size of the training set was the most problematic issue for the backpropagation algorithm (in terms of solution accuracy) and for the standard evolutionary algorithm (in terms of execution speed). Neither backpropagation nor the standard evolutionary algorithm was competitive on the class imbalance problems, where data oversampling could only partially remedy their shortcomings.
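To illustrate the predator-prey idea behind procedures of this kind, the sketch below is a generic competitive coevolution loop, not the authors' specific Co-Adaptive or Co-Evolutionary procedures: a "predator" population of MLP weight vectors is selected for accuracy on a "prey" subset of training samples, while the prey subset is in turn biased toward the samples the current networks misclassify most often. All names, network sizes, and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data: two well-separated Gaussian blobs (illustrative only).
n = 200
X = np.vstack([rng.normal(-2, 0.5, (n // 2, 2)),
               rng.normal(2, 0.5, (n // 2, 2))])
y = np.array([0] * (n // 2) + [1] * (n // 2))

H = 4                      # hidden units (assumed, for illustration)
DIM = 2 * H + H + H + 1    # weights + biases of a 2-H-1 MLP, flattened

def predict(w, X):
    """Forward pass of a tiny 2-H-1 MLP encoded as a flat weight vector."""
    W1 = w[:2 * H].reshape(2, H)
    b1 = w[2 * H:3 * H]
    W2 = w[3 * H:4 * H]
    b2 = w[4 * H]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2 > 0).astype(int)

def coevolve(generations=40, pop=20, prey_size=40):
    preds = rng.normal(0, 1, (pop, DIM))     # predator population: MLPs
    prey = rng.choice(n, prey_size, False)   # prey population: sample indices
    for _ in range(generations):
        # Predator fitness: accuracy on the current prey subset.
        acc = np.array([(predict(w, X[prey]) == y[prey]).mean() for w in preds])
        elite = preds[np.argsort(acc)[-pop // 2:]]
        children = elite + rng.normal(0, 0.3, elite.shape)
        preds = np.vstack([elite, children])
        # Prey fitness: how often each sample fools the current predators;
        # the hardest samples survive into the next prey subset.
        errs = np.zeros(n)
        for w in preds:
            errs += predict(w, X) != y
        prey = np.argsort(errs)[-prey_size:]
    # Return the network that generalises best to the full training set.
    return preds[np.argmax([(predict(w, X) == y).mean() for w in preds])]
```

Usage: `best = coevolve()` followed by `(predict(best, X) == y).mean()` gives the full-set accuracy of the evolved classifier. The competitive pressure comes from the two fitness functions pulling in opposite directions: networks are rewarded for classifying the current hard samples, and samples are rewarded for defeating the current networks.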
Original language: English
Title of host publication: Mendel 2017 - 23rd International Conference on Soft Computing
Place of publication: Brno, Czech Republic
Publication status: Published - Jun 2017


  • evolutionary algorithms
  • coevolution
  • predator-prey systems
  • multi-layer perceptron
  • pattern classification


