A New Adaptive Merging and Growing Algorithm for Designing Artificial Neural Networks

Research output: Contribution to journal › Article

Standard

A New Adaptive Merging and Growing Algorithm for Designing Artificial Neural Networks. / Islam, M; Sattar, A; Amin, F; Yao, Xin; Murase, K.

In: IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics), Vol. 39, No. 3, 01.06.2009, p. 705-722.

Bibtex

@article{ffb841d3c5504cbc9f0481741f65019f,
title = "A New Adaptive Merging and Growing Algorithm for Designing Artificial Neural Networks",
abstract = "This paper presents a new algorithm, called adaptive merging and growing algorithm (AMGA), in designing artificial neural networks (ANNs). This algorithm merges and adds hidden neurons during the training process of ANNs. The merge operation introduced in AMGA is a kind of a mixed mode operation, which is equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the design process of ANNs. This is the main reason why AMGA uses an adaptive not a predefined fixed strategy in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of hidden neurons or the training progress of ANNs. In order to reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing hidden neurons. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including breast cancer, Australian credit card assessment, and diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared to other algorithms.",
keywords = "generalization ability, Adding neurons, retraining, merging neurons, artificial neural network (ANN) design",
author = "M Islam and A Sattar and F Amin and Xin Yao and K Murase",
year = "2009",
month = jun,
day = "1",
doi = "10.1109/TSMCB.2008.2008724",
language = "English",
volume = "39",
pages = "705--722",
journal = "IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)",
issn = "1083-4419",
publisher = "Institute of Electrical and Electronics Engineers (IEEE)",
number = "3",
}
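The abstract describes two architecture-modifying operations: merging two correlated hidden neurons into one (equivalent to pruning two and adding one) and growing by splitting an existing hidden neuron. The sketch below is a minimal NumPy illustration of that idea for a single-hidden-layer network; the weight-combination rules (averaging incoming weights, summing or halving outgoing weights) are illustrative assumptions, not the paper's exact formulas.

```python
# Illustrative sketch of AMGA-style merge/split operations on a
# single-hidden-layer network. NOT the authors' implementation;
# the weight-combination rules here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def hidden_activations(W_in, X):
    """Sigmoid activations of the hidden neurons for inputs X (rows)."""
    return 1.0 / (1.0 + np.exp(-X @ W_in.T))

def merge_most_correlated(W_in, W_out, X):
    """Replace the two most-correlated hidden neurons with one
    (prune two, add one). Incoming weights are averaged and the
    outgoing weights summed -- an illustrative choice."""
    H = hidden_activations(W_in, X)          # (samples, hidden)
    C = np.corrcoef(H.T)                     # hidden-neuron correlations
    np.fill_diagonal(C, -np.inf)             # ignore self-correlation
    i, j = np.unravel_index(np.argmax(C), C.shape)
    merged_in = (W_in[i] + W_in[j]) / 2.0
    merged_out = W_out[:, i] + W_out[:, j]
    keep = [k for k in range(W_in.shape[0]) if k not in (i, j)]
    W_in_new = np.vstack([W_in[keep], merged_in])
    W_out_new = np.column_stack([W_out[:, keep], merged_out])
    return W_in_new, W_out_new

def split_neuron(W_in, W_out, idx, eps=0.05):
    """Grow the network by splitting hidden neuron idx into two
    perturbed copies whose halved outgoing weights approximately
    preserve the neuron's contribution to the output."""
    noise = eps * rng.standard_normal(W_in.shape[1])
    W_in_new = np.vstack([W_in, W_in[idx] + noise])
    W_in_new[idx] = W_in[idx] - noise        # break symmetry both ways
    half = W_out[:, idx] / 2.0
    W_out_new = np.column_stack([W_out, half])
    W_out_new[:, idx] = half
    return W_in_new, W_out_new

# Toy usage: 4 inputs, 5 hidden neurons, 2 outputs.
X = rng.standard_normal((32, 4))
W_in = rng.standard_normal((5, 4))
W_out = rng.standard_normal((2, 5))

W_in, W_out = merge_most_correlated(W_in, W_out, X)  # 5 -> 4 neurons
W_in, W_out = split_neuron(W_in, W_out, idx=0)       # 4 -> 5 neurons
print(W_in.shape, W_out.shape)  # (5, 4) (2, 5)
```

Merging by combining correlated neurons, rather than simply deleting one, is what lets the network keep most of its learned mapping and so reduces the retraining the abstract mentions.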

RIS

TY - JOUR

T1 - A New Adaptive Merging and Growing Algorithm for Designing Artificial Neural Networks

AU - Islam, M

AU - Sattar, A

AU - Amin, F

AU - Yao, Xin

AU - Murase, K

PY - 2009/6/1

Y1 - 2009/6/1

N2 - This paper presents a new algorithm, called adaptive merging and growing algorithm (AMGA), in designing artificial neural networks (ANNs). This algorithm merges and adds hidden neurons during the training process of ANNs. The merge operation introduced in AMGA is a kind of a mixed mode operation, which is equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the design process of ANNs. This is the main reason why AMGA uses an adaptive not a predefined fixed strategy in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of hidden neurons or the training progress of ANNs. In order to reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing hidden neurons. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including breast cancer, Australian credit card assessment, and diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared to other algorithms.

AB - This paper presents a new algorithm, called adaptive merging and growing algorithm (AMGA), in designing artificial neural networks (ANNs). This algorithm merges and adds hidden neurons during the training process of ANNs. The merge operation introduced in AMGA is a kind of a mixed mode operation, which is equivalent to pruning two neurons and adding one neuron. Unlike most previous studies, AMGA puts emphasis on autonomous functioning in the design process of ANNs. This is the main reason why AMGA uses an adaptive not a predefined fixed strategy in designing ANNs. The adaptive strategy merges or adds hidden neurons based on the learning ability of hidden neurons or the training progress of ANNs. In order to reduce the amount of retraining after modifying ANN architectures, AMGA prunes hidden neurons by merging correlated hidden neurons and adds hidden neurons by splitting existing hidden neurons. The proposed AMGA has been tested on a number of benchmark problems in machine learning and ANNs, including breast cancer, Australian credit card assessment, and diabetes, gene, glass, heart, iris, and thyroid problems. The experimental results show that AMGA can design compact ANN architectures with good generalization ability compared to other algorithms.

KW - generalization ability

KW - Adding neurons

KW - retraining

KW - merging neurons

KW - artificial neural network (ANN) design

U2 - 10.1109/TSMCB.2008.2008724

DO - 10.1109/TSMCB.2008.2008724

M3 - Article

C2 - 19203888

VL - 39

SP - 705

EP - 722

JO - IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)

JF - IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)

SN - 1083-4419

IS - 3

ER -