Mixed second order partial derivatives decomposition method for large scale optimization

Lin Li, Licheng Jiao, Rustam Stolkin, Fang Liu

Research output: Contribution to journal › Article › peer-review


Abstract

This paper focuses on decomposition strategies for large-scale optimization problems. The cooperative co-evolution approach improves the scalability of evolutionary algorithms by decomposing a single high-dimensional problem into several lower-dimensional sub-problems and then optimizing each of them individually. However, the dominating factor in the performance of these algorithms on large-scale function optimization problems is the choice of decomposition approach. This paper provides a theoretical analysis of the interaction between variables in such approaches. Three theorems and three lemmas are introduced to investigate the relationships between decision variables, and we provide theoretical explanations of overlapping subcomponents. An automatic decomposition approach, based on the mixed second-order partial derivatives of the analytic expression of the optimization problem, is presented. We investigate the advantages and disadvantages of the differential grouping (DG) automatic decomposition approach, and we propose an enhanced version of differential grouping to deal with problems that the original differential grouping method is unable to resolve. We compare the performance of three different grouping strategies and provide the results of empirical evaluations using 20 benchmark data sets.
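The abstract's central idea, that two decision variables interact exactly when the mixed second-order partial derivative of the objective with respect to them is nonzero, can be illustrated with a small sketch. This is not the paper's code; the function name, tolerance, and finite-difference scheme below are illustrative assumptions in the spirit of differential-grouping-style interaction detection.

```python
# Illustrative sketch (assumed names/parameters): x_i and x_j are treated as
# interacting when a finite-difference estimate of the mixed second-order
# partial derivative d^2 f / (dx_i dx_j) is nonzero.
def interacts(f, n, i, j, base=0.0, delta=1.0, eps=1e-6):
    """Return True if variables i and j appear to interact under f."""
    def f_at(xi, xj):
        y = [base] * n
        y[i], y[j] = xi, xj
        return f(y)

    # Central idea: (f(a+d, b+d) - f(a+d, b) - f(a, b+d) + f(a, b)) / d^2
    # approximates the mixed partial derivative at (a, b).
    mixed = (f_at(base + delta, base + delta) - f_at(base + delta, base)
             - f_at(base, base + delta) + f_at(base, base)) / delta**2
    return abs(mixed) > eps

# Example: f(x) = x0^2 + x0*x1 + x2^2.
# x0 and x1 interact through the cross term x0*x1; x0 and x2 are separable.
f = lambda x: x[0] ** 2 + x[0] * x[1] + x[2] ** 2
print(interacts(f, 3, 0, 1))  # True
print(interacts(f, 3, 0, 2))  # False
```

A decomposition method built on this test would place interacting variables in the same sub-problem, so that each lower-dimensional sub-problem can be optimized independently.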
Original language: English
Pages (from-to): 1013-1021
Journal: Applied Soft Computing
Volume: 61
Early online date: 31 Aug 2017
Publication status: Published - 1 Dec 2017

Keywords

  • large-scale optimization
  • evolutionary algorithm
  • cooperative co-evolution
  • divide-and-conquer
  • decomposition method
  • nonseparability
  • curse of dimensionality
