R-Metric: Evaluating the Performance of Preference-Based Evolutionary Multi-Objective Optimization Using Reference Points

Ke Li, Kalyanmoy Deb, Xin Yao

Research output: Contribution to journal › Article › peer-review


Abstract

Measuring the performance of an algorithm for solving a multi-objective optimization problem has always been challenging, simply because of two conflicting goals: convergence and diversity of the obtained trade-off solutions. A number of metrics exist for evaluating the performance of a multi-objective optimizer that approximates the whole Pareto-optimal front. However, for evaluating the quality of a preferred subset of the whole front, the existing metrics are inadequate. In this paper, we suggest a systematic way to adapt the existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multi-objective optimization algorithm using reference points. The basic idea is to pre-process the preferred solution set according to a multi-criterion decision-making approach before using a regular metric for performance assessment. Extensive experiments on several artificial scenarios and benchmark problems demonstrate its effectiveness in evaluating the quality of different preferred solution sets with regard to various reference points supplied by a decision maker.
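The abstract's core idea, pre-processing a preferred solution set before applying a standard metric, can be sketched roughly as follows. This is an illustrative reading of the abstract only, not the paper's actual procedure: the use of an achievement scalarizing function to pick a representative point, the box trimming with half-width `delta`, and all function and parameter names here are assumptions for the sake of a runnable example.

```python
import numpy as np

def r_metric_preprocess(solutions, ref_point, w=None, delta=0.2):
    """Hypothetical sketch of reference-point-based pre-processing:
    1) pick a representative point (closest to the reference point
       under an achievement scalarizing function, ASF),
    2) trim solutions outside a box of half-width `delta` around it,
    3) shift the trimmed set along the reference direction so sets
       from different runs are compared at a common virtual position.
    All names and parameters are illustrative, not from the paper."""
    solutions = np.asarray(solutions, dtype=float)
    ref_point = np.asarray(ref_point, dtype=float)
    m = solutions.shape[1]
    if w is None:
        w = np.full(m, 1.0 / m)  # equal weights by default (assumption)
    # Step 1: representative point = minimizer of the ASF
    asf = np.max((solutions - ref_point) / w, axis=1)
    rep = solutions[np.argmin(asf)]
    # Step 2: keep only solutions inside the region of interest
    mask = np.all(np.abs(solutions - rep) <= delta, axis=1)
    trimmed = solutions[mask]
    # Step 3: translate so the representative point lands on the
    # reference line at its iso-ASF point
    iso = ref_point + np.min(asf) * w
    return trimmed + (iso - rep)
```

After this pre-processing, any regular metric (e.g. IGD or hypervolume) can be computed on the returned set; the shift makes solution sets from different optimizers comparable relative to the same reference point.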
Original language: English
Pages (from-to): 821-835
Number of pages: 23
Journal: IEEE Transactions on Evolutionary Computation
Volume: 22
Issue number: 6
Early online date: 26 Sept 2017
DOIs
Publication status: Published - Dec 2018

Keywords

  • User-preference
  • performance assessment
  • reference point
  • multi-criterion decision making
  • evolutionary multi-objective optimization
