R-Metric: Evaluating the Performance of Preference-Based Evolutionary Multi-Objective Optimization Using Reference Points

Research output: Contribution to journal › Article › peer-review


External organisations

  • University of Exeter
  • Michigan State University


Measuring the performance of an algorithm for solving a multi-objective optimization problem has always been challenging, simply because of the two conflicting goals of convergence and diversity among the obtained trade-off solutions. A number of metrics exist for evaluating the performance of a multi-objective optimizer that approximates the whole Pareto-optimal front. However, for evaluating the quality of a preferred subset of the whole front, the existing metrics are inadequate. In this paper, we suggest a systematic way to adapt the existing metrics to quantitatively evaluate the performance of a preference-based evolutionary multi-objective optimization algorithm that uses reference points. The basic idea is to pre-process the preferred solution set, according to a multi-criterion decision making approach, before applying a regular metric for performance assessment. Extensive experiments on several artificial scenarios and benchmark problems demonstrate its effectiveness in evaluating the quality of different preferred solution sets with regard to various reference points supplied by a decision maker.
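The pre-process-then-measure idea in the abstract can be sketched roughly as follows. This is a simplified, hypothetical illustration rather than the authors' exact procedure: a pivot solution is chosen by an achievement scalarizing function with respect to the decision maker's reference point, solutions outside a box around the pivot are trimmed away, the remaining set is shifted so that the pivot lies on the line from the reference point toward a worst point (here approximated by an orthogonal projection), and only then is a regular metric such as IGD applied. All function names and parameters below are illustrative assumptions.

```python
import numpy as np

def asf(points, z_ref, w):
    """Achievement scalarizing function: max weighted deviation from z_ref."""
    return np.max((points - z_ref) / w, axis=1)

def preprocess(points, z_ref, z_worst, w, delta):
    """Hypothetical trim-and-transfer step before applying a regular metric."""
    # Pivot: the solution closest to the DM's preference, i.e. smallest ASF.
    pivot = points[np.argmin(asf(points, z_ref, w))]
    # Trim: keep only solutions inside a delta-box around the pivot (the
    # region of interest implied by the reference point).
    kept = points[np.all(np.abs(points - pivot) <= delta, axis=1)]
    # Transfer (simplified here as an orthogonal projection): shift the kept
    # set so the pivot lands on the line from z_ref toward z_worst, putting
    # sets produced by different optimizers on an equal footing.
    d = (z_worst - z_ref) / np.linalg.norm(z_worst - z_ref)
    shift = (z_ref + np.dot(pivot - z_ref, d) * d) - pivot
    return kept + shift

def igd(front, points):
    """Inverted generational distance of `points` w.r.t. a sampled front."""
    dists = np.linalg.norm(front[:, None, :] - points[None, :, :], axis=2)
    return dists.min(axis=1).mean()
```

Under this sketch, an approximation set that is closer to the Pareto front inside the region of interest should obtain a smaller pre-processed IGD value than one that is farther away, which is the discriminating behavior a preference-based metric needs.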


Original language: English
Pages (from-to): 821-835
Number of pages: 23
Journal: IEEE Transactions on Evolutionary Computation
Issue number: 6
Early online date: 26 Sep 2017
Publication status: Published - Dec 2018


  • User preference, performance assessment, reference point, multi-criterion decision making, evolutionary multi-objective optimization