BIAS: a toolbox for benchmarking structural bias in the continuous domain

Diederick Vermetten, Bas van Stein, Fabio Caraffini, Leandro Minku, Anna V. Kononova

Research output: Contribution to journal › Article › peer-review

Abstract

Benchmarking heuristic algorithms is vital to understand under which conditions and on what kinds of problems certain algorithms perform well. Most benchmarks are performance-based, testing algorithm performance under a wide range of conditions. There are also resource- and behaviour-based benchmarks, which test the resource consumption and the behaviour of algorithms. In this article, we propose a novel behaviour-based benchmark toolbox: BIAS (Bias in Algorithms, Structural). This toolbox can detect structural bias per dimension and across dimensions based on 39 statistical tests. Moreover, it predicts the type of structural bias using a Random Forest model. BIAS can be used to better understand and improve existing algorithms (removing bias) as well as to test novel algorithms for structural bias in an early phase of development. Experiments with a large set of generated structural bias scenarios show that BIAS was successful in identifying bias. In addition, we provide the results of BIAS on 432 existing state-of-the-art optimisation algorithms, showing that different kinds of structural bias are present in these algorithms, mostly towards the centre of the objective space or showing discretisation behaviour. The proposed toolbox is made available open-source, and recommendations are provided for the sample size and hyper-parameters to be used when applying the toolbox to other algorithms.
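The core idea behind structural-bias detection, as described in the abstract, is to run an algorithm many times on a landscape where every point is equally good, collect the final solutions, and test each coordinate for deviation from the uniform distribution an unbiased algorithm would produce. The sketch below is a minimal illustration of this idea, not the BIAS toolbox's actual API: it uses a single Kolmogorov-Smirnov test per dimension (via SciPy), whereas BIAS combines 39 statistical tests and a Random Forest classifier; the function names and parameters here are hypothetical.

```python
import numpy as np
from scipy import stats


def final_positions_on_flat_landscape(n_runs=100, dim=5, seed=None):
    """Stand-in for running an optimiser on f(x) = 0.

    On a flat function every point is equally good, so an unbiased
    algorithm's final solutions should be uniform on [0, 1]^dim.
    Here we simply sample uniformly, mimicking an unbiased baseline;
    in practice these would be the returned solutions of the
    algorithm under test, one per independent run.
    """
    rng = np.random.default_rng(seed)
    return rng.uniform(0.0, 1.0, size=(n_runs, dim))


def test_uniformity_per_dimension(positions, alpha=0.01):
    """Kolmogorov-Smirnov test of each coordinate against U(0, 1).

    A rejected dimension is a hint of structural bias along that
    axis (e.g. attraction to the centre or to the bounds).
    """
    results = []
    for d in range(positions.shape[1]):
        stat, p_value = stats.kstest(positions[:, d], "uniform")
        results.append((d, stat, p_value, p_value < alpha))
    return results


if __name__ == "__main__":
    positions = final_positions_on_flat_landscape(n_runs=200, dim=5, seed=42)
    for d, stat, p_value, rejected in test_uniformity_per_dimension(positions):
        verdict = "possible structural bias" if rejected else "no evidence of bias"
        print(f"dim {d}: KS={stat:.3f}, p={p_value:.3f} -> {verdict}")
```

To test a real optimiser, one would replace the sampling stand-in with the solutions the algorithm returns over independent runs on the flat function, keeping everything else unchanged.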
Original language: English
Journal: IEEE Transactions on Evolutionary Computation
Publication status: Accepted/In press - 28 Jun 2022

Bibliographical note

Not yet published as of 08/08/2022
