
Section: New Results


Participants: Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Umut Batu, Dejan Tusar.

In his thesis, Ouassim AitElHara investigated the benchmarking of algorithms in large dimension [1]. In this context, the first steps towards a large-scale testbed for the COCO platform have been taken. In particular, the methodology for building such a testbed has been defined: the usual orthogonal transformation is replaced by a block-diagonal orthogonal matrix, multiplied on the left and on the right by permutation matrices. While the testbed is still being tested, we expect to release it in the coming year.
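The structure of such a transformation can be illustrated with a minimal NumPy sketch. The function names (`random_orthogonal`, `block_rotation`) are illustrative, not part of the COCO code base, and the sketch assumes the dimension is divisible by the block size; the actual testbed implementation may differ in how blocks and permutations are drawn.

```python
import numpy as np

def random_orthogonal(d, rng):
    """Random orthogonal d x d matrix via QR of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((d, d)))
    # fix column signs so the distribution is uniform over orthogonal matrices
    return q * np.sign(np.diag(r))

def block_rotation(dim, block_size, rng):
    """P_left @ blockdiag(R_1, ..., R_k) @ P_right: a sparse stand-in
    for a full orthogonal transformation in large dimension.
    Assumes dim is divisible by block_size (illustrative sketch only)."""
    B = np.zeros((dim, dim))
    for i in range(dim // block_size):
        s = i * block_size
        B[s:s + block_size, s:s + block_size] = random_orthogonal(block_size, rng)
    P_left = np.eye(dim)[rng.permutation(dim)]   # row permutation matrix
    P_right = np.eye(dim)[rng.permutation(dim)]  # column permutation matrix
    return P_left @ B @ P_right
```

Because each block and each permutation matrix is orthogonal, the product remains orthogonal, while storing and applying it costs only O(dim × block_size) in a sparse representation instead of O(dim²) for a full rotation.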

The population size is one of the few parameters a user is expected to adjust in the state-of-the-art optimizer CMA-ES. In [7], a new approach that also adapts the population size in CMA-ES is proposed and benchmarked on the bbob test suite of our COCO platform. The method tracks whether the median of the objective function values is non-decreasing over each slot of S successive iterations and, based on this, decides whether to increase, decrease, or keep the population size for the next slot of S iterations. The experimental results show the efficiency of our approach on some multi-modal functions with adequate global structure.
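The slot-based adaptation idea can be sketched as follows. This is a toy random-search loop, not CMA-ES, and the exact increase/decrease rule (doubling/halving, the bounds, the step size) is an illustrative assumption; it only shows the mechanism of comparing median objective values between consecutive slots of S iterations.

```python
import numpy as np

def adapt_popsize(f, x0, popsize=10, S=20, n_slots=10, seed=1):
    """Toy random search whose population size is adjusted after each
    slot of S iterations, based on whether the median objective value
    of the slot improved over the previous slot (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    prev_median = np.inf
    for _ in range(n_slots):
        slot_medians = []
        for _ in range(S):
            candidates = x + 0.1 * rng.standard_normal((popsize, len(x)))
            values = np.array([f(c) for c in candidates])
            best = values.argmin()
            if values[best] < f(x):
                x = candidates[best]
            slot_medians.append(np.median(values))
        median = np.median(slot_medians)
        if median >= prev_median:
            popsize = min(2 * popsize, 1000)  # stagnation: explore more
        else:
            popsize = max(popsize // 2, 4)    # progress: fewer samples suffice
        prev_median = median
    return x, popsize
```

The same slot bookkeeping can be wrapped around any iterative optimizer whose population size is exposed as a parameter.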

Benchmarking budget-dependent algorithms (whose parameters may depend on the given budget of function evaluations) is typically done for a fixed budget or set of budgets. This, however, has the disadvantage that the reported function values at this budget are difficult to interpret. Furthermore, assessing performance in this way gives no hint of how an algorithm would behave for other budgets. Instead, in [8] we proposed a new way to do “Anytime Benchmarking of Budget-Dependent Algorithms” and implemented this functionality in our COCO platform. The idea is to run several experiments for varying budgets and to report target-based runtimes in the form of empirical cumulative distribution functions (also known as data profiles), as is done for anytime algorithms.
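The core computation behind such a display can be sketched in a few lines: extract, from a best-so-far trace, the first evaluation count at which each target is reached, then report the fraction of (target, run) pairs solved within a given budget. The helper names below are illustrative, not the COCO API.

```python
import numpy as np

def runtimes_to_targets(history, targets):
    """history: best-so-far f-values, one per evaluation (1-indexed).
    Returns, for each target, the first evaluation count reaching it,
    or np.inf if the target is never reached."""
    history = np.asarray(history)
    out = []
    for t in targets:
        hit = np.nonzero(history <= t)[0]
        out.append(hit[0] + 1 if hit.size else np.inf)
    return np.array(out)

def ecdf(runtimes, budgets):
    """Fraction of (target, run) pairs solved within each budget:
    the empirical cumulative distribution / data-profile view."""
    runtimes = np.asarray(runtimes, dtype=float)
    return np.array([(runtimes <= b).mean() for b in budgets])
```

Aggregating such runtimes over many targets, runs, and budget settings yields the anytime-style data profiles described above, even though each individual experiment was run with a fixed budget.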