
Section: New Results

Comparing Continuous Optimizers Platform

Participants : Anne Auger, Dimo Brockhoff, Nikolaus Hansen, Umut Batu, Dejan Tušar.

Thanks to the ADT support for Dejan Tušar (since November, previously supported by ESA) and Umut Batu (since July), as well as an increased effort from the core development team, we made progress on several aspects of our Comparing Continuous Optimizers platform (COCO) in 2017.

Most notably, we provide new data-archive functionality that gives much easier access to the available data of 200+ algorithms. We also made significant progress towards a first constrained test suite; in particular, we added logging support for constrained problems. The postprocessing module is finally Python 3 compatible and accepts zip files as input. The reference worst f-values-of-interest are exposed to the (multiobjective) solver, algorithms can now be displayed in the background, and simplified example experiment scripts (in Python) are available for both anytime and budget-dependent algorithms (see also [8]). We also improved our continuous integration support, now using CircleCI and AppVeyor in addition to Inria's Jenkins system. Version 2.0, released in January 2017, introduced reference algorithms for the multiobjective test suite, a new reference-algorithm format that allows any existing data set to be used as reference, improved HTML output and navigation, the COCO version number as part of the plots, and new regression tests for all provided test suites.
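To illustrate what a budget-dependent algorithm in such an example experiment looks like, here is a minimal, self-contained sketch of a random-search optimizer run against a stand-in test function. This is not the COCO/cocoex API: the names `random_search` and `sphere` are hypothetical placeholders for a COCO test suite and the benchmarked solver, used only to show the budget-based evaluation loop.

```python
import random

def random_search(f, dim, budget, lower=-5.0, upper=5.0, seed=0):
    """Minimal budget-dependent optimizer: sample uniformly in the
    box [lower, upper]^dim for `budget` evaluations, keep the best."""
    rng = random.Random(seed)
    best_x, best_f = None, float("inf")
    for _ in range(budget):
        x = [rng.uniform(lower, upper) for _ in range(dim)]
        fx = f(x)
        if fx < best_f:  # record the best-so-far solution
            best_x, best_f = x, fx
    return best_x, best_f

# Stand-in for a COCO test problem: the sphere function, optimum at 0.
def sphere(x):
    return sum(xi * xi for xi in x)

best_x, best_f = random_search(sphere, dim=3, budget=2000)
```

In the actual COCO example experiments, the loop over evaluations is replaced by a loop over the problems of a test suite, with an observer attached so that every function evaluation is logged for the postprocessing.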

Figure: COCO facts for 2017.

Currently, we are working on a complete rewrite of the postprocessing (ADT COCOpost project of Umut Batu), an improved cocoex module that provides test suites, functions, data loggers, etc. in Python (ADT COCOpysuites of Dejan Tušar), a first constrained test suite (in particular with Asma Atamna via the PGMO project NumBER), and a large-scale test suite (part of Konstantinos Varelas' PhD thesis, based on the PhD work of Ouassim AitElHara).

Finally, we continued to use COCO for teaching, in particular for the group project ("contrôle continu", continuous assessment) of our Introduction to Optimization lecture (about 40 Master students) and the Derivative-Free Optimization lecture at Université Paris-Sud (about 30 Master students).