## Section: New Results

### Continuous Optimization

Participants: Anne Auger, Nikolaus Hansen, Mohamed Jebalia, Marc Schoenauer, Olivier Teytaud, Raymond Ros, Fabien Teytaud, Dimo Brockhoff, Zyed Bouzarkouna.

Research in continuous optimization at TAO is centered on stochastic search algorithms, most often population-based and derivative-free. Emphasis is placed on theoretical and algorithmic aspects, as well as on the comparative assessment of diverse (stochastic and deterministic) search methods. One key feature of stochastic optimization algorithms is the online adaptation of the algorithm parameters, at the intersection of the Continuous Optimization and Crossing the Chasm SIGs. The study focuses on the Covariance Matrix Adaptation Evolution Strategy (CMA-ES), an algorithm that adapts the covariance matrix of the Gaussian mutation of an Evolution Strategy.
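The mutation step that the covariance matrix adaptation acts upon can be sketched as follows. This is a minimal illustration of sampling candidates from N(m, σ²C) via an eigendecomposition of C, not the team's implementation; all function names and parameters are chosen for illustration only.

```python
import numpy as np

def sample_population(mean, sigma, C, lam, rng):
    """Draw lam candidate solutions from N(mean, sigma^2 * C).

    C is decomposed as B diag(d^2) B^T; each standard normal vector z
    is transformed into B (d * z), which has covariance matrix C.
    """
    eigvals, B = np.linalg.eigh(C)             # C = B diag(eigvals) B^T
    d = np.sqrt(np.maximum(eigvals, 0.0))      # axis lengths of the ellipsoid
    z = rng.standard_normal((lam, len(mean)))  # isotropic samples
    return mean + sigma * (z * d) @ B.T        # shape (lam, n)

rng = np.random.default_rng(0)
pop = sample_population(np.zeros(3), 0.5, np.eye(3), lam=10, rng=rng)
```

Adapting C over the iterations reshapes this sampling ellipsoid to the local structure of the objective function.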

#### Performance assessment

In recent decades, quite a few bio-inspired algorithms have been proposed to handle continuous optimization problems; in the meantime, gradient-based quasi-Newton algorithms, pattern search algorithms and, more recently, Derivative-Free Optimization algorithms have been proposed, with provable convergence guarantees under more or less restrictive conditions. We have conducted comparisons of several bio-inspired algorithms (CMA-ES, Particle Swarm Optimization (PSO), Differential Evolution (DE)) with the deterministic derivative-free BFGS and NEWUOA algorithms [21], [32], [6], addressing the known shortcomings of the literature: few optimizers are usually tested within a single paper; different papers commonly use different experimental settings, hampering the comparability of results; and comparisons are often biased toward certain types of problems (e.g., Evolutionary Algorithms (EAs) have long been tested mostly on separable problems).

For these reasons we organized a workshop on Black Box Optimization Benchmarking (BBOB), http://coco.gforge.inria.fr/doku.php?id=bbob-2009 , at the ACM Genetic and Evolutionary Computation Conference (GECCO 2009). Participants were provided with (1) the implementation of a well-motivated noiseless [109], [102] and noisy [110], [103] benchmark function testbed, (2) the experimental set-up [107], (3) the implementation of the testbeds in Matlab and C, including generation of data output, and (4) post-processing tools in Python for the presentation of the results in graphs and tables [107].

We have used the BBOB-2009 set-up for benchmarking various evolutionary algorithms [26], [25], [30], [31], [53], [54], [72], [73], [64] as well as derivative-free optimizers [55], [76], [77] and BFGS [74], [75], also in comparison to pure Monte-Carlo search [34], [33]. The analysis of the results from the workshop (about 50 benchmarking data sets) is ongoing.

#### Optimization in presence of uncertainties

Evolutionary algorithms (EAs) are known to be robust in the presence of uncertainties, i.e., when the objective function is noisy (for example, the function can be the result of a lab experiment whose outcome fluctuates with the temperature). The robustness of EAs in terms of convergence and convergence rates has been theoretically investigated in [16]. Moreover, the deficiency of formulating noisy optimization problems as the minimization of an expectation for multiplicative noise models has been pointed out [16]. Part of this work was done in the context of the ANR/RNTL OMD project. In the context of Steffen Finck's visit (until the end of 2008, in collaboration with Vorarlberg, Austria), we investigated the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm, designed for the optimization of noisy objective functions, and the related Evolutionary Gradient Search (EGS) method. A multi-start version of CMA-ES (BIPOP-CMA-ES) has shown superior results on the BBOB-2009 noisy testbed [54]. Complexity lower and upper bounds have been shown in [71] for noisy optimization with variance decreasing to 0. Finally, a new algorithm for measuring and handling uncertainty in rank-based search algorithms has been proposed [15] and applied to CMA-ES for the online optimization of controllers. The case of structured spaces has also been considered, with the publication of the first penalization rule provably avoiding the “bloat” effect [22].
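The principle of measuring uncertainty in a rank-based algorithm can be illustrated by re-evaluating the population and counting rank changes. The sketch below is a simplified illustration of this idea under assumed names (`rank_change_uncertainty`, `sphere_noisy`), not the algorithm published in [15].

```python
import numpy as np

def rank_change_uncertainty(f_noisy, candidates, rng):
    """Evaluate each candidate twice and measure how much the induced
    ranking changes; a large average rank shift signals that the noise
    level is too high for reliable rank-based selection."""
    first = np.array([f_noisy(x, rng) for x in candidates])
    second = np.array([f_noisy(x, rng) for x in candidates])
    ranks1 = np.argsort(np.argsort(first))    # rank of each candidate, run 1
    ranks2 = np.argsort(np.argsort(second))   # rank of each candidate, run 2
    return float(np.mean(np.abs(ranks1 - ranks2)))

def sphere_noisy(x, rng):
    """Sphere function with multiplicative noise."""
    return float(x @ x) * (1.0 + 0.5 * rng.standard_normal())

rng = np.random.default_rng(1)
xs = [rng.standard_normal(5) for _ in range(20)]
uncertainty = rank_change_uncertainty(sphere_noisy, xs, rng)
```

In an uncertainty-handling loop, a value above some threshold would typically trigger more re-evaluations or a larger population.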

#### Covariance Matrix Adaptation Evolution Strategy (CMA-ES)

In [19] a method for an efficient covariance matrix update is proposed. The update is quadratic in the dimension and can therefore be applied at each iteration step without changing the asymptotic time complexity of the algorithm. In Raymond Ros' PhD (defended in Dec. 2009), variants of CMA-ES with a reduced number of internal parameters (block-diagonal matrices) are presented and investigated [6]. In [97] the principal design ideas of the CMA-ES algorithm and its key properties, such as invariance, are explicated. In [95] an object-oriented approach to the implementation of optimization algorithms is proposed and, among others, the implementation of CMA-ES is presented in this framework.
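A generic covariance update of this flavor is the classical rank-one form C ← (1−c)C + c ppᵀ, whose outer product costs O(n²) per step. The sketch below is a textbook illustration of that cost argument, not the specific update proposed in [19].

```python
import numpy as np

def rank_one_update(C, p, c=0.1):
    """Rank-one covariance update: C <- (1 - c) * C + c * p p^T.

    The outer product costs O(n^2) operations, so the update can be
    applied at every iteration without raising the per-step asymptotic
    cost of an algorithm that already works with the full matrix C.
    """
    return (1.0 - c) * C + c * np.outer(p, p)

C = rank_one_update(np.eye(2), np.array([1.0, 0.0]), c=0.5)
```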

#### Multi-Objective optimization

Multi-objective (MO) optimization (or vector optimization) consists in simultaneously optimizing two or more conflicting objective functions. Recent MO algorithms are based on selecting a set of solutions that maximizes the hypervolume, i.e., the volume enclosed between the set and a given reference point. The spread of optimal sets of points on the Pareto front and the influence of the reference point have been investigated [29], [7]. Efficient methods for computing the hypervolume while setting the decision maker's preferences have been proposed [28], [27]. Based on previous work on Covariance Matrix Adaptation for MO (C. Igel, N. Hansen, and S. Roth. Covariance matrix adaptation for multi-objective optimization. *Evolutionary Computation*, 15(1):1–28, 2007.), a recombination scheme for the strategy parameters in the MO-CMA-ES has recently been developed [81]. Furthermore, within the GENNETEC project, MO-CMA-ES was applied to the identification of the parameters of an ODE-based model of a Genetic Regulatory Network [49]. This work obtained the Best Paper award at the EvoBIO'09 conference.
A book chapter on the theory of multi-objective algorithms has been written [94] .
Ilya Loschilov's PhD started in Sept. 2009 within the CSDL project, aiming at learning and exploiting a model of the Pareto front for the multi-objective optimization of expensive functions; note that this work will be immediately relevant to Mouadh Yagoubi's PhD (CIFRE PSA).
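For two objectives, the hypervolume indicator reduces to an area and can be computed by a single sweep over the sorted non-dominated points. The following sketch illustrates the principle, assuming both objectives are minimized; function and variable names are illustrative.

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D objective vectors with respect to
    the reference point `ref`, assuming both objectives are minimized."""
    front = []
    best_f2 = float("inf")
    for f1, f2 in sorted(points):          # sweep by increasing f1
        if f2 < best_f2:                   # keep non-dominated points only
            front.append((f1, f2))
            best_f2 = f2
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:                   # accumulate axis-aligned slabs
        area += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return area

hv = hypervolume_2d([(1.0, 3.0), (2.0, 2.0), (3.0, 1.0)], ref=(4.0, 4.0))
```

The sweep makes explicit why the choice of reference point matters: moving `ref` away from the front inflates the slabs of the extreme points and hence their selection pressure.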

#### Complexity bounds and parallel optimization

The non-validity of the No-Free-Lunch theorem in continuous optimization and the design of optimal continuous optimization algorithms have been investigated [8].

[12] provided a complete and general set of bounds extending the state of the art for several variants of evolutionary algorithms. In particular, this has shown the large gap between the (upper and lower) complexity bounds and experimental results. The case of parallel executions has been analysed, showing that most evolutionary algorithms are in fact poorly parameterized for the parallel case [79]. The convergence rate has been improved with respect to the state of the art by a factor that goes to infinity with the number of processors. [80] also considered the risk of premature convergence in Estimation of Distribution Algorithms, a very important family of algorithms. A simple one-line modification is proposed, with proven asymptotic properties when the population size (often related to the number of processors) is large.

#### On the efficiency of derandomization

It is now known that the technique of quasi-random mutations, developed in 2008, can also be applied to other algorithms [45].
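As a rough illustration of the idea (not the method of [45]), low-discrepancy points can replace pseudo-random draws in a Gaussian mutation by pushing a Halton point set coordinate-wise through the inverse normal CDF. All names and the choice of bases below are illustrative.

```python
from statistics import NormalDist

def van_der_corput(i, base):
    """i-th term of the van der Corput sequence in the given base."""
    f, r = 1.0, 0.0
    while i > 0:
        f /= base
        r += f * (i % base)
        i //= base
    return r

def quasi_gaussian_mutations(n, dim, bases=(2, 3, 5, 7, 11)):
    """n low-discrepancy standard-normal mutation vectors: a Halton
    point set in [0,1)^dim mapped through the inverse normal CDF."""
    inv = NormalDist().inv_cdf
    eps = 1e-12                      # keep arguments strictly inside (0, 1)
    return [[inv(min(max(van_der_corput(i + 1, bases[d]), eps), 1 - eps))
             for d in range(dim)]
            for i in range(n)]

mutations = quasi_gaussian_mutations(8, 2)
```

Compared to independent pseudo-random draws, such point sets cover the mutation distribution more evenly, which is the derandomization effect being exploited.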

In addition, the team applied evolutionary algorithms to optimizing the discrepancy of a point set; this joint work with Laval University in Quebec [48] obtained the Best Paper Award in the “Real World Application” track of the ACM GECCO conference.

#### Calibration for traffic simulation and for well placement

In the context of the ANR Travesti project coordinated by Cyril Furtlehner, we are investigating the calibration of mesoscopic car traffic simulators. As the simulator, we use the METROPOLIS software, an agent-based traffic simulator developed by Fabrice Marchal (formerly at Laboratoire d'Economie des Transports in Lyon). In particular, we study the influence of several parameters of the simulator, such as vehicle length and the origins and destinations of the cars, on the traffic prediction with respect to the fit with simulated data (expected to be real-world dynamic data from traveling cars in the future). The optimization itself is noisy and is therefore carried out with a noise-handling version of CMA-ES (work in progress by Dimo Brockhoff in collaboration with Anne Auger and Fabrice Marchal). The problem of the placement of oil wells to maximize the productivity of each well is investigated in the PhD thesis of Zyed Bouzarkouna (CIFRE IFP). CMA-ES coupled with meta-models is used for that purpose, the objective function taking several minutes to evaluate.
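The meta-model coupling can be illustrated with a simple quadratic surrogate that ranks candidates before the expensive simulator is called. This is a generic sketch under assumed names and a separable least-squares model, not the implementation used in the thesis.

```python
import numpy as np

def quadratic_features(X):
    """Features [1, x_i, x_i^2] for a separable quadratic meta-model."""
    X = np.atleast_2d(X)
    return np.hstack([np.ones((len(X), 1)), X, X**2])

def surrogate_rank(X_eval, y_eval, X_new):
    """Fit the meta-model on already-evaluated points and rank new
    candidates by predicted objective value (lowest first), so that
    only the most promising ones reach the expensive simulator."""
    coef, *_ = np.linalg.lstsq(quadratic_features(X_eval), y_eval, rcond=None)
    pred = quadratic_features(X_new) @ coef
    return np.argsort(pred)

rng = np.random.default_rng(2)
X = rng.standard_normal((20, 2))
y = (X**2).sum(axis=1)               # cheap stand-in for a costly simulation
order = surrogate_rank(X, y, np.array([[2.0, 2.0], [0.0, 0.0], [1.0, 1.0]]))
```

Here the surrogate recovers the quadratic test objective exactly, so the candidate at the origin is ranked first; with a minutes-long objective, such pre-ranking saves most of the evaluation budget.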

#### And beyond

Beyond continuous optimization, our fundamental research is well integrated into the field of “Theory of Randomized Search Heuristics”, which deals with the theory of optimization algorithms for discrete and continuous search spaces, as witnessed by the book “Theory of Randomized Search Heuristics – Foundations and Recent Developments” [99].