Team TAO


Section: New Results

Continuous Optimization

Participants: Anne Auger, Nikolaus Hansen, Mohamed Jebalia, Marc Schoenauer, Olivier Teytaud, Raymond Ros, Fabien Teytaud.

Research in continuous optimization at TAO is centered on stochastic search algorithms that are often population-based and typically derivative-free. We are interested in fundamental aspects, in algorithm design, and in the comparison of different search methods, both stochastic and deterministic. One key feature of stochastic optimization algorithms is how the parameters of the search distributions are adapted during the search. Studies on adaptive algorithms are also part of the “Crossing the Chasm” module; they build on the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which adapts the covariance matrix of the Gaussian mutation of an Evolution Strategy.
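To make the adaptation mechanism concrete, the following Python sketch implements a heavily simplified evolution strategy with a rank-mu covariance matrix update. It is illustrative only: the actual CMA-ES additionally uses evolution paths, a rank-one update and cumulative step-size adaptation, all omitted here, and the step-size is kept fixed; all names are ours.

    import numpy as np

    def sphere(x):
        return float(np.dot(x, x))

    def rank_mu_es(f, x0, sigma=0.3, iters=200, seed=0):
        # Simplified ES with a rank-mu covariance update (not full CMA-ES).
        rng = np.random.default_rng(seed)
        n = len(x0)
        lam = 4 + int(3 * np.log(n))          # offspring population size
        mu = lam // 2                         # number of selected parents
        w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
        w /= w.sum()                          # positive, decreasing weights
        mean = np.asarray(x0, dtype=float)
        C = np.eye(n)                         # covariance of the Gaussian mutation
        c_mu = min(1.0, mu / n**2)            # learning rate of the rank-mu update
        for _ in range(iters):
            A = np.linalg.cholesky(C)
            y = rng.standard_normal((lam, n)) @ A.T   # correlated mutation steps
            X = mean + sigma * y
            sel = np.argsort([f(x) for x in X])[:mu]  # rank-based selection
            y_sel = y[sel]
            mean = mean + sigma * (w @ y_sel)         # weighted recombination
            # shift C toward the covariance of the selected steps
            C = (1 - c_mu) * C + c_mu * sum(
                wi * np.outer(yi, yi) for wi, yi in zip(w, y_sel))
        return mean

    print(sphere(rank_mu_es(sphere, np.ones(5))))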

Performance assessment

In recent decades, numerous algorithms taking inspiration from nature have been proposed to handle continuous optimization problems. On the other hand, gradient-based and quasi-Newton algorithms, pattern search algorithms and, more recently, Derivative-Free Optimization (DFO) algorithms have been proposed and improved by applied mathematicians, relying on theoretical studies that guarantee their convergence under (more or less) restrictive hypotheses. We have compared several bio-inspired algorithms (CMA-ES, Particle Swarm Optimization (PSO), Differential Evolution (DE)) with the deterministic algorithms BFGS (quasi-Newton) and NEWUOA (derivative-free) [50]. Moreover, though comparisons of the performance of different optimizers appear in many research studies, few optimizers are usually tested within a single work, different works rarely use the same experimental settings, which hampers comparability, and comparisons are often biased towards certain types of test functions: for a long time, most Evolutionary Algorithms (EAs) were tested mainly on separable problems. For those reasons we are preparing a workshop on Black-Box Optimization Benchmarking (http://coco.gforge.inria.fr/doku.php?id=bbob-2009) at the next Genetic and Evolutionary Computation Conference (GECCO 2009), where we will provide participants with (1) the implementation of a well-motivated benchmark function testbed, (2) the experimental set-up, (3) the generation of data output, and (4) the post-processing and presentation of the results in graphs and tables.
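The core of such a set-up is a harness that counts objective-function evaluations and records when a target value is first reached; success rates and expected running times (ERT) can then be derived across repeated runs. The Python sketch below illustrates the idea with a hypothetical random-search baseline; it is not the actual BBOB/COCO code, and all names are illustrative.

    import numpy as np

    def count_evals(f, f_target):
        # Wrap f so that calls are counted and the first hitting time recorded.
        state = {"evals": 0, "hit": None}
        def wrapped(x):
            state["evals"] += 1
            val = f(x)
            if state["hit"] is None and val <= f_target:
                state["hit"] = state["evals"]
            return val
        return wrapped, state

    def expected_running_time(hits, max_evals):
        # ERT = evaluations spent over all runs / number of successful runs.
        succ = [h for h in hits if h is not None]
        if not succ:
            return float("inf")
        return (sum(succ) + (len(hits) - len(succ)) * max_evals) / len(succ)

    def random_search(f, dim, max_evals, rng):
        for _ in range(max_evals):
            f(rng.uniform(-5, 5, dim))

    rng = np.random.default_rng(42)
    hits, max_evals = [], 10_000
    for run in range(15):                      # 15 independent runs
        f, state = count_evals(lambda x: float(np.dot(x, x)), f_target=1.0)
        random_search(f, dim=3, max_evals=max_evals, rng=rng)
        hits.append(state["hit"])
    print("ERT:", expected_running_time(hits, max_evals))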

Optimization in presence of uncertainties

Evolutionary algorithms (EAs) are known to be robust in the presence of uncertainties, i.e., when the objective function is noisy (for example, the function can be the result of a lab experiment whose outcome fluctuates with the temperature). The robustness of EAs in terms of convergence and convergence rates has been theoretically investigated in [22], [1]. Moreover, the deficiency of formulating noisy optimization problems as the minimization of the expected fitness has been pointed out for multiplicative noise models [22]. Part of this work was done in the context of the ANR/RNTL OMD project. In the context of Steffen Finck's visit (collaboration with Vorarlberg, Austria), we are testing the Simultaneous Perturbation Stochastic Approximation (SPSA) algorithm, designed for the optimization of noisy objective functions. Finally, a new algorithm for measuring and handling uncertainty in rank-based search algorithms has been proposed [21], [4] and applied to CMA-ES. A revision of the uncertainty measurement and further methods for uncertainty treatment are work in progress.
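As a minimal illustration of rank-based uncertainty measurement, the Python sketch below evaluates each candidate twice and uses the average rank change between the two evaluations as an uncertainty indicator; this is a simplified version of the measure proposed in [21], [4], and the noisy test function is hypothetical.

    import numpy as np

    def rank_change_uncertainty(f, X):
        # Evaluate each candidate twice and rank all 2*lambda values jointly;
        # the mean rank change between the two evaluations measures how much
        # the noise perturbs a rank-based selection.
        lam = len(X)
        f1 = [f(x) for x in X]
        f2 = [f(x) for x in X]        # re-evaluation under noise
        ranks = np.argsort(np.argsort(np.concatenate([f1, f2])))
        return float(np.mean(np.abs(ranks[:lam] - ranks[lam:])))

    # hypothetical noisy sphere with multiplicative Gaussian noise
    rng = np.random.default_rng(0)
    def noisy_sphere(x):
        return float(np.dot(x, x)) * (1.0 + 0.5 * rng.standard_normal())

    X = rng.standard_normal((10, 4))   # 10 candidate solutions in dimension 4
    print("mean rank change:", rank_change_uncertainty(noisy_sphere, X))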

Multi-Objective optimization

Multi-objective (MO) optimization (or vector optimization) consists in optimizing two or more conflicting objective functions simultaneously. Recent MO algorithms are based on selecting a set of solutions maximizing the hypervolume defined by this set and a given reference point. The spread of the optimal set of points on the Pareto front and the influence of the reference point have been investigated [4]. Based on previous work on Covariance Matrix Adaptation for MO (C. Igel, N. Hansen, and S. Roth. Covariance matrix adaptation for multi-objective optimization. Evolutionary Computation, 15(1):1–28, 2007), a recombination scheme for the strategy parameters of the MO-CMA-ES has been recently developed [30].
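For two objectives, the hypervolume dominated by a non-dominated set with respect to a reference point reduces to a sweep over the points sorted by the first objective. The Python sketch below (with an illustrative front and reference point, both objectives to be minimized) shows the quantity that such hypervolume-based selection maximizes; it is not the MO-CMA-ES implementation itself.

    def hypervolume_2d(points, ref):
        # Area dominated by a non-dominated 2-D point set w.r.t. `ref`
        # (minimization of both objectives).
        pts = sorted(p for p in points if p[0] < ref[0] and p[1] < ref[1])
        hv, prev_y = 0.0, ref[1]
        for x, y in pts:               # sweep by increasing first objective
            hv += (ref[0] - x) * (prev_y - y)
            prev_y = y
        return hv

    front = [(1.0, 4.0), (2.0, 2.0), (3.0, 1.0)]   # illustrative Pareto set
    print(hypervolume_2d(front, ref=(5.0, 5.0)))   # prints 12.0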

Complexity bounds

Theoretical bounds have been derived for black-box optimization [29], showing the ultimate limits of evolutionary algorithms and of algorithms satisfying certain optimality properties; billiard algorithms were used to implement algorithms as close as possible to these theoretical limits.

Also, the non-validity of the No-Free-Lunch theorem in continuous optimization, and its implications for the design of continuous optimization algorithms, have been investigated [2].

On the efficiency of derandomization

The efficiency of quasi-random points has been analyzed in [28] and applied to CMA-ES. These results show in particular that this “derandomization” is very stable and performs better for wide families of criteria. In [49], a two-point step-size adaptation rule has been proposed for evolution strategies; the new rule is a derandomized implementation of self-adaptation and can be advantageous in noisy environments.
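One standard way to obtain quasi-random Gaussian mutations is to map low-discrepancy points through the inverse normal CDF. The Python sketch below contrasts scrambled Sobol samples with pseudo-random ones; it illustrates the general idea rather than the exact scheme analyzed in [28].

    import numpy as np
    from scipy.stats import norm, qmc

    def gaussian_mutations(n, dim, quasi=True, seed=1):
        # Standard-normal mutation vectors from quasi- or pseudo-random points.
        if quasi:
            u = qmc.Sobol(d=dim, scramble=True, seed=seed).random(n)
            return norm.ppf(u)         # inverse-CDF transform to N(0, I)
        return np.random.default_rng(seed).standard_normal((n, dim))

    # Quasi-random samples cover the space more evenly; e.g. the squared
    # norms of the mutation vectors typically fluctuate less across the sample.
    for quasi in (True, False):
        z = gaussian_mutations(128, 10, quasi=quasi)   # 128 = power of two
        print("quasi " if quasi else "pseudo", np.std((z ** 2).sum(axis=1)))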

