Section: New Results
Crossing the Chasm
Participants: Alejandro Arbelaez, Anne Auger, Robert Busa-Fekete, Luis Da Costa, Alvaro Fialho, Nikolaus Hansen, Balázs Kégl, Marc Schoenauer, Michèle Sebag.
Many forefront techniques in both Machine Learning and Stochastic Search have been very successful in solving difficult real-world problems. However, their application to newly encountered problems, or even to new instances of known problems, remains a challenge, even for experienced researchers in the field, not to mention newcomers, even if they are skilled scientists or engineers from other areas. Theory and/or practical tools are still missing to help them cross the chasm (in the words of Geoffrey A. Moore's book on the diffusion of innovation). The difficulties faced by users arise mainly from the wide range of algorithm and/or parameter choices involved in using this type of approach, and from the lack of guidance on how to select them. Moreover, state-of-the-art approaches to real-world problems tend to be bespoke, problem-specific methods that are expensive to develop and maintain. Several ongoing works at TAO are concerned with "Crossing the Chasm", be it in the framework of the joint MSR-INRIA lab in collaboration with Youssef Hamadi (Microsoft Research Cambridge), or within the EvoTest project, where TAO is in charge of the automatic generation of the Evolutionary Engine.
Note that a longer-term goal, which could be useful for all of the ongoing work described below, is the design of accurate descriptors that would allow us to characterize a given problem (or instance). From there, we would be able to learn from extensive experiments which algorithms/parameters are good for classes of instances, or even for individual instances, as has been done in the SAT domain by Y. Hamadi and co-authors (F. Hutter, Y. Hamadi, H. H. Hoos, and K. Leyton-Brown: Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms, CP 2006).
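As a sketch of this direction, the following illustrates how instance descriptors could drive algorithm selection: each instance is mapped to a small feature vector, and the algorithm that performed best on the nearest known instance is chosen. The feature names, data, and algorithm labels below are invented for illustration and are not taken from the cited work.

```python
import math

# Hypothetical instance descriptors: each problem instance is summarized by a
# small feature vector (e.g. size, constraint density, ruggedness estimate).
# From a corpus of past runs we record which algorithm performed best on each.
training = [
    # (features, best algorithm) -- illustrative data, not real measurements
    ((100, 0.2, 0.8), "algo_A"),
    ((100, 0.9, 0.1), "algo_B"),
    ((500, 0.3, 0.7), "algo_A"),
    ((500, 0.8, 0.2), "algo_B"),
]

def select_algorithm(features, corpus=training):
    """Pick the algorithm that was best on the nearest known instance."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    _, best = min(corpus, key=lambda entry: dist(entry[0], features))
    return best
```

A real system would use far richer descriptors and a learned performance model rather than a nearest-neighbour lookup, but the interface is the same: features in, algorithm (or parameter setting) out.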
Adaptive Operator Selection
In order to adapt online the mechanism that chooses among the different variation operators in Evolutionary Algorithms, we have proposed two original features:

Using a Multi-Armed Bandit algorithm for operator selection [13]: each operator is viewed as an arm of a MAB problem. Because the context is dynamic, a statistical test (Page-Hinkley) is used to detect abrupt changes and restart the bandit;

Using extreme values rather than averages as the reward for operators: it has been advocated in many domains that extremely rare but extremely beneficial events can be much more consequential than average good events. This has been validated on the OneMax problem, where the optimal strategy for a given fitness level is known [15], as well as on k-path problems (paper to be presented at the LION'09 conference in Trento, January 2009).
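The two mechanisms above can be sketched together as follows. This is an illustrative reconstruction, not the implementation behind [13] and [15]: the class name, parameter values, and the exact Page-Hinkley variant are assumptions.

```python
import math

class ExtremeBandit:
    """Sketch of bandit-based Adaptive Operator Selection: each variation
    operator is an arm; an operator's credit is the maximum fitness
    improvement in a sliding window of its recent applications (extreme
    value rather than average); a Page-Hinkley-style test restarts the
    bandit when the reward stream changes abruptly."""

    def __init__(self, n_ops, C=1.0, window=50, ph_delta=0.05, ph_lambda=10.0):
        self.n_ops, self.C, self.window = n_ops, C, window
        self.ph_delta, self.ph_lambda = ph_delta, ph_lambda
        self.reset()

    def reset(self):
        self.counts = [0] * self.n_ops
        self.recent = [[] for _ in range(self.n_ops)]  # sliding reward windows
        self.ph_sum, self.ph_min, self.ph_mean, self.ph_n = 0.0, 0.0, 0.0, 0

    def credit(self, op):
        # Extreme-value credit: best improvement seen in the window.
        return max(self.recent[op]) if self.recent[op] else 0.0

    def select(self):
        # Play each arm once first, then apply a UCB-style rule.
        for op in range(self.n_ops):
            if self.counts[op] == 0:
                return op
        total = sum(self.counts)
        return max(range(self.n_ops),
                   key=lambda op: self.credit(op)
                   + self.C * math.sqrt(2 * math.log(total) / self.counts[op]))

    def update(self, op, improvement):
        self.counts[op] += 1
        self.recent[op].append(improvement)
        if len(self.recent[op]) > self.window:
            self.recent[op].pop(0)
        # Page-Hinkley change detection on the reward stream (one of
        # several possible variants; the drift parameter is ph_delta).
        self.ph_n += 1
        self.ph_mean += (improvement - self.ph_mean) / self.ph_n
        self.ph_sum += self.ph_mean - improvement - self.ph_delta
        self.ph_min = min(self.ph_min, self.ph_sum)
        if self.ph_sum - self.ph_min > self.ph_lambda:
            self.reset()  # abrupt change detected: restart the bandit
```

In an evolutionary loop, `select()` would pick the variation operator to apply to the next offspring and `update()` would be fed the resulting fitness improvement.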

Ongoing work is investigating the combination of the above ideas with the Compass approach of our colleagues from Angers University (J. Maturana, F. Saubion: A Compass to Guide Genetic Algorithms. PPSN 2008: 256–265).
Adaptation for Continuous Optimization
Building on the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which adapts the covariance matrix of the Gaussian mutation of an Evolution Strategy based on the path followed by the evolution, several improvements and generalizations have been proposed:

an adaptive encoding that can be applied to any distribution-based search strategy has been proposed [20]. The mechanism renders the underlying search strategy rotationally invariant and facilitates adaptation to non-separable problems. On non-separable, badly scaled problems, adaptive encoding can improve the performance of many search algorithms by orders of magnitude.

a version of CMA-ES with linear computational complexity and linear space requirements has been proposed in [27] (compared to quadratic for the original algorithm). In high-dimensional search spaces (dimension larger than one hundred), the new variant can be advantageous not only on cheap-to-evaluate search problems but even on very expensive non-separable problems.
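To illustrate why an encoding helps, the following sketch shows how a linear change of representation can make a rotated, ill-conditioned ellipsoid separable for a simple coordinate-wise search. For simplicity the encoding is fixed here to the inverse rotation; the mechanism of [20] instead learns the encoding online with a CMA-like update, and the search routine below is a toy stand-in, not any algorithm from the cited papers.

```python
import math
import random

random.seed(1)

# Non-separable test function: an ill-conditioned 2-D ellipsoid under a
# fixed rotation R (angle theta), f(x) = z0^2 + 1e4 * z1^2 with z = R x.
theta = 0.7
c, s = math.cos(theta), math.sin(theta)

def rotate(x):   # z = R x
    return (c * x[0] - s * x[1], s * x[0] + c * x[1])

def f(x):
    z = rotate(x)
    return z[0] ** 2 + 1e4 * z[1] ** 2

def decode(y):   # x = B y with B = R^{-1} = R^T: the (here fixed) encoding
    return (c * y[0] + s * y[1], -s * y[0] + c * y[1])

def coordinate_search(g, x, step=1.0, iters=4000):
    """A deliberately simple separable optimizer: axis-parallel trial steps,
    accepting only improvements."""
    x, fx = list(x), g(x)
    for t in range(iters):
        y = list(x)
        y[t % 2] += step * random.gauss(0.0, 1.0)
        fy = g(y)
        if fy < fx:
            x, fx = y, fy
        step *= 0.999
    return x, fx

# Searching in encoded coordinates y (evaluating at x = B y) turns the
# rotated ellipsoid back into a separable one for the coordinate search.
x_plain, f_plain = coordinate_search(f, (1.0, 1.0))
y_enc, f_enc = coordinate_search(lambda y: f(decode(y)), (1.0, 1.0))
```

With the right encoding, `f(decode(y))` reduces to the separable function `y0**2 + 1e4 * y1**2`, which is exactly the situation coordinate-wise methods handle well.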
Meta-parameter Tuning for Machine Learning Algorithms
Non-parametric learning algorithms usually require the tuning of hyper-parameters that determine the complexity of the learning machine. Tuning these parameters is usually done manually, based on (cross-)validation schemes. The goal of this theme is to develop principled methods to carry out this optimization task automatically, using global optimization algorithms. The theme is part of the MetaModel project ( https://users.web.lal.in2p3.fr/kegl/metamodel ).
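A minimal sketch of the task, with random search standing in for the global optimizers under study (the data, the one-dimensional ridge model, and the parameter range are all illustrative): a regularization penalty is tuned by minimizing K-fold cross-validation error.

```python
import random

random.seed(0)

# Toy data: y = 2*x + noise. We tune the ridge penalty lam automatically
# by K-fold cross-validation combined with a search over candidate values.
data = [(i / 10.0, 2.0 * i / 10.0 + random.gauss(0.0, 0.1)) for i in range(40)]

def ridge_fit(train, lam):
    """Closed-form 1-D ridge solution: w = sum(x*y) / (sum(x*x) + lam)."""
    sxx = sum(x * x for x, _ in train)
    sxy = sum(x * y for x, y in train)
    return sxy / (sxx + lam)

def cv_error(lam, data, k=5):
    """Mean squared validation error over k folds."""
    folds = [data[i::k] for i in range(k)]
    err = 0.0
    for i in range(k):
        test = folds[i]
        train = [p for j, f in enumerate(folds) if j != i for p in f]
        w = ridge_fit(train, lam)
        err += sum((y - w * x) ** 2 for x, y in test) / len(test)
    return err / k

# Random search over log-spaced penalties; a principled global optimizer
# would replace this loop with a smarter sampling strategy.
candidates = [10.0 ** random.uniform(-4, 2) for _ in range(50)]
best_lam = min(candidates, key=lambda lam: cv_error(lam, data))
```

The cross-validation score plays the role of the (noisy, expensive) objective function that the global optimization algorithm has to minimize over the hyper-parameter space.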
Learning Heuristics Choice in Constraint Programming
Several heuristics have been proposed to choose which branch to explore next within Constraint Programming algorithms. The idea we are exploring is to learn which one is best given the characteristics of the current node of the search tree (e.g., domain sizes, number of still-unsatisfied constraints, etc.) [9].
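The idea can be sketched as follows; the feature set, the two candidate heuristics, and the hand-written decision rule below are illustrative placeholders for the selection policy that [9] learns from data.

```python
def node_features(domains, constraints):
    """Cheap features of the current node: min/mean domain size of the
    unassigned variables, number of still-unsatisfied constraints."""
    sizes = [len(d) for d in domains.values() if len(d) > 1]
    unsat = sum(1 for c in constraints if not c["satisfied"])
    return {
        "min_dom": min(sizes) if sizes else 1,
        "mean_dom": sum(sizes) / len(sizes) if sizes else 1.0,
        "unsat": unsat,
    }

def min_domain(domains):
    """Classic fail-first heuristic: branch on the smallest domain."""
    return min((v for v, d in domains.items() if len(d) > 1),
               key=lambda v: len(domains[v]))

def max_degree(domains, degree):
    """Branch on the unassigned variable involved in most constraints."""
    return max((v for v, d in domains.items() if len(d) > 1),
               key=lambda v: degree[v])

def choose_heuristic(feats):
    """Stand-in for the learned policy: a hand-written rule here; in the
    approach of [9] this mapping is learned from search traces."""
    if feats["unsat"] > 5 or feats["min_dom"] <= 2:
        return "min_domain"
    return "max_degree"
```

At each node of the search tree, the solver would compute the features, ask the (learned) selector for a heuristic, and use the returned heuristic to pick the branching variable.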