Team TAO


Section: New Results

Crossing the Chasm

Participants : Alejandro Arbelaez, Anne Auger, Robert Busa-Fekete, Luis Da Costa, Alvaro Fialho, Nikolaus Hansen, Balázs Kégl, Marc Schoenauer, Michèle Sebag.

Many forefront techniques in both Machine Learning and Stochastic Search have been very successful in solving difficult real-world problems. However, their application to newly encountered problems, or even to new instances of known problems, remains a challenge, even for experienced researchers of the field, not to mention newcomers, even if they are skilled scientists or engineers from other areas. Theory and/or practical tools are still missing to help them cross the chasm (from Geoffrey A. Moore's book on the diffusion of innovation). The difficulties faced by users arise mainly from the wide range of algorithm and/or parameter choices involved in using such approaches, and from the lack of guidance on how to select them. Moreover, state-of-the-art approaches to real-world problems tend to be bespoke, problem-specific methods that are expensive to develop and maintain. Several on-going works at TAO are concerned with "Crossing the Chasm", be it in the framework of the joint MSR-INRIA lab in collaboration with Youssef Hamadi (Microsoft Research Cambridge), or within the EvoTest project, where TAO is in charge of the automatic generation of the Evolutionary Engine.

Note that a longer-term goal, potentially useful for all of the on-going work described below, is the design of accurate descriptors characterizing a given problem (or instance). From there, one could learn from extensive experiments which algorithms/parameters are good for classes of instances, or even for individual instances, as has been done in the SAT domain by Y. Hamadi and co-authors (F. Hutter, Y. Hamadi, H. H. Hoos, and K. Leyton-Brown. Performance Prediction and Automated Tuning of Randomized and Parametric Algorithms. CP'2006).
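As a hedged illustration of this per-instance selection idea (a sketch over synthetic placeholder data, not results from the project), one can fit one empirical performance model per algorithm on instance descriptors and then pick the algorithm with the best predicted performance:

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    rng = np.random.default_rng(0)
    n_instances, n_descriptors, n_algorithms = 300, 5, 3
    descriptors = rng.random((n_instances, n_descriptors))      # instance features
    runtimes = rng.lognormal(size=(n_instances, n_algorithms))  # synthetic placeholder

    # One empirical performance model per candidate algorithm.
    models = [GradientBoostingRegressor().fit(descriptors, runtimes[:, a])
              for a in range(n_algorithms)]

    def select_algorithm(x):
        """Pick the algorithm with the smallest predicted runtime on instance x."""
        x = np.asarray(x).reshape(1, -1)
        return int(np.argmin([m.predict(x)[0] for m in models]))

    print(select_algorithm(rng.random(n_descriptors)))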

Adaptive Operator Selection

In order to adapt on-line the mechanism that chooses among the different variation operators in Evolutionary Algorithms, we have proposed two original features: a credit assignment scheme based on extreme values, which rewards an operator according to the largest fitness improvement it has recently produced, and an operator selection rule based on dynamic multi-armed bandits.
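The sketch below illustrates the bandit view of operator selection, combining a UCB-style selection rule with an extreme-value credit (the maximum over a sliding window of fitness improvements). It is a minimal illustration, not the project's implementation; the window size, exploration scale, and the stand-in improvement signal are assumptions.

    import math, random
    from collections import deque

    class OperatorSelector:
        def __init__(self, n_ops, window=10, scale=1.0):
            self.counts = [0] * n_ops
            self.rewards = [deque(maxlen=window) for _ in range(n_ops)]
            self.scale = scale                    # exploration strength (assumed)
            self.total = 0

        def select(self):
            # Play each operator once, then apply a UCB rule on the credits.
            for op, c in enumerate(self.counts):
                if c == 0:
                    return op
            def ucb(op):
                credit = max(self.rewards[op])    # extreme-value credit
                bonus = math.sqrt(2 * math.log(self.total) / self.counts[op])
                return credit + self.scale * bonus
            return max(range(len(self.counts)), key=ucb)

        def update(self, op, fitness_improvement):
            self.counts[op] += 1
            self.total += 1
            self.rewards[op].append(max(0.0, fitness_improvement))

    # Usage inside an evolutionary loop (the improvement signal is a stand-in):
    sel = OperatorSelector(n_ops=3)
    for _ in range(100):
        op = sel.select()
        improvement = random.random() - 0.3      # would come from applying op
        sel.update(op, improvement)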

Adaptation for Continuous Optimization

Building on the well-known Covariance Matrix Adaptation Evolution Strategy (CMA-ES), which adapts the covariance matrix of the Gaussian mutation of an Evolution Strategy based on the path followed by the evolution, several improvements and generalizations have been proposed.
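As a rough illustration of covariance matrix adaptation, the sketch below implements a heavily simplified variant performing only a rank-mu update of the covariance matrix; it omits the evolution paths and step-size control that the actual CMA-ES relies on, and the population size, weights, and learning rate are simplified assumptions.

    import numpy as np

    def simple_cma_es(f, x0, sigma=0.5, max_iters=200, seed=0):
        """Toy CMA-ES sketch: rank-mu covariance update only
        (no evolution paths, no step-size adaptation)."""
        rng = np.random.default_rng(seed)
        n = len(x0)
        lam = 4 + int(3 * np.log(n))              # population size
        mu = lam // 2                             # number of selected parents
        weights = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
        weights /= weights.sum()
        mu_eff = 1.0 / np.sum(weights ** 2)
        c_mu = min(1.0, mu_eff / n ** 2)          # crude rank-mu learning rate
        mean, C = np.array(x0, float), np.eye(n)
        for _ in range(max_iters):
            A = np.linalg.cholesky(C)
            steps = rng.standard_normal((lam, n)) @ A.T   # samples from N(0, C)
            X = mean + sigma * steps
            order = np.argsort([f(x) for x in X])[:mu]    # mu best (minimization)
            sel = steps[order]
            mean = mean + sigma * (weights @ sel)
            C = (1 - c_mu) * C + c_mu * (sel.T * weights) @ sel
        return mean

    # Usage: minimize the sphere function.
    best = simple_cma_es(lambda x: np.sum(x ** 2), x0=[1.0, 2.0, 3.0])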

Meta-parameter tuning for Machine Learning Algorithms

Non-parametric learning algorithms usually require the tuning of hyper-parameters that determine the complexity of the learning machine. This tuning is typically done manually, based on (cross-)validation schemes. The goal of this theme is to develop principled methods that carry out this optimization automatically, using global optimization algorithms. The theme is part of the MetaModel project ( https://users.web.lal.in2p3.fr/kegl/metamodel ).
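As a minimal illustration of the task (not the MetaModel code), the sketch below tunes the two hyper-parameters of an SVM by cross-validated random search; the random search stands in for a more sophisticated global optimizer, and the dataset, budget, and search ranges are arbitrary assumptions.

    import numpy as np
    from sklearn.datasets import load_digits
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = load_digits(return_X_y=True)

    best_score, best_params = -np.inf, None
    for _ in range(30):                        # optimization budget
        C = 10 ** rng.uniform(-2, 3)           # sample hyper-parameters
        gamma = 10 ** rng.uniform(-5, 0)       # log-uniformly
        score = cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()
        if score > best_score:
            best_score, best_params = score, {"C": C, "gamma": gamma}

    print(best_params, best_score)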

Learning Heuristics Choice in Constraint Programming

Several heuristics have been proposed to choose which branch to explore next within Constraint Programming algorithms. The idea we are exploring is to learn which heuristic is best given the characteristics of the current node of the search tree (e.g., domain sizes, number of still-unsatisfied constraints) [9] .
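A hedged sketch of this idea, framed as supervised classification from node features to the most promising heuristic: the features, heuristic labels, and training data below are synthetic placeholders, since the actual data would come from offline Constraint Programming experiments.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Each row: features of a search node; label: index of the heuristic
    # that performed best from that node in offline experiments (assumed given).
    rng = np.random.default_rng(1)
    node_features = rng.random((500, 3))      # e.g. mean domain size,
                                              # unassigned vars, unsat constraints
    best_heuristic = rng.integers(0, 3, 500)  # 0: min-domain, 1: max-degree, ...

    model = RandomForestClassifier(n_estimators=100).fit(node_features,
                                                         best_heuristic)

    def pick_heuristic(node):
        """At search time, query the model for the heuristic to apply."""
        return model.predict(np.asarray(node).reshape(1, -1))[0]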

