Team TAO



Scientific Foundations

Abstract:

One of the goals of Machine Learning and Data Mining is to extract optimal hypotheses from (massive amounts of) data. What "optimal" means varies with the problem. The goal might be to induce useful knowledge, allowing new cases to be classified with optimal confidence (predictive data mining), or to synthesize the data into a set of understandable statements (descriptive data mining).

On the other hand, Evolutionary Computation and stochastic optimization are suited to ill-posed optimization problems, such as those involved in machine learning, data mining, identification, optimal policies, and inverse problems. However, optimization algorithms must adapt themselves to the search landscape; in other words, they need learning capabilities.

Machine Learning, Data Mining, Inductive Logic Programming

Learning and mining are concerned with i) choosing the form of the knowledge to be extracted (e.g., rules, Horn clauses, distributions, patterns, equations), referred to as the hypothesis space or language; ii) exploring this (huge) search space to find the best hypotheses it contains.

Formally, learning and mining can be cast as optimization problems under incomplete information. For instance, statistical learning can be viewed as a game of incomplete information: the player only sees some of the cards (the training examples), yet must find a hypothesis with minimal expected loss over all possible examples in the application domain. Likewise, a data mining algorithm is expected to provide the expert user with "interesting" regularities, even though, in general, the expert's interestingness criteria can only be discovered along the way.
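
As a reminder of this standard formulation (notation ours, not specific to the team's work), the learner can only minimize the empirical loss on the training sample, whereas the actual target is the expected loss under the unknown example distribution P:

\hat{h} \;=\; \arg\min_{h \in \mathcal{H}} \;\frac{1}{n}\sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr)
\qquad \text{vs.} \qquad
h^{\ast} \;=\; \arg\min_{h \in \mathcal{H}} \;\mathbb{E}_{(x,y)\sim P}\bigl[\ell\bigl(h(x), y\bigr)\bigr]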

New learning criteria have been investigated, related either to the structure of the hypothesis space (e.g., Bayesian nets), to the expert's priors (e.g., ROC-based criteria, applied to medicine and bio-informatics), or to the expert's preferences (e.g., multi-objective criteria for spatio-temporal data mining, applied to brain imagery).
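
For the record, an ROC-based criterion such as the Area Under the Curve can be computed as a pairwise ranking statistic; the sketch below is a generic illustration (hypothetical code, not taken from the team's software):

def auc(scores_pos, scores_neg):
    """Area Under the ROC Curve: fraction of (positive, negative) pairs
    ranked correctly by the scores, counting ties as one half."""
    correct = 0.0
    for sp in scores_pos:
        for sn in scores_neg:
            if sp > sn:
                correct += 1.0
            elif sp == sn:
                correct += 0.5
    return correct / (len(scores_pos) * len(scores_neg))

Maximizing such a rank-based criterion, rather than the raw error rate, is what distinguishes ROC-oriented learning from standard classification.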

Meanwhile, learning and mining can also be formalized as constraint satisfaction problems (CSP), particularly so for Machine Learning in First Order Logic, referred to as Inductive Logic Programming. Thorough and fruitful efforts have been made to transport the statistical analysis developed for CSPs to the learning and mining disciplines. This cross-disciplinary research led to the discovery of a phase transition in the search landscape, with far-reaching consequences for the competence and scalability of existing algorithms.

Evolutionary Computation, Stochastic Optimization

Given the lack of a universal optimization algorithm, the power of an optimization algorithm is measured by its ability to acquire and exploit problem-specific information. Long based on heuristics, this use of prior knowledge most often results in customized representations and search spaces, specific evolution operators, and/or additional constraints. One of our long-term objectives is to develop self-adaptive operators, able to automatically detect and exploit regularities in the search space. Another objective is to investigate a principled use of prior knowledge at every level of evolutionary algorithms, from the representation and the variation operators to the selection operator, the tuning of the fitness function, and the choice of the hyper-parameters.
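
As a minimal illustration of a self-adaptive operator (a textbook scheme, not the team's specific operators), the sketch below implements a (1+1) Evolution Strategy whose mutation step size is adapted online with Rechenberg's one-fifth success rule:

import math
import random

def one_plus_one_es(fitness, x0, sigma=1.0, iterations=2000, window=20):
    """(1+1)-ES minimizing `fitness`, with one-fifth success rule step-size adaptation."""
    x, fx = list(x0), fitness(x0)
    successes = 0
    for t in range(1, iterations + 1):
        # Gaussian mutation with the current step size
        y = [xi + sigma * random.gauss(0.0, 1.0) for xi in x]
        fy = fitness(y)
        if fy < fx:                 # elitist replacement
            x, fx = y, fy
            successes += 1
        if t % window == 0:         # adapt sigma from the observed success rate
            rate = successes / window
            if rate > 0.2:
                sigma /= 0.82       # many successes: enlarge the step
            elif rate < 0.2:
                sigma *= 0.82       # few successes: shrink the step
            successes = 0
    return x, fx

# Example: minimize the sphere function in dimension 10
best, best_f = one_plus_one_es(lambda v: sum(vi * vi for vi in v), [5.0] * 10)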

Modelling and Control of Complex Systems

In previous years, the field of Autonomous Robotics most naturally motivated the tight coupling of Learning and Optimization approaches. A posteriori, it appears that many key aspects of this field (size of the state and decision spaces; continuous vs. discrete modelling; possibly different training and test distributions; stability of the control; etc.) are relevant to the modelling and control of complex systems at large. Such links between the modelling and the control of autonomous complex systems have been explored along several directions:

 

Reinforcement learning and control

The open platform OpenDP (Section 5.7) hybridizes the standard Bellman decomposition with (i) machine learning algorithms and (ii) derivative-free and evolutionary optimization.
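
The sketch below illustrates the principle only; it is hypothetical code, not the OpenDP API. A backward Bellman pass fits the value function of each stage with a learned regressor, while the per-state action optimization uses a derivative-free random search (scalar states and bounded scalar actions are assumed for brevity):

import random

def fit_value_function(samples):
    """Placeholder regressor (1-nearest neighbour on scalar states); any
    machine learning algorithm could be plugged in here instead."""
    def v(state):
        return min(samples, key=lambda sv: abs(sv[0] - state))[1]
    return v

def backward_dp(transition, reward, states, horizon, n_trials=50):
    """Backward Bellman recursion combining a learned value function with
    derivative-free (random search) optimization over actions."""
    value = lambda s: 0.0                      # terminal value function
    for _ in range(horizon):
        samples = []
        for s in states:
            best = -float("inf")
            for _ in range(n_trials):          # derivative-free action search
                a = random.uniform(-1.0, 1.0)  # assumed bounded scalar action
                best = max(best, reward(s, a) + value(transition(s, a)))
            samples.append((s, best))
        value = fit_value_function(samples)    # learning step replaces the exact table
    return value

Each of the two plug-in points (the regressor and the action optimizer) corresponds to one of the hybridizations mentioned above.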

Modelling and autonomy

The AAA study (Robea contract: Agir, Anticiper, s'Adapter, 2002-2005) aimed at providing the robotic system with a model of itself: self-awareness is viewed as a step toward autonomous behavior. While the targeted complex system was initially a robot, the approach is now being extended to Grid Modelling (see Section 6.3.2).

 

Estimation and action selection

The Multi-Armed Bandit framework, concerned with the pervasive "exploration vs. exploitation" dilemma, has been considered in the context of dynamic environments and large numbers of options (see Section 6.1.5).
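
For reference, the basic UCB1 strategy below illustrates the exploration vs. exploitation trade-off in its simplest, stationary form; the team's work addresses the harder dynamic and many-armed settings, which this hypothetical sketch does not cover:

import math
import random

def ucb1(pull, n_arms, horizon):
    """UCB1: after trying every arm once, play the arm maximizing the
    empirical mean plus an optimistic confidence bonus."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    choices = []
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                        # initialization round: try every arm once
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        r = pull(arm)                          # observe a reward in [0, 1]
        counts[arm] += 1
        sums[arm] += r
        choices.append(arm)
    return choices

# Example with three Bernoulli arms of unknown means
means = [0.3, 0.5, 0.7]
plays = ucb1(lambda i: float(random.random() < means[i]), 3, 1000)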

