Team tao


Scientific Foundations


This section describes Tao's main research directions, first presented during Tao's evaluation in November 2007. Four strategic issues were identified at the crossroads of Machine Learning and Evolutionary Computation:

Table 1.
Where: What is the search space and how to search it?
  Representations, Navigation Operators and Trade-offs.
What: What is the goal and how to assess the solutions?
  Optimal Decision under Uncertainty.
How.1: How to bridge the gap between algorithms and computing architectures?
  Hardware-aware Software and Autonomic Computing.
How.2: How to bridge the gap between algorithms and users?
  Crossing the Chasm.

Six Special Interest Groups (SIGs) have been defined in TAO, investigating the above complementary issues from different perspectives. The comparatively small size of the TAO SIGs enables in-depth and lively discussions; the fact that all TAO members belong to several SIGs, according to their personal interests, fosters strong informal collaboration among the groups and fast information dissemination.

Representations and Properties

The choice of the solution space is known to be the crux of both Machine Learning (model selection) and Evolutionary Computation (genotypic-phenotypic mapping).

The first research theme in TAO thus concerns the definition of an adequate representation, or search space $\mathcal{H}$, together with that of adequate navigation operators. $\mathcal{H}$ and its navigation operators must enforce flexible trade-offs between expressivity and compactness on the one hand, and stability and versatility on the other hand.

Expressivity/compactness tradeoff (static property): $\mathcal{H}$ should simultaneously include sufficiently complex solutions - i.e. good-enough solutions for the problem at hand - and offer a short description for these solutions, thus making it feasible to find them.

Stability/versatility tradeoff (dynamic property): while most modifications of a given solution in $\mathcal{H}$ should only marginally modify its behaviour (stability), some modifications should lead to radically different behaviours (versatility). Both properties are required for efficient optimization in complex search spaces; stability, also referred to as the “strong causality principle”(I. Rechenberg: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog Verlag, Stuttgart, 1973.), is needed for optimization to do better than random walk; versatility potentially speeds up optimization by creating short-cuts in the search space.
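As an illustration only (not one of TAO's actual operators), the stability/versatility trade-off can be sketched by a hypothetical mutation operator on real-valued vectors: most mutations are small Gaussian perturbations (stability), while a rare heavy-tailed Cauchy jump occasionally moves the solution to a distant region of the search space (versatility).

```python
import math
import random

def mutate(x, sigma=0.1, p_jump=0.05, rng=random):
    """Mutate a real-valued vector (illustrative sketch).

    With probability 1 - p_jump each coordinate takes a small Gaussian
    step (stability); with probability p_jump it takes a heavy-tailed
    Cauchy-distributed jump (versatility).
    """
    y = []
    for xi in x:
        if rng.random() < p_jump:
            # standard Cauchy draw via inverse CDF: occasionally lands far away,
            # creating the short-cuts mentioned above
            step = math.tan(math.pi * (rng.random() - 0.5))
        else:
            # small perturbation: behaviour changes only marginally
            step = rng.gauss(0.0, sigma)
        y.append(xi + step)
    return y

offspring = mutate([0.0, 0.0, 0.0, 0.0, 0.0], rng=random.Random(42))
```

The function name `mutate` and the parameter values are hypothetical choices for this sketch; any operator mixing a concentrated and a heavy-tailed step distribution exhibits the same two properties.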

This research direction is investigated in:

Optimal Decision Under Uncertainty

Benefiting from the MoGo expertise, TAO investigates several extensions of the Multi-Armed Bandit (MAB) framework and of Monte-Carlo Tree Search. The main issues raised by optimal decision under uncertainty include the following:

This research direction is chiefly investigated by the Optimal Decision Making SIG (section 6.5 ), in interaction with the Complex System and the Crossing the Chasm SIGs (sections 6.2 and 6.3 ).
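For illustration, a minimal sketch of the UCB1 index policy, a standard baseline in the Multi-Armed Bandit framework underlying Monte-Carlo Tree Search. This is not TAO's code; the function name and the toy Bernoulli arms are hypothetical.

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Play a multi-armed bandit with the UCB1 index policy (sketch).

    reward_fns: list of zero-argument callables returning a reward in [0, 1].
    Returns per-arm pull counts and empirical mean rewards after `horizon` pulls.
    """
    random.seed(seed)
    k = len(reward_fns)
    counts = [0] * k       # number of times each arm was pulled
    means = [0.0] * k      # empirical mean reward of each arm
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1    # initialization: pull every arm once
        else:
            # pick the arm maximizing mean reward + exploration bonus
            arm = max(range(k),
                      key=lambda i: means[i] + math.sqrt(2 * math.log(t) / counts[i]))
        r = reward_fns[arm]()
        counts[arm] += 1
        means[arm] += (r - means[arm]) / counts[arm]  # incremental mean update
    return counts, means

# Two Bernoulli arms with success probabilities 0.3 and 0.7:
counts, means = ucb1([lambda: float(random.random() < 0.3),
                      lambda: float(random.random() < 0.7)], horizon=2000)
```

Over 2000 pulls, the exploration bonus shrinks as an arm is sampled, so the policy concentrates its pulls on the better arm while still occasionally probing the other.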

Hardware-Software Bridges

Historically, the advent of parallel architectures only marginally affected the art of programming; the main focus was on how to rewrite sequential algorithms to make them parallelism-compliant. The use of distributed architectures, however, calls for a radically different programming style and computational thinking, seamlessly integrating:

Message-passing algorithms such as PageRank or Affinity Propagation(Frey, B., Dueck, D.: Clustering by passing messages between data points. In: Science. Volume 315. (2007) 972–976.) are prototypical examples of distributed algorithms. The analysis shifts from the static properties of the algorithms (termination and computational complexity) to their dynamic properties (convergence and approximation), following the guiding principles of complex systems.
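To make the message-passing style concrete, here is a minimal (sequential) sketch of PageRank by power iteration; in a distributed setting, the inner loop becomes messages each node sends to its out-neighbours. The graph and function name are hypothetical illustrations, not TAO code.

```python
def pagerank(links, damping=0.85, iters=50):
    """Power iteration for PageRank on a small directed graph (sketch).

    links: dict mapping each node to the list of its outgoing neighbours.
    Returns a dict node -> rank; the ranks sum to 1.
    """
    nodes = list(links)
    n = len(nodes)
    rank = {u: 1.0 / n for u in nodes}      # uniform initial distribution
    for _ in range(iters):
        # every node keeps the teleportation mass (1 - damping) / n ...
        new = {u: (1.0 - damping) / n for u in nodes}
        for u, outs in links.items():
            if outs:
                # ... and sends an equal share of its rank along each out-link
                share = damping * rank[u] / len(outs)
                for v in outs:
                    new[v] += share
            else:
                # dangling node: spread its rank uniformly over all nodes
                for v in nodes:
                    new[v] += damping * rank[u] / n
        rank = new
    return rank

r = pagerank({"a": ["b"], "b": ["c"], "c": ["a", "b"]})
```

Note that the analysis here is indeed dynamic, as the paragraph above states: the interesting questions are whether the iteration converges and how fast, not whether it terminates.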

Symmetrically, modern computing systems are increasingly viewed as complex systems in their own right, due to their ever-increasing resources and computational load. The pressing need for scalable administration tools, supporting grid monitoring and the maintenance of running jobs, paved the way toward Autonomic Computing(J. O. Kephart and D. M. Chess, “The vision of autonomic computing,” Computer, vol. 36, pp. 41–50, 2003.). Autonomic Computing (AC) systems are meant to feature self-configuring, self-healing, self-protecting and self-optimizing skills(I. Rish, M. Brodie, S. Ma et al., “Adaptive diagnosis in distributed systems,” IEEE Transactions on Neural Networks (special issue on Adaptive Learning Systems in Communication Networks), vol. 16, pp. 1088–1109, 2005.). A key milestone for Autonomic Computing is to provide the system with a phenomenological model of itself (a self-aware system), built from the system logs using Machine Learning and Data Mining.

This research direction is investigated in the Complex System SIG (section 6.2 ) and in the Autonomic Computing SIG (section 6.1 ).

Crossing the chasm

This fourth strategic priority, inspired by Moore's book(Moore, G.A.: Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers. Collins Business Essentials (1991).), is motivated by the fact that many outstandingly efficient algorithms never make it out of research labs. One reason is the difference between the editor's and the programmer's view of algorithms. In the perspective of software editors, an algorithm is best viewed as a single “Go” button. The programmer's perspective is radically different: as he/she sees that various functionalities can be grafted onto the same algorithmic core, the number of options steadily increases (with the consequence that users usually master less than 10% of the available functionalities). Independently, the programmer gradually acquires some idea of the flexibility needed to handle different application domains; this flexibility is most often achieved by defining parameters and tuning them. Parameter tuning thus becomes a barrier to the efficient use of new algorithms.
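One way to lower the parameter-tuning barrier is to automate it. As a hedged illustration (the function name and the toy objective are hypothetical, not a method claimed by the report), a naive automatic tuner by uniform random search over the parameter ranges:

```python
import random

def random_search(run_algorithm, param_ranges, budget=50, seed=0):
    """Naive automatic parameter tuning by uniform random search (sketch).

    run_algorithm: callable taking a dict of parameter values and
                   returning a performance score (higher is better).
    param_ranges: dict mapping each parameter name to a (low, high) range.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(budget):
        # sample one candidate configuration uniformly at random
        params = {k: rng.uniform(lo, hi) for k, (lo, hi) in param_ranges.items()}
        score = run_algorithm(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy objective: the algorithm performs best when sigma is near 0.3.
best, score = random_search(lambda p: -(p["sigma"] - 0.3) ** 2,
                            {"sigma": (0.0, 1.0)})
```

From the user's point of view, such a tuner turns a panel of knobs back into something closer to the single “Go” button the software editor's perspective calls for.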

This research direction is chiefly investigated by the Crossing the Chasm SIG (section 6.3 ) and also by the Continuous Optimization SIG (section 6.4 ).

