Section: Scientific Foundations
This section describes Tao's main research directions, first presented during the team's evaluation in November 2007. Four strategic issues were identified at the crossroads of Machine Learning and Evolutionary Computation:
What is the search space and how to search it?
Representations, Navigation Operators and Trade-offs.
What is the goal and how to assess the solutions?
Optimal Decision under Uncertainty.
How to bridge the gap between algorithms and computing architectures?
Hardware-aware software and Autonomic Computing.
How to bridge the gap between algorithms and users?
Crossing the chasm.
Six Special Interest Groups (SIGs) have been defined in Tao, investigating the above complementary issues from different perspectives. The comparatively small size of the Tao SIGs enables in-depth and lively discussions; the fact that all Tao members belong to several SIGs, on the basis of their personal interests, fosters strong informal collaboration among the groups and fast information dissemination.
Representations and Properties
The choice of the solution space is known to be the crux of both Machine Learning (model selection) and Evolutionary Computation (genotypic-phenotypic mapping).
The first research axis in Tao thus concerns the definition of an adequate representation, or search space H, together with that of adequate navigation operators. H and its operators must achieve flexible trade-offs between expressivity and compactness on the one hand, and stability and versatility on the other hand.
The first trade-off corresponds to the fact that H should simultaneously include sufficiently complex solutions - i.e. good-enough solutions for the problem at hand - and offer a short description for these solutions, thus making it feasible to find them.
The second trade-off is actually related to the navigation in H; while most modifications of a given solution should only marginally modify its behaviour (stability), some modifications should lead to radically different behaviours (versatility). Both properties are required for efficient optimization in complex search spaces; stability, also referred to as the “strong causality principle” (I. Rechenberg: Evolutionsstrategie: Optimierung technischer Systeme nach Prinzipien der biologischen Evolution. Frommann-Holzboog Verlag, Stuttgart, 1973.), is needed for optimization to do better than random walk; versatility potentially speeds up optimization by creating shortcuts in the search space.
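The stability/versatility trade-off can be illustrated by a mutation operator on real-valued representations: small Gaussian perturbations provide stability, while rare, large-scale jumps provide versatility. The following is a minimal sketch, not an operator used by Tao; the function name and parameter values are illustrative.

```python
import random

def mutate(x, sigma=0.05, jump_prob=0.05, jump_scale=2.0):
    """Mutate a real-valued vector: with high probability apply a small
    Gaussian step (stability); with probability `jump_prob`, a much larger
    one (versatility).  All parameter values here are illustrative."""
    scale = jump_scale if random.random() < jump_prob else sigma
    return [xi + random.gauss(0.0, scale) for xi in x]
```

With `jump_prob` set to zero the operator is purely local, recovering the strong-causality regime; raising it trades some stability for occasional large moves across the search space.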
This research direction is investigated in the Complex System SIG (section 6.2), which studies developmental representations for Design and sequential representations for Temporal Planning, in the Reservoir Computing SIG (section 6.6) for large Neural Network topologies, and in the Continuous Optimization SIG, which studies adaptive representations.
Optimal decision under uncertainty
Benefiting from the expertise acquired in designing and developing MoGo, Tao investigates several extensions of the Multi-Armed Bandit (MAB) framework and of Monte-Carlo tree search. The main issues raised by optimal decision under uncertainty include the following:
Regret minimization and any-time behaviour.
The any-time issue is tightly related to the scalability of Optimal Decision under Uncertainty; typically, MAB was found better suited than standard Reinforcement Learning to large-scale problems, as its criterion (regret minimization) is more amenable to fast approximations.
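As an illustration of the regret-minimization criterion, the standard UCB1 strategy can be sketched as follows. This is a generic textbook algorithm, not Tao's own implementation; the function name and the reward interface are assumptions.

```python
import math, random

def ucb1(pull, n_arms, horizon):
    """UCB1: play each arm once, then the arm maximizing
    empirical mean + sqrt(2 ln t / n_i).  The exploration bonus
    shrinks as an arm is sampled, bounding the cumulative regret."""
    counts = [0] * n_arms
    sums = [0.0] * n_arms
    for t in range(1, horizon + 1):
        if t <= n_arms:
            arm = t - 1                      # initial round-robin
        else:
            arm = max(range(n_arms),
                      key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        sums[arm] += pull(arm)
        counts[arm] += 1
    return counts
```

The algorithm is any-time by construction: it can be stopped after any number of pulls, and the regret guarantee holds at that horizon.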
Dynamic environments (non stationary reward functions).
The issue of dynamic environments was first raised through the Online Trading of Exploration vs Exploitation Challenge (The OTEE Challenge, funded by Touch Clarity Ltd and organized by the PASCAL Network of Excellence, models the selection of news to be displayed by a Web site as a multi-armed bandit, where the user's interests are prone to sudden changes; the OTEE Challenge was won by the Tao team in 2006.); it is relevant, e.g., to online parameter tuning (see section 6.3).
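A standard way to cope with non-stationary rewards is to estimate each arm from a sliding window of recent observations only, so that the estimate tracks sudden changes. The sketch below is generic and not the challenge-winning strategy; the class name and window size are illustrative.

```python
from collections import deque

class SlidingWindowEstimate:
    """Reward estimate computed over the last `window` observations only,
    so that older, possibly stale rewards are forgotten."""
    def __init__(self, window=100):
        self.obs = deque(maxlen=window)   # old entries are evicted automatically

    def update(self, reward):
        self.obs.append(reward)

    def mean(self):
        return sum(self.obs) / len(self.obs) if self.obs else 0.0
```

After an abrupt change of the reward function, the estimate converges to the new mean within one window length, at the price of a higher variance than the full-history average.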
Use of side information / Multi-variate MAB
The use of side information by MAB is meant to exploit prior knowledge and/or complementary information about the reward. Typically in MoGo, the end of the game can be described at different levels of precision (e.g., win/lose, difference in the number of stones); conditioning the local reward estimate on the available side information aims at better robustness.
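The conditioning of reward estimates on side information can be sketched as one running estimate per (arm, context) pair, falling back to the arm-level estimate when a context has not been observed. This is a minimal illustration, not Tao's estimator; all names are hypothetical.

```python
from collections import defaultdict

class ContextualEstimate:
    """One running reward estimate per (arm, side-information) pair,
    with fallback to the arm's global estimate for unseen contexts."""
    def __init__(self):
        self.local = defaultdict(lambda: [0, 0.0])   # (arm, ctx) -> [n, sum]
        self.glob = defaultdict(lambda: [0, 0.0])    # arm -> [n, sum]

    def update(self, arm, ctx, reward):
        for s in (self.local[(arm, ctx)], self.glob[arm]):
            s[0] += 1
            s[1] += reward

    def estimate(self, arm, ctx):
        n, tot = self.local[(arm, ctx)]
        if n == 0:                       # unseen context: use global estimate
            n, tot = self.glob[arm]
        return tot / n if n else 0.0
```

The fallback makes the estimator robust when the side information fragments the data into many sparsely observed contexts.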
Bounded rationality.
The bounded rationality issue considers the case where the action space is large relative to the time horizon, meaning that only a sample of the possible actions can be considered in the allotted time.
Multi-objective MAB.
Many applications actually involve antagonistic criteria; for instance, autonomous robot controllers might simultaneously want to explore the robot's environment while preserving the robot's integrity. The challenge raised by multi-objective MAB is to find the “Pareto-front” policies at a moderately increased computational cost compared to the standard mono-objective approach.
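With antagonistic criteria, each arm is summarized by a vector of empirical mean rewards, and the arms of interest are those not dominated on every objective. A minimal sketch of this Pareto-front extraction (illustrative names, not a Tao algorithm):

```python
def pareto_front(means):
    """Indices of arms whose empirical mean reward vector is not dominated,
    i.e. no other arm is >= on every objective and > on at least one."""
    def dominated(u, v):
        return (all(a <= b for a, b in zip(u, v))
                and any(a < b for a, b in zip(u, v)))
    return [i for i, u in enumerate(means)
            if not any(dominated(u, v) for j, v in enumerate(means) if j != i)]
```

For instance, with two objectives (exploration gain, integrity margin), an arm maximizing either objective alone and a balanced arm all survive, while an arm worse on both objectives is discarded.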
Hardware-aware software and Autonomic Computing
Historically, the advent of parallel architectures only marginally affected the art of programming; the main focus was on how to rewrite sequential algorithms to make them parallelism-compliant. The use of distributed architectures, however, calls for a radically different programming spirit, seamlessly integrating computation, communication and assessment.
The main issues thus become i) to aggregate the local information with the information transmitted by the other nodes (computation); ii) to abstract the local information in order to transmit it to the other nodes (communication); iii) possibly, to model and assess the other nodes, in order to modulate the way the transmitted information is exploited. Message-passing algorithms such as PageRank or Affinity Propagation (Frey, B., Dueck, D.: Clustering by passing messages between data points. Science 315 (2007) 972–976.) are prototypical examples of distributed algorithms; the analysis is shifted from the algorithm's termination and computational complexity to its convergence and approximation properties.
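The PageRank example can be sketched as a power iteration in which each node repeatedly sends its current score along its outgoing links; the quantity of interest is the convergence of the scores, not the termination of a loop. A minimal, sequential sketch of this message-passing view (illustrative names, not a distributed implementation):

```python
def pagerank(links, d=0.85, iters=50):
    """Power iteration for PageRank: `links[i]` lists the nodes that node i
    links to; each iteration, node i sends d * rank[i] split among its
    out-links (message passing), plus a uniform (1-d) teleportation term."""
    n = len(links)
    rank = [1.0 / n] * n
    for _ in range(iters):
        new = [(1.0 - d) / n] * n
        for i, outs in enumerate(links):
            if outs:
                share = d * rank[i] / len(outs)
                for j in outs:
                    new[j] += share
            else:                         # dangling node: spread uniformly
                for j in range(n):
                    new[j] += d * rank[i] / n
        rank = new
    return rank
```

In a genuinely distributed setting, each node would hold its own `rank[i]` and exchange the `share` messages with its neighbours; the fixed iteration count above stands in for a convergence test.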
Symmetrically, the ever-increasing resources of modern computing systems, as well as their computational load, entail a corresponding increase in the workload of the administrators in charge of monitoring the grid status and keeping jobs running. The pressing need for scalable administration tools paved the way toward Autonomic Computing (J. O. Kephart and D. M. Chess, “The vision of autonomic computing,” Computer, vol. 36, pp. 41–50, 2003.). Autonomic Computing (AC) systems are meant to feature self-configuring, self-healing, self-protecting and self-optimizing skills (I. Rish, M. Brodie, S. Ma, et al., “Adaptive diagnosis in distributed systems,” IEEE Transactions on Neural Networks (special issue on Adaptive Learning Systems in Communication Networks), vol. 16, pp. 1088–1109, 2005.). A prerequisite for Autonomic Computing is to provide an empirical model of the (complex) computational system at hand, obtained by applying Machine Learning to the running traces of the system.
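The simplest instance of an empirical model learned from running traces is an anomaly detector: fit the "normal" behaviour of a monitored quantity (e.g., per-node load) and flag deviations. The sketch below uses a plain z-score rule and is purely illustrative of the idea; the function name and threshold are assumptions.

```python
def anomalies(trace, threshold=3.0):
    """Flag trace entries lying more than `threshold` standard deviations
    from the mean -- a minimal empirical model of 'normal' behaviour
    fitted from the running trace itself."""
    n = len(trace)
    mean = sum(trace) / n
    var = sum((x - mean) ** 2 for x in trace) / n
    std = var ** 0.5 or 1.0               # guard against a constant trace
    return [i for i, x in enumerate(trace) if abs(x - mean) > threshold * std]
```

A self-healing component would then react to the flagged indices, e.g., by restarting the offending job or migrating it to another node.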
Crossing the chasm
This fourth strategic priority, referring to Moore's influential book (Moore, G.A.: Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers. Collins Business Essentials (1991).), is motivated by the fact that many outstandingly efficient algorithms never make it out of research labs. One reason is the difference between the editor's and the programmer's views of algorithms. In the perspective of software editors, an algorithm is best viewed as a single “Go” button. The programmer's perspective is radically different: as he/she sees that various functionalities can be implemented on top of the same algorithmic core, the number of options steadily increases (with the consequence that users usually master less than 10% of the available functionalities). Independently, the programmer gradually acquires some idea of the flexibility needed to handle different application domains; this flexibility is most usually achieved by defining parameters and tuning them. Parameter tuning thus becomes a barrier to the efficient use of new algorithms.