Section: Scientific Foundations
Shifting to modern computing
Notable breakthroughs have revolutionized computer science in the last decade, along several dimensions: hardware, communication and networks, and usage. With the evolution of means and ends (multi-core/multi-threaded machines; grid systems; distributed/ubiquitous computing; resource-aware/anytime/scalable algorithms; ambient/pervasive intelligence), the software world is gradually becoming aware that new algorithmic paradigms are required to make the most of new architectures, to handle new demands, and to sail towards New Intelligence Frontiers. (While this new world struggles with the same issues as the old one, namely scalability, reliability, and communication efficiency, the paradigm shift is best understood by analogy with Herbert Simon's bounded rationality. Bounded rationality faces the same problems as classical rationality, with one additional, distinctive feature: the cost of reasoning is now itself an element of the reasoning process. This simple modification entails the loss of equilibrium results, the overwhelming importance of initial conditions, and the existence of information asymmetry; in brief, this modest modification leads to a significantly different economic theory.)
While TAO is already interested in modern computing as a source of applications (e.g., through Autonomic Computing and our collaboration with Alchemy), this third direction of research aims at developing a different approach to algorithms, along the Computational Thinking perspective (a term coined by Prof. Jeannette Wing, CMU: "Computing in the 21st Century", Microsoft Research Asia Conferences, Nov. 2005; cited in Towards 2020 Science, Microsoft White Book). Formally, Computational Thinking is meant to revisit the art of problem setting and algorithm design through a broader view of computing systems.
A first theme in this research direction aims at the global optimization of the information processing chain (recall that the optimal chain is usually not obtained by combining elements that have been optimized independently). More precisely, learning and optimization algorithms most often involve elementary processes in charge of: i) information selection (e.g., active learning, feature construction, or population-based sampling); ii) model construction (learning, building surrogate models, covariance matrix adaptation); iii) optimization (assessing the current result and going further; tuning or adapting the hyper-parameters involved in the other two tasks). While these processes have generally been considered independently, a fundamental issue is to handle their interactions, for instance by dynamically allocating the computational effort among them in a near-optimal way. Preliminary studies along this line, considering the adaptation of MoGo to highly parallel architectures, have started in collaboration with Bull. A longer-term research effort, concerned with automatic adjustment, is the goal of the next section.
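To make the idea of dynamically allocating computational effort concrete, the following is a minimal, purely illustrative sketch (not the actual MoGo/Bull mechanism): each elementary process of the chain receives a share of the compute budget via a softmax over its recently observed improvement per unit of cost. All names (`EffortAllocator`, the process labels, the payoff update) are hypothetical.

```python
import math

class EffortAllocator:
    """Illustrative sketch: split a compute budget among the elementary
    processes of a learning/optimization chain (information selection,
    model construction, optimization), favoring processes whose recent
    payoff per unit of compute is highest. Hypothetical, for exposition."""

    def __init__(self, processes, temperature=0.5):
        self.processes = list(processes)
        self.payoff = {p: 1.0 for p in self.processes}  # optimistic start
        self.temperature = temperature

    def allocate(self, budget):
        """Split `budget` units of compute across processes via softmax."""
        weights = [math.exp(self.payoff[p] / self.temperature)
                   for p in self.processes]
        total = sum(weights)
        return {p: budget * w / total
                for p, w in zip(self.processes, weights)}

    def update(self, process, improvement, cost):
        """Track improvement per unit cost as an exponential moving average."""
        rate = improvement / max(cost, 1e-9)
        self.payoff[process] = 0.8 * self.payoff[process] + 0.2 * rate

alloc = EffortAllocator(["selection", "modelling", "optimization"])
shares = alloc.allocate(budget=100.0)
assert abs(sum(shares.values()) - 100.0) < 1e-9  # budget is conserved

# Feed back a fictitious observation: model construction paid off well
# this round, so its share of the next budget grows.
alloc.update("modelling", improvement=5.0, cost=2.0)
new_shares = alloc.allocate(budget=100.0)
assert new_shares["modelling"] > shares["modelling"]
```

The softmax temperature controls how aggressively the budget concentrates on the currently best-paying process; a bandit-style scheme with explicit exploration would be a natural refinement.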
Secondly, Computational Thinking advocates the ability to deal with large, complex (algorithmic) systems without understanding every detail thereof, which shall be referred to as Smart Black Box Design. This second theme of research is relevant to the search for Deep Representations and to the Gennetec project, already mentioned in section 3.2. It also includes the studies related to complex systems (section 6.4), where tools from statistical physics are used to provide a manageable macro-scale model of a system described at the micro-scale. Last but not least, the DigiBrain project aims at creating the means for an effective interaction with the user without explicitly spelling out the interaction rules.