Team TAO

Section: New Results

Large and Deep Networks

Participants : Hélène Paugam-Moisy, Nicolas Bredèche, Alexandre Devert, Fei Jiang, Cédric Hartland, Miguel Nicolau, Marc Schoenauer, Michèle Sebag.

The Large and Deep Networks SIG (formerly Reservoir Computing) focuses on stochastic or hierarchical network structures aimed at generative and/or discriminative modelling.

This research theme, initially included in the Complex Systems SIG, has been boosted by Hélène Paugam-Moisy's arrival at TAO (Professor at Université Lyon-2, on délégation at INRIA), by the collaboration with INRIA Alchemy (Fei Jiang's PhD), and by the GENNETEC Strep project.

Echo State Networks (ESN) and Deep Networks (DN)

ESNs are stochastically specified by the number n of nodes, the density of their connections, and the largest eigenvalue (spectral radius) of the connection matrix. Beyond these three hyper-parameters, an ESN instance is described by its output weights only, hence with complexity $\mathcal{O}(n)$. Their excellent expressivity with respect to the space of dynamical systems is thus combined with a frugal search space.
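
For illustration, a minimal numpy sketch of this setting is given below; the hyper-parameter values and the toy delayed-input task are arbitrary choices, not those of the work reported here. The reservoir is drawn at random from n, the connection density and the spectral radius, and only the linear readout is learnt.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_reservoir(n, density, spectral_radius):
        """Random sparse reservoir, rescaled to the prescribed spectral radius."""
        W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
        return W * (spectral_radius / np.max(np.abs(np.linalg.eigvals(W))))

    def run_reservoir(W, W_in, inputs):
        """Collect the reservoir states x(t+1) = tanh(W x(t) + W_in u(t))."""
        x = np.zeros(W.shape[0])
        states = np.empty((len(inputs), W.shape[0]))
        for t, u in enumerate(inputs):
            x = np.tanh(W @ x + W_in @ u)
            states[t] = x
        return states

    # The three hyper-parameters (illustrative values): size, density, spectral radius.
    n, density, rho = 100, 0.1, 0.9
    W = make_reservoir(n, density, rho)
    W_in = rng.uniform(-1.0, 1.0, (n, 1))

    # Toy task: reproduce the previous input u(t-1) from the current reservoir state.
    u = rng.uniform(-1.0, 1.0, (500, 1))
    X = run_reservoir(W, W_in, u)
    y = np.roll(u, 1, axis=0)[1:]
    # The only trained parameters are the n output weights (least-squares readout).
    W_out = np.linalg.lstsq(X[1:], y, rcond=None)[0]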

Fei Jiang's PhD (defended in December 2009; coll. Alchemy and TAO) examined the relationships between the structure of the network and its computational properties [4]. He pioneered the Evolutionary Reinforcement Learning of the ESN output weights using CMA-ES; further results show that evolutionary optimization can also be applied to optimize the sparseness of the network while preserving its performance.
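
A sketch of this Evolutionary Reinforcement Learning setting follows, assuming the off-the-shelf `cma` Python package and a toy tracking task standing in for the actual control benchmarks of the thesis; only the readout weights are exposed to CMA-ES, the reservoir being kept fixed.

    import cma                      # pip install cma
    import numpy as np

    rng = np.random.default_rng(1)
    n = 30                          # small reservoir so that the sketch runs quickly
    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < 0.2)
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-1.0, 1.0, n)

    def episode_cost(w_out, steps=200):
        """Toy episode (placeholder task): the ESN readout steers a 1-D point so that
        it tracks a sinusoid; the cost is the accumulated squared tracking error."""
        x, p, cost = np.zeros(n), 0.0, 0.0
        for t in range(steps):
            ref = np.sin(0.1 * t)
            x = np.tanh(W @ x + w_in * (ref - p))   # feed the tracking error back
            p += 0.05 * float(w_out @ x)            # the evolved readout is the controller
            cost += (p - ref) ** 2
        return cost

    # CMA-ES searches the n output weights only; the reservoir itself stays fixed.
    es = cma.CMAEvolutionStrategy(np.zeros(n), 0.5, {'maxiter': 50})
    while not es.stop():
        candidates = es.ask()
        es.tell(candidates, [episode_cost(np.asarray(w)) for w in candidates])
    best_readout = es.result.xbest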

Alexandre Devert's PhD (defended in May 2009, already mentioned in the Complex Systems SIG) used ESNs as basic controllers for developmental representations (Continuous Cellular Automata).

Cédric Hartland's PhD (defended in Nov. 2009, already mentioned in the Complex Systems SIG) used ESNs as robotic controllers, investigating their memory capacities [56], [88], [3].
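
The memory of an ESN can be assessed in the spirit of Jaeger's short-term memory capacity, i.e., by training one linear readout per delay k to reconstruct the input u(t-k) and summing the squared correlations between targets and outputs. The sketch below illustrates this measure only; it does not claim to reproduce the experimental protocol of the thesis.

    import numpy as np

    rng = np.random.default_rng(2)
    n, density, rho, T, max_delay = 50, 0.2, 0.9, 2000, 40

    W = rng.standard_normal((n, n)) * (rng.random((n, n)) < density)
    W *= rho / np.max(np.abs(np.linalg.eigvals(W)))
    w_in = rng.uniform(-1.0, 1.0, n)

    # Drive the reservoir with i.i.d. uniform inputs and record its states.
    u = rng.uniform(-1.0, 1.0, T)
    x, states = np.zeros(n), np.empty((T, n))
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t])
        states[t] = x

    # One least-squares readout per delay k; capacity = sum of squared correlations.
    capacity = 0.0
    for k in range(1, max_delay + 1):
        X, y = states[k:], u[:-k]                   # reconstruct u(t-k) from x(t)
        w_out = np.linalg.lstsq(X, y, rcond=None)[0]
        capacity += np.corrcoef(X @ w_out, y)[0, 1] ** 2
    print("short-term memory capacity ~", capacity)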

Deep Networks, aimed at the unsupervised and iterative learning of hierarchical structures, are investigated by Ludovic Arnold (PhD starting in Dec. 2009, coll. LIMSI), with preliminary results regarding the generalization behaviour [85].
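
As an illustration of such unsupervised, layer-by-layer learning, a minimal greedy stacking of tied-weight auto-encoders is sketched below; it stands for the general idea only, not for the specific models or datasets under study.

    import numpy as np

    rng = np.random.default_rng(3)

    def train_autoencoder(X, hidden, lr=0.01, epochs=200):
        """One tied-weight auto-encoder layer, trained by batch gradient descent
        on the reconstruction error; returns the learnt encoding matrix."""
        W = rng.standard_normal((X.shape[1], hidden)) * 0.1
        for _ in range(epochs):
            H = np.tanh(X @ W)                  # encode
            E = H @ W.T - X                     # reconstruction error (linear decoder)
            dH = (E @ W) * (1.0 - H ** 2)       # back-propagate through tanh
            W -= lr * (X.T @ dH + E.T @ H) / len(X)
        return W

    # Greedy stacking: each layer is trained, unsupervised, on the codes of the previous one.
    X = rng.standard_normal((500, 20))
    codes, stack = X, []
    for hidden in (10, 5):
        W = train_autoencoder(codes, hidden)
        stack.append(W)
        codes = np.tanh(codes @ W)              # the new representation feeds the next layer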

Genetic Regulatory Network models

Genetic Regulatory Networks (GRNs) were initially proposed by W. Banzhaf (W. Banzhaf, Artificial Regulatory Networks and Genetic Programming, in R. Riolo, Ed., Genetic Programming Theory and Practice 2003, pp. 43–62, Kluwer) as generative models for networks complying with given statistical requirements. TAO has been investigating the evolutionary optimization of GRNs within the GENNETEC Strep project.

This work was consolidated in 2009, showing that GRNs are far more evolvable than randomly-generated networks when it comes to generating small-world [66] or scale-free [18] networks. Further work within GENNETEC aims at bridging the gap between the GRN model and the semantics of Genetic Programming. In collaboration with W. Banzhaf, Memorial University of Newfoundland (Canada), preliminary results in Evolutionary Reinforcement Learning using GRNs as computational units have been obtained ([65], to appear).
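
The kind of topological statistics that such a fitness can rely on is sketched below, assuming the networkx library; a Watts-Strogatz graph merely stands in for a network grown from an evolved GRN genome, and the statistics shown are illustrative rather than the exact fitness functions of [66] and [18].

    import networkx as nx
    import numpy as np

    # Placeholder for a network grown from an evolved GRN genome.
    G = nx.connected_watts_strogatz_graph(n=200, k=6, p=0.1)

    # Small-world signature: high clustering together with a short average path length.
    clustering = nx.average_clustering(G)
    path_length = nx.average_shortest_path_length(G)

    # Scale-free signature: heavy-tailed degree distribution (log-log slope of the counts).
    degrees = np.array([d for _, d in G.degree()])
    values, counts = np.unique(degrees, return_counts=True)
    slope = np.polyfit(np.log(values), np.log(counts), 1)[0]

    print(f"C = {clustering:.3f}, L = {path_length:.3f}, degree log-log slope = {slope:.2f}")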

