Team TAO

Section: New Results

Large and Deep Networks

Participants : Ludovic Arnold, Pierre Allegraud, Sylvain Chevallier, Cédric Gouy-Pailler, Anthony Mouraud, Hélène Paugam-Moisy, Sébastien Rebecchi, Michèle Sebag.

Optimization of deep network architectures

An unsupervised, layer-wise method based on the reconstruction-error criterion has been successfully applied to selecting the minimal size of the successive hidden layers needed for efficient learning in RBM-based deep networks. The method also applies to stacked auto-associators. The dependency between the model selection task and the training effort has been investigated [101], [26].
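
As a purely illustrative sketch of the layer-wise principle (not the procedure published in [101], [26]), the snippet below grows a stack of auto-associators, keeping for each layer the smallest hidden size beyond which the reconstruction error stops improving; the data, training routine, candidate sizes and tolerance are assumptions made for the example.

import numpy as np

def train_autoencoder(X, n_hidden, epochs=100, lr=0.05, seed=0):
    # Train a small one-hidden-layer auto-associator by gradient descent
    # and return its mean squared reconstruction error and hidden code.
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W = rng.normal(scale=0.1, size=(n_in, n_hidden))   # encoder weights
    V = rng.normal(scale=0.1, size=(n_hidden, n_in))   # decoder weights
    for _ in range(epochs):
        H = np.tanh(X @ W)                # hidden representation
        E = H @ V - X                     # reconstruction residual
        gV = H.T @ E / len(X)             # gradient w.r.t. decoder
        gW = X.T @ ((E @ V.T) * (1.0 - H ** 2)) / len(X)  # gradient w.r.t. encoder
        V -= lr * gV
        W -= lr * gW
    H = np.tanh(X @ W)
    return np.mean((H @ V - X) ** 2), H

def select_layer_size(X, candidate_sizes, tolerance=1e-3):
    # Keep the smallest hidden size; stop as soon as a larger layer no longer
    # reduces the reconstruction error by more than the tolerance.
    best_err, best = np.inf, None
    for size in sorted(candidate_sizes):
        err, H = train_autoencoder(X, size)
        if best is None or best_err - err > tolerance:
            best_err, best = err, (size, H)
        else:
            break
    return best

# Greedy layer-wise construction: each selected layer's code feeds the next layer.
data = np.random.default_rng(1).normal(size=(200, 30))
layer_sizes = []
for _ in range(3):
    size, data = select_layer_size(data, candidate_sizes=range(2, 21, 2))
    layer_sizes.append(size)
print("selected hidden layer sizes:", layer_sizes)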

Simulation of large spiking neuron networks

The DAMNED simulator is a “Distributed And Multithreaded Neural Event-Driven” development framework suitable for modeling dynamic interactions within large spiking neuron networks. A hypothesis from neuroscience about the network architecture controlling saccadic eye movements has been tested over time thanks to DAMNED [84].
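
For background, event-driven simulation means that computation is triggered by spike events rather than by a fixed time step. The sketch below is only an illustration of that principle and does not use DAMNED's actual API: spike deliveries are drawn from a priority queue and applied to simple leaky integrate-and-fire neurons; the neuron model, delays and parameters are assumptions.

import heapq
import math

# Minimal event-driven simulation of leaky integrate-and-fire (LIF) neurons:
# spike deliveries are events (time, target) processed in chronological order,
# so computation happens only when and where spikes occur.

THRESHOLD, TAU, DELAY, WEIGHT = 1.0, 20.0, 1.5, 1.2

class Neuron:
    def __init__(self):
        self.v = 0.0            # membrane potential
        self.last_t = 0.0       # time of the last received spike

    def receive(self, t, w):
        # exponential leak since the last event, then integrate the new spike
        self.v = self.v * math.exp(-(t - self.last_t) / TAU) + w
        self.last_t = t
        if self.v >= THRESHOLD:
            self.v = 0.0        # reset after firing
            return True
        return False

def simulate(n_neurons, synapses, initial_spikes, t_max):
    neurons = [Neuron() for _ in range(n_neurons)]
    events = list(initial_spikes)          # (time, target) pairs
    heapq.heapify(events)
    fired = []
    while events:
        t, tgt = heapq.heappop(events)
        if t > t_max:
            break
        if neurons[tgt].receive(t, WEIGHT):
            fired.append((round(t, 2), tgt))
            for post in synapses.get(tgt, ()):
                heapq.heappush(events, (t + DELAY, post))
    return fired

# A small ring of 5 neurons: a single spike propagates around the ring.
ring = {i: [(i + 1) % 5] for i in range(5)}
print(simulate(5, ring, initial_spikes=[(0.0, 0)], t_max=12.0))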

Dynamic organization at a large network scale

Starting from a minimal decision-making model at the node level, based on two spiking neurons, a macro-scale organization emerges over time in a large sparse network modeling an ant colony. The model reproduces the different synchronization regimes observed by biologists in the division of labor within real insect societies [103], [59].
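
To make the node-level ingredient concrete, here is a purely illustrative sketch of a two-neuron decision unit (a “work” neuron and a “rest” neuron racing to threshold under mutual inhibition); the dynamics and parameters are assumptions for illustration and do not reproduce the model of [103], [59].

import numpy as np

def decide(stimulus_bias, steps=500, dt=1.0, tau=20.0,
           threshold=1.0, inhibition=0.5, noise=0.05, seed=None):
    # Two leaky neurons, index 0 = "work" and 1 = "rest", receive a noisy
    # drive biased toward one option and inhibit each other; whichever
    # neuron reaches the threshold first determines the node's decision.
    rng = np.random.default_rng(seed)
    v = np.zeros(2)                                    # membrane potentials
    drive = np.array([0.5 + stimulus_bias, 0.5 - stimulus_bias]) * 0.08
    for _ in range(steps):
        noise_in = noise * rng.normal(size=2)
        v += dt * (-v / tau + drive + noise_in)        # leaky integration
        v -= inhibition * v[::-1] * dt / tau           # mutual inhibition
        v = np.maximum(v, 0.0)
        if v.max() >= threshold:
            return int(np.argmax(v))                   # 0 = "work", 1 = "rest"
    return None                                        # no decision in time

# With a positive bias the "work" neuron usually wins the race.
choices = [decide(0.1, seed=s) for s in range(20)]
print(choices)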

Sébastien Rebecchi and Sylvain Chevallier joined the team as post-docs in September, working on the ASAP ANR project. Their role is to investigate new tracks (e.g., sparse coding and compressed sensing) for improving learning in deep networks.

