Section: Overall Objectives

The goal of our research is to study the properties and computational capacities of distributed, numerical and adaptive networks, as observed in neuronal systems. In this context, we aim to understand how complex high-level properties may emerge from such systems, including their dynamical aspects. In close reference to our domain of inspiration, Neuroscience, this study is carried out at three scales: the neuron, the population, and the cerebral region.

  1. Neurons: At the microscopic level, our approach relies on precise and realistic models of neurons and of their dynamics, analyzing the neural code in small networks of spiking neurons (cf. § 3.2).

  2. Populations of neurons: At the mesoscopic level, the characteristics of a local circuit are integrated into a high-level unit of computation, i.e. a dynamic neural field (cf. § 3.3). This level of description allows us to study larger neuronal systems, such as cerebral maps, as observed in sensorimotor loops.

  3. Higher-level functions: At the macroscopic level, the analysis of physiological signals and psychometric data is related to behavioral observations. This is for instance the case with electroencephalographic (EEG) recordings, which allow us to measure brain activity, including in brain-computer interface paradigms (cf. § 3.4).
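The dynamic neural field mentioned at the mesoscopic level (item 2) is classically formalized as an integro-differential equation over a field of activity u(x,t). The sketch below uses standard notation (time constant τ, lateral interaction kernel w, firing-rate function f, input s, resting level h), which is assumed here rather than taken from this report:

```latex
\tau \, \frac{\partial u(x,t)}{\partial t} \;=\; -\,u(x,t)
  \;+\; \int_{\Omega} w(x-y)\, f\big(u(y,t)\big)\, dy
  \;+\; s(x,t) \;+\; h
```

The convolution with w implements local excitation and broader inhibition across the map, which is what lets stable activity bumps emerge from purely local, distributed computation.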

Importantly, these levels are not studied independently: we target progress at the interfaces between them. The microscopic/mesoscopic interface is where we consider both analog and asynchronous/event-based mechanisms and derive computational principles that are coherent across scales. The mesoscopic/macroscopic interface is where we study the emergence of functions from local computations, by means of information-flow analysis and the study of interactions.

Learning is a central issue at each level. At the microscopic level, pre-/post-synaptic interactions are studied in the framework of Spike-Timing-Dependent Plasticity (STDP). At the mesoscopic level, spatial and temporal patterns of activity in neural populations are the cues to be memorized (e.g. via the BCM rule). At the macroscopic level, behavioral skills are acquired over time, through incremental strategies, e.g. conditioning, unsupervised learning or reinforcement learning.
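As a minimal sketch of the microscopic learning rule, pair-based STDP changes a synaptic weight according to the timing difference between pre- and postsynaptic spikes: causal pairings (pre before post) potentiate, anti-causal ones depress, with exponentially decaying magnitude. The amplitudes and time constants below are illustrative placeholders, not values from the report:

```python
import math

# Illustrative STDP parameters (not taken from the report)
A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # decay time constants (ms)

def stdp_dw(dt_ms):
    """Weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre: positive when the presynaptic spike
    precedes the postsynaptic one (causal pairing -> potentiation).
    """
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU_PLUS)
    return -A_MINUS * math.exp(dt_ms / TAU_MINUS)
```

The asymmetry of the window (depression slightly stronger than potentiation) is a common stabilizing choice; the exact shape depends on the neuron model and data being fitted.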

Our research is linked to several scientific domains (cf. § 3.1). In computer science, we devise novel paradigms of distributed spatial computation and aim to explain their properties, both intrinsic (e.g. robustness) and functional (e.g. self-organization). In cognitive science, our models are used to emulate various functions (e.g. attention, memory, sensorimotor coordination), which are thereby fully accounted for by purely distributed asynchronous computations. In neuroscience, we share with biologists not only data analysis but also frameworks for testing biological and computational assumptions, in order to validate or falsify existing models. This is the best way to increase knowledge and improve methods in both fields.

In order to explore these kinds of bio-inspired computations, the key point is to remain consistent with biological and ecological constraints. Among computational constraints, computations have to be truly distributed, with no central clock or shared memory. The emerging cognition has to be situated (cf. § 3.6), i.e. it must result from long-term interaction with the environment. As a consequence, our models are best validated on parallel computing architectures (e.g. FPGAs, clusters, cf. § 3.5) and embodied in systems (robots) that interact with their environment (cf. § 3.6).

Accordingly, four topics of research have been carried out this year, together with a transversal topic.
