Section: New Results
Dynamic Neural Fields
Participants : Frédéric Alexandre, Yann Boniface, Laurent Bougrain, Mauricio Cerda, Hervé Frezza-Buet, Bernard Girau, Thomas Girod, Axel Hutt, Mathieu Lefort, Nicolas Rougier, Wahiba Taouali, Thierry Viéville, Thomas Voegtlin.
The work reported this year represents both extensions of previous work and new results linked to the notion of neural population, considered at (i) a formal level (theoretical studies of neural fields), (ii) a numerical level (interface with the spike level) and (iii) a more embodied one (implementations).
Formal Level
Synchronous and Asynchronous Computations
Several artificial neuron models are best described by a set of continuous differential equations that define the evolution of some variables over time, e.g. the membrane potential of the neuron. When these models are connected together, we obtain a differential equation system with complex inter-dependent interactions. Solving such a system generally requires numerical integration, since in the vast majority of cases there is no analytical solution. Regardless of the method used, we emphasize that all these numerical schemes require a central clock to synchronize computations. In this context, we would like to study the extent to which this central clock can be removed and computations carried out asynchronously. We have studied this phenomenon in some detail and characterized the relation between noise, synchronous evaluation (the “regular” mathematical integration) and asynchronous evaluation in the case of a simple dual particle system [33]. More generally, we aim at explaining the behavior of a general differential equation system when it is considered as a set of particles that may or may not be iterated synchronously.
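The contrast between the two evaluation schemes can be illustrated by a minimal sketch (our own toy example, not the dual particle system of [33]): two coupled variables are either updated together from the same snapshot, or one at a time in random order using the latest available values.

```python
# Minimal sketch: synchronous vs. asynchronous Euler evaluation of a coupled
# two-variable system dx/dt = -x + w*y, dy/dt = -y + w*x (illustrative only).
import numpy as np

def synchronous(x, y, w=0.5, dt=0.01, steps=1000):
    """All variables are updated from the same snapshot (central clock)."""
    for _ in range(steps):
        dx = -x + w * y
        dy = -y + w * x
        x, y = x + dt * dx, y + dt * dy
    return x, y

def asynchronous(x, y, w=0.5, dt=0.01, steps=1000, seed=0):
    """One randomly chosen variable is updated at a time, using the latest values."""
    rng = np.random.default_rng(seed)
    for _ in range(steps):
        if rng.random() < 0.5:
            x = x + dt * (-x + w * y)
        else:
            y = y + dt * (-y + w * x)
    return x, y

print(synchronous(1.0, -1.0))   # both schemes relax toward the same fixed point,
print(asynchronous(1.0, -1.0))  # but along different trajectories
```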
Algorithmic adjustment of neural field parameters.
We have completed this study of parameter adjustment for neural-field computation maps in the discrete case. Algorithmic mechanisms have been proposed for choosing a suitable set of parameters that both (i) guarantee the stability of the computation and (ii) tune the shape of the output map. These results do not “prove” the existence of stable bump solutions, which is already known and has been extensively verified numerically, but they allow the related parameters to be computed algorithmically. The results apply to scalar and vectorial neural fields, thus bypassing the inherent limitations of mean-frequency models and also taking the laminar structure of the cortex or high-level representations of cortical computations into account [50].
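To make the role of these parameters concrete, the following sketch simulates a discrete 1-D neural field whose bump shape and stability depend on the lateral kernel amplitudes and widths; the values below are purely illustrative and the actual adjustment algorithms are those of [50].

```python
# Illustrative discrete neural field: the bump obtained for a localized stimulus
# depends on the kernel parameters A_exc, s_exc, A_inh, s_inh (chosen arbitrarily here).
import numpy as np

n, dx = 100, 1.0 / 100
positions = np.linspace(0.0, 1.0, n)

def gaussian(d, s):
    return np.exp(-d**2 / (2.0 * s**2))

d = np.abs(positions[:, None] - positions[None, :])
A_exc, s_exc, A_inh, s_inh = 1.5, 0.05, 0.75, 0.20    # lateral kernel parameters to adjust
W = A_exc * gaussian(d, s_exc) - A_inh * gaussian(d, s_inh)

u = np.zeros(n)                                       # membrane potentials
stimulus = gaussian(np.abs(positions - 0.5), 0.08)    # localized input
tau, dt = 1.0, 0.05
for _ in range(200):
    f = np.maximum(u, 0.0)                            # firing rate (rectified transfer)
    u += dt / tau * (-u + W @ f * dx + stimulus)

print("units in the bump:", int(np.sum(u > 0.5 * u.max())))
```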
Numerical Level
Learning.
Feed-forward, feed-back and lateral information flows can be identified in cortically-inspired neural fields. Beyond the question of their respective effects stands the question of the learning rules that can be associated with each of them and of their interactions. We are studying this question with learning rules inspired by the BCM paradigm (Bienenstock, E.L., Cooper, L.N. and Munro, P.W.; Theory for the Development of Neuron Selectivity: Orientation Specificity and Binocular Interaction in Visual Cortex. J Neurosci (2); 1982), in which a dynamic threshold between LTP and LTD (Long Term Potentiation and Depression, respectively) depends on the history of activations. Such learning rules are local and unsupervised, which is very interesting in the framework of neural fields, and have demonstrated the ability to learn orientation selectivity from feed-forward and lateral connectivity. In this framework, we have also shown that the feed-back flow can be used as a modulatory influence, for example to signal the interest of some stimuli and accelerate their learning [27].
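The following is a minimal sketch of a BCM-type rule with a sliding threshold; the rule actually used in [27] may differ in its details, but the mechanism is the same: LTP occurs when the post-synaptic activity exceeds the threshold, LTD below it, and the threshold itself tracks a running average of the squared activity.

```python
# Local, unsupervised BCM-style update with a sliding LTP/LTD threshold.
import numpy as np

def bcm_step(w, x, theta, eta=0.005, tau_theta=200.0):
    """One BCM update of the feed-forward weights w for input x."""
    y = float(w @ x)                             # post-synaptic activity
    w = w + eta * y * (y - theta) * x            # LTP if y > theta, LTD otherwise
    theta = theta + (y**2 - theta) / tau_theta   # sliding threshold (activation history)
    return w, theta

rng = np.random.default_rng(0)
w, theta = rng.normal(scale=0.1, size=10), 1.0
for _ in range(5000):
    w, theta = bcm_step(w, rng.random(10), theta)
print(np.round(w, 2), round(theta, 2))
```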
Qualitatively quantifying neural fields.
In collaboration with Supelec, we have proposed to define the properties of a neural field through a set of behaviors it should display when facing certain characteristic inputs. Accordingly, we have defined statistical measurements to quantify the performance of a neural field in such situations. On this basis, we have proposed a new neural field model [18] particularly suited to implementing the competitive processing required in a map performing self-organization. With the addition of Kohonen-like learning rules on the input information flow, we therefore obtain a self-organization process emerging from purely distributed computations, which was not possible with Kohonen-like self-organizing maps. Future work concerns the implementation of the model on a truly distributed architecture and its extension to multi-map multimodal learning.
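As a rough sketch of the idea (our own simplification, not the model of [18]), the winner-take-all step of a Kohonen map can be replaced by the activity profile of a small neural field, so that the learning gain of each unit comes from distributed competition rather than an explicit argmax.

```python
# Kohonen-like learning driven by a 1-D neural field competition (illustrative only).
import numpy as np

n_units, dim = 50, 2
rng = np.random.default_rng(1)
codebook = rng.random((n_units, dim))                 # prototypes of a 1-D map

pos = np.linspace(0.0, 1.0, n_units)
d = np.abs(pos[:, None] - pos[None, :])
W = 2.0 * np.exp(-d**2 / (2 * 0.05**2)) - 1.0 * np.exp(-d**2 / (2 * 0.3**2))

def field_competition(inp, steps=50, dt=0.1):
    """Relax a small field fed by the similarity of each prototype to the input."""
    u = np.zeros(n_units)
    for _ in range(steps):
        f = np.maximum(u, 0.0)
        u += dt * (-u + W @ f / n_units + inp)
    return np.maximum(u, 0.0)

for _ in range(2000):
    x = rng.random(dim)
    inp = 1.0 - np.linalg.norm(codebook - x, axis=1)  # similarity drives the field
    a = field_competition(inp)
    codebook += 0.05 * a[:, None] * (x - codebook)    # field activity acts as learning gain

print(np.round(codebook[::10], 2))                    # prototypes spread over the input space
```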
Multimodal learning through joint dynamic neural fields
This work relates to the development of coherent multimodal learning for a system with multiple sensory inputs, in order to obtain different maps that are topographically organized (two spatially close neurons respond to close stimuli). We have modified the BCM synaptic rule, a local learning rule, to obtain the self-organization of our neural input maps, and we use a CNFT-based competition to drive the BCM rule. In practice, we introduce a feedback modulation of the learning rule, representing multimodal constraints of the environment. This feedback is obtained using a relaxation between the different layers of the sensory and associative maps of the system.
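Very schematically (the architecture and relaxation below are our own stand-ins, not the actual model), the feedback modulation can be pictured as two sensory units learning with a BCM-like rule whose learning gain is raised when an associative stage reports coherent multimodal activity.

```python
# BCM-like learning with a feedback gain derived from a crude associative stage.
import numpy as np

def bcm(w, x, theta, gain, eta=0.005, tau=200.0):
    y = float(w @ x)                                    # unit response
    w = w + gain * eta * y * (y - theta) * x            # BCM step, scaled by the feedback gain
    theta = theta + (y**2 - theta) / tau                # sliding LTP/LTD threshold
    return w, theta

rng = np.random.default_rng(2)
w_a, w_b = rng.normal(scale=0.1, size=8), rng.normal(scale=0.1, size=8)
theta_a = theta_b = 1.0

for _ in range(3000):
    s = rng.random(8)                                   # common multimodal "cause"
    x_a = s + 0.1 * rng.normal(size=8)                  # modality A observation
    x_b = s + 0.1 * rng.normal(size=8)                  # modality B observation
    assoc = 0.5 * (w_a @ x_a + w_b @ x_b)               # crude stand-in for the associative relaxation
    gain = 1.0 + np.tanh(assoc)                         # coherent activity accelerates learning
    w_a, theta_a = bcm(w_a, x_a, theta_a, gain)
    w_b, theta_b = bcm(w_b, x_b, theta_b, gain)
```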
Dynamic Neural Field using spikes
We have been studying a spiking, diffusion-based extension of lateral-inhibition-type neural field models. The major breakthrough of this work is the possibility to use both spiking neurons (instead of regular rate-coding neuron models) and a restricted pattern of lateral connectivity. The suppression of the usual global inhibition signal is compensated by a diffusion phenomenon that allows information to be transported from one point to another. In the end we obtain a model of fast visual tracking that is intended to be implemented on FPGA hardware, allowing real-time multi-target tracking.
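A rough sketch, under our own assumptions rather than the published model, is given below: integrate-and-fire units on a 1-D map where a local diffusion term and purely local inhibition replace global inhibition, which is enough to follow a slowly moving input.

```python
# Spiking units with local diffusion and local inhibition tracking a moving target.
import numpy as np

n = 128
v = np.zeros(n)                                   # membrane potentials
trace = np.zeros(n)                               # low-pass filtered spike activity
dt, tau, v_thresh, D, w_inh = 1.0, 20.0, 1.0, 0.2, 0.5
x = np.arange(n)

for t in range(300):
    target = 40.0 + 0.1 * t                       # target drifting to the right
    stim = 1.5 * np.exp(-((x - target) ** 2) / (2 * 4.0 ** 2))
    lap = np.roll(v, 1) - 2 * v + np.roll(v, -1)  # discrete Laplacian: local diffusion
    v += dt / tau * (-v + stim) + dt * D * lap
    spikes = v >= v_thresh
    v[spikes] = 0.0                               # reset the units that fired
    v -= w_inh * dt * (np.roll(spikes, 1) + np.roll(spikes, -1))  # local inhibition only
    trace = 0.9 * trace + spikes

print("estimated position:", int(np.argmax(trace)), "/ true position:", int(target))
```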
Embodied Level
Motion detection.
We develop bio-inspired neural architectures to detect, extract and segment the direction and speed components of the optical flow within sequences of images. The structure of these models derives from the processing of optical flow in the human brain: it begins in the retina and is processed at every stage of the magnocellular pathway through the thalamus and the cortex. Our models mostly handle the properties of three cortical areas called V1 (primary visual area), MT (middle temporal) and MST (middle superior temporal): the MT area detects patterns of movement, while spatio-temporal integration is performed at the local level by V1 and at the global level by both MT and MST. This work faces many concrete difficulties, such as specular effects, shadowing, texturing, occlusion and aperture problems. Moreover, the complexity of this task must be handled within the constraint of real-time processing. Recent work has focused on two extensions of our initial models; a schematic illustration of the elementary motion-correlation stage is given after the list below.
- We have developed a bio-inspired parallel architecture to perform motion detection, providing a wide range of operation and avoiding the error propagation associated with the usual serial multiscale approaches [24]. Our architecture is inspired by biological experiments showing that human motion perception seems to follow a parallel multiscale scheme.
- We now address the complex task of recognizing visual motion patterns such as walking, fighting and face gestures, among others. Based on experiments in psychophysics, electrophysiological data and functional imaging techniques, we show that several key features of the human recognition of visual motion patterns may be modeled using 2D asymmetric neural fields [34]. Our model implements template-based recognition, which may be related to the existence of units or populations acting as snapshots in the dorsal pathway. To validate our model on real video sequences, we have defined a setup for the acquisition of synchronized 2D and 3D sequences based on Vicon cameras. This work has been performed in relation with our STIC-AmSud BAVI project described in § 8.4.
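The toy illustration announced above shows the kind of local measurement that V1-like stages provide before MT/MST integrate it over space and scales: a Reichardt-type detector correlating neighbouring pixels across successive frames. It is our own example, not the architecture of [24] or [34].

```python
# Elementary Reichardt-type motion detector on a 1-D image sequence.
import numpy as np

def reichardt(frames):
    """Opponent correlation of neighbouring pixels between successive frames."""
    f = np.asarray(frames, dtype=float)
    rightward = f[:-1, :-1] * f[1:, 1:]    # value at x now, at x+1 in the next frame
    leftward  = f[:-1, 1:]  * f[1:, :-1]   # value at x+1 now, at x in the next frame
    return rightward - leftward            # > 0 where the pattern drifts to the right

# A bright bar drifting rightward by one pixel per frame.
frames = [np.roll([0, 0, 1, 1, 0, 0, 0, 0], t) for t in range(5)]
print(reichardt(frames).mean())            # positive value: rightward motion detected
```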
Modeling the superior colliculus by means of a neural field.
In the context of the ANR MAPS project (cf. § 8.2), we have been studying the superior colliculus in close collaboration with Laurent Goffart from the Institut de Neurosciences Cognitives de la Méditerranée. Considering the cortical magnification induced by the non-homogeneous distribution of rods and cones on the retinal surface, we modeled the superior colliculus using a dynamic neural field that may explain the stereotyped nature of collicular activity. An experimental setup using monkeys is currently being built to test the model predictions. In the same context, in collaboration with the Maia team, we are studying the structure of the cerebellum in order to understand how the motor command issued by the colliculus can be adjusted or modulated through learning.
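For illustration only, the magnification effect can be sketched with a complex-logarithmic retinotopic mapping of the kind classically used to model collicular maps; the constants below are arbitrary and this is not the team's actual model.

```python
# Complex-log mapping from retinal coordinates to a collicular-like map (illustrative).
import numpy as np

def retina_to_colliculus(R, phi, A=3.0, Bu=1.4, Bv=1.8):
    """Map retinal eccentricity R (deg) and direction phi (rad) to map coordinates (mm)."""
    z = R * np.exp(1j * phi)
    w = np.log((z + A) / A)        # complex-log mapping: the fovea is strongly magnified
    return Bu * w.real, Bv * w.imag

for R in (2.0, 10.0, 40.0):
    u, _ = retina_to_colliculus(R, 0.0)
    print(f"eccentricity {R:5.1f} deg -> u = {u:.2f} mm on the map")
```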
Modeling of neural activity during anaesthesia.
Anaesthesia plays an important role in medical surgery, though its neural mechanisms are still poorly understood. Besides several molecular and behavioral phenomena, the administration of anaesthetic agents affects the power spectrum of electroencephalographic (EEG) activity in a characteristic way. This theoretical study aims to model the changes of the EEG power spectrum as a function of the concentration of the specific anaesthetic agent propofol. The work developed a neural model involving two types of neurons and synapses, taking into account the synaptic effect of propofol. The mathematical derivation of the power spectrum allows for the investigation of physiological parameters that reproduce the experimental effect of propofol. Several mathematical conditions on physiological parameters have been derived, and the EEG power spectrum during the administration of different concentration levels of propofol has been modeled successfully.
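A highly simplified sketch of the modeling idea (not the study's actual model or parameters) is shown below: propofol is assumed to prolong the decay time and increase the amplitude of inhibitory synaptic responses, which reshapes the power spectrum of a noise-driven linear population model.

```python
# EEG-like power spectrum of excitatory/inhibitory synaptic filters under an
# assumed propofol effect on the inhibitory time constant and amplitude.
import numpy as np

def power_spectrum(freqs, tau_e=5e-3, tau_i=20e-3, a_e=1.0, a_i=1.0, propofol=1.0):
    """|transfer function|^2 of alpha-function synapses driven by white noise."""
    w = 2 * np.pi * freqs
    tau_i, a_i = tau_i * propofol, a_i * propofol     # assumed propofol effect
    H_e = a_e / (1 + 1j * w * tau_e) ** 2             # excitatory synaptic filter
    H_i = a_i / (1 + 1j * w * tau_i) ** 2             # inhibitory synaptic filter
    return np.abs(H_e - H_i) ** 2                     # output power (white-noise drive)

freqs = np.linspace(1, 40, 40)
baseline = power_spectrum(freqs)
sedated = power_spectrum(freqs, propofol=2.0)         # higher concentration
print("relative power change at", freqs[[1, 10, 25]], "Hz:",
      np.round(sedated[[1, 10, 25]] / baseline[[1, 10, 25]], 2))
```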