ODYSSEE is a joint project team with the Computer Science Department of the Ecole Normale Supérieure in Paris and the Ecole Nationale des Ponts et Chaussées.

ODYSSEE focuses on computational neuroscience and some of its applications. We try to unveil the principles that govern the functioning of neurons and assemblies thereof, to understand the relations between the anatomy of the human brain and its functions and to use our results to bridge the gap between biological and computational vision. Our work is very mathematical but we make heavy use of computers for numerical experiments and simulations. We have close ties with several top groups in biological neuroscience. We are pursuing the idea that the "unreasonable effectiveness of mathematics" can also be brought to bear on neuroscience.

We conduct research in the following five main areas.

Modeling and simulating single neurons.

Modeling and simulating assemblies of neurons.

Measuring and modeling the anatomical connectivity of the human brain using diffusion magnetic resonance.

Measuring and modeling the functioning of the human brain through its electrical activity, using magneto- and electroencephalography.

Computational and biological vision.

The project of installing a magnetoencephalography (MEG) machine at La Timone, Marseille, has been successfully completed after some six years of effort by the “Laboratoire de Neurophysiologie et de Neuropsychologie, Service de Neurophysiologie Clinique”, directed by Professor Patrick Chauvel, and the Odyssée project team. This will reinforce the collaboration between INRIA, INSERM and CNRS, and contribute greatly to the creation of a pole of excellence in theoretical and clinical neuroscience in the PACA region. The equipment has been paid for by a combination of sponsors including CNRS, INRIA, INSERM, the local authorities Conseil Général 13, Conseil Général 06, Conseil Régional PACA and Marseille Provence Métropole, the Ministry of Higher Education and Research, and the AP-HM.

Optical imaging: a usually invasive technique that displays a visual correlate of the activity of the cortex. One distinguishes intrinsic from extrinsic optical imaging.

Understanding the principles of information processing in the brain is challenging, both theoretically and experimentally. Computational neuroscience attempts to build models of neurons at a variety of levels: microscopic, i.e. the minicolumn containing of the order of one hundred neurons; mesoscopic, i.e. the macrocolumn containing of the order of 10^4 to 10^5 neurons; and macroscopic, i.e. a cortical area such as the primary visual area V1.

Modeling such assemblies of neurons and simulating their behaviour involves combining the most recent results in neurophysiology with such advanced mathematics as dynamic systems theory, bifurcation theory, probability theory, stochastic calculus and statistics, as well as the use of simulation tools.

In order to test the validity of our models we rely heavily on experimental data. These data come from single- or multi-electrode recordings and optical imaging, and are provided by our collaborations with neurophysiology laboratories such as the UNIC, http://

The Odyssée team works at all three levels. We have proposed two realistic models of single neurons , making use of physiological data and the theory of dynamic systems and bifurcations. At this level of analysis we have also proposed a variety of theoretical tools from the theory of stochastic calculus, and solved an open problem by determining the probability law of the spike intervals for a simple but realistic neuron model, the leaky integrate-and-fire neuron with exponentially decaying synaptic currents . We have also provided a mathematical analysis, through bifurcation theory, of the behaviour of a particular mesoscopic model , the one due to Jansen and Rit .
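The leaky integrate-and-fire model with exponentially decaying synaptic currents mentioned above can be sketched numerically as follows; the parameter values and the periodic synaptic volleys are illustrative choices, not those of the cited study:

```python
import numpy as np

# Leaky integrate-and-fire neuron with an exponentially decaying synaptic
# current:  tau_m dV/dt = -(V - V_rest) + R*I_syn,  tau_s dI_syn/dt = -I_syn.
# A spike is emitted when V reaches v_th, after which V is reset.
def simulate_lif(tau_m=20.0, tau_s=5.0, v_rest=-70.0, v_th=-50.0,
                 v_reset=-70.0, r=1.0, i0=200.0, dt=0.1, t_max=500.0):
    n = int(t_max / dt)
    volley_every = int(50.0 / dt)       # one synaptic volley every 50 ms
    v, i_syn = v_rest, 0.0
    spikes = []
    for k in range(n):
        if k % volley_every == 0:
            i_syn += i0                 # incoming synaptic event
        v += dt * (-(v - v_rest) + r * i_syn) / tau_m
        i_syn += dt * (-i_syn) / tau_s  # exponential decay of the current
        if v >= v_th:
            spikes.append(k * dt)
            v = v_reset
    return spikes

spike_times = simulate_lif()
```

Each volley depolarizes the membrane well past threshold, so the neuron emits roughly one spike per volley; the spike interval statistics studied in the cited work concern the stochastic version of this model.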

We have also begun work at the macroscopic level, in particular for modeling visual areas, see below. For this particular level, information about the anatomical connectivity, such as that provided by diffusion imaging techniques, is of fundamental importance, see below.

Because the relationship between brain structure and brain function is fundamental to neuroscience, developing techniques to recover the anatomical connectivity of the in vivo brain is of the utmost importance and a major goal to achieve if one wants to understand how the brain works.

Diffusion Magnetic Resonance Imaging (DMRI) not only gives scientists access to data on local white matter architecture; it is also the only non-invasive method currently available to explore the microstructure of biological tissues such as the white matter of the human brain. This is why our research deals with the development of new processing tools for DMRI. Because of the complexity of the data, this imaging modality raises a large number of mathematical and computational challenges. We have therefore started by developing new algorithms relying on Riemannian geometry, differential geometry, partial differential equations and front propagation techniques to correctly and efficiently estimate, regularize, segment and process Diffusion Tensor MRI (DT-MRI) (see ,

Diffusion Tensor Magnetic Resonance Imaging is an MRI technique that measures, in vivo and non-invasively, the restricted diffusion of water molecules in a biological tissue. A tensor describes the 3D shape of the diffusion at each voxel.
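As an illustration of what the tensor encodes, the following sketch (with a made-up tensor for a coherent fiber bundle) extracts two standard quantities, the fractional anisotropy and the principal diffusion direction:

```python
import numpy as np

# Toy 3x3 diffusion tensor (units mm^2/s): strong diffusion along x,
# as in a coherent fiber bundle. Values are illustrative.
D = np.array([[1.7e-3, 0.0,    0.0],
              [0.0,    0.3e-3, 0.0],
              [0.0,    0.0,    0.3e-3]])

evals, evecs = np.linalg.eigh(D)            # eigen-decomposition of the tensor
md = evals.mean()                           # mean diffusivity
# Fractional anisotropy: 0 for isotropic diffusion, close to 1 for a
# strongly oriented bundle.
fa = np.sqrt(1.5 * np.sum((evals - md) ** 2) / np.sum(evals ** 2))
principal_dir = evecs[:, np.argmax(evals)]  # estimated fiber direction
```

For this tensor the fractional anisotropy is about 0.8 and the principal direction aligns with the x axis, the assumed fiber orientation.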

However, due to the limited current resolution of diffusion-weighted (DW) MRI, one third to two thirds of the imaging voxels in human brain white matter contain crossing fiber bundles. It is therefore also of the utmost importance to tackle the problem of recovering crossing fibers and to develop techniques that go beyond the limitations of diffusion tensor imaging (DTI). We are contributing towards these objectives, and our recent work deals with the development of local reconstruction methods, segmentation and tractography algorithms able to infer multiple crossing fibers from diffusion data. To do so, high angular resolution diffusion imaging (HARDI) is used to measure diffusion images along several directions. Q-ball imaging (QBI) is a recent HARDI technique that reconstructs the diffusion orientation distribution function (ODF), a spherical function whose maxima are aligned with the underlying fiber directions at every voxel. QBI and the diffusion ODF play a central role in our work, which has focused on a robust and linear spherical harmonic estimation of the HARDI signal and on a regularized, fast and robust analytical QBI solution that outperforms the state-of-the-art numerical ODF techniques. These contributions are fundamental and have already started to have an impact on the diffusion MRI, HARDI and Q-ball imaging community. They are the basis of our probabilistic and deterministic tractography algorithms exploiting the full distribution of the fiber ODF (see ,
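The idea that the ODF maxima align with the underlying fiber directions can be illustrated with a toy two-fiber ODF; the peak shape and concentration parameter below are arbitrary choices, not the team's reconstruction method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Sample directions on the sphere and evaluate a toy two-fiber ODF:
# the sum of two Watson-like peaks along d1 (x axis) and d2 (y axis),
# mimicking a 90-degree fiber crossing in one voxel.
n = 5000
u = rng.normal(size=(n, 3))
u /= np.linalg.norm(u, axis=1, keepdims=True)
d1, d2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
kappa = 20.0                         # arbitrary peak concentration
odf = np.exp(kappa * (u @ d1) ** 2) + np.exp(kappa * (u @ d2) ** 2)

best = u[np.argmax(odf)]             # a global maximum of the sampled ODF
aligned = max(abs(best @ d1), abs(best @ d2))
```

The sampled maximum of the ODF recovers one of the two simulated fiber directions, which is exactly the property that the tractography algorithms exploit.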

Q-Ball Imaging is a HARDI method that measures apparent diffusion coefficients along many directions distributed almost isotropically on the surface of a sphere.

High Angular Resolution Diffusion Imaging allows apparent diffusion coefficients to be measured along a large number of directions, poses no assumptions on the underlying diffusion process, and is capable of detecting the presence of multiple diffusion directions within an individual voxel.

The Orientation Distribution Function describes the probability distribution for a water molecule to displace in a given direction.

Overall, we are now able to show local reconstruction, segmentation and tracking results in complex fiber regions with known fiber crossings, on simulated HARDI data, on a biological phantom and on multiple human brain datasets. Most current DTI-based methods neglect these complex fiber configurations, which might lead to wrong interpretations of brain anatomy and function.

In order to acquire a better understanding of brain mechanisms and to improve the diagnosis of neurological disorders, we are also interested in applying our tools to important neuroscience problems: the analysis of the connections between the cerebral cortex and the basal ganglia, implicated in motor tasks; the study of the anatomo-functional network of the human visual cortex; and the reconstruction of the transcallosal fibers intersecting the corona radiata and the superior longitudinal fasciculus, regions usually neglected by most DTI-based methods and recovered thanks to our ODF-based probabilistic tractography. Our work is done in collaboration with the Center for Magnetic Resonance Research of the University of Minnesota (Minneapolis), the centre IRMf of the hospital La Timone (Marseille), the Centre for NeuroImaging Research (CENIR - Pitié-Salpêtrière - Paris), the Max Planck Institute for Human Cognitive and Brain Sciences (Leipzig, Germany) and the Montreal Neurological Institute (McGill - Montréal).

Electroencephalography (EEG) and Magnetoencephalography (MEG) are two non-invasive techniques for measuring (part of) the electrical activity of the brain. While EEG is an old technique (Hans Berger, a German neuropsychiatrist, measured the first human EEG in 1929), MEG is a rather new one: the first measurements of the magnetic field generated by the electrophysiological activity of the brain were made in 1968 at MIT by D. Cohen. Nowadays, EEG is relatively inexpensive and commonly used to detect and qualify neural activities (epilepsy detection and characterisation, neural disorder qualification, BCI, ...). MEG is, comparatively, much more expensive, as SQUIDs work in very challenging conditions (at liquid helium temperature) and a specially shielded room must be used to separate the signal of interest from the ambient noise. However, as it provides a vision complementary to that of EEG and is less sensitive to the head structure, it also carries great hopes, and more and more MEG machines are being installed throughout the world. INRIA and Odyssée have participated in the acquisition of one such machine, which has just been installed in the hospital "La Timone" in Marseille, see above.

MEG and EEG can be measured simultaneously (M/EEG) and reveal complementary properties of the electrical fields. The two techniques have temporal resolutions of about a millisecond, which is the typical granularity of the measurable electrical phenomena that arise in the brain. This high temporal resolution is what makes MEG and EEG attractive for the functional study of the brain. The spatial resolution, on the contrary, is somewhat poor, as only a few hundred data points can be acquired simultaneously (about 300-400 for MEG and up to 256 for EEG). MEG and EEG are complementary with fMRI and SPECT, which provide a very good spatial resolution but a rather poor temporal one (about a second for fMRI and a minute for SPECT). Contrary to fMRI, which “only” measures a haemodynamic response linked to the metabolic demand, MEG and EEG measure a direct consequence of the electrical activity of the brain: it is generally admitted that the measured MEG and EEG signals correspond to the variations of the post-synaptic potentials of the pyramidal cells in the cortex. Pyramidal neurons compose approximately 80% of the neurons of the cortex, and at least about 50,000 active such neurons are needed to generate a measurable signal.

While the few hundred temporal curves obtained using M/EEG have a clear clinical interest, they only provide partial information about the localisation of the sources of the activity (as the measurements are made on or outside the head). Thus the practical use of M/EEG data raises various problems that are at the core of the Odyssée research on this topic:

First, as acquisition is continuous at rates of up to 1 kHz, the amount of data for each experiment is huge. Data selection and reduction (finding the interesting time instants or frequencies) and pre-processing (removing artifacts, enhancing the signal-to-noise ratio, ...) are currently largely done manually. Making better and more systematic use of the measurements is an important step towards optimally exploiting M/EEG data .

With a proper model of the head and of the sources of the brain's electromagnetic activity, it is possible to simulate the electrical propagation and to reconstruct sources that can explain the measured signal. Proposing better models , and means to calibrate them so as to obtain better reconstructions, are other important aims of our work.
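Source reconstruction from a linear forward model can be sketched in its simplest form, a minimum-norm (pseudo-inverse) estimate on a random toy gain matrix; real M/EEG forward models are of course computed from the head geometry, which this sketch ignores:

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy linear forward model: m = G q + noise, with many more sources than
# sensors, so the inverse problem is under-determined. G is random here,
# standing in for a real gain matrix computed from a head model.
n_sensors, n_sources = 30, 100
G = rng.normal(size=(n_sensors, n_sources))
q_true = np.zeros(n_sources)
q_true[10] = 1.0                       # one focal active source
m = G @ q_true + 0.01 * rng.normal(size=n_sensors)

q_hat = np.linalg.pinv(G) @ m          # minimum-norm source estimate
strongest = int(np.argmax(np.abs(q_hat)))
```

Even though the system is under-determined, the minimum-norm estimate concentrates on the true active source; more refined inverse methods mainly differ in the prior replacing the minimum-norm constraint.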

Finally, it is our goal to exploit the temporal resolution of M/EEG and apply the various methods we have developed to better understand some aspects of brain functioning, and/or to extract more subtle information from the measurements. This is interesting not only as a cognitive goal; it also serves the purpose of validating our algorithms and can lead to the use of such methods in the field of Brain-Computer Interfaces. To be able to conduct such experiments, an EEG lab is being set up at Odyssée.

Another scientific focus of the team is the combined study of computer and biological vision. We think that a more detailed knowledge of visual perception in humans and non-human primates can have an impact on algorithm design and performance evaluation. Thus, we develop so-called "bio-inspired" approaches to model visual tasks. This work is multidisciplinary: it involves knowledge from neuroscience and physiology, it tries to reproduce the percept and what psychophysics reveals about our visual system, and it aims to compete with recent computer vision approaches.

The models that we develop are "bio-inspired" in several respects, depending on the scale chosen for the modeling.

At the microscopic level, see above, we use spikes as a way to emit and encode information, which is certainly one explanation of the extraordinary performance of the visual system. For example, we have developed a spiking retina simulator, called Virtual Retina, see , which reproduces the main architecture and functions of the retinal layers, including contrast gain control , . Our claim is that spikes also represent a new, efficient paradigm for computer vision applications. For example, we developed a spiking model of the V1 and MT layers in order to categorize motion .

At the macroscopic level, we imitate the functional hierarchy of the visual cortex and propose variational frameworks and integro-differential equations as a way to model the activity of cortical layers. For example, considering cortical maps modeled by a variational formulation, we show at a discrete level how to define interactions between cortical columns . More generally, we show how to extend this formalism to model several coupled cortical maps, distinguishing forward and backward links.

We also develop phenomenological models, in order to reproduce a percept. For example, in , a model for motion estimation is proposed, integrating motion with form information, and we show how this model can handle various kinds of stimuli classically used in psychophysics.

Validation of these models is crucial. Since we claim that our models are "bio-inspired", our goal is also to validate them against biology. For example, the spiking retina simulator (Virtual Retina) closely reproduces measurements made on cat ganglion cells in various kinds of experiments. At the perceptual level, our models should also be able to reproduce a percept, which may not be trivial to reproduce with standard computer vision approaches. Computer vision is another way to prove the efficiency of our approaches, and one of our goals is to compare their performance with that of state-of-the-art computer vision approaches. This is currently being done, for example, for action recognition, based on classical image databases.

This modeling activity brings new insight and tools for computer vision. But it also raises fundamental issues that will be the focus of future research. Understanding the neural code is certainly the most challenging one. Since we believe that spikes are one possible explanation of the visual system performance, and represent a new paradigm for computer vision, more fundamental work has to be done to understand how to better exploit the richness of this code.

This work was partially supported by the ARC Diffusion MRI. To learn more, please visit the web page
http://

The algorithms developed within the Odyssée Project team and related to the Diffusion Tensor and Q-Ball imaging are all available upon request from the INRIA source forge (
https://

We now have users from IRISA, VISAGES, Rennes (Barillot et al.), from INSERM, Paris, and Université de Montréal (H. Benali, J.C. Cohen-Adad et al.), from the Salpêtrière Hospital, Paris (S. Lehericy, C. Delmaire, et al.), from Toulouse (Landreau et al.), from Eindhoven University of Technology, from CalTech in the USA, and from other national and international sites.

The current library, comprising geometric and variational methods developed to estimate, regularize, segment and perform tractography in DT (Diffusion Tensor) and HARDI (High Angular Resolution Diffusion) MRI images, was improved in two fundamental ways. First, the build system was changed from Automake to CMake, which added support for the library on Linux, Windows and OS X and systematized the testing procedure. Second, the library was embedded into two open-source high-level languages, TCL and Python.

Within the new library, new visualization schemes for Q-Ball images represented by their spherical harmonic decomposition were developed. These visualization schemes, based on the open-source software tools VTK (the Visualization Toolkit) and the CImg library, greatly improve the speed and the Application Programming Interface (API) usability of the visualization library.

Finally, taking advantage of the high-level language embedding and the visualization improvements, efforts are being made to rapidly include Q-Ball visualization and processing tools as plug-ins for the medical image processing tool Slicer3,
http://

Virtual Retina is under CeCILL C licence:
*APP logiciel Virtual Retina: IDDN.FR.OO1.210034.000.S.P.2007.000.31235*

Virtual Retina is a simulation software, fully described in , , that allows large-scale simulations of biologically-plausible retinas, with customizable parameters, and different possible biological features:

Spatio-temporal linear filter implementing the basic Center/Surround organization of retinal filtering.

Non-linear Contrast Gain Control mechanism providing instantaneous adaptation to the local level of contrast. This stage is modelled as a dynamic shunting feedback from amacrine cells onto bipolar cells; the resulting model reproduces contrast-dependent amplitude and phase non-linearities, as measured in real mammalian retinas by Shapley & Victor (1978).

Spike generation by one or several layers of ganglion cells paving the visual field. Magnocellular and Parvocellular pathways can be modelled in the same framework according to the parameters chosen. Large-scale simulations can be pursued on up to 100,000 spiking cells.

Possibility of a global radial inhomogeneity modeling the foveated organisation of mammalian retinas. In this case, the spatial scales of filtering, and the density of spiking cells, both depend on the eccentricity from the center of the retina.

Possibility to include a basic microsaccades generator at the input of the retina, to account for fixational eye movements.
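A crude sketch of the first stage listed above, Center/Surround filtering as a difference of Gaussians followed by a rectified rate readout; this is only a caricature of Virtual Retina, with arbitrary filter scales:

```python
import numpy as np

# Center/Surround retinal filtering modelled as a difference of Gaussians
# (narrow center minus broad surround), applied to a synthetic luminance
# edge. Filter scales and the rectified readout are illustrative.
def gaussian1d(sigma, radius=8):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def blur(img, sigma):
    """Separable Gaussian blur using 1D convolutions along rows then columns."""
    k = gaussian1d(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

img = np.zeros((64, 64))
img[:, 32:] = 1.0                         # vertical luminance edge

dog = blur(img, 1.0) - blur(img, 3.0)     # center minus surround
rate = np.maximum(dog, 0.0)               # rectified "firing rate"
peak_col = int(np.argmax(rate.sum(axis=0)))  # strongest response column
```

As expected for a center-surround filter, the response is concentrated at the luminance edge rather than in the uniform regions, which is the contrast-enhancing role of the retina's first linear stage.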

This work was partially supported by the EC IP project FP6-015879, FACETS. To learn more, please visit the web page
http://

As part of the SOLAIRE project ("système d'optimisation de la lecture par asservissement de l'image au regard", a system that optimizes reading by slaving the image to the gaze), we are developing a visual aid system to help patients read more easily.

In 2006 we worked on text magnification in an augmented-reality environment where documents were acquired via webcam or scanner devices .

We are currently working on another system which now handles electronic documents, focusing on PDF files. A prototype was installed at La Timone hospital (Marseille) in October 2007.

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.

Recent in vivo experiments have revealed that the action potential threshold depends on the rate of depolarization just preceding the spike. This phenomenon can be reproduced in the Hodgkin-Huxley model. We analyzed spike initiation in the (V, h) phase space, where h is the sodium inactivation variable, and found that the dynamical system exhibits a saddle equilibrium whose stable manifold is the curve of the threshold. We derived an equation for this manifold, which relates the threshold to the sodium inactivation variable. It leads to a differential equation for the threshold depending on the membrane potential, which translates into an integrate-and-fire model with an adaptive threshold. The model accounts well for the variability of the threshold and for the slope-threshold relationship. This work was presented at the CNS 2007 conference .
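The resulting integrate-and-fire model with an adaptive threshold can be caricatured as follows; the threshold dynamics and all parameter values here are illustrative assumptions, not the equation derived in the cited work:

```python
import numpy as np

rng = np.random.default_rng(5)
# Integrate-and-fire neuron whose threshold relaxes toward a value that
# depends on the membrane potential, so that slow depolarizations raise
# the threshold while fast ones reach it first. All values illustrative.
tau_m, tau_th, a = 20.0, 5.0, 0.3
v_rest, th0, v_reset = -70.0, -55.0, -70.0
dt, n = 0.1, 50000
v, th = v_rest, th0
spike_thresholds = []
for _ in range(n):
    i = 20.0 + 60.0 * rng.normal()              # noisy drive (R = 1)
    v += dt * (-(v - v_rest) + i) / tau_m
    th += dt * (th0 + a * (v - v_rest) - th) / tau_th  # adaptive threshold
    if v >= th:
        spike_thresholds.append(th)             # record threshold at spike
        v = v_reset

spike_thresholds = np.array(spike_thresholds)
```

Because the threshold tracks the (smoothed) membrane potential, the value recorded at each spike varies from spike to spike, which is the threshold variability the model is meant to account for.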

This work was done in collaboration with the UNIC lab (CNRS Gif-sur-Yvette) and presented at the SfN meeting in San Diego. It is supported by the ANR. .

Measuring synaptic conductances in central neurons in vivo is essential to understanding the response selectivity of those neurons. Response selectivity can arise from particular timings of excitation and inhibition, which can be amplified by intrinsic conductances. Conductance measurements can be realized by reconstructing current-voltage relations from Vm activity or by using the statistics of the Vm fluctuations. These methods suffer from one severe limitation: it is necessary to accumulate statistics over several trials and different levels of polarization, which means that information about the variability not locked to the stimulus is lost. We developed a method to measure excitatory and inhibitory conductances from single-trial intracellular voltage recordings. The principle of this method is to inject white noise into the membrane of the recorded neuron, and to extract the conductances from the voltage response to this noise. This is possible because the input signal varies faster than the synaptic conductances we try to estimate. The membrane equation defines a set of possible conductances, among which we choose the best according to a regularity criterion. This criterion is computed using wavelet transforms to preserve the high-frequency content of the signal. We successfully tested our method on numerical simulations of cortical neuron models with fluctuating synaptic conductances. The method will be tested using compartmental models of morphologically reconstructed pyramidal neurons with synaptic inputs simulated in the dendrites, as well as in real cortical neurons in vitro. The interest of such a method is that it potentially allows, for the first time, the extraction of excitatory and inhibitory conductances from single trials.
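The principle of extracting conductances from the membrane equation can be illustrated in a deliberately simplified setting (constant conductances, no wavelet regularity criterion): simulate a membrane driven by injected white noise, then recover the conductances by least squares on the discretized membrane equation:

```python
import numpy as np

rng = np.random.default_rng(2)
# Passive membrane: C dV/dt = -g_L(V-E_L) - g_e(V-E_e) - g_i(V-E_i) + I(t).
# g_e, g_i are the "unknowns" we pretend not to know. Units are arbitrary.
C, g_L, E_L, E_e, E_i = 1.0, 0.05, -70.0, 0.0, -80.0
g_e_true, g_i_true = 0.02, 0.06
dt, n = 0.1, 20000
I = 0.5 * rng.normal(size=n)            # injected white-noise current

V = np.empty(n + 1)
V[0] = -65.0
for k in range(n):                      # forward-Euler membrane dynamics
    dV = (-g_L * (V[k] - E_L) - g_e_true * (V[k] - E_e)
          - g_i_true * (V[k] - E_i) + I[k]) / C
    V[k + 1] = V[k] + dt * dV

# Regress the residual synaptic current on the two driving-force terms.
dVdt = np.diff(V) / dt
y = C * dVdt + g_L * (V[:-1] - E_L) - I   # equals -g_e(V-E_e) - g_i(V-E_i)
A = np.column_stack([-(V[:-1] - E_e), -(V[:-1] - E_i)])
g_e_hat, g_i_hat = np.linalg.lstsq(A, y, rcond=None)[0]
```

The two driving-force regressors differ only by a constant offset (E_e - E_i), so the fluctuations induced by the injected noise are exactly what makes the two conductances separately identifiable; the team's actual method additionally handles time-varying conductances via the wavelet-based regularity criterion.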

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.

We introduced a new class of two-dimensional nonlinear integrate-and-fire neuron models that are computationally efficient and biologically plausible, i.e. able to reproduce a wide gamut of behaviours observed in in-vivo or in-vitro recordings of cortical neurons, and studied the bifurcation diagram of the members of this class with respect to the inputs and to the coupling between the membrane potential and the adaptation variable. This class includes, for instance, two models widely used in computational neuroscience: the Izhikevich and the Brette-Gerstner models. We found that they all undergo a Hopf, a saddle-node and a Bogdanov-Takens bifurcation; among other global bifurcations, this system also shows a saddle homoclinic bifurcation curve. We show how this bifurcation diagram generates the most prominent cortical neuron behaviours. This study led us to introduce a new neuron model, the quartic model, able to reproduce, in addition to all the behaviours of the Izhikevich and Brette-Gerstner models, self-sustained subthreshold oscillations, which are of great interest in neuroscience. This work has been accepted for publication in SIAM Applied Math. and has appeared as a research report .
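One member of this class, the Izhikevich model with its standard regular-spiking parameters, can be simulated in a few lines; the input current and integration step below are illustrative:

```python
# Izhikevich two-dimensional integrate-and-fire model, regular-spiking
# parameters from the standard formulation; simple Euler integration.
#   dv/dt = 0.04 v^2 + 5 v + 140 - u + I,   du/dt = a (b v - u)
#   spike when v >= 30, then v <- c, u <- u + d.
a, b, c, d = 0.02, 0.2, -65.0, 8.0
v, u = -65.0, b * (-65.0)
dt, I = 0.5, 10.0                      # illustrative step and drive
spikes = []
for k in range(int(1000.0 / dt)):      # 1 second of simulated time
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:
        spikes.append(k * dt)
        v, u = c, u + d
```

With this drive the quadratic nullclines no longer intersect, so the model spikes tonically; changing (a, b, c, d) moves the system across the bifurcations discussed above and yields the other firing patterns.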

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.

We continued the study of the statistics of spike trains and found a new formula for computing the first hitting times of integrate-and-fire neurons with exponentially decaying synaptic conductances. This formula is valid for approximating first hitting times for what we call Double Integral Processes, which are non-Markov. The problem of finding the probability distribution of the first hitting time of a Double Integral Process (DIP) such as the Integrated Wiener Process (IWP) has been an important and difficult endeavor in stochastic calculus. It has applications in many fields of physics (first exit time of a particle in a noisy force field) as well as in biology and neuroscience (spike time distribution of an integrate-and-fire neuron with exponentially decaying synaptic current). The only results previously available were an approximation of the stationary mean crossing time and the distribution of the first hitting time of the IWP to a constant boundary. We generalize these results and find an analytical formula for the first hitting time of the IWP to a continuous piecewise-cubic boundary. We use this formula to approximate the law of the first hitting time of a general DIP to a smooth curved boundary, and we provide an estimate of the convergence of this method. The accuracy of the approximation is computed in the general case for the IWP, and the effective calculation of the crossing probability can be carried out through a Monte-Carlo method. This paper has been accepted for publication in the Journal of Applied Probability and has appeared as a research report .
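The Monte-Carlo approach mentioned at the end can be sketched for the simplest case, the first hitting time of an Integrated Wiener Process to a constant boundary:

```python
import numpy as np

rng = np.random.default_rng(3)
# Monte-Carlo estimate of the first hitting time of the Integrated Wiener
# Process X_t = integral of W_s ds to the constant boundary x = 1.
# Path count, step and horizon are illustrative.
n_paths, dt, t_max = 500, 0.01, 30.0
n_steps = int(t_max / dt)

dW = rng.normal(size=(n_paths, n_steps)) * np.sqrt(dt)
W = np.cumsum(dW, axis=1)              # Wiener paths
X = np.cumsum(W, axis=1) * dt          # integrated Wiener process
crossed = X >= 1.0
hit = crossed.any(axis=1)              # which paths crossed before t_max
first_hit_times = (crossed.argmax(axis=1)[hit] + 1) * dt
frac_hit = hit.mean()
```

The empirical distribution of `first_hit_times` is what the analytical formula replaces; the Monte-Carlo estimate serves here as the reference against which such approximations can be checked.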

We wrote a review paper on stochastic methods for characterizing spike trains that has been accepted for publication in the Journal of Physiology Paris . It has also appeared as a research report . We discuss the statistics of spike trains for different types of integrate-and-fire neurons and different types of synaptic noise models. In contrast with the usual approaches in neuroscience, mainly based on statistical physics methods such as the Fokker-Planck equation or mean-field theory, we chose the point of view of stochastic calculus to characterize neurons in noisy environments. We present four stochastic calculus techniques that can be used to find the probability distributions attached to the spike trains, and illustrate their power for four types of widely used neuron models. Despite the fact that these techniques are mathematically intricate, we believe that they can be useful for answering questions in neuroscience that naturally arise from the variability of neuronal activity. For each technique we indicate its range of applicability and its limitations.

This work is about an event-based mathematical framework for studying the dynamics of networks of integrate-and-fire neurons driven by external noise. Such networks are classically studied using the Fokker-Planck equation . In this study, we use the powerful tools developed for communication network theory and define a formalism for the study of spiking neuron networks driven by an external noise. With this formalism, we address biological questions in order to characterize the different network regimes. In this framework, the probability distribution of the interspike interval is a fundamental parameter. We developed and applied several tools for defining and computing the probability density function (pdf) of the time of the first spike, using stochastic analysis. This point of view gives us an event-driven strategy for simulating this type of random network. This strategy has been implemented in an extension of the event-driven simulator Mvaspike . We presented this event-driven mathematical framework for noisy integrate-and-fire neuron networks as a poster at the CNS conference in Toronto .

We consider spiking neuron models defined by a one-dimensional differential equation and a reset, i.e., neuron models of the integrate-and-fire type. We address the question of the existence and uniqueness of a solution on R for a given initial condition. It turns out that the reset introduces a countable and ordered set of backward solutions for a given initial condition, which has important implications in terms of neural coding and spike timing precision. This work has been accepted for publication in Cognitive Neurodynamics.

This work was partially supported by the ANR.

Cortical neurons are subject to sustained and irregular synaptic activity which causes important fluctuations of the membrane potential (Vm). The simplified, fluctuating point-conductance model of synaptic activity provides the starting point of a variety of methods for the analysis of intracellular Vm recordings. In this model, the synaptic excitatory and inhibitory conductances are described by Gaussian-distributed stochastic variables, or colored conductance noise. The matching of experimentally recorded Vm distributions to an invertible theoretical expression derived from the model allows the extraction of parameters characterizing the synaptic conductance distributions. This analysis can be complemented by the matching of experimental Vm power spectral densities (PSDs) to a theoretical template, even though the unexpected scaling properties of experimental PSDs limit the precision of this latter approach. Building on this stochastic characterization of synaptic activity, we also proposed methods to qualitatively and quantitatively evaluate spike-triggered averages of synaptic time courses preceding spikes. This analysis points to an essential role for synaptic conductance variance in determining spike times. The methods were evaluated using controlled conductance injection in cortical neurons in vitro with the dynamic-clamp technique.
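A minimal simulation of the point-conductance model (Ornstein-Uhlenbeck excitatory and inhibitory conductances driving a passive membrane) is sketched below; the parameter values are illustrative, and the mean Vm is compared to the effective-reversal prediction:

```python
import numpy as np

rng = np.random.default_rng(4)
# Point-conductance model: g_e(t) and g_i(t) follow Ornstein-Uhlenbeck
# processes around their means, driving a passive membrane. All parameter
# values are illustrative, not fitted to data.
dt, n = 0.1, 50000
C, g_L, E_L, E_e, E_i = 1.0, 0.05, -80.0, 0.0, -75.0
ge0, gi0, se, si = 0.012, 0.057, 0.003, 0.0066   # means and noise s.d.
te, ti = 2.7, 10.5                               # conductance time constants

xi_e = rng.normal(size=n)
xi_i = rng.normal(size=n)
ge, gi, V = ge0, gi0, -65.0
vs = np.empty(n)
for k in range(n):
    ge += dt * (ge0 - ge) / te + se * np.sqrt(2.0 * dt / te) * xi_e[k]
    gi += dt * (gi0 - gi) / ti + si * np.sqrt(2.0 * dt / ti) * xi_i[k]
    V += dt * (-g_L * (V - E_L) - max(ge, 0.0) * (V - E_e)
               - max(gi, 0.0) * (V - E_i)) / C
    vs[k] = V

# Effective reversal potential predicted from the mean conductances.
v_pred = (g_L * E_L + ge0 * E_e + gi0 * E_i) / (g_L + ge0 + gi0)
v_mean = vs[n // 2 :].mean()
```

The simulated Vm fluctuates around the conductance-weighted reversal potential; the analysis methods described above go in the opposite direction, recovering the conductance parameters from the recorded Vm distribution.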

This work was done in collaboration with the UNIC lab (CNRS, Gif-sur-Yvette) and has been accepted for publication in the Journal of Neuroscience Methods.

This work is part of the STIC-SANTE contract GENESYS.

It is likely that neurons transmit information via, among other means, spike trains. From this point of view, spike train dynamics is more relevant than, e.g., membrane potential dynamics. However, despite a number of significant experiments, how information is encoded in spike trains remains mysterious. This is due to experimental but also conceptual obstacles. For example, the processing of experimental data requires statistical models for the probability distribution of spike trains. However, the traditionally used models (e.g. Poisson) are ad hoc and known to be poorly adapted. Recent advances based on experiments suggest that better-adapted probability distributions can be obtained via the statistical inference principle of maximizing entropy under the constraint that the average values of observables are consistent with the measured data. More than a simple trick, we believe this is a general principle. Indeed, the results presented in and submitted show that there is a natural symbolic coding of the dynamics via spike trains. This coding, combined with the so-called thermodynamic formalism coming from ergodic theory and statistical physics, allows one to construct probability measures on the spike trains via a natural variational principle. These measures, called Gibbs measures, give direct access to a wide range of statistical properties of orbits, some of them corresponding to quantities measured by biologists. The expected outcomes of this research work are twofold.
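The maximum-entropy principle mentioned above can be illustrated in the simplest case: for binary spike patterns constrained only by the mean firing rates, the maximum-entropy (Gibbs) distribution factorizes, and its marginals recover the imposed rates:

```python
import numpy as np
from itertools import product

# Maximum-entropy distribution over binary spike patterns of 3 neurons,
# constrained only by the mean firing rates r_i. The solution is the Gibbs
# distribution p(s) proportional to exp(sum_i h_i s_i), with the exact
# Lagrange multipliers h_i = log(r_i / (1 - r_i)). Rates are illustrative.
rates = np.array([0.2, 0.5, 0.7])
h = np.log(rates / (1.0 - rates))

states = np.array(list(product([0, 1], repeat=3)))  # all 8 spike patterns
p = np.exp(states @ h)
p /= p.sum()                                        # normalized Gibbs measure

marginals = (states * p[:, None]).sum(axis=0)       # recovers the rates
```

With only first-order constraints the Gibbs measure is an independent model; adding pairwise (or temporal) observables yields Ising-like measures whose multipliers must be fitted numerically, which is where the thermodynamic formalism becomes essential.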

(i) Establish a mathematical framework for the analysis of spike trains in collaboration with neurobiologists, such as F. Grammont from the "Laboratoire de Neurobiologie et Psychopathologie", Université de Nice. We aim to develop a statistical model, relevant for data analysis, using conductance-based models whose dynamics have been analysed in .

(ii) Use spiking neuron models to solve variational problems, using a suitable dynamics on the synaptic weights. This work is part of a project developed by P. Kornprobst and T. Viéville . We have obtained an MESR PhD grant for this topic (the student is J.C. Vasquez).

Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. This approach applies to neuron models for which we have 1) an explicit expression for the evolution of the state variables between spikes and 2) an explicit test on the state variables which predicts whether and when a spike will be emitted. In a previous work, we proposed a method which allows the exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. More recently, we proposed a method based on polynomial root finding which applies to integrate-and-fire models with exponential currents and possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents. This work was published in Neural Computation .
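The polynomial root-finding idea can be illustrated in the simplest case (a sketch under our own assumptions, not the published method: a single synaptic time constant equal to half the membrane time constant, illustrative parameter values, and a standard closed-form subthreshold solution; the published algorithm handles many time constants):

```python
import math

# Illustrative sketch (ours): event-driven spike-time prediction for an
# integrate-and-fire neuron driven by an exponential synaptic current.
# Between events the membrane potential is a sum of exponentials; with
# commensurate time constants, the threshold condition V(t) = theta becomes
# a polynomial in x = exp(-t / tau_m), here a quadratic, solved exactly.

tau_m, tau_s = 20.0, 10.0   # membrane / synaptic time constants (ms), illustrative
theta = 1.0                 # firing threshold (normalized units)

def next_spike_time(v0, i0):
    """Earliest t > 0 with V(t) = theta, for the post-event solution
       V(t) = v0*exp(-t/tau_m) + k*(exp(-t/tau_m) - exp(-t/tau_s)),
       k = i0*tau_s*tau_m/(tau_m - tau_s).
       With tau_s = tau_m/2 and x = exp(-t/tau_m), exp(-t/tau_s) = x**2,
       so the threshold condition is -k*x**2 + (v0 + k)*x - theta = 0."""
    k = i0 * tau_s * tau_m / (tau_m - tau_s)
    a2, a1, a0 = -k, v0 + k, -theta
    if a2 == 0.0:
        xs = [-a0 / a1] if a1 else []
    else:
        disc = a1 * a1 - 4 * a2 * a0
        if disc < 0:
            return None                      # threshold never reached
        xs = [(-a1 + s * math.sqrt(disc)) / (2 * a2) for s in (1, -1)]
    xs = [x for x in xs if 0 < x < 1]        # roots that lie in the future
    return min(-tau_m * math.log(x) for x in xs) if xs else None
```

The earliest threshold crossing corresponds to the largest admissible root in (0, 1); if no root lies there, the neuron provably never fires before the next incoming event, which is exactly the test an event-driven simulator needs.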

We also published a review of the simulation of spiking neural networks in the Journal of Computational Neuroscience, covering algorithmic and precision issues and giving an overview of simulation software. The review also includes benchmark code for various simulators, distributed on a public database (ModelDB) .

With T. Viéville we have extended these results towards a biologically plausible generalized integrate-and-fire (gIF) neuron model with conductance-based dynamics , . Going a step further, constructive conditions have been derived that allow visual functions to be properly implemented on such networks. The time discretization has been carefully conducted, avoiding the usual bias induced by, e.g., Euler methods, and the usual arbitrary discontinuities have been discussed. The effects of the discretization approximation have been analyzed both analytically and experimentally. With this new point of view, we have also reconsidered some "biological" results obtained on "models" with biologically implausible discontinuities. This has allowed us to reduce the biophysical membrane equation to a very simple but powerful gIF numerical model, with a drastic reduction of the algorithmic complexity of event-based network simulations.

The mathematical study of neuronal network dynamics is a real challenge, since neuronal networks are dynamical systems with a huge number of degrees of freedom and parameters, and a multi-scale organisation with complex interactions, where the neuron dynamics depend on the synaptic graph structure while the synapses evolve according to the neuron activity. This analysis, which is an important step towards the characterisation of in vitro or in vivo neuronal networks, from spatial scales corresponding to a few neurons to scales characterising e.g. cortical columns, can be performed, in some cases, using tools from statistical physics, dynamical systems theory and ergodic theory. A detailed description of these techniques has been published this year in , .

With O. Faugeras and J. Touboul we are currently applying these methods (dynamic mean-field theory combined with dynamical systems analysis) to neural mass models with several populations, with a connectivity scheme based on anatomical data on the structure of cortical columns. This study, which has not been completed yet, will allow us to characterize cortical dynamics at a scale corresponding precisely to the resolution of optical imaging or functional MRI.

This project is partially supported by the ANR.

This collaboration between the Alchemy project team at INRIA Futurs Saclay (Hugues Berry, Olivier Temam), INSERM ANIM U742, Université P. et M. Curie, Paris (Bruno Delord) and the Equipe Neurocybernétique, ETIS, UMR CNRS 8051 (Mathias Quoy) aims at understanding how the structure of biological neural networks conditions their functional capacities, in particular learning. In , we present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks. Using theoretical tools from dynamical systems and graph theory, we study a generic "Hebb-like" learning rule that can include passive forgetting and different time scales for neuron activity and learning dynamics. We show that "Hebb-like" learning leads to a reduction of the complexity of the dynamics, manifested by a systematic decay of the largest Lyapunov exponent. This effect is caused by a contraction of the spectral radius of the Jacobian matrices, induced either by passive forgetting or by saturation of the neurons. As a consequence, learning drives the system from chaos to a steady state through a sequence of bifurcations. We show that the network's sensitivity to the input pattern is maximal at the "edge of chaos". We also emphasize the role of feedback circuits in the Jacobian matrices and the link to cooperative systems. In , these results are extended to random networks with inhibitory and excitatory neurons.
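The decay of the largest Lyapunov exponent under Hebb-like learning with passive forgetting can be reproduced in a toy rate network (our own sketch, not the paper's code: the network size, gain, learning rates and number of steps are arbitrary illustrative choices, and the rule below is only one instance of the generic rule studied in the paper):

```python
import math
import random

random.seed(0)
n, g = 10, 3.0                      # network size and gain (chaotic regime)
W = [[random.gauss(0.0, g / math.sqrt(n)) for _ in range(n)] for _ in range(n)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

def step(x):                        # discrete-time rate dynamics x -> tanh(W x)
    return [math.tanh(u) for u in matvec(W, x)]

def lyapunov(x, T=400):
    """Largest Lyapunov exponent, estimated by propagating a tangent vector
    through the Jacobians diag(1 - x_{t+1}^2) . W and renormalizing."""
    v = [random.gauss(0, 1) for _ in range(n)]
    acc = 0.0
    for _ in range(T):
        x = step(x)
        Jv = matvec(W, v)
        v = [(1 - xi * xi) * u for xi, u in zip(x, Jv)]
        nv = math.sqrt(sum(u * u for u in v))
        acc += math.log(nv)
        v = [u / nv for u in v]
    return acc / T, x

x = [random.uniform(-1, 1) for _ in range(n)]
lam_before, x = lyapunov(x)

# "Hebb-like" rule with passive forgetting: the (1 - eps) factor contracts
# the spectral radius, driving the dynamics towards a steady state.
eps, alpha = 0.02, 0.001
for _ in range(300):
    x_new = step(x)
    for i in range(n):
        for j in range(n):
            W[i][j] = (1 - eps) * W[i][j] + alpha * x_new[i] * x[j]
    x = x_new

lam_after, x = lyapunov(x)
```

With these (deliberately forgetting-dominated) parameters the exponent after learning drops well below its initial value, consistent with the chaos-to-fixed-point route described above.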

We have developed an original approach, based on a linear response theory proposed by Ruelle for dissipative dynamical systems, which allows us to analyse the joint effect of network topology and non-linear dynamics in dynamical systems, such as neural networks, where the nodes have non-linear transfer functions. On practical grounds, we have predicted and evidenced non-intuitive and unexpected effects in networks with chaotic dynamics. We have shown that it is possible to transmit and recover a signal in a chaotic system. We have also analysed how the dynamics interferes with the graph topology to produce an effective transmission network, whose topology depends on the signal and cannot be directly read off the "wired" network. Moreover, with a suitable choice of the resonance frequency, one can transmit a signal from one node to another by amplitude modulation, in spite of the presence of chaos. In addition, a signal transmitted to any node via different paths will be recovered only at some specific nodes.

Neural fields are an interesting option for modelling macroscopic parts of the cortex involving several populations of neurons, such as cortical areas. Two classes of neural field equations are considered: voltage based and activity based. The spatio-temporal behaviour of these fields is described by nonlinear integro-differential equations. The integral term, computed over a compact subset of R^q, q = 1, 2, 3, involves space- and time-varying, possibly non-symmetric, intra-cortical connectivity kernels. Contributions from white matter afferents are represented as external input. Sigmoidal nonlinearities arise from the relation between average membrane potentials and instantaneous firing rates. Using methods of functional analysis, we characterize the existence and uniqueness of a solution of these equations for general, homogeneous (i.e. independent of the spatial variable), and locally homogeneous inputs. In all cases we give sufficient conditions on the connectivity functions for the solutions to be absolutely stable, that is to say independent of the initial state of the field. These conditions bear on some compact operators defined from the connectivity kernels, the sigmoids, and the time constants used in describing the temporal shape of the post-synaptic potentials. Numerical experiments are presented to illustrate the theory. An important contribution of our work is the application of the theory of compact operators in a Hilbert space to the problem of neural fields, with the effect of providing very simple mathematical answers to the questions asked by neuroscience modellers.
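In the notation of the paragraph above, the voltage-based equations take the following generic form (the symbols below are ours; the precise assumptions on the kernel and the sigmoid are those stated above):

```latex
\frac{\partial \mathbf{V}}{\partial t}(\mathbf{r},t)
  = -\mathbf{L}\,\mathbf{V}(\mathbf{r},t)
  + \int_{\Omega} \mathbf{W}(\mathbf{r},\mathbf{r}',t)\,
      \mathbf{S}\!\left(\mathbf{V}(\mathbf{r}',t)\right) d\mathbf{r}'
  + \mathbf{I}_{\mathrm{ext}}(\mathbf{r},t),
  \qquad \mathbf{r}\in\Omega\subset\mathbb{R}^{q},\quad q=1,2,3,
```

where V collects the average membrane potentials of the populations, L is a diagonal matrix of inverse time constants, W is the (possibly non-symmetric) intra-cortical connectivity kernel, S the sigmoidal nonlinearity, and I_ext the external input from white matter afferents.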

This work has appeared as and was presented at .

Neural continuum networks are an important aspect of the modeling of macroscopic parts of the cortex. Two classes of such networks are considered: voltage based and activity based. In both cases our networks contain an arbitrary number, n, of interacting neuron populations. Spatial non-symmetric connectivity functions represent cortico-cortical, local connections, while external inputs represent non-local connections. Sigmoidal nonlinearities model the relationship between (average) membrane potential and activity. Departing from most of the previous work in this area, we do not assume the nonlinearity to be singular, i.e., represented by the discontinuous Heaviside function. Another important difference with previous work is our relaxing of the assumption that the domain of definition where we study these networks is infinite, i.e. equal to R or R^2. We explicitly consider the biologically more relevant case of a bounded subset of R^q, q = 1, 2, 3, a better model of a piece of cortex. The time behaviour of these networks is described by systems of integro-differential equations. Using methods of functional analysis, we study the existence and uniqueness of a stationary, i.e., time-independent, solution of these equations in the case of a stationary input. These solutions can be seen as "persistent"; they are also sometimes called "bumps". We show that under very mild assumptions on the connectivity functions, and because we do not use the Heaviside function for the nonlinearities, such solutions always exist. We also give sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is to say independent of the initial state of the network. We then study the sensitivity of the solution(s) to variations of such parameters as the connectivity functions, the sigmoids, the external inputs and, last but not least, the shape of the domain of existence of the neural continuum networks. These theoretical results are illustrated and corroborated by a large number of numerical experiments, in most cases with 2 ≤ n ≤ 3 and 2 ≤ q ≤ 3.

This work has appeared as a technical report and has been submitted for publication to Neural Computation.

We propose a biological cortical column model, at a mesoscopic scale, in order to explain and interpret the biological sources of the voltage-sensitive dye imaging signal. The mesoscopic scale, corresponding to a micro-column, is about 50 µm. The proposed model takes into account biological and electrical neural parameters of the laminar cortical layers. We thus choose a model based on a cortical microcircuit whose synaptic connections are made only between six specific populations of neurons: excitatory and inhibitory neurons in three main layers. For each neuron, we use a conductance-based single-compartment Hodgkin-Huxley model . We claim that our model reproduces qualitatively the same results as the optical imaging signal based on voltage-sensitive dyes, which represents the summed intracellular membrane potential changes of all the neuronal elements at a given cortical site. After preliminary simulations, this model suggests that the OI signal results from an average of multiple components whose proportions change with the level of activity, and shows, surprisingly, that inhibitory cells and spiking activity may well contribute more to the signal than initially thought.

Neural masses are natural mathematical models for describing the dynamics of the cortex at the mesoscopic scale. They can be assembled to form a continuum, or neural field, and provide descriptions at a macroscopic scale . Starting from such a model of a cortical area, we propose a formula for the direct problem of extrinsic optical imaging.

This work was presented in .

We propose a novel variational framework for the dense non-rigid registration of Diffusion Tensor Images (DTI). Our approach relies on the differential geometric properties of the Riemannian manifold of multivariate normal distributions endowed with the metric derived from the Fisher information matrix. The availability of closed-form expressions for the geodesics and the Christoffel symbols allows us to define statistical quantities and to perform the parallel transport of tangent vectors in this space. We propose a matching energy that aims to minimize the difference in the local statistical content (means and covariance matrices) of two DT images through a gradient descent procedure. The result of the algorithm is a dense vector field that can be used to warp the source image onto the target image. This article is essentially a mathematical study of the registration problem. Some numerical experiments are provided as a proof of concept.

This work has been submitted to the SIAM Journal of Applied Mathematics and has appeared as a technical report .

This work was partially supported by the CRSNG Canada graduate scholarship, FQRNT-INRIA and INRIA (International internships program)

In this work, we propose a regularized, fast and robust analytical solution for the Q-ball imaging (QBI) reconstruction of the orientation distribution function (ODF), together with a detailed validation and a discussion of its benefits over the state of the art. Our analytical solution is achieved by modeling the raw high angular resolution diffusion imaging signal with a spherical harmonic basis that incorporates a regularization term based on the Laplace-Beltrami operator defined on the unit sphere. This leads to an elegant mathematical simplification of the Funk-Radon transform, which approximates the ODF. We prove a new corollary of the Funk-Hecke theorem to obtain this simplification. We then show that the Laplace-Beltrami regularization is theoretically and practically better than Tikhonov regularization. At the cost of slightly reducing angular resolution, the Laplace-Beltrami regularization reduces ODF estimation errors and improves fiber detection, while reducing the angular error in the detected ODF maxima. Finally, a careful quantitative validation is performed against ground truth from synthetic data and against real data from a biological phantom and a human brain dataset. We show that our technique is also able to recover known fiber crossings in the human brain and provides the practical advantage of being up to 15 times faster than the original numerical QBI method.
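The key simplification referred to above, namely that the Funk-Radon transform acts on a spherical harmonic of degree l as multiplication by 2π P_l(0) (a consequence of the Funk-Hecke theorem), can be checked numerically. The following sketch is our own and uses only a zonal degree-2 harmonic; it is not the paper's estimation pipeline:

```python
import math

# Numerical check (ours) of the identity FRT[Y_l] = 2*pi*P_l(0) * Y_l behind
# the analytical QBI solution, for the zonal harmonic f(p) = P_2(p_z).

def P2(t):                          # Legendre polynomial of degree 2
    return 0.5 * (3 * t * t - 1)

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])

def normalize(v):
    s = math.sqrt(sum(x * x for x in v))
    return tuple(x / s for x in v)

def frt(u, f, m=2000):
    """Funk-Radon transform of f at unit direction u: the integral of f over
    the great circle perpendicular to u (trapezoid rule, spectrally accurate
    for smooth periodic integrands)."""
    a = (1.0, 0.0, 0.0) if abs(u[0]) < 0.9 else (0.0, 1.0, 0.0)
    e1 = normalize(cross(u, a))     # orthonormal basis of the plane normal to u
    e2 = cross(u, e1)
    total = 0.0
    for k in range(m):
        phi = 2 * math.pi * k / m
        p = tuple(math.cos(phi) * e1[i] + math.sin(phi) * e2[i] for i in range(3))
        total += f(p)
    return total * (2 * math.pi / m)

f = lambda p: P2(p[2])              # zonal degree-2 harmonic (up to normalization)

alpha = 0.7                         # tilt of the evaluation direction (radians)
u = (math.sin(alpha), 0.0, math.cos(alpha))
lhs = frt(u, f)
rhs = 2 * math.pi * P2(0.0) * f(u)
```

Because the transform is diagonal in the spherical harmonic basis, the ODF follows from the fitted signal coefficients by a simple per-degree scaling, which is what makes the analytical solution fast.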

This work has been published in .

This work was partially supported by the ARC Diffusion MRI

The HARDI acquisition approach to detecting fiber crossings has produced a plethora of new techniques and mathematical tools, such as radial basis functions, Spherical Harmonics (SH), Higher Order Tensors (HOT), etc. The mathematical properties of these high-order tools need to be better understood to be fully exploited. In particular, it seems appropriate to explore HOT while leveraging the extensive framework already established for classical DTI. In this work, we have started to explore HOT, and in particular the space of 4th order diffusion tensors.

A major limitation of 2nd order DTI is its inability to discriminate multiple fibers crossing in the same voxel. In the tensor framework this can be overcome by using HOTs to model the diffusivity function. However, in these spaces of higher dimension it becomes harder to enforce the physical constraint of positive diffusion in the inverse estimation problem. In particular, the space of 4th order tensors, which already makes it possible to detect at least three separate fibers, needs to be better understood. In this work, we started to explore the space of 4th order diffusion tensors, rewriting them in matrix form in order to extend the Riemannian framework of S+(3) to S+(6). We also started to explore the different symmetries of a 4th order tensor, which make the estimation problem non-unique.

Recent High Angular Resolution Diffusion Imaging (HARDI) acquisitions use low b-values (b = 1000 s/mm2) and a small number N of gradient encoding directions (less than 100) to describe the local non-Gaussian diffusion process in clinically feasible acquisitions. One such technique is Q-Ball Imaging (QBI) , which reconstructs the diffusion orientation distribution function (ODF) of water molecules in a biological tissue. However, at low b-values and small N, and because of the intrinsic Bessel function smoothing in the Funk-Radon transform used to reconstruct the ODF , the ODF profiles are quite smooth, and the ODF maxima (giving the underlying fiber orientations) are difficult to find and sometimes missed when compared to ODFs reconstructed from research-oriented HARDI acquisitions with higher b-values (b greater than 3000 s/mm2) and large N (greater than 100). In this work , we define a general sharpening operation that can be used with any HARDI reconstruction method, and in particular we show that if the sharpening is applied to the ODF, it considerably improves fiber detection and increases the angular resolution of QBI.

This work has been presented and published in .

This work was partially supported by the CRSNG Canada graduate scholarship and FQRNT-INRIA

This work has been presented and published in .

This work was partially supported by the PAI Procope.

The corpus callosum (CC) is involved in the inter-hemispheric interaction of cortical regions, and the exact reconstruction of the fibers connecting the cerebral hemispheres is of major interest. In this contribution , we investigate and show how the reconstruction of transcallosal fiber connections intersecting with the corona radiata and the superior longitudinal fasciculus can be improved with a local model of crossing fibers, using diffusion weighted imaging and Q-Ball tractography. Current DTI-based methods are shown to produce incomplete fiber reconstructions in the CC: they neglect all lateral fibers to prefrontal areas and reconstruct only fibers to the medial/dorsal cortex. Q-Ball tractography additionally finds strong interhemispheric connectivity of the inferior and middle frontal gyrus and the ventral premotor cortex. These additional lateral fibers influence the cartography of the transcallosal fibers and might lead to new insights into inter-hemispheric cognitive networks.

This work has been presented and published in .

This work was partially supported by FQRNT/INRIA.

We present a new tracking algorithm based on the full multidirectional information of the diffusion orientation distribution function (ODF) estimated from Q-Ball Imaging (QBI). From the ODF, we extract all available maxima and then extend streamline (STR) tracking to allow for splitting in multiple directions (SPLIT-STR). Our new SPLIT-STR algorithm overcomes important limitations of classical diffusion tensor streamline tracking in regions of low anisotropy and regions of fiber crossings. Not only can the tracking propagate through fiber crossings, but it can also deal with fiber fanning and branching. The SPLIT-STR algorithm is efficient; it is validated on synthetic data and on a biological phantom, and compared against probabilistic tensor tracking on a human brain dataset with known crossing fibers.
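The splitting idea can be sketched on a toy discrete direction field (our own illustration, not the published SPLIT-STR implementation: the 2-D voxel grid, the angular threshold and the voxel-stepping rule are all drastic simplifications):

```python
import math

# Toy sketch (ours) of streamline tracking with splitting: each voxel stores
# its ODF maxima as a list of unit vectors; at every step the tracker follows
# all maxima compatible with the incoming direction (angle below a threshold),
# spawning one branch per maximum.  field[x][y] indexes column x, row y.

def track(field, pos, din, cos_thr=math.cos(math.radians(45.0)), max_len=20):
    """All branches starting at voxel pos with incoming direction din;
    each branch is the list of voxels it visits."""
    x, y = pos
    if max_len == 0 or not (0 <= x < len(field) and 0 <= y < len(field[0])):
        return [[]]
    compat = []
    for d in field[x][y]:
        dot = d[0] * din[0] + d[1] * din[1]
        if dot < 0:                       # fiber orientations are sign-ambiguous
            d, dot = (-d[0], -d[1]), -dot
        if dot >= cos_thr:
            compat.append(d)
    if not compat:                        # no compatible maximum: stop here
        return [[pos]]
    branches = []
    for d in compat:                      # SPLIT: one branch per maximum
        nxt = (x + round(d[0]), y + round(d[1]))
        for tail in track(field, nxt, d, cos_thr, max_len - 1):
            branches.append([pos] + tail)
    return branches

# a 6 x 5 grid with fibers running left-to-right everywhere ...
field = [[[(1.0, 0.0)] for _ in range(5)] for _ in range(6)]
# ... except one voxel holding two ODF maxima at about +/- 40 degrees
field[2][2] = [(0.766, 0.643), (0.766, -0.643)]
paths = track(field, (0, 2), (1.0, 0.0))
```

Seeded on the left, the tracker follows the horizontal fibers, splits into two branches at the two-maxima voxel, and both branches continue to the right edge, which is the behavior a single-direction tensor tracker cannot reproduce.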

This work has been presented and published in .

This work was partially supported by the PAI Procope.

This work has been published in .

This work was partially supported by the ARC Diffusion MRI

Tractography applied to the tensor field in diffusion tensor imaging (DTI) results in sets of streamlines which can be associated with major fiber tracts. If fibers are reconstructed and visualized individually throughout the complete white matter, the display easily gets cluttered, making it difficult to gain insight into the data. The goal of this work was to recover fiber tracts of the white matter from DTI or HARDI imaging and to embed and cluster them in order to perform statistical analysis and improve readability. In this work , we show that spectral embedding clustering techniques can provide a fast, non-linear way of performing this process, and we present an approach based on Diffusion Maps that reduces the dependence on uniform clustering, avoiding artifacts by performing a prior normalization step.
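The spectral embedding idea can be sketched on toy data (our own, much simplified illustration: straight "fibers", a mean pointwise distance, and a sign-based two-way split stand in for the full Diffusion Maps pipeline and the normalization step of the paper):

```python
import math

# Sketch (ours): toy "fibers" are equal-length 2-D polylines; pairwise
# distances give a Gaussian affinity; the second eigenvector of the normalized
# affinity (power iteration with deflation) separates the two bundles by sign.

def fiber(y0, slope):
    return [(t, y0 + slope * t) for t in range(10)]

fibers = [fiber(0.0, 0.01 * k) for k in range(5)] + \
         [fiber(5.0, -0.01 * k) for k in range(5)]          # two bundles

def dist(f, g):                          # mean pointwise distance
    return sum(math.hypot(p[0] - q[0], p[1] - q[1]) for p, q in zip(f, g)) / len(f)

n = len(fibers)
sigma = 2.0
W = [[math.exp(-dist(fibers[i], fibers[j]) ** 2 / sigma ** 2) for j in range(n)]
     for i in range(n)]
d = [sum(row) for row in W]
S = [[W[i][j] / math.sqrt(d[i] * d[j]) for j in range(n)] for i in range(n)]

top = [math.sqrt(x) for x in d]          # known leading eigenvector of S

def project_out(v, u):
    c = sum(a * b for a, b in zip(v, u)) / sum(b * b for b in u)
    return [a - c * b for a, b in zip(v, u)]

v = [(-1) ** i for i in range(n)]        # arbitrary start vector
for _ in range(200):                     # power iteration with deflation
    v = project_out(v, top)
    v = [sum(S[i][j] * v[j] for j in range(n)) for i in range(n)]
    norm = math.sqrt(sum(a * a for a in v))
    v = [a / norm for a in v]

labels = [1 if a > 0 else 0 for a in v]  # sign of the embedding coordinate
```

The sign pattern of the second eigenvector recovers the two bundles; in the real pipeline, several diffusion map coordinates are kept and clustered, and the anisotropic normalization reduces the sensitivity to non-uniform fiber sampling.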

This work was partially supported by the CRSNG Canada graduate scholarship, FQRNT-INRIA and PAI Procope

This work has been presented and published in . More details can be found in the INRIA Research Report .

This work was partially supported by the ARC Diffusion MRI

This work has been presented and published in .

This work was partially supported by the CRSNG Canada graduate scholarship and FQRNT-INRIA

Q-ball imaging (QBI), introduced by D. Tuch, reconstructs the diffusion orientation distribution function (ODF) of the underlying fiber population of a biological tissue. An analytical solution for QBI was recently proposed by several independent groups, using a spherical harmonic (SH) representation of the input signal. The methods differ primarily in the way the SH coefficients are estimated. In this work , we validate these methods and compare them against Tuch's numerical QBI on synthetic data, on a biological phantom and on a human brain dataset. We show that analytical QBI results in a speed-up factor of 15 over Tuch's QBI, while providing results that are in strong agreement. We also show that, at the cost of slightly reducing angular resolution, QBI with Laplace-Beltrami regularization provides the strongest robustness to noise and the most accurate detection of fiber crossings.

This work has been presented and published in .

The work described in this sub-theme concerns various aspects of the problem of estimating the sources in the brain corresponding to some given activity. Besides the forward and inverse EEG/MEG problems (see sections – ), which are directly connected to this problem, there are a number of additional problems, such as finding the events of interest in the recorded signal (sections and ), applying the reconstruction methods developed in the project to the problem of retinotopy (section ) or providing a simple user interface to promote those methods in the clinical environment (section ). Finally, the same set of tools can also be applied to model nerve stimulation (see section ).

An important issue in electroencephalography (EEG) experiments is to measure accurately the three-dimensional (3D) positions of the electrodes. A system is proposed in which these positions are automatically estimated from several images using computer vision techniques. Yet only a set of undifferentiated points is recovered this way, and there remains the problem of labeling them, i.e. of finding which electrode corresponds to each point. A fast and robust solution to this latter problem is designed, based on combinatorial optimization. A specific energy is minimized with a modified version of the Loopy Belief Propagation algorithm. Experiments on real data show that a manual labeling of only two or three electrodes is sufficient to obtain the complete labeling of a 64-electrode cap in less than 10 seconds.

Magneto-encephalography (MEG) and electro-encephalography (EEG) experiments provide huge amounts of data and lead to the manipulation of high-dimensional objects such as time series or topographies. In the past decade especially, various methods for extracting the structure of complex data have been developed and successfully exploited for visualization or classification purposes. proposes to use one of these methods, Laplacian eigenmaps, on EEG data, and proves that it provides a powerful approach to visualize and understand the underlying structure of evoked potentials or multitrial time series.

This work has been funded by the INRIA Color MedMesh action.

Finite Element methods (FEM) usually require a mesh describing the geometric domain on which the computations occur. These meshes must have several properties: 1) they must approximate the geometric domain accurately, 2) they must have good numerical properties, and 3) they must be small enough so that the computations take a reasonable amount of time. These goals are somewhat contradictory, and in many cases involving biomedical images – particularly the head – the generation of such meshes is quite difficult, even though the geometric domains can effectively be extracted, e.g. from Magnetic Resonance Images (MRI).

The technique is illustrated on spherical and realistic geometries for the Electroencephalography (EEG) direct problem.

The accuracy of EEG forward models partially depends on the head tissue conductivities. Some methods have been proposed to estimate these conductivities. They are all based on the idea of imposing the electrical source in the head and considering the conductivities as the only unknowns. Although the conductivity models are becoming more and more complex, it is not clear from the literature whether it is really possible to estimate the conductivities of all the head tissues. presents the limits of conductivity estimation for the common three-layer model (brain, skull, scalp), with and without skull anisotropy.

This work was partially supported by the Barrande grant "Brain Multimodal Imagery" and the Fondation d'Entreprise EADS.

Electroencephalography (EEG) and magnetoencephalography (MEG) have excellent time resolution. However, the poor spatial resolution and the small number of sensors do not permit the reconstruction of a general spatial activation pattern. Moreover, the low signal-to-noise ratio (SNR) also makes the accurate reconstruction of a time course challenging. therefore proposes to use constrained reconstruction, modeling the relevant part of the brain using a neural mass model: there is a small number of zones that are considered as entities, and neurons within a zone are assumed to be activated simultaneously. The location and spatial extent of the zones, as well as the inter-zonal connection pattern, can be determined from functional MRI (fMRI), diffusion tensor MRI (DTMRI), and other anatomical and brain mapping observation techniques. The observation model is linear: its deterministic part is known from EEG/MEG forward modeling, and the statistics of its stochastic part can be estimated. The dynamics of the neural model is described by a moderate number of parameters that can be estimated from the recorded EEG/MEG data. We explicitly model the long-distance communication delays. Our parameters have physiological meaning and their plausible ranges are known. Since the problem is highly nonlinear, a quasi-Newton optimization method with random sampling and automatic success evaluation is used. The actual connection topology can be identified from among several possibilities. The method was tested on synthetic data as well as on true MEG somatosensory-evoked potential (SEP) data.

This work was partially supported by the Fondation d'Entreprise EADS.

Detection of activity in the primary visual cortex is a difficult challenge for magneto-encephalography (MEG) source imaging techniques: the geometry of the visual cortex is intricate, with structured visual field maps extending deep within the calcarine fissure. This raises questions about the very sensitivity of MEG to the corresponding neural responses to visual stimuli, and about the usage of MEG source imaging for innovative retinotopic explorations. In this context, , compares two imaging models of MEG generators in realistic simulations of activations within the visual cortex. The localization and spatial extent of neural activity in the visual cortex were extracted from retinotopic maps obtained in fMRI. We prove that the suggested approaches are robust and succeed in accurately recovering the activation patterns, with a satisfactory match with the fMRI results. These results suggest that fast retinotopic exploration of the visual cortex could be obtained from MEG as a complementary alternative to more standard fMRI approaches. The excellent time resolution of MEG imaging further opens interesting perspectives on the temporal and spectral processes sustained by the human visual system.

Non-linear shape priors are introduced for the deformable model framework, learnt from a set of shape samples using recent manifold learning techniques. A category of shapes is modeled as a finite-dimensional manifold approximated using Diffusion maps. The method computes a Delaunay triangulation of the reduced space, considered as Euclidean, and uses the resulting space partition to identify the closest neighbors of any given shape based on its Nyström extension. A non-linear shape prior term is designed to attract a shape towards the shape prior manifold at a given constant embedding. Results on shapes of ventricle nuclei demonstrate the potential of the method for segmentation tasks.

Traditional techniques of dense optical flow estimation do not generally yield symmetrical solutions: the results will differ if they are applied between images I1 and I2 or between images I2 and I1. presents a method to recover a dense optical flow field map from two images, while explicitly taking into account the symmetry across the images as well as possible occlusions in the flow field. The idea is to consider both displacement fields, from I1 to I2 and from I2 to I1, and to minimise an energy functional that explicitly encodes all those properties. This variational problem is then solved using the gradient flow defined by the Euler–Lagrange equations associated with the energy. To demonstrate the importance of the concepts of symmetry and occlusion for optical flow computation, we have extended a classical approach to handle them. Experiments clearly show the added value of these properties in improving the accuracy of the computed flows.
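One plausible form of such a symmetric energy (the notation is ours and schematic; the published functional may differ in its details, e.g. in how occlusions are weighted in the data terms) is:

```latex
E(w_{12}, w_{21}) =
    \int_{\Omega} \big|I_1(x) - I_2\!\left(x + w_{12}(x)\right)\big|^2
                + \big|I_2(x) - I_1\!\left(x + w_{21}(x)\right)\big|^2 \, dx
  + \alpha \int_{\Omega} \big\| w_{12}(x) + w_{21}\!\left(x + w_{12}(x)\right) \big\|^2 \, dx
  + \beta \int_{\Omega} \|\nabla w_{12}(x)\|^2 + \|\nabla w_{21}(x)\|^2 \, dx,
```

with two data terms (one per flow direction), a symmetry term penalizing the discrepancy between the forward flow and the back-warped backward flow, and a regularization term; minimization follows the gradient flow of the associated Euler–Lagrange equations.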

This work was initiated some years ago, during a four-month internship visit of I. Kokkinos to our research project, but it is only this year that the corresponding report has been published .

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS

The branch of computer science and applied mathematics that studies ways of emulating visual performance with computers.

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS

Many traditional image segmentation techniques are based on variational approaches, seen as variants of the Mumford–Shah approach. Going a step further, it has been argued, e.g. by Sarti et al., that such mechanisms could provide an abstract view of segmentation in the brain. Following this track, we "implement" segmentation using a retinotopic neural network .

In our approach, the first step is to consider a discrete approximation of the Mumford–Shah functional, as proposed by Chambolle, yielding a dynamical system on a grid.
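For reference, the continuous Mumford–Shah functional and the flavour of discrete approximation alluded to can be written as follows (our schematic notation, not Chambolle's exact construction):

```latex
E(u, K) = \mu \int_{\Omega} (u - g)^2 \, dx
        + \int_{\Omega \setminus K} |\nabla u|^2 \, dx
        + \nu \, \mathcal{H}^{1}(K)
\qquad \leadsto \qquad
E_h(u) = \mu \sum_{i} (u_i - g_i)^2
       + \sum_{\langle i,j \rangle} \frac{1}{h} \, f\!\left( h \, |u_i - u_j|^2 \right),
```

where g is the observed image, K the discontinuity set, H^1 its length, and f a concave, truncated-quadratic-like potential on neighbouring grid values: below the truncation the term behaves like the smoothness penalty, above it the pair is effectively cut, playing the role of the edge set K.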

We then explore different possibilities for linking it to a grid of neurons, the processed value being directly the phase, the membrane voltage or a more complex evaluation of the neuron state, all depending on the neuron model considered (from integrate-and-fire to Hodgkin-Huxley) and on the encoding (phase, membrane voltage, spiking rate, etc.).

From this theoretical study and the related numerical experiments, we are able to compare these alternatives, while an original biologically inspired segmentation network emerges from our study.

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS

A high-level specification of how the brain represents and categorizes the causes of its sensory input makes it possible to link "what is to be done" (the perceptual task) with "how to do it" (the neural network calculation). In , we described how the variational framework, which has been very successful in modeling computer vision tasks, has some interesting relationships, at a mesoscopic scale, with computational neuroscience. We focus on cortical map computations such that "what is to be done" can be represented as a variational approach, i.e. an optimization problem defined over a continuous functional space. In particular, generalizing some existing results, we show how a general variational approach can be solved by an analog neural network with a given architecture, and conversely. Numerical experiments are provided as an illustration of this general framework, which is promising for modeling macro-behaviors in computational neuroscience.

We are now considering how to extend this formalism to the case of many connected cortical maps, i.e., coupled variational approaches.

motion of a living character

We propose a bio-inspired MT model working in a fully spiking mode: our MT layer receives spiking input from a previous spiking V1 layer and integrates this information to produce spikes as output. Interestingly, this spike-to-spike model allows us to study and model some of the dynamics existing in V1 and MT, and thanks to the causality of our cell representation it is also possible to integrate top-down feedback. This model differs from existing ones such as and , which generally have analog input and consider motion stimuli in a continuous regime (such as plaids or gratings), discarding dynamic behaviors.

The first layer of the model is an array of direction-selective V1 complex cells tuned to different speeds and directions of motion. Each V1 complex cell is modelled with a motion energy detector following . The second layer of the model corresponds to an array of spiking MT cells. Each MT cell takes as input the spike trains of the V1 complex cells inside its receptive field. From the spike trains of the MT cells, a motion map of the velocity distribution representing the sequence is built. In order to show the efficiency of these models, the motion maps obtained are used in a biological motion recognition task. We ran the experiments using two databases, Giese and Weizmann, containing two (*march, walk*) and ten (*e.g., march, jump, run*) different classes, respectively. The results revealed that the proposed motion map can be used as a reliable motion representation.
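As a sketch of how a motion map can be read out of the MT layer, the fragment below decodes a local velocity estimate as the population vector of direction-tuned spike counts; the array layout, names and normalization are our own illustrative choices, not the model's actual read-out.

```python
import numpy as np

def motion_map(spike_counts, directions):
    """Population-vector decoding of a motion map (illustrative sketch).
    spike_counts: array (H, W, D), spike count of the MT cell at each
                  spatial position, for each of D preferred directions.
    directions:   array (D,), preferred directions in radians.
    Returns the (vx, vy) velocity-direction estimate at each position."""
    cx = np.cos(directions)
    cy = np.sin(directions)
    vx = (spike_counts * cx).sum(axis=-1)
    vy = (spike_counts * cy).sum(axis=-1)
    n = spike_counts.sum(axis=-1)
    n = np.where(n > 0, n, 1.0)      # avoid division by zero where silent
    return vx / n, vy / n

# toy example: every position fires only in the rightward-tuned cell
directions = np.linspace(0.0, 2.0 * np.pi, 8, endpoint=False)
counts = np.zeros((4, 4, 8))
counts[..., 0] = 10.0
vx, vy = motion_map(counts, directions)
```

Such a per-position velocity field is the kind of "motion map" that can then be fed to a classifier for the biological motion recognition task.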

This work has been presented at , , and more details are available in

We propose a detailed retina model that transforms a video sequence into a set of spike trains like those emitted by retinal ganglion cells. It includes a linear model of filtering in the Outer Plexiform Layer (OPL), a contrast gain control mechanism modeling a non-linear feedback loop on bipolar cells, and a spike generation process modeling ganglion cells. A strength of the model is that each of its features can be associated with a precise physiological significance and location. The resulting retina model can reproduce physiological recordings on mammalian retinas, including non-linearities such as those of cat Y cells, or contrast gain control.
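The three-stage structure (linear OPL filtering, gain control on bipolar cells, ganglion-cell spike generation) can be caricatured at a single spatial location as follows; every filter shape, constant and name below is a hypothetical stand-in, not the model's actual parametrization.

```python
import numpy as np

def lif_spikes(current, dt=1e-3, tau=20e-3, v_th=1.0, v_reset=0.0):
    """Leaky integrate-and-fire spike generation (an illustrative
    stand-in for the ganglion-cell stage)."""
    v, spikes = 0.0, []
    for t, i in enumerate(current):
        v += dt / tau * (-v + i)
        if v >= v_th:
            spikes.append(t * dt)
            v = v_reset
    return spikes

def retina_sketch(frame_seq):
    """Minimal three-stage sketch at one spatial location:
    (1) band-pass 'OPL' filter (centre minus delayed surround),
    (2) divisive gain control by a running contrast estimate,
    (3) LIF spike generation."""
    centre = np.asarray(frame_seq, dtype=float)
    surround = np.concatenate([[centre[0]] * 3, centre[:-3]])
    bipolar = centre - 0.8 * surround                       # stage 1
    contrast = np.abs(bipolar).cumsum() / (np.arange(len(bipolar)) + 1)
    gained = bipolar / (1.0 + contrast)                     # stage 2
    return lif_spikes(np.maximum(gained, 0.0) * 50.0)       # stage 3

# a luminance step at t = 50 ms elicits a strong transient response
seq = np.zeros(200)
seq[50:] = 1.0
spikes = retina_sketch(seq)
```

Even this caricature shows the qualitative behaviour the model captures quantitatively: no spikes before the stimulus, a vigorous onset transient, and a gain-controlled sustained discharge.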

This work has been concretized in a large-scale simulator, *Virtual Retina*, released under the CeCILL-C licence, that can emulate the spikes of up to 100,000 neurons. More recently, a mathematical study of the gain control loop present in the retina model has been undertaken. The simulator is described in detail in (see also the software section to learn more).

The dynamical system of contrast gain control has been also studied mathematically in .

We present a model of motion integration and segmentation controlled by form cues in the primate visual cortex. Motion is diffused by a V1/MT-like recurrent system, biologically implemented by V1 pattern cells. Motion segmentation and asymmetric center-surround effects come from the form information carried by the ventral pathway. The model gives results consistent with human percepts for complex stimuli involving extrinsic junctions, such as the chopstick illusion.

This work has been presented at , and more details are available in

FACETS is an integrated project within the biologically inspired information systems branch of IST-FET. The FACETS project aims to address, with a concerted action of neuroscientists, computer scientists, engineers and physicists, the unsolved question of how the brain computes. It combines a substantial fraction of the European groups working in the field into a consortium of 13 groups from Austria, France, Germany, Hungary, Sweden, Switzerland and the UK. About 80 scientists will join their efforts over a period of 4 years, starting in September 2005. A project of this dimension has rarely been carried out in the context of brain-science related work in Europe, in particular with such a strong interdisciplinary component (web site: http://

Visiontrain (MRTN-CT-2004-005439, duration: 1 May 2005 to 30 April 2009) addresses the problem of understanding vision from both computational and cognitive points of view. The research approach will be based on formal mathematical models and on the thorough experimental validation of these models. Within this framework we intend to reduce the gap that exists today between biological vision (which is by and large not yet understood) and computer vision (which is biologically inspired and whose flexibility, robustness, and autonomy remain to be demonstrated). In order to achieve these ambitious goals, 11 academic partners plan to work cooperatively on a number of targeted research objectives: (i) computational theories and methods for low-level vision, (ii) motion understanding from image sequences, (iii) learning and recognition of shapes, objects, and categories, (iv) cognitive modelling of the action of seeing, and (v) functional imaging for observing and modelling brain activity (web site: http://

**Duration:** 2007-01 to 2008-12

Our partners in this project are the INSERM Imparabl team of the Laboratoire d'Imagerie Fonctionnelle LIF/U678, Faculté de Médecine Pierre et Marie Curie - Hôpital Pitié-Salpêtrière, and CENIR, the Center for NeuroImaging Research of the Hôpital Pitié-Salpêtrière. In this ARC project, our broad goal is to develop and validate algorithms that will give us a better knowledge and understanding of the structural organization of the white-matter fiber bundles in the human brain, and help identify neural connectivity patterns, with the help of Diffusion Magnetic Resonance Imaging (DMRI). Our algorithms will be based on formulations using tensor calculus, partial differential equations, variational methods and differential geometry, and will ultimately be useful for clinicians as well as researchers (web site: http://

This project deals with the problem of better measuring, modeling and simulating the set of representations that are used and the flow of processing that is performed in the human brain to achieve efficient visual perception. This is indeed a challenge because despite all the knowledge that has been accumulated on the functioning of the brain over the last years, many very basic questions still remain open, e.g.: What is the “information” conveyed by neuronal electrical and chemical activity? How is the information encoded in this activity? How is the information distributed among brain areas? In particular, what are the respective roles of feedforward and feedback connections between brain areas? Can we infer any “computational” paradigms from the observation of the functioning of the brain and the computer simulation of parts of this functioning? Most of these questions arise from the fact that it has proven to be extremely difficult to connect 1) the small scale knowledge of the functioning of one neuron or a small population of neurons (chemical/electrical models) to 2) the large scale (in space and/or in time) knowledge (spatial organisation, main connections, spatial and temporal activations, . . . ) provided by brain imagery observations (functional Magnetic Resonance Images (fMRI), MagnetoEncephalography (MEG), ElectroEncephalography (EEG), Diffusion Magnetic Resonance Images (DMRI), optical imaging). Similarly, the large scale knowledge of the brain activations has turned out to be difficult to relate to 3) the mathematical and computational principles underlying their (somewhat) equivalent computer implementations (when they exist). As an example, what we know about the processing of visual motion in humans has hardly ever been compared with the field of motion analysis in computer vision. 
But certainly the abilities of the best computer programs in terms of the analysis of 2D and 3D motions of objects in video sequences of images are far behind the capabilities of most mammalian brains. The intent of this project is twofold. First, we want to build connections between these three levels of description, particularly for the low-level vision areas of the brain and the feedback loops between these areas. Second, we want to show that this increased knowledge can be put to good use from a technological standpoint and opens the door to new ways of interacting with the machines our societies build. The project covers those parts of the current research program of the Odyssée laboratory which are not covered by other grants. The potential impacts of our research are manifold:

By combining single-neuron models (microscopic scale) which can reproduce large numbers of observed spiking behaviours (see point 6 below) into medium-size networks (containing of the order of 10⁵ individuals and their connections), we will be able to reach the so-called mesoscopic scale of what seems to be the elementary processing unit in the human cortex, the cortical column. The computer simulation of a few of these units can be achieved using existing simulators such as mvaspike. The results can be confronted with optical imaging measurements, which can also be used to estimate the parameters. Bridging the gap between the microscopic and mesoscopic levels of description is an important challenge in neuroscience.

By combining these neural-mass models with a description of the cortex geometry such as the one obtained from anatomical MRI, and anatomical connectivity such as the one obtained from DMRI, we will be able to reach the macroscopic level of description of a significant part of a brain area. The computer simulation of these parts can then be confronted with fMRI, MEG and EEG measurements since they operate at comparable spatio-temporal scales. Bridging the gap between the mesoscopic and macroscopic levels of description can have an important impact on the understanding of such aspects of brain dysfunction as epilepsy. In this respect, it will also provide better electrical source models, which are much needed in MEG and EEG.

Still at the macroscopic level, the role of feedback connections between brain areas is much less well known and understood than that of feedforward ones. They seem to be central to some fundamental visual processes such as figure-ground segregation and attention, where they are likely to carry learned or innate priors. Furthermore, they are a generic organizational feature of the cortex; therefore the knowledge acquired in the context of the processing of visual information can potentially be transferred to areas other than vision. This may contribute to defining new computational paradigms for information processing, e.g., in computer vision, where the use of priors is becoming essential.

Pushing the level of sophistication of the brain descriptions (electrical source models, geometry, physical properties of tissues) used in the imaging methods can lead to better tools, or at the very least to a better understanding of the limitations of the existing ones and thus to ways of improving them. This may contribute to enhancing currently available medical imaging techniques, in a broad sense, and therefore have a strong impact on Health programs.

Low-level vision areas in the brain correspond to functions that have fairly well-defined counterparts in the computer vision field. It would therefore be very interesting to compare the performance of biologically inspired and computer vision based algorithms, in particular to investigate whether the latter have intrinsic limitations with respect to the former and/or to assess the level of detail absolutely necessary to reproduce interesting aspects of brain behaviour.

Models of “computation” should also be compared. Traditional neural networks process continuous quantities in a way that resembles how an analog or digital computer using floating-point arithmetic would solve a minimization problem or compute the solution of a partial differential equation. Real neurons deal with action potentials, spikes, which are discrete events (their duration is of the order of 1 ms) produced every few milliseconds in the 10¹¹ neurons of a human brain; they propagate along the 10¹⁵ connections between them and create or inhibit electrical activity here and there. The way such huge asynchronous networks can embody the kind of computation that seems necessary to achieve, e.g., visual perception is very different from that of traditional neural network technology (which failed in this program) and essentially unknown. Unveiling some of these mysteries can potentially have a strong impact on computation paradigms for many real-time applications and for such emerging areas as Brain Computer Interfaces.

In this project we focus on points 2-5 above, points 1 and 6 being partly supported by another grant (the European project FACETS). Bullier has shown that the time scale at which the feedback connections referred to above operate in the visual system is of the order of a few tens of milliseconds. This is well beyond what can currently be achieved using fMRI in humans. Moreover, fMRI reflects neuronal activity only very indirectly, via such physiological parameters as blood oxygenation, and it is still unclear how accurately and in what detail these reflect neuronal activity. On the other hand, modalities such as Electroencephalography (EEG) and Magnetoencephalography (MEG) do offer the kind of time resolution that is needed to observe cortical feedbacks. However, to get at the feedback information, the MEG and EEG techniques must be enhanced to incorporate connectivity and better spatio-temporal source models. This observation is central to our project.

This project combines different areas of expertise, such as mathematics, computer science, computational neuroscience and electrophysiology (in vitro and in vivo), to yield accurate and reliable methods to properly characterize high-conductance states in neurons. We plan to address several of the shortcomings of present recording techniques, namely (1) the impossibility of performing reliable high-resolution dynamic-clamp with sharp electrodes, which is the intracellular technique mostly used in vivo; (2) the unreliability and low time resolution of single-electrode voltage-clamp recordings in vivo; (3) the impossibility of extracting single-trial conductances from Vm activity in vivo. We propose to address these shortcomings with the following goals:

Obtain high-resolution recordings applicable to any type of electrode (sharp and patch), any type of protocol (current-clamp, voltage-clamp, dynamic-clamp) and different preparations (in vivo, in vitro, dendritic patch recordings).

Obtain methods to reliably extract single-trial conductances from Vm activity, as well as to “probe” the intrinsic conductances in cortical neurons. These methods will be applied to intracellular recordings during visual responses in cat V1 in vivo.

Obtain methods to extract correlations from Vm activity and apply these methods to intracellular recordings in vivo to measure changes in correlation in afferent activity.

Obtain methods to estimate spike-triggered averages from Vm activity and obtain estimates of the optimal patterns of conductances that trigger spikes in vivo. These results will be integrated into computational models to test mechanisms for selectivity.

In all of these methods, we take advantage of the real-time feedback between a computer and the recorded neuron. This real-time feedback will be used to (a) design a new type of recording paradigm, which we call Active Electrode Compensation (AEC), and which consists in a real-time computer-controlled compensation of the electrode artefacts and bias which currently limit recording precision; (b) to use the AEC method to improve current-clamp, voltage-clamp and dynamic-clamp recordings of cortical neurons; (c) use this method as an essential tool to design methods for estimating conductances and statistical characteristics of network activity from intracellular recordings.
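The offline half of such a compensation scheme can be sketched as a least-squares estimate of the kernel relating a known injected current to the recorded potential, followed by subtraction of that kernel's contribution. This sketch deliberately omits the hard part of AEC, namely separating the electrode kernel from the membrane kernel (here the "neuron" is a pure electrode), and all names and values are ours.

```python
import numpy as np

def estimate_kernel(i_inj, v_rec, n_taps=40):
    """Least-squares estimate of the linear kernel mapping injected
    current to recorded voltage (sketch of the offline calibration)."""
    n = len(i_inj)
    # design matrix: column k is the current delayed by k samples
    X = np.column_stack([np.concatenate([np.zeros(k), i_inj[:n - k]])
                         for k in range(n_taps)])
    kernel, *_ = np.linalg.lstsq(X, v_rec, rcond=None)
    return kernel

def compensate(v_rec, i_inj, ke):
    """Online stage: subtract the electrode contribution Ke * I from the
    recorded potential to recover the membrane potential."""
    return v_rec - np.convolve(i_inj, ke)[:len(v_rec)]

# synthetic calibration: white-noise current through a known
# exponential "electrode" kernel, with no membrane contribution
rng = np.random.default_rng(2)
i_inj = rng.standard_normal(2000)
ke_true = 50.0 * np.exp(-np.arange(30) / 5.0)
v_rec = np.convolve(i_inj, ke_true)[:2000]
ke = estimate_kernel(i_inj, v_rec)
vm = compensate(v_rec, i_inj, ke)   # should be ~0: pure electrode artefact
```

In the real method the estimated kernel is split into electrode and membrane parts before subtraction, and the subtraction runs in the real-time loop between the computer and the recorded neuron.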

Thus, we expect this project to provide three main contributions: (1) It will provide technical advances in the precision and resolution of several currently-used recording techniques, such as dynamic-clamp and voltage-clamp, which are currently limited. We aim at obtaining reliable high-resolution (≥ 20 kHz) measurement and conductance injection. This advance should be of benefit for in vivo and in vitro electrophysiologists. (2) It will enable us to perform high-resolution conductance measurements in high-conductance states in vivo and in vitro and to better understand this type of network activity. (3) It will enable us to better understand the spike selectivity of cortical neurons, by directly measuring single-trial conductances underlying visual responses, as well as the conductance time courses linked to the genesis of spikes. These measurements will be directly integrated into computational models. The mechanisms of spike selectivity in cortical neurons are still a subject of intense debate, and we expect to provide crucial measurements here, which we hope will help us better understand input selectivity in visual cortex (web site: http://

**Duration**: 2006-01 to 2007-12

Through this Procope project (Projet d'Actions Intégrées funded by Egide), we collaborate with the Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig. Our partners in this group are Alfred Anwander and Thomas Knösche. Our project is to compare and co-develop methods for MEG/EEG source localization and for diffusion MRI tractography (probabilistic and deterministic), and to work on the integration of both modalities. Several visits between the partners took place during the year, and a joint seminar was held in November in Leipzig.

**Duration:** 2006-01 to 2007-12

In this Barrande project (Projet d'Actions Intégrées funded by Egide), we collaborate with the Center for Machine Perception of the Czech Technical University, Prague. Our main partner in this group is Jan Kybic. The topic of this joint project is to develop spatio-temporal methods for cortical activity estimation using MEG/EEG.

**Duration**: 1 year, from August 2006 to August 2007 (extended).

Our partners in this project are Prof. Kaleem Siddiqi from McGill University (School of Computer Science and Centre for Intelligent Machines), and Bruce Pike and Jennifer Campbell from the McGill Brain Imaging Centre. Our broad goal is to develop algorithms for the analysis and processing of biomedical images, specifically brain images, based on formulations using partial differential equations, variational methods and differential geometry. Exchanges of visits from McGill University to INRIA have already occurred, and following these visits Maxime Descoteaux joined our group in early 2005 and started a PhD thesis at Nice University under the supervision of Rachid Deriche. Maxime Descoteaux visited McGill for a month in early 2007. P. Savadjiev visited Odyssée for one week in May. R. Deriche and M. Descoteaux visited McGill in July 2007.

Romain Brette is a member of the editorial board of Cognitive Neurodynamics and of the programme committee of ICCN'07 (Shanghai). He serves as a regular reviewer for the Journal of Computational Neuroscience, Neural Computation, Journal of Physiology (Paris), Cognitive Neurodynamics, Computational Intelligence and Neuroscience, Europhysics Letters and the CNS conferences. He organizes the Theoretical Neuroscience Breakfasts in Paris twice a month, and he is setting up a wiki for theoretical neuroscience teams at Ecole Normale Supérieure (http://

Maureen Clerc is a member of the local (Sophia-Antipolis) committee CUMIR.

Maureen Clerc is a member of the Program Committee of RFIA 2008. She has been invited to give talks at several workshops (NIH-INRIA, INSERM-INRIA).

Maureen Clerc is in charge of the Color grant GT signal-MEEG and of the Procope and Barrande projects "Multimodal functional imaging of the Brain", see and .

Rachid Deriche is Project Committee vice-chairman at INRIA Sophia Antipolis - Méditerranée. He co-organised the first joint "Biomedical and Life Sciences Computing Workshop", promoting collaborations between INRIA and American researchers in the field of Life Sciences (April 16-17, 2007, in Bethesda, MD), and was in charge, with P. Basser (NIH), of the organisation of the session *Brain Imaging and Modeling*.

Rachid Deriche is Associate Editor of the SIAM Journal on Imaging Sciences (SIIMS), editorial board member at Springer for the book series Computational Imaging and Vision, editorial board member of the International Journal of Computer Vision (IJCV), and area chair for the 11th IEEE International Conference on Computer Vision (ICCV 2007), the 10th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2007) and the 16e congrès francophone AFRIF-AFIA Reconnaissance des Formes et Intelligence Artificielle (RFIA 2008). He is a member of the program committees of the IEEE International Symposium on Biomedical Imaging: From Nano to Macro (ISBI 2007 and 2008) and serves on the editorial/program committees of several other international journals and conferences (NeuroImage, IEEE Transactions on Medical Imaging, Magnetic Resonance in Medicine, JMIV, Medical Image Analysis, ISMRM, HBM, ISBI, ...).

Rachid Deriche has been invited to give a talk at “Séminaire Méthodes Mathématiques du Traitement d'Images” University Pierre and Marie Curie and C.N.R.S. Laboratoire Jacques-Louis Lions - December 19, 2006

Rachid Deriche has been invited to give a talk at “JOSTIC:07”, Faculté des Sciences, Rabat, Morocco, February 23-25, 2007.

Rachid Deriche has been invited to give two talks at “4ème Ecole Internationale en Traitement de Signal et ses Applications - ISSSPA07” - INELEC, University of Boumerdes, Algeria. June 30 - July 4, 2007

Rachid Deriche has been invited to give a talk at RICAM - Workshop on Bioimaging II/PDEs, Linz, Austria - November 19-23, 2007

Olivier Faugeras is a member of the French Academy of Sciences and the French Academy of Technology. He is on the administration boards of the Agence Nationale de la Recherche (ANR) and the Fondation d'Entreprise EADS. He is on the Scientific Board of the Institut Français du Pétrole, which he chaired for four years up until November 2007. He is on the editorial board of the International Journal of Computer Vision (IJCV).

Pierre Kornprobst is a member of the scientific committee of the "Pôle de Recherche Scientifique et Technique" (PRST) entitled "Modélisation, informations et systèmes numériques" (MISN). At INRIA, he is a member of the comité de suivi doctoral (CSD). He was also a member of the Program Committee of the IEEE Pacific-Rim Symposium on Image and Video Technology (PSIVT 2007), and he regularly contributes to the review process for several international journals.

Pierre Kornprobst gave a half-day tutorial at SIGGRAPH 2007 entitled "*A Gentle Introduction to Bilateral Filtering and its Applications*," with Sylvain Paris (MIT), Jack Tumblin (Northwestern University, IL) and Frédo Durand (MIT).

Pierre Kornprobst was invited to give a keynote speech at the "AG des 20 ans" meeting, organized by GDR ISIS, in May 2007.

Théo Papadopoulo is a member of the Program Committee of the 2007 International Conference on Computer Vision (ICCV), the 2007 International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) and the 2008 RFIA (Reconnaissance des Formes et Intelligence Artificielle). He is helping in the organization of the 2008 European Conference on Computer Vision (ECCV) that will be held in Marseille.

Since July 2007, Théo Papadopoulo has been the task leader of the WP8 work package of the European project FACETS.

Théo Papadopoulo is a member of the local (Sophia-Antipolis) committee for software development (CDL).

Romain Brette is an assistant professor at Ecole Normale Supérieure (Paris). He teaches the Introduction to Scientific Computing course and the Computational Neuroscience course in the computer science curriculum of Ecole Normale Supérieure. He also mentors several students there.

Rachid Deriche teaches “Geometric Flows and Image Analysis” in the Master IGMMV “Image et géométrie pour le multimédia et la modélisation du vivant”, University of Nice Sophia Antipolis (15h).

Rachid Deriche teaches “PDEs and Geometric Flows in CV and IP” in the Master MPRI “Master Parisien de Recherche en Informatique”, University of Paris 7, ENS and Ecole Polytechnique (12h).

Olivier Faugeras teaches the course “Mathematical methods for neuroscience” at ENS, in the Master MVA and the ENS Math/Info section (24h).

Jonathan Touboul leads the exercise sessions for this course at ENS (16h).

Renaud Keriven teaches "Fundamentals in Computer Science" and "Computer Vision and Image Processing" at ENPC, "Foundations of Computer Science" at Ecole Polytechnique, "Stereovision" in the "Master Mathématiques Vision et Apprentissage" (MVA) and the "Master Parisien de Recherche en Informatique" (MPRI), and "Computer Vision" in the “Master Systèmes d'Information”.

Sandrine Chemla is a teaching assistant for digital electronics courses (combinational and sequential logic) for first- and second-year students.

Émilien Tlapale teaches "Programming in C++" in the electronics department of the EPU (Université de Nice-Sophia Antipolis).

Adrien Wohrer is a teaching assistant (moniteur) from École Polytechnique. He teaches the "Curves and Surfaces" course at the Université de Nice-Sophia Antipolis.