Team CORTEX


Section: New Results

Spiking neurons

Participants: Maxime Ambard, Hana Belmabrouk, Yann Boniface, Dominique Martinez, Thierry Viéville, Thomas Voegtlin.

Analysis of experimental data:

We study the encoding of sensory information in the mammalian olfactory bulb (in collaboration with P.M. Lledo from the Pasteur Institute, Paris) and in the insect antennal lobe (in collaboration with J.P. Rospars from INRA, Versailles).

In the collaborative work with the Pasteur Institute, we analysed the correlation between the firing of individual neurons and the network oscillation. Analysis of electrophysiological data, recorded in vitro from rat olfactory bulb slices, shows that mitral cell firing is phase-locked to the fast (gamma range) local field potential oscillation. This phase-locking is strongly reduced when the inhibitory synaptic conductance is pharmacologically blocked, highlighting the important role of synaptic inhibition. In order to extract the time course of the inhibitory synaptic conductance, we have developed a new method based on fitting a neuron model to experiments with local injections of a synaptic blocker. Using this method, we found that the inhibitory conductance fluctuations are correlated with the local field potential oscillations. A relationship between the received inhibition and the phase of mitral action potentials is also revealed: the probability of firing a phase-locked action potential increases if the neuron receives a large number of inhibitory synaptic events, and if these events are themselves phase-locked [1]. This finding confirms our model prediction.
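
As a side illustration of how such phase-locking can be quantified, here is a minimal sketch computing the vector strength of spike phases relative to the gamma-filtered local field potential; the function, filter settings and toy data are our assumptions for this example, not the analysis method of [1].

import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking(spike_times, lfp, fs, band=(30.0, 80.0)):
    """Vector strength of spike phases relative to the band-passed LFP.
    Returns (strength, preferred_phase): strength is 0 for uniformly
    spread phases and 1 for perfect phase-locking."""
    # Band-pass the LFP in the gamma range, then take its instantaneous
    # phase from the analytic (Hilbert-transformed) signal.
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase = np.angle(hilbert(filtfilt(b, a, lfp)))
    # Read off the LFP phase at each spike time.
    idx = np.clip((np.asarray(spike_times) * fs).astype(int), 0, len(lfp) - 1)
    r = np.mean(np.exp(1j * phase[idx]))   # mean resultant vector
    return np.abs(r), np.angle(r)

# Toy usage: spikes locked to one phase of a noisy 40 Hz oscillation.
fs = 1000.0
t = np.arange(0.0, 2.0, 1.0 / fs)
lfp = np.sin(2 * np.pi * 40.0 * t) + 0.3 * np.random.randn(len(t))
spikes = np.arange(0.0125, 2.0, 1.0 / 40.0)   # one spike per cycle
print(phase_locking(spikes, lfp, fs))          # strength close to 1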

In the collaborative work with INRA, we analysed the spike timing precision of pheromone-sensitive neurons in the antennal lobe of the moth Agrotis ipsilon. Spike train activity from several neurons was first recorded in vivo in responsive areas of the macroglomerular complex, and individual spike trains were identified using spike sorting. A statistical tool was then developed to segment and characterize individual spike trains. It reveals that antennal lobe neurons have a stereotyped and synchronized response in the presence of pheromones. From repeated measurements, we show that the response is both precise (temporal jitter of spikes over trials < 4 ms) and robust (probability of losing spikes over trials < 0.1) [25]. The stereotyped response and its extreme precision lead to hypotheses concerning the intrinsic properties of these neurons.
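
The statistical segmentation tool itself is described in [25]; purely as an illustration of the two quantities reported above, the following sketch computes a per-spike jitter and spike-loss probability across trials by nearest-spike matching within a window (the matching rule and the window size are our assumptions).

import numpy as np

def response_precision(trials, template_times, window=0.005):
    """For each template spike: the temporal jitter (s.d. of the matched
    spike time across trials) and the probability of losing that spike
    (fraction of trials with no spike within the matching window)."""
    jitters, loss = [], []
    for t0 in template_times:
        deviations = []
        for spikes in trials:
            if len(spikes) == 0:
                continue
            # Nearest spike to the template time, kept if inside the window.
            d = spikes[np.argmin(np.abs(spikes - t0))] - t0
            if abs(d) <= window:
                deviations.append(d)
        jitters.append(np.std(deviations) if len(deviations) > 1 else np.nan)
        loss.append(1.0 - len(deviations) / len(trials))
    return np.array(jitters), np.array(loss)

# Toy usage: 20 trials of 3 stereotyped spikes with 2 ms jitter.
rng = np.random.default_rng(0)
template = np.array([0.05, 0.10, 0.15])
trials = [template + rng.normal(0.0, 0.002, 3) for _ in range(20)]
jitter, loss = response_precision(trials, template)
print(jitter, loss)   # jitter around 0.002 s, loss near 0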

Modeling at the neuronal level:

A major paradigm in computational neuroscience is that information is encoded in the precise timing of individual spikes, rather than in the mean firing rate. In order to understand the neural code, it therefore seems important to focus on the response of a neuron to an incoming current. This response depends on the neuron's internal state, in a way that is described by a phase response curve. We have developed a theory of temporal coding based on this principle. The idea is that the meaning of a spike arriving at a synapse depends on the post-synaptic neuron's dynamic state. If the post-synaptic neuron is in a highly excitable state and responds well to incoming currents, then an incoming spike will code for a high value. Therefore the times at which this neuron is excitable can be used to encode high values. Conversely, low values correspond to times when the neuron is less excitable. We have derived a learning algorithm for spiking neural networks, based on this principle, that generalizes single-layer and multi-layer perceptron learning to spiking neurons [12]. Another development of this theory uses spike-timing-dependent plasticity, a biologically plausible learning mechanism, to extract the principal components of the distribution of a time-coded random input vector [15].
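
As an illustration of the notion of excitability underlying this coding scheme, the following sketch numerically measures the phase response curve of a tonically firing leaky integrate-and-fire neuron; all parameter values are illustrative, and this is not the model of [12] or [15].

import numpy as np

def lif_prc(i_ext=1.5, tau=0.02, v_th=1.0, eps=0.05, n_phases=50, dt=1e-5):
    """Numerical phase response curve of a tonically firing leaky
    integrate-and-fire neuron: the normalized advance of the next spike
    caused by a small depolarizing kick delivered at each phase."""
    # Unperturbed period of dV/dt = (-V + i_ext) / tau, reset 0, threshold v_th.
    period = -tau * np.log(1.0 - v_th / i_ext)
    phases = np.linspace(0.05, 0.95, n_phases)
    prc = []
    for phi in phases:
        v, t, kicked = 0.0, 0.0, False
        while v < v_th:
            if not kicked and t >= phi * period:
                v += eps                     # perturbation at phase phi
                kicked = True
            v += dt * (-v + i_ext) / tau     # Euler integration step
            t += dt
        prc.append((period - t) / period)    # normalized phase advance
    return phases, np.array(prc)

phases, prc = lif_prc()
print(prc[0], prc[-1])   # late kicks advance the spike more than early ones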

Following this line of research, we carried out a study focusing on synchronized firing across neurons and phase-locking to the network oscillation. More precisely, we investigated the formation of synchronized neural assemblies in inhibitory networks. First, a mathematical analysis revealed that oscillatory synchronization requires precise and balanced inhibition. This model prediction was then tested on experimental data from olfactory bulb slices (see the section on data analysis above). Second, we studied the role of inhibitory, noisy interactions in producing stimulus-specific synchrony. From theoretical analysis and computer simulations, we found that slow inhibition plays a key role in desynchronizing neurons. Depending on the balance between fast and slow inhibitory inputs, particular neurons may either synchronize or desynchronize. The complementary roles of the two synaptic time scales in the formation of neural assemblies suggest a wiring scheme that produces stimulus-specific inhibitory interactions and endows inhibitory sub-circuits with properties of binary memories. The relative proportion of fast GABA-A and slow GABA-B inputs regulates synchrony and determines whether a given projection neuron engages in the neural assembly.
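
As a toy exploration of this fast/slow balance (not the network model analysed in this work), the following sketch simulates two mutually inhibiting leaky integrate-and-fire neurons with a fast and a slow conductance, so that the two coupling regimes can be compared; every parameter value here is an assumption.

import numpy as np

def inhibitory_pair(g_fast=0.5, g_slow=0.0, t_max=2.0, dt=1e-4):
    """Two mutually inhibiting leaky integrate-and-fire neurons with a
    fast (GABA-A-like, tau 5 ms) and a slow (GABA-B-like, tau 100 ms)
    synaptic component; returns the two spike trains."""
    tau, i_ext, v_th, e_inh = 0.02, 1.5, 1.0, -0.2
    tau_f, tau_s = 0.005, 0.1
    v = np.array([0.0, 0.4])             # start out of phase
    s_f, s_s = np.zeros(2), np.zeros(2)
    spikes = [[], []]
    for step in range(int(t_max / dt)):
        s_f -= dt * s_f / tau_f           # fast synaptic gates decay
        s_s -= dt * s_s / tau_s           # slow synaptic gates decay
        # Each neuron is inhibited through the other neuron's synapses.
        i_syn = (g_fast * s_f[::-1] + g_slow * s_s[::-1]) * (e_inh - v)
        v += dt * (-v + i_ext + i_syn) / tau
        fired = v >= v_th
        v[fired] = 0.0
        s_f[fired] += 1.0
        s_s[fired] += 1.0
        for k in np.where(fired)[0]:
            spikes[k].append(step * dt)
    return spikes

# Compare spike-time alignment under fast-only vs. slow-dominated coupling.
fast = inhibitory_pair(g_fast=0.5, g_slow=0.0)
slow = inhibitory_pair(g_fast=0.0, g_slow=0.1)
print(fast[0][-3:], fast[1][-3:])
print(slow[0][-3:], slow[1][-3:])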

Mathematical analysis of spiking networks

Overview of facts and issues about neural coding by spikes: introducing numerical bounds to explain the limits of spiking neural networks and to improve event-based neural network simulation.

In this collaborative work, we have clarified some aspects of coding with spike timing, through a review of well-understood technical facts regarding spike coding. Our goal is a better understanding of the extent to which computing and modeling with spiking neuron networks might be biologically plausible and computationally efficient [4].

We intentionally restrict ourselves to a deterministic implementation of spiking neuron networks and consider that the dynamics of a network is defined by a non-stochastic mapping. By staying in this rather simple framework, we are able to propose results, formulas and concrete numerical values on several topics: (i) general time constraints, (ii) links between continuous signals and spike trains, (iii) adjustment of spiking neuron network parameters. Besides a reasoned review of several facts and issues about neural coding by spikes, we propose new results, such as a numerical evaluation of the most critical temporal variables that schedule the progress of realistic spike trains [51].

When implementing spiking neuron networks, whether for biological simulation or for computational purposes, it is important to take into account the facts set out here [6]. This precaution can prevent one from implementing mechanisms that would be meaningless with respect to obvious time constraints, or from artificially introducing spikes when continuous calculations would be sufficient and simpler. It is also pointed out that implementing a large-scale spiking neuron network is, in the end, a simple task.

Reverse-engineering of spiking neural network parameters

We consider the deterministic evolution of a time-discretized network of spiking neurons with delayed connection weights, modeled as a discretized neural network of the generalized integrate-and-fire (gIF) type. The purpose is to study a class of algorithmic methods for calculating the parameters that exactly reproduce a given spike train generated by a hidden (unknown) neural network.

This standard problem is known to be NP-hard when delays have to be calculated. We propose here a reformulation as a Linear Programming (LP) problem, which admits an efficient solution. This allows us to “back-engineer” a neural network, i.e. to find out, given a set of initial conditions, which parameters (here, the connection weights) reproduce the network's spike dynamics.

More precisely, we make explicit the fact that, for a gIF model, the back-engineering of a spike train is a Linear (L) problem if the membrane potentials are observed, and an LP problem if only the spike times are observed. Numerical robustness is discussed. Furthermore, we point out that the L or LP adjustment mechanism is local to each unit and has the same structure as a “Hebbian” rule [42].
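
The formulation of [42] covers gIF dynamics with delays; as a rough sketch of the LP idea only, the following code poses weight recovery for a simplified discrete-time leaky integrate-and-fire model (no delays) as one linear feasibility program per neuron, with leak factor, threshold and margin chosen arbitrarily.

import numpy as np
from scipy.optimize import linprog

def reconstruct_weights(raster, gamma=0.8, theta=1.0, margin=1e-3):
    """Recover connection weights from an observed spike raster, posed as
    one linear program (a pure feasibility problem) per neuron: the
    reconstructed potential must reach threshold whenever a spike was
    observed and stay strictly below it otherwise.

    raster : (T, N) 0/1 array, raster[t, j] = 1 if neuron j spiked at t
    """
    T, N = raster.shape
    weights = np.zeros((N, N))
    for i in range(N):
        # V_i(t) is linear in the weights: V_i(t) = c(t) . w_i, where c(t)
        # leak-integrates the observed input spikes since i's last reset.
        c, rows, spiked = np.zeros(N), [], []
        for t in range(1, T):
            if raster[t - 1, i]:
                c = raster[t - 1].astype(float)   # reset wipes older input
            else:
                c = gamma * c + raster[t - 1]
            rows.append(c.copy())
            spiked.append(bool(raster[t, i]))
        A, s = np.array(rows), np.array(spiked)
        # linprog enforces A_ub @ w <= b_ub; '>= theta' rows get a sign flip.
        A_ub = np.vstack([A[~s], -A[s]])
        b_ub = np.concatenate([np.full((~s).sum(), theta - margin),
                               np.full(s.sum(), -theta)])
        res = linprog(np.zeros(N), A_ub=A_ub, b_ub=b_ub,
                      bounds=[(-10.0, 10.0)] * N, method="highs")
        if res.success:
            weights[i] = res.x
    return weights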

Going a step further, this paradigm has been generalized to the design of input-output spike train transformations. This means that we have a practical method to “program” a spiking network, i.e. to find a set of parameters allowing us to exactly reproduce the network output, given an input [43].

Parametric estimation of spike train statistics with Gibbs distributions, and application to synaptic adaptation mechanisms

We consider the evolution of a network of neurons, focusing on the asymptotic behavior of the spike dynamics rather than the membrane potential dynamics. In this context, the spike response is not sought as a deterministic response but as a conditional probability: “reading the code” consists in inferring such a probability.

This probability is computed from empirical raster plots, using the framework of thermodynamic formalism in ergodic theory. This gives us a parametric statistical model where the probability has the form of a Gibbs distribution. In this respect, the approach generalizes the seminal work of Schneidman, Bialek and collaborators [40].

A minimal presentation of the formalism is given here, and a general algorithmic estimation method is proposed that minimizes the relative entropy and yields fast convergent implementations. It is also made explicit how several spike observables (entropy, rate, synchronizations, correlations) are obtained in closed form from the parametric estimation [41].

This paradigm not only allows us to estimate the spike statistics, given a design choice, but also to compare different models, thus answering comparative questions about the neural code, such as: are correlations (or spike-time synchrony, or a given set of spike patterns) significant with respect to rate coding?

A numerical validation of the method is proposed, and perspectives regarding spike-train code analysis are also discussed [44].
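
The estimation method of [41] handles general spatio-temporal Gibbs potentials; as a toy instance of the underlying principle (matching empirical observables under a Gibbs form, which amounts to minimizing the relative entropy), the following sketch fits an instantaneous pairwise model in the spirit of the Schneidman-Bialek approach mentioned above, and is tractable only for a few neurons.

import numpy as np
from itertools import product

def fit_gibbs(raster, n_iter=3000, lr=0.1):
    """Fit P(x) ~ exp(h . x + x . J x / 2) to binary spike patterns by
    matching model firing rates and pairwise moments to the empirical
    ones (gradient ascent on the log-likelihood, i.e. minimization of
    the relative entropy). Exact enumeration: small N only."""
    T, N = raster.shape
    states = np.array(list(product([0, 1], repeat=N)), dtype=float)
    emp_rate = raster.mean(axis=0)
    emp_corr = raster.T @ raster / T
    h, J = np.zeros(N), np.zeros((N, N))
    for _ in range(n_iter):
        energy = states @ h + 0.5 * np.einsum("si,ij,sj->s", states, J, states)
        p = np.exp(energy - energy.max())
        p /= p.sum()                              # exact model distribution
        mod_rate = p @ states
        mod_corr = states.T @ (states * p[:, None])
        h += lr * (emp_rate - mod_rate)           # moment-matching updates
        J += lr * (emp_corr - mod_corr)
        np.fill_diagonal(J, 0.0)                  # self-terms live in h
    return h, J

# Toy usage: 5 neurons, correlated Bernoulli spikes.
rng = np.random.default_rng(1)
common = rng.random((2000, 1)) < 0.2
raster = ((rng.random((2000, 5)) < 0.1) | common).astype(float)
h, J = fit_gibbs(raster)
print(np.round(J, 2))   # positive couplings reflect the common drive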

Going a step further, we used this mechanism to help Bruno Cessac (from the NeuroMathComp EPI) study the effects of synaptic plasticity on these statistics, and introduced a framework in which spike trains are associated with a coding of membrane potential trajectories and, in important explicit examples (the so-called gIF models), actually constitute a symbolic coding. For instance, it has been shown that Gibbs distributions naturally arise when considering “slow” synaptic plasticity rules, where the characteristic time for synapse adaptation is much longer than the characteristic time for neuron dynamics [5].

Simulation tools:

We have developed two simulators for the numerical simulation of spiking neural network models. In SIRENE, a time-stepping method (Runge-Kutta) approximates the membrane voltage of neurons on a discretized time grid. In MVASpike, the computation of firing times is driven by local or global events.
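
As a minimal illustration of the two strategies (not the simulators' actual code), the following sketch contrasts clock-driven and event-driven simulation on a single leaky integrate-and-fire neuron with constant input, a case where firing times have a closed form; Euler stepping stands in for SIRENE's Runge-Kutta scheme, and a real event-driven simulator such as MVASpike queues and exchanges events between units.

import numpy as np

tau, i_ext, v_th = 0.02, 1.5, 1.0   # LIF neuron with constant drive

def clock_driven(t_max, dt=1e-4):
    """Time-stepping: integrate dV/dt = (-V + I)/tau on a fixed grid
    (Euler here for brevity; SIRENE uses Runge-Kutta)."""
    v, spikes = 0.0, []
    for step in range(int(t_max / dt)):
        v += dt * (-v + i_ext) / tau
        if v >= v_th:
            spikes.append((step + 1) * dt)
            v = 0.0
    return spikes

def event_driven(t_max):
    """Event-driven: jump straight to each firing time, which is known in
    closed form since V(t) = I (1 - exp(-t / tau)) between spikes."""
    period = -tau * np.log(1.0 - v_th / i_ext)
    return list(np.arange(period, t_max, period))

print(clock_driven(0.2)[:3])   # grid-accurate firing times
print(event_driven(0.2)[:3])   # exact firing times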

