## Section: New Results

### Assemblies of neuron models and simulation

#### Simulation of spiking neural networks

Participant : Romain Brette.

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.

Neural networks can be simulated exactly using event-driven strategies, in which the algorithm advances directly from one spike to the next. This approach applies to neuron models for which we have 1) an explicit expression for the evolution of the state variables between spikes and 2) an explicit test on the state variables that predicts whether and when a spike will be emitted. In previous work, we proposed a method allowing the exact simulation of an integrate-and-fire model with exponential conductances, with the constraint of a single synaptic time constant. More recently, we proposed a method, based on polynomial root finding, which applies to integrate-and-fire models with exponential currents and possibly many different synaptic time constants. Models can include biexponential synaptic currents and spike-triggered adaptation currents. This work was published in Neural Computation [Oops!] .
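As an illustration of the root-finding idea, here is a minimal sketch (not the published algorithm; all parameter values are hypothetical) of predicting the next spike time of a leaky integrate-and-fire neuron with an exponential synaptic current. Between events the solution is closed-form, and when the membrane and synaptic time constants are commensurable (here τm = 2 τs) the threshold-crossing condition reduces to a polynomial equation:

```python
import numpy as np

def next_spike_time(v0, j0, tau_m=20.0, tau_s=10.0, theta=1.0):
    """Predict the next spike time (ms) of a LIF neuron with an
    exponential synaptic current. Between events the closed-form
    solution is V(t) = (v0 - a)*x + a*x**2 with x = exp(-t/tau_m)
    and a = j0*tau_s/(tau_s - tau_m), valid when tau_m = 2*tau_s,
    so the threshold crossing V(t) = theta is a quadratic in x
    (higher time-constant ratios give higher-degree polynomials).
    Returns None if the threshold is never reached.
    """
    assert abs(tau_m - 2.0 * tau_s) < 1e-12, "sketch assumes tau_m = 2*tau_s"
    a = j0 * tau_s / (tau_s - tau_m)          # coefficient of the x**2 term
    # Threshold crossing: a*x**2 + (v0 - a)*x - theta = 0
    roots = np.roots([a, v0 - a, -theta])
    # Keep real roots in (0, 1); as t grows, x decreases from 1,
    # so the first crossing corresponds to the largest such root.
    real = [r.real for r in roots if abs(r.imag) < 1e-12 and 0 < r.real < 1]
    if not real:
        return None                           # trajectory stays subthreshold
    return -tau_m * np.log(max(real))

t_spike = next_spike_time(v0=0.0, j0=5.0)
```

More synaptic time constants add exponential terms with different rates; choosing commensurable rates keeps the crossing test polynomial, which is the essence of the root-finding approach.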

We also published a review of the simulation of spiking neural networks in the Journal of Computational Neuroscience, covering algorithmic and precision issues and giving an overview of simulation software packages. The review also includes benchmark code for the various packages, distributed in a public database (ModelDB) [Oops!] .

#### Dynamic analysis of discrete time models

Participants : Bruno Cessac, Olivier Faugeras, Jonathan Touboul, Thierry Viéville.

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.

In [Oops!] I have rigorously classified the dynamical regimes generically exhibited by leaky integrate-and-fire neural network models in which time has been discretised. Using symbolic dynamics techniques, I have shown that the dynamics of the membrane potentials is in one-to-one correspondence with sequences of spike patterns (“raster plots”). Moreover, though the dynamics is generically periodic, it exhibits a weak form of sensitivity to initial conditions due to the presence of a sharp threshold in the model definition. As a consequence, the model displays a dynamical regime indistinguishable from chaos in numerical experiments.
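To fix ideas, here is a minimal sketch of a discrete-time leaky integrate-and-fire network iteration and the raster plot it generates (the leak/reset/coupling form is a generic BMS-type model, and the weights, inputs, and parameter values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_raster(W, I, gamma=0.9, theta=1.0, steps=200):
    """Iterate a discrete-time LIF network: leak with factor gamma,
    reset to 0 on firing, spike-driven coupling W and constant input I.
    Returns the raster plot as a (steps, N) boolean array, whose rows
    are the successive spike patterns."""
    n = W.shape[0]
    v = np.zeros(n)
    raster = np.zeros((steps, n), dtype=bool)
    for t in range(steps):
        z = v >= theta                      # spike pattern at time t
        raster[t] = z
        # leak (only for neurons that did not fire) + synaptic input
        v = gamma * v * (~z) + W @ z + I
    return raster

W = rng.normal(0.0, 0.5, (8, 8)) / np.sqrt(8)   # hypothetical weights
I = rng.uniform(0.2, 0.4, 8)                    # suprathreshold drive
raster = simulate_raster(W, I)
```

The raster array is exactly the sequence of spike patterns discussed above; the classification result says that, generically, the membrane-potential trajectory can be reconstructed from this symbolic sequence.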

With T. Viéville we have extended these results to biologically plausible generalized integrate-and-fire (gIF) neuron models with conductance-based dynamics [107] , [100] . Going a step further, we have derived constructive conditions allowing visual functions to be properly implemented on such networks. The time discretisation has been carefully conducted, avoiding the usual bias induced by, e.g., Euler methods, and the usual arbitrary discontinuities have been discussed. The effects of the discretisation approximation have been analysed both analytically and experimentally. With this new point of view, we have also reconsidered some “biological” results obtained on “models” with biologically implausible discontinuities. This has allowed us to reduce the biophysical membrane equation to a very simple but powerful gIF numerical model, with a drastic reduction of the algorithmic complexity of event-based network simulations.

#### Stochastic analysis

Participants : Bruno Cessac, Olivier Faugeras, Jonathan Touboul, Thierry Viéville.

This work was partially supported by the EC IP project FP6-015879, FACETS and the Fondation d'Entreprise EADS.

The mathematical study of neuronal network dynamics is a real challenge: neuronal networks are dynamical systems with a huge number of degrees of freedom and parameters, and a multi-scale organisation with complex interactions, in which neuron dynamics depend on the synaptic graph structure while synapses evolve according to neuron activity. This analysis, an important step towards the characterisation of in vitro or in vivo neuronal networks at space scales ranging from a few neurons to, e.g., cortical columns, can be performed in some cases using tools from statistical physics, dynamical systems theory and ergodic theory. A detailed description of these techniques has been published this year in [Oops!] , [Oops!] .

With O. Faugeras and J. Touboul we are currently applying these methods (dynamic mean-field theory combined with dynamical systems analysis) to neural mass models with several populations, using a connectivity scheme based on anatomical data on the structure of cortical columns. This study, not yet completed, will allow us to characterise cortical dynamics at a scale corresponding precisely to the resolution of optical imaging or functional MRI.

#### Effects of synaptic plasticity

Participants : Hugues Berry, Bruno Delord, Mathias Quoy, Benoit Siri, Olivier Temam.

This project is partially supported by the ANR.

This collaboration between the Alchemy project team at INRIA Futurs Saclay (Hugues Berry, Olivier Temam), INSERM ANIM U742, Université P. et M. Curie, Paris (Bruno Delord) and the Equipe neurocybernétique, ETIS, UMR CNRS 8051 (Mathias Quoy) aims at understanding how the structure of biological neural networks conditions their functional capacities, in particular learning. In [Oops!] , we present a mathematical analysis of the effects of Hebbian learning in random recurrent neural networks. Using theoretical tools from dynamical systems and graph theory, we study a generic “Hebb-like” learning rule that can include passive forgetting and different time scales for neuron activity and learning dynamics. We show that “Hebb-like” learning leads to a reduction of the complexity of the dynamics, manifested by a systematic decay of the largest Lyapunov exponent. This effect is caused by a contraction of the spectral radius of the Jacobian matrices, induced either by passive forgetting or by saturation of the neurons. As a consequence, learning drives the system from chaos to a steady state through a sequence of bifurcations. We show that the network's sensitivity to the input pattern is maximal at the “edge of chaos”. We also emphasize the role of feedback circuits in the Jacobian matrices and the link to cooperative systems. In [Oops!] these results are extended to random networks with both inhibitory and excitatory neurons.
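The contraction mechanism can be sketched numerically as follows (a simple tanh rate network with a Hebb-plus-passive-forgetting rule, not the exact model of the paper; all parameter values are hypothetical). The spectral radius of the Jacobian shrinks as learning proceeds, through the joint effect of weight decay and neuron saturation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, g = 50, 3.0                    # network size, gain (chaotic regime)
lam, alpha = 0.99, 0.01           # passive forgetting, learning rate
W = rng.normal(0.0, 1.0 / np.sqrt(n), (n, n))   # random recurrent weights
x = rng.uniform(-1.0, 1.0, n)

def spectral_radius(W, x, g):
    """Spectral radius of the Jacobian of the map x -> tanh(g * W @ x):
    J_ij = (1 - tanh(g*(Wx)_i)**2) * g * W_ij (row-wise scaling)."""
    J = g * W * (1.0 - np.tanh(g * W @ x) ** 2)[:, None]
    return np.abs(np.linalg.eigvals(J)).max()

rho0 = spectral_radius(W, x, g)
for _ in range(500):                               # coupled dynamics
    x = np.tanh(g * W @ x)                         # fast neuron update
    W = lam * W + (alpha / n) * np.outer(x, x)     # slow Hebb-like update
rho1 = spectral_radius(W, x, g)                    # contracted radius
```

Both routes to contraction appear here: if activity dies out, forgetting shrinks W directly; if the Hebbian term sustains a saturated state, the near-zero tanh derivatives shrink the Jacobian instead.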

#### Linear response and networks dynamics

Participants : Bruno Cessac, J.A. Sepulchre.

We have developed an original approach, based on a linear response theory proposed by Ruelle for dissipative dynamical systems, that allows us to analyse the joint effect of network topology and non-linear dynamics in systems such as neural networks, where the nodes have non-linear transfer functions. On the practical side, we have predicted and demonstrated non-intuitive and unexpected effects in networks with chaotic dynamics. We have shown that it is possible to transmit and recover a signal in a chaotic system. We have also analysed how the dynamics interferes with the graph topology to produce an effective transmission network, whose topology depends on the signal and cannot be read off directly from the “wired” network. Moreover, with a suitable choice of the resonance frequency, one can transmit a signal from one node to another by amplitude modulation, in spite of the presence of chaos. In addition, a signal transmitted to any node via different paths will be recovered only at some specific nodes.

Recently [Oops!] , B. Cessac has applied this method to simple examples of non-uniformly hyperbolic systems such as the Hénon map, and exhibited the existence of a pole in the upper half complex plane. This result is of strong interest to the community studying non-uniformly hyperbolic dynamical systems and may have an impact on the type of analysis we are performing on networks.

#### A brief overview of intracortical circuits

Participant : François Grimbert.

This work, [Oops!] , aims at giving an overview of the most salient features of intracortical connectivity, i.e., the structure of neuronal networks inside a cortical area. In the first part we raise the question of cortical columns, a blurry and sometimes misused concept defining fundamental mesoscopic units in the cortex. Their role and structure show many discrepancies across species and even across areas. In the second part, we focus on local circuits and try to gain insight into their complexity as well as their most important organizing laws, such as the stereotypical excitatory pathway. The last part is dedicated to horizontal connectivity, illustrated through two famous examples: the primary visual cortex of mammals and the rat barrel cortex.

#### Neural Fields: homogeneous states

Keywords : Neural masses, Neural fields, Integro-differential equations, synchronization.

Participants : Olivier Faugeras, François Grimbert, Jean-Jacques Slotine [ MIT ] .

Neural fields are an interesting option for modelling macroscopic parts of the cortex involving several populations of neurons, such as cortical areas. Two classes of neural field equations are considered: voltage based and activity based. The spatio-temporal behaviour of these fields is described by nonlinear integro-differential equations. The integral term, computed over a compact subset of R^q, q = 1, 2, 3, involves space- and time-varying, possibly non-symmetric, intra-cortical connectivity kernels. Contributions from white matter afferents are represented as external input. Sigmoidal nonlinearities arise from the relation between average membrane potentials and instantaneous firing rates. Using methods of functional analysis, we characterize the existence and uniqueness of a solution of these equations for general, homogeneous (i.e. independent of the spatial variable), and locally homogeneous inputs. In all cases we give sufficient conditions on the connectivity functions for the solutions to be absolutely stable, that is to say independent of the initial state of the field. These conditions bear on certain compact operators defined from the connectivity kernels, the sigmoids, and the time constants describing the temporal shape of the post-synaptic potentials. Numerical experiments are presented to illustrate the theory. An important contribution of our work is the application of the theory of compact operators in a Hilbert space to the problem of neural fields, providing very simple mathematical answers to the questions asked by neuroscience modellers.
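In the voltage-based case, the equations studied have the following generic form (the notation here is illustrative, not a verbatim statement from the paper):

```latex
\tau \frac{\partial V}{\partial t}(\mathbf{r},t)
  = -V(\mathbf{r},t)
  + \int_{\Omega} W(\mathbf{r},\mathbf{r}')\,
      S\bigl(V(\mathbf{r}',t)\bigr)\, d\mathbf{r}'
  + I_{\mathrm{ext}}(\mathbf{r},t),
\qquad \Omega \subset \mathbb{R}^{q},\quad q = 1,2,3,
```

where W is the (possibly non-symmetric) connectivity kernel, S a sigmoid, and I_ext the external input from white matter afferents; with several populations, V, S and I_ext are vector-valued and τ becomes a diagonal matrix of time constants.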

This work has appeared as [Oops!] and was presented at [Oops!] .

#### Neural Fields: stationary states

Keywords : Neural masses, Neural fields, Integro-differential equations, stationary solutions, persistent states, bumps.

Participants : Olivier Faugeras, François Grimbert, Romain Veltz.

Neural continuum networks are an important aspect of the modelling of macroscopic parts of the cortex. Two classes of such networks are considered: voltage based and activity based. In both cases our networks contain an arbitrary number, n, of interacting neuron populations. Spatial non-symmetric connectivity functions represent local, cortico-cortical connections, while external inputs represent non-local connections. Sigmoidal nonlinearities model the relationship between (average) membrane potential and activity. Departing from most of the previous work in this area, we do not assume the nonlinearity to be singular, i.e., represented by the discontinuous Heaviside function. Another important difference with previous work is that we relax the assumption that the domain of definition where we study these networks is infinite, i.e. equal to R or R^2. We explicitly consider the biologically more relevant case of a bounded subset of R^q, q = 1, 2, 3, a better model of a piece of cortex. The time behaviour of these networks is described by systems of integro-differential equations. Using methods of functional analysis, we study the existence and uniqueness of a stationary, i.e., time-independent, solution of these equations in the case of a stationary input. These solutions can be seen as “persistent”; they are also sometimes called “bumps”. We show that under very mild assumptions on the connectivity functions, and because we do not use the Heaviside function for the nonlinearities, such solutions always exist. We also give sufficient conditions on the connectivity functions for the solution to be absolutely stable, that is to say independent of the initial state of the network. We then study the sensitivity of the solution(s) to variations of such parameters as the connectivity functions, the sigmoids, the external inputs, and, last but not least, the shape of the domain of existence of the neural continuum networks. These theoretical results are illustrated and corroborated by a large number of numerical experiments, in most of the cases 2 ≤ n ≤ 3, 2 ≤ q ≤ 3.
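A minimal numerical sketch of how such a stationary solution can be computed on a bounded 1-D domain (the kernel, input, and parameter values are hypothetical; the Picard iteration converges under a contraction condition in the spirit of the absolute-stability conditions above):

```python
import numpy as np

def stationary_state(w_kernel, i_ext, xs, s=np.tanh, tol=1e-10, max_iter=1000):
    """Compute a stationary ('bump') solution of a 1-D voltage-based
    neural field by Picard iteration on the discretised fixed-point
    equation  V(x) = integral of w(x,y) * S(V(y)) dy + I(x).
    Converges when ||w||_1 * sup|S'| < 1 (contraction)."""
    dx = xs[1] - xs[0]
    W = w_kernel(xs[:, None], xs[None, :])   # discretised connectivity
    v = np.zeros_like(xs)
    for _ in range(max_iter):
        v_new = (W @ s(v)) * dx + i_ext      # one Picard step
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    raise RuntimeError("Picard iteration did not converge")

# Hypothetical 'Mexican hat' kernel on [-5, 5] with a localised input;
# amplitudes chosen small enough that the map is a contraction.
xs = np.linspace(-5.0, 5.0, 201)
w = lambda x, y: 0.3 * np.exp(-(x - y) ** 2) - 0.1 * np.exp(-(x - y) ** 2 / 4)
i_ext = np.exp(-xs ** 2)
bump = stationary_state(w, i_ext, xs)
```

The resulting profile is a localised “bump” peaked at the input centre; shrinking the domain or reshaping the kernel lets one probe the sensitivity questions discussed above.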

This work has appeared as a technical report [Oops!] and is submitted for publication to Neural Computation.

#### Biophysical cortical column model for optical signal analysis: Hodgkin-Huxley

Keywords : Voltage-sensitive dye imaging, cortical column, membrane potential, NEURON simulation.

Participants : Sandrine Chemla, Frederic Chavane [ Institut de Neurosciences Cognitives de la Méditerranée ] , Thierry Viéville.

We propose a biological cortical column model, at a mesoscopic scale, in order to explain and interpret the biological sources of the voltage-sensitive dye imaging signal. The mesoscopic scale, corresponding to a micro-column, is about 50 µm. The proposed model takes into account the biological and electrical neural parameters of the laminar cortical layers. We therefore use a model based on a cortical microcircuit whose synaptic connections are made only between six specific populations of neurons: excitatory and inhibitory neurons in three main layers. For each neuron, we use a conductance-based single-compartment Hodgkin-Huxley neuron model [Oops!] . We claim that our model will qualitatively reproduce the same results as the optical imaging signal based on voltage-sensitive dyes, which represents the summed intracellular membrane potential changes of all the neuronal elements at a given cortical site. Preliminary simulations suggest that the optical imaging signal is the average of multiple components whose proportions change with the level of activity, and show, surprisingly, that inhibitory cells and spiking activity may well contribute more to the signal than initially thought.
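For reference, here is a minimal single-compartment Hodgkin-Huxley integration of the kind each model neuron performs (textbook squid-axon parameters and forward Euler; the column model's actual parameters and synaptic inputs are not reproduced here):

```python
import math

def hh_step(v, m, h, n, i_ext, dt=0.01):
    """One forward-Euler step of a classical single-compartment
    Hodgkin-Huxley neuron (v in mV, t in ms, C = 1 uF/cm^2)."""
    # Gating-variable rate functions (rest near -65 mV)
    am = 0.1 * (v + 40.0) / (1.0 - math.exp(-(v + 40.0) / 10.0))
    bm = 4.0 * math.exp(-(v + 65.0) / 18.0)
    ah = 0.07 * math.exp(-(v + 65.0) / 20.0)
    bh = 1.0 / (1.0 + math.exp(-(v + 35.0) / 10.0))
    an = 0.01 * (v + 55.0) / (1.0 - math.exp(-(v + 55.0) / 10.0))
    bn = 0.125 * math.exp(-(v + 65.0) / 80.0)
    # Ionic currents (conductances in mS/cm^2, reversals in mV)
    i_na = 120.0 * m ** 3 * h * (v - 50.0)    # sodium
    i_k = 36.0 * n ** 4 * (v + 77.0)          # potassium
    i_l = 0.3 * (v + 54.387)                  # leak
    v = v + dt * (i_ext - i_na - i_k - i_l)
    m = m + dt * (am * (1.0 - m) - bm * m)
    h = h + dt * (ah * (1.0 - h) - bh * h)
    n = n + dt * (an * (1.0 - n) - bn * n)
    return v, m, h, n

# Constant depolarising current; count spikes as upward crossings of 0 mV
v, m, h, n = -65.0, 0.05, 0.6, 0.32
spikes, prev = 0, v
for _ in range(50000):                        # 500 ms at dt = 0.01 ms
    v, m, h, n = hh_step(v, m, h, n, i_ext=10.0)
    if prev < 0.0 <= v:
        spikes += 1
    prev = v
```

In the column model, summing such membrane-potential traces, weighted by population and layer, is what produces the simulated dye signal; in practice such simulations are run in NEURON rather than hand-coded Euler steps.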

#### From neural fields to VSD optical imaging: neural masses

Keywords : Voltage-sensitive dye imaging, cortical column, membrane potential.

Participants : Frederic Chavane [ Institut de Neurosciences Cognitives de la Méditerranée ] , Olivier Faugeras, François Grimbert.

Neural masses are natural mathematical models for describing the dynamics of the cortex at the mesoscopic scale. They can be assembled to form a continuum, or neural field, and provide descriptions at a macroscopic scale [101] . Starting from such a model of a cortical area, we propose a formula for the direct problem of extrinsic optical imaging.

This work was presented in [Oops!] .