NeuroMathComp focuses on the exploration of the brain from the mathematical and computational perspectives.

We want to unveil the principles that govern the functioning of neurons and assemblies thereof and to use our results to bridge the gap between biological and computational vision.

Our work is quite mathematical but we make heavy use of computers for numerical experiments and simulations. We have close ties with several top groups in biological neuroscience. We are pursuing the idea that the "unreasonable effectiveness of mathematics" can be brought, as it has been in physics, to bear on neuroscience.

Computational neuroscience attempts to build models of neurons at a variety of levels: microscopic, i.e., the single neuron and the minicolumn containing of the order of one hundred neurons; mesoscopic, i.e., the macrocolumn containing of the order of tens of thousands of neurons; and macroscopic, i.e., whole cortical areas.

Modeling such assemblies of neurons and simulating their behavior involves putting together a mixture of the most recent results in neurophysiology with such advanced mathematical methods as dynamic systems theory, bifurcation theory, probability theory, stochastic calculus, theoretical physics and statistics, as well as the use of simulation tools.

We conduct research in the following main areas:

Neural networks dynamics

Mean-field approaches

Neural fields

Slow-fast dynamics in neuronal models

Spike train statistics

Synaptic plasticity

Visual neuroscience

Neuromorphic vision

The study of neural networks is certainly motivated by the long-term goal of understanding how the brain works. But, beyond the comprehension of the brain, or even of simpler neural systems in less evolved animals, there is also the desire to exhibit general mechanisms or principles at work in the nervous system. One possible strategy is to propose mathematical models of neural activity, at different space and time scales, depending on the type of phenomena under consideration. However, beyond the mere proposal of new models, which can rapidly result in a plethora of them, there is also a need to understand some fundamental keys ruling the behaviour of neural networks and, from this, to extract new ideas that can be tested in real experiments. Therefore, there is a need for a thorough analysis of these models. An efficient approach, developed in our team, consists of analysing neural networks as dynamical systems. This allows us to address several issues. A first, natural issue is to ask about the (generic) dynamics exhibited by the system when control parameters vary. This naturally leads to analysing the bifurcations occurring in the network and determining which phenomenological parameters control these bifurcations. Another issue concerns the interplay between neuron dynamics and synaptic network structure.
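
As a toy illustration of this dynamical-systems viewpoint (a minimal sketch of a bifurcation analysis, not one of the team's actual models), consider a single self-coupled rate neuron dx/dt = -x + w tanh(x). It undergoes a pitchfork bifurcation at w = 1: below it the rest state is the only attractor, above it two symmetric active states appear and the initial condition selects between them.

```python
import math

def simulate(w, x0, dt=0.01, steps=5000):
    """Euler integration of the rate model dx/dt = -x + w*tanh(x)."""
    x = x0
    for _ in range(steps):
        x += dt * (-x + w * math.tanh(x))
    return x

# Below the bifurcation (w < 1): every initial condition decays to rest.
rest = simulate(0.5, 0.8)

# Above it (w > 1): the rest state is unstable and two symmetric
# active states coexist; the sign of the initial condition decides.
up = simulate(2.0, 0.1)
down = simulate(2.0, -0.1)
print(rest, up, down)
```

Varying w and recording the attained fixed points traces out the bifurcation diagram; the same numerical strategy, applied to far richer models, underlies the analyses described above.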

In this spirit, our team has been able to characterize the generic dynamics exhibited by models such as Integrate and Fire models, conductance-based Integrate and Fire models, models of epilepsy, the effects of synaptic plasticity, and homeostasis and intrinsic plasticity.

Selected publications on this topic: link.

Modeling neural activity at scales integrating the effect of thousands of neurons is of central importance for several reasons. First, most imaging techniques are not able to measure individual neuron activity (the “microscopic” scale) but instead measure mesoscopic effects resulting from the activity of several hundreds to several hundreds of thousands of neurons. Second, anatomical data recorded in the cortex reveal the existence of structures, such as the cortical columns, with a diameter of about 50 μm to 1 mm, containing of the order of one hundred to one hundred thousand neurons belonging to a few different types. The description of this collective dynamics requires models which are different from individual neuron models. In particular, when the number of neurons is large enough, averaging effects appear, and the collective dynamics is well described by an effective mean field, summarizing the effect of the interactions of a neuron with the other neurons and depending on a few effective control parameters. This vision, inherited from statistical physics, requires that the spatial scale be large enough to include a large number of microscopic components (here neurons) and small enough so that the region considered is homogeneous.

Our group is developing mathematical and numerical methods allowing us, on the one hand, to produce dynamic mean-field equations from the physiological characteristics of neural structures (neuron types, synapse types, and anatomical connectivity between neuron populations) and, on the other hand, to simulate these equations. These methods use tools from advanced probability theory such as the theory of Large Deviations and the study of interacting diffusions. Our investigations have shown that the rigorous dynamic mean-field equations can have a considerably more complex structure than the ones commonly used in the literature as soon as realistic effects such as synaptic variability are taken into account. Our goal is to relate these theoretical results to experimental measurements, especially in the field of optical imaging. For this we are collaborating with the Institut des Neurosciences de la Timone, Marseille.
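
The flavor of such a reduction can be conveyed by a minimal homogeneous example (an illustrative sketch under our own assumptions, far simpler than the rigorous equations discussed above): an N-neuron rate network with uniform weights J/N, whose population average is well described by the closed mean-field equation dm/dt = -m + J tanh(m).

```python
import math

N, J, dt, T = 200, 2.0, 0.01, 30.0
steps = int(T / dt)

# Network: dx_i/dt = -x_i + (J/N) * sum_j tanh(x_j), heterogeneous init.
x = [0.01 * (i + 1) for i in range(N)]
# Mean-field reduction: dm/dt = -m + J*tanh(m), started at the same mean.
m = sum(x) / N

for _ in range(steps):
    drive = (J / N) * sum(math.tanh(xi) for xi in x)
    x = [xi + dt * (-xi + drive) for xi in x]
    m += dt * (-m + J * math.tanh(m))

network_mean = sum(x) / N
print(network_mean, m)   # the two agree once transients have decayed
```

The full network has N coupled equations, while the reduction tracks a single effective variable; the agreement after the transient is exactly the kind of averaging effect that mean-field theory formalizes (and which becomes far subtler with synaptic variability).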

Neural fields are a phenomenological way of describing the activity of populations of neurons by delayed integro-differential equations. This continuous approximation turns out to be very useful for modeling large brain areas such as those involved in visual perception. The mathematical properties of these equations and their solutions are still imperfectly known, in particular in the presence of delays, different time scales, and noise.

Our group is developing mathematical and numerical methods for analysing these equations. These methods are based upon techniques from mathematical functional analysis, bifurcation theory, equivariant bifurcation analysis, delay equations, and stochastic partial differential equations. We have been able to characterize the solutions of these neural field equations and their bifurcations, and to apply and expand the theory to account for such perceptual phenomena as edge, texture, and motion perception. We have also developed a theory of the delayed neural field equations, in particular in the case of the constant delays and propagation delays that must be taken into account when attempting to model large cortical areas. This theory is based on center manifold and normal form ideas. We are currently extending the theory to take into account various sources of noise using tools from the theory of stochastic partial differential equations.
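
A discretized caricature of such an equation (an illustrative sketch with parameters of our own choosing, without delays or noise) is the rate-based neural field u_t = -u + w * S(u) + I on a ring, with "Mexican hat" connectivity and a localized input:

```python
import math

n = 60                       # discretization of the ring
dx = 2 * math.pi / n

def S(u):                    # sigmoidal firing-rate function
    return 1.0 / (1.0 + math.exp(-4.0 * (u - 0.5)))

def dist(i, j):              # distance between sites on the ring
    d = abs(i - j) * dx
    return min(d, 2 * math.pi - d)

# "Mexican hat" connectivity: narrow excitation, broader inhibition.
w = [[1.5 * math.exp(-(dist(i, j) / 0.3) ** 2)
      - 0.8 * math.exp(-(dist(i, j) / 0.9) ** 2) for j in range(n)]
     for i in range(n)]

# Localized input bump centered on site n // 2.
I = [0.6 * math.exp(-(dist(i, n // 2) / 0.4) ** 2) for i in range(n)]

u, dt = [0.0] * n, 0.05
for _ in range(1200):        # 60 time units
    Su = [S(uj) for uj in u]
    u = [u[i] + dt * (-u[i] + dx * sum(w[i][j] * Su[j] for j in range(n)) + I[i])
         for i in range(n)]

peak = max(range(n), key=lambda i: u[i])
print(peak)                  # activity settles on the stimulated site
```

With these (stable) parameters the activity simply localizes at the stimulated site; the bifurcation analyses mentioned above concern precisely what happens when the connectivity gain is pushed past the point where patterned solutions emerge spontaneously.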

Selected publications on this topic: link.

Neuronal rhythms typically display many different timescales; it is therefore important to incorporate this slow-fast aspect in models. We are interested in this modeling paradigm, in which slow-fast point models (using ordinary differential equations) are investigated in terms of their bifurcation structure and the patterns of oscillatory solutions that they can produce. To gain insight into the dynamics of such systems, we use a mix of theoretical techniques, such as geometric desingularisation and centre manifold reduction, and numerical methods such as pseudo-arclength continuation. We are interested in families of complex oscillations generated by both mathematical and biophysical models of neurons, in particular so-called *mixed-mode oscillations (MMOs)*, which represent an alternation between subthreshold and spiking behaviour, and *bursting oscillations*, which also correspond to experimentally observed behaviour.
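
The simplest textbook example of such a slow-fast neuron model (given here as a generic illustration, not one of the specific models studied in our publications) is the FitzHugh-Nagumo system, where a small parameter eps sets the timescale separation between the fast voltage-like variable and the slow recovery variable:

```python
# FitzHugh-Nagumo model; illustrative parameters in the oscillatory regime.
eps, a, b, I = 0.08, 0.7, 0.8, 0.5

v, w, dt = -1.0, 1.0, 0.01
trace = []
for _ in range(40000):                    # 400 time units
    dv = v - v ** 3 / 3 - w + I           # fast variable
    dw = eps * (v + a - b * w)            # slow variable
    v += dt * dv
    w += dt * dw
    trace.append(v)

# In this regime the fast variable repeatedly crosses v = 0: relaxation
# oscillations alternating slow drifts and fast jumps between branches.
upward_crossings = sum(1 for p, q in zip(trace, trace[1:]) if p < 0 <= q)
print(upward_crossings)
```

The trajectory spends most of its time drifting along the slow branches of the cubic nullcline and jumps quickly between them; canards and MMOs arise in such systems when trajectories instead track repelling slow branches for a long time.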

Selected publications on this topic: link.

Neuronal activity is manifested by the emission of action potentials (“spikes”) constituting spike trains. Those spike trains are usually not exactly reproducible when repeating the same experiment, even with very good control ensuring that the experimental conditions have not changed. Therefore, researchers are seeking models for spike train statistics, assumed to be characterized by a canonical probability distribution giving the statistics of spatio-temporal spike patterns. A current goal in the experimental analysis of spike trains is to approximate this probability distribution from data. Several approaches exist, based either on (i) generic principles (maximum likelihood, maximum entropy); (ii) phenomenological models (Linear-Nonlinear, Generalized Linear Model, mean-field); or (iii) analytical results on spike train statistics in neural network models.

Our group is working on these three aspects, at both a fundamental and a practical (numerical) level. On the one hand, we have published analytical (and rigorous) results on the statistics of spike trains in canonical neural network models (Integrate and Fire, conductance-based with chemical and electric synapses). The main result is the characterization of spike train statistics by a Gibbs distribution whose potential can be explicitly computed using some approximations. Note that this result does not require an assumption of stationarity. We have also shown that the distributions considered in cases (i), (ii), and (iii) above are all Gibbs distributions. On the other hand, we are proposing new algorithms for data processing. We have developed a C++ software package for spike train statistics based on Gibbs distribution analysis, freely available at https://
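
A small illustration of approach (i) (a self-contained sketch on synthetic data, unrelated to the team's software): compare the empirical distribution of spike patterns with the maximum-entropy model that matches only the firing rates, i.e., independent neurons. The Kullback-Leibler divergence then quantifies how much statistical structure the rates alone fail to capture.

```python
import random
from collections import Counter
from math import log

random.seed(0)
rates = [0.2, 0.5, 0.8]          # per-bin firing probabilities, 3 neurons
T = 20000                        # number of time bins

# Synthetic raster: here the neurons really are independent Bernoulli units.
raster = [tuple(int(random.random() < p) for p in rates) for _ in range(T)]

counts = Counter(raster)
emp = {pat: c / T for pat, c in counts.items()}   # empirical pattern measure

# Maximum-entropy model constrained by the empirical firing rates only.
r = [sum(pat[i] for pat in raster) / T for i in range(3)]
model = {}
for pat in emp:
    q = 1.0
    for i, s in enumerate(pat):
        q *= r[i] if s else 1 - r[i]
    model[pat] = q

# KL divergence (in nats) between data and the rate-only model.
kl = sum(p * log(p / model[pat]) for pat, p in emp.items())
print(kl)   # close to 0: rates alone explain these synthetic data
```

On real recordings the divergence is typically not small, which is exactly the motivation for the richer spatio-temporal Gibbs potentials described above.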

Selected publications on this topic: link.

Neural networks show amazing abilities to evolve and adapt, and to store and process information. These capabilities are mainly conditioned by plasticity mechanisms, and especially synaptic plasticity, inducing a mutual coupling between network structure and neuron dynamics. Synaptic plasticity occurs at many levels of organization and time scales in the nervous system (Bienenstock, Cooper, and Munro, 1982). It is of course involved in memory and learning mechanisms, but it also alters the excitability of brain areas and regulates behavioral states (e.g. the transition between sleep and wakeful activity). Therefore, understanding the effects of synaptic plasticity on neuron dynamics is a crucial challenge.

Our group is developing mathematical and numerical methods to analyse this mutual interaction. On the one hand, we have shown that plasticity mechanisms, Hebbian-like or STDP, have strong effects on the complexity of neuron dynamics, such as a reduction of dynamical complexity, and on spike statistics (convergence to a specific Gibbs distribution via a variational principle), resulting in an adaptation of the network response to learned stimuli. We are also studying the conjugated effects of synaptic and intrinsic plasticity in collaboration with H. Berry (Inria Beagle) and B. Delord and J. Naudé, ISIR team, Paris. On the other hand, we have pursued a geometric approach in which we show how a Hopfield network, represented by a neural field with modifiable recurrent connections undergoing slow Hebbian learning, can extract the underlying geometry of an input space. We have also pursued an approach based on the ideas developed in the theory of slow-fast systems (in this case a set of neural field equations) in the presence of noise, applying temporal averaging methods to recurrent networks of noisy neurons undergoing a slow and unsupervised modification of their connectivity matrix called learning.
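
The shape of the STDP rule mentioned above can be sketched with a textbook pair-based window (illustrative parameters of our own choosing, not the team's specific model): the synapse is potentiated when the presynaptic spike precedes the postsynaptic one, and depressed otherwise, with exponentially decaying sensitivity to the spike-time difference.

```python
from math import exp

# Pair-based STDP window (illustrative amplitudes and time constants, ms).
A_plus, A_minus = 0.01, 0.012
tau_plus, tau_minus = 20.0, 20.0

def stdp(dt_ms):
    """Weight change for a post-minus-pre spike-time difference dt_ms."""
    if dt_ms > 0:    # pre before post: potentiation
        return A_plus * exp(-dt_ms / tau_plus)
    if dt_ms < 0:    # post before pre: depression
        return -A_minus * exp(dt_ms / tau_minus)
    return 0.0

def apply_stdp(w, pre_spikes, post_spikes, w_min=0.0, w_max=1.0):
    """Accumulate all pairwise contributions, then clip the weight."""
    for tp in pre_spikes:
        for tq in post_spikes:
            w += stdp(tq - tp)
    return max(w_min, min(w_max, w))

# Causal pairing (pre 5 ms before post) strengthens the synapse,
# the reverse pairing weakens it.
print(stdp(5.0), stdp(-5.0))
```

Iterating such a rule inside a recurrent network is what couples the weight matrix to the spike statistics, which is the interaction the analyses above make precise.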

Selected publications on this topic: link.

Our group focuses on the visual system to understand how information is encoded and processed to produce visual percepts. To do so, we propose functional models of the visual system using a variety of mathematical formalisms, depending on the scale at which the models are built, such as spiking neural networks or neural fields. So far, our efforts have focused on the study of retinal processing, edge and texture perception, and motion integration at the level of the V1 and MT cortical areas.

At the retina level, we are modeling its circuitry and we are studying the statistics of the spike train output (see, e.g., the software ENAS https://

Selected publications on this topic: link.

From the simplest vision architectures in insects to the extremely complex cortical hierarchy in primates, it is fascinating to observe how biology has found efficient solutions to vision problems. Pioneers in computer vision had the dream of building machines that could match and perhaps outperform human vision. This goal has not been reached, at least not on the scale originally envisioned, but the field of computer vision has met many other challenges from an unexpected variety of applications and has fostered entirely new scientific and technological areas such as computer graphics and medical image analysis. However, modelling and emulating biological vision with computers largely remains an open challenge, while there are still many outstanding issues in computer vision.

Our group is working on neuromorphic vision by proposing bio-inspired methods following our progress in visual neuroscience. Our goal is to bridge the gap between biological and computer vision by applying our visual neuroscience models to challenging problems from computer vision such as optical flow estimation, coding/decoding approaches, and classification.

Selected publications on this topic: link.

**Awards**

Olivier Faugeras received the Okawa Foundation Prize for "Pioneering contributions for computer vision and for computational neuroscience". The prize was awarded to him in Tokyo, Japan, in March 2015. He received the PAMI Azriel Rosenfeld Lifetime Achievement Award in December 2015 at ICCV 2015 in Santiago, Chile. This award is given to researchers in Computer Vision who have made major contributions to the field over their career and who have influenced the field in an extraordinary way.

**Habilitation à Diriger des Recherches (HDR)**
Mathieu Desroches defended his habilitation thesis on 11 December 2015 at the Université Pierre et Marie Curie - Paris 6. The title of his habilitation thesis is *Complex oscillations with multiple timescales - Application to neuronal dynamics*. The reviewers of this HDR were Eusebius J. Doedel (Concordia University, Canada), Christopher K. R. T. Jones (University of North Carolina at Chapel Hill, USA) and Daniel Panazzolo (Université de Haute-Alsace, France). The jury was composed of Stephen Coombes (University of Nottingham, UK), Peter De Maesschalck (Hasselt University, Belgium), Olivier Faugeras (Inria Sophia Antipolis, France), Jean-Pierre Françoise (President of the Jury, Université Pierre et Marie Curie - Paris 6, France), Christopher K. R. T. Jones (University of North Carolina at Chapel Hill, USA) and Daniel Panazzolo (Université de Haute-Alsace, France).

Event Neural Assembly Simulation

Keywords: Neurosciences - Health - Physiology

Functional Description Enas is a software package for the analysis of spike trains coming either from neural simulators or from biological experiments. The statistical analysis of spike trains is based on the estimation of a Gibbs distribution with a spatio-temporal potential optimally characterizing the statistics of empirical spike trains, obtained by minimizing the Kullback-Leibler divergence between the empirical measure and the Gibbs measure. From this, classical statistical indicators such as the firing rate, correlations, higher-order moments, statistical entropy, the effective connectivity graph, confidence plots, and so on are obtained. The form of the Gibbs potential also provides essential information on the underlying neural network and its structure. This method not only allows us to estimate the spike statistics but also to compare different models, thus answering questions about the neural code such as: are correlations (or time synchrony, or a given set of spike patterns, etc.) significant with respect to rate coding? The software includes classical maximum entropy models such as the Ising model, but also more general forms of potentials with spatio-temporal interactions. It also has a functionality attempting to guess the shape of the potential from data and a procedure fitting an Integrate and Fire neural network reproducing the statistics of the empirical rasters. Finally, it allows one to generate artificial rasters having a given distribution (e.g. corresponding to biological spike trains).
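
The Gibbs/maximum-entropy fitting that Enas performs at scale can be illustrated in miniature (a hypothetical two-neuron example with made-up moments, not Enas code): gradient ascent on the log-likelihood of an Ising-type potential matches the model's firing rates and pairwise correlation to the empirical ones, which is equivalent to minimizing the Kullback-Leibler divergence to the empirical measure.

```python
from math import exp
from itertools import product

# Empirical moments of a hypothetical two-neuron recording:
m1_data, m2_data, c_data = 0.30, 0.40, 0.18   # <s1>, <s2>, <s1*s2>

states = list(product([0, 1], repeat=2))
h1 = h2 = J = 0.0

def moments(h1, h2, J):
    """First and second moments under p(s) ~ exp(h1*s1 + h2*s2 + J*s1*s2)."""
    ws = [exp(h1 * s1 + h2 * s2 + J * s1 * s2) for s1, s2 in states]
    Z = sum(ws)
    p = [w / Z for w in ws]
    m1 = sum(p[k] * states[k][0] for k in range(4))
    m2 = sum(p[k] * states[k][1] for k in range(4))
    c = sum(p[k] * states[k][0] * states[k][1] for k in range(4))
    return m1, m2, c

# Gradient ascent on the (concave) log-likelihood = moment matching.
for _ in range(5000):
    m1, m2, c = moments(h1, h2, J)
    h1 += 0.5 * (m1_data - m1)
    h2 += 0.5 * (m2_data - m2)
    J += 0.5 * (c_data - c)

print(moments(h1, h2, J))   # matches the empirical moments
```

For N neurons and spatio-temporal interactions the state space explodes and the exact enumeration above is replaced by the dedicated estimation procedures implemented in the software.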

Participants: Bruno Cessac, Sélim Kraria, Hassan Nasser, Thierry Viéville, Rodrigo Cofre Torres, Audric Drogoul, Geoffrey Portelli, Pierre Kornprobst, Theodora Karvouniari and Daniela Pamplona

Contact: Bruno Cessac

Keywords: Neurosciences - Simulation - Biology

Functional Description Virtual Retina allows large-scale simulations of biologically plausible retinas, with customizable parameters. Virtual Retina has been shown to reproduce a wide range of experimental data from salamander, cat and primate retinas, and has been used in several theoretical studies. It has recently been shown to predict spikes in a mouse retina more accurately than linear-nonlinear (LN) models. The underlying model includes a non-separable spatio-temporal linear model of filtering in the Outer Plexiform Layer, a shunting feedback at the level of bipolar cells, and a spike-generation process using noisy leaky integrate-and-fire neurons to model retinal ganglion cells (RGCs). All parameters for the different stages of the model are customizable so that the visual field can be paved with different RGC types.
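
The spike-generation stage can be illustrated with a bare leaky integrate-and-fire unit (a generic sketch with arbitrary parameters, not Virtual Retina's implementation; noise omitted): for a constant suprathreshold drive I, the interspike interval has the closed form T = tau * ln(I / (I - theta)), which a direct simulation should reproduce.

```python
from math import log

tau, theta, v_reset, I = 0.02, 1.0, 0.0, 2.0   # tau in s, v dimensionless
dt = 1e-5

# Euler integration of tau * dv/dt = -v + I with threshold and reset.
v, t, spikes = 0.0, 0.0, []
while t < 0.2:
    v += dt * (-v + I) / tau
    t += dt
    if v >= theta:
        spikes.append(t)
        v = v_reset

isi = [b - a for a, b in zip(spikes, spikes[1:])]
mean_isi = sum(isi) / len(isi)
analytic = tau * log(I / (I - theta))
print(mean_isi, analytic)   # agree to within discretization error
```

In the full retinal model this unit receives the spatio-temporally filtered, shunted bipolar-cell signal instead of a constant current, plus noise.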

Participants: Bruno Cessac, Maria-Jose Escobar, Pierre Kornprobst, Adrien Wohrer and Thierry Viéville

Contact: Pierre Kornprobst

URL: http://

Inhibition-stabilized networks (ISNs) are neural architectures with strong positive feedback among pyramidal neurons balanced by strong negative feedback from inhibitory interneurons, a circuit element found in the hippocampus and the primary visual cortex. In their working regime, ISNs produce damped oscillations in the γ-range in response to inputs to the inhibitory population. In order to understand the properties of interconnected ISNs, we investigated periodic forcing of ISNs. We show that ISNs can be excited over a range of frequencies and derive properties of the resonance peaks. In particular, we studied the phase-locked solutions, the torus solutions and the resonance peaks. More particularly, periodically forced ISNs respond with (possibly multi-stable) phase-locked activity, whereas networks with sustained intrinsic oscillations respond more dynamically to periodic inputs with tori. Hence, the dynamics are surprisingly rich, and phase effects alone do not adequately describe the network response. This strengthens the importance of phase-amplitude coupling, as opposed to phase-phase coupling, in providing multiple frequencies for multiplexing and routing information.
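
The damped-oscillation regime can be checked on a linearized two-population sketch (illustrative weights of our own choosing, not those of the paper): excitation is unstable on its own, the defining ISN condition, yet the full E-I system has complex eigenvalues with negative real part, so a pulse to the inhibitory population produces a decaying oscillation.

```python
# Linearized E-I dynamics d(e,i)/dt = A (e,i) around a fixed point.
# Illustrative weights: recurrent excitation alone is unstable (ISN),
# but inhibition stabilizes the circuit.
w_ee, w_ei, w_ie, w_ii = 2.0, 2.5, 2.5, 1.0
A = [[w_ee - 1.0, -w_ei],
     [w_ie, -(1.0 + w_ii)]]

tr = A[0][0] + A[1][1]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
# Complex eigenvalues with negative real part => damped oscillations.
damped_oscillation = tr < 0 < det and tr * tr < 4 * det

# Pulse to the inhibitory population: the response rings, then decays.
e, i, dt = 0.0, 1.0, 0.001
es = []
for _ in range(20000):                    # 20 time units
    e, i = (e + dt * (A[0][0] * e + A[0][1] * i),
            i + dt * (A[1][0] * e + A[1][1] * i))
    es.append(e)

sign_changes = sum(1 for p, q in zip(es, es[1:]) if p * q < 0)
print(damped_oscillation, sign_changes)
```

Driving such a circuit periodically, rather than with a single pulse, is exactly the setting in which the phase-locked and torus solutions discussed above arise.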

This work has been published in Neural Computation and is available as .

The use of stochastic models, in effect piecewise deterministic Markov processes (PDMPs), has become increasingly popular, especially for the modeling of chemical reactions and cell biophysics. Yet exact simulation methods for these models in evolving environments are limited by the need to find the next jump time at each recursion of the algorithm. We report on a new general method to find this jump time for the True Jump Method. It is based on an expression in terms of ordinary differential equations, for which efficient numerical methods are available. As such, our new result makes it possible to study numerically stochastic models for which analytical formulas are not available, thereby providing a way to approximate the state distribution, for example. We conclude that the wide use of event detection schemes for the simulation of PDMPs should be strongly reconsidered. The only relevant remaining question is the efficiency of our method compared to the Fictitious Jump Method, a question which is strongly case-dependent.
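
The idea can be sketched as follows (a minimal illustration under our own assumptions, not the paper's implementation): augment the deterministic flow with the cumulative intensity L(t) = integral of the rate along the trajectory, and integrate the joint ODE until L reaches an exponentially distributed threshold, which yields the next jump time.

```python
from math import log

def next_jump_time(x0, flow, rate, u, dt=1e-4, t_max=100.0):
    """Integrate dx/dt = flow(x), dL/dt = rate(x) until L = -log(u)."""
    x, L, t = x0, 0.0, 0.0
    threshold = -log(u)
    while L < threshold and t < t_max:
        L += dt * rate(x)
        x += dt * flow(x)
        t += dt
    return t

# Sanity check with a constant rate, where the answer is known exactly:
# the jump time is then -log(u) / lam for a uniform draw u.
lam = 2.0
u = 0.3
t = next_jump_time(1.0, lambda x: -x, lambda x: lam, u)
print(t, -log(u) / lam)   # agree up to O(dt)
```

With a state-dependent rate the same loop applies unchanged; in practice the forward Euler step would be replaced by an adaptive ODE solver, which is precisely the benefit of the ODE formulation over event detection.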

This work is available as .

This work challenges and extends earlier seminal work. We consider the problem of describing mathematically the spontaneous activity of V1 by combining several important experimental observations including (1) the organization of the visual cortex into a spatially periodic network of hypercolumns structured around pinwheels, (2) the difference between short-range and long-range intracortical connections, the former being rather isotropic and naturally producing doubly periodic patterns by Turing mechanisms, the latter being patchy, and (3) the fact that the Turing patterns spontaneously produced by the short-range connections and the network of pinwheels have similar periods. By analyzing the preferred orientation (PO) maps, we are able to classify all possible singular points (the pinwheels) as having symmetries described by a small subset of the wallpaper groups. We then propose a description of the spontaneous activity of V1 using a classical voltage-based neural field model that features isotropic short-range connectivities modulated by non-isotropic long-range connectivities. A key observation is that, with only short-range connections and because the problem has full translational invariance in this case, a spontaneous doubly periodic pattern generates a 2-torus in a suitable functional space which persists as a flow-invariant manifold under small perturbations, for example when turning on the long-range connections. Through a complete analysis of the symmetries of the resulting neural field equation and motivated by a numerical investigation of the bifurcations of their solutions, we conclude that the branches of solutions which are stable over an extended range of parameters are those that correspond to patterns with a hexagonal (or nearly hexagonal) symmetry.
The question of which patterns persist when turning on the long-range connections is answered by (1) analyzing the remaining symmetries on the perturbed torus and (2) combining this information with the Poincaré-Hopf theorem. We have developed a numerical implementation of the theory that has allowed us to produce the predicted patterns of activities, the planforms. In particular we generalize the contoured and non-contoured planforms predicted by previous authors.

This work has been published in Journal of Mathematical Neuroscience and is available as .

Retinal waves are spontaneous waves of spiking activity observed in the retina during development only, playing a central role in shaping the visual system and retinal circuitry. Understanding how these waves are initiated and propagate in the retina could enable one to control, guide and predict them in the adult retina in vivo, as inducing them is expected to reintroduce some plasticity in the retinal tissue and in the projections to the LGN. In this context, we propose a physiologically realistic reaction-diffusion model for the mechanisms of the emergence of stage II cholinergic retinal waves during development. We perform a bifurcation analysis when varying two biophysically relevant parameters, the conductances of the calcium and potassium channels.

This work is available as .

We study the impact of a weak time-dependent external stimulus on the collective statistics of spiking responses in neuronal networks. We extend the current knowledge, assessing the impact not only on firing rates and cross-correlations but on any higher-order spatio-temporal correlation [1]. Our approach is based on Gibbs distributions (in a general setting considering non-stationary dynamics and infinite memory) [2] and linear response theory. The linear response is written in terms of a correlation matrix, computed with respect to the spiking dynamics without the stimulus. We give an example of application in a conductance-based integrate-and-fire model.

This work is available as .

It is widely believed that information is stored in the brain by means of the varying strength of synaptic connections between neurons. Stored patterns can be replayed upon the arrival of an appropriate stimulus. Hence, it is interesting to understand how an information pattern can be represented by the dynamics of the system. In this work, we consider a class of network neuron models, known as Hopfield networks, with a learning rule which consists of transforming an information string to a coupling pattern. Within this class of models, we study dynamic patterns, known as robust heteroclinic cycles, and establish a tight connection between their existence and the structure of the coupling.
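
The basic mechanism, storing a pattern in the couplings and replaying it from a corrupted cue, can be sketched with the classical Hebbian outer-product rule (a textbook sketch of a Hopfield network, not the specific models or heteroclinic structures studied in this work):

```python
import random

random.seed(1)
N = 24
pattern = [random.choice([-1, 1]) for _ in range(N)]

# Hebbian learning rule: the coupling is the zero-diagonal outer product.
W = [[(pattern[i] * pattern[j]) / N if i != j else 0.0 for j in range(N)]
     for i in range(N)]

# Corrupt the stored pattern by flipping a few bits, then recall it.
cue = list(pattern)
for k in (2, 7, 15):
    cue[k] = -cue[k]

def step(state):
    """One synchronous update of all units."""
    return [1 if sum(W[i][j] * state[j] for j in range(N)) >= 0 else -1
            for i in range(N)]

recalled = step(step(cue))
print(recalled == pattern)   # the stored pattern is an attractor
```

With a single stored pattern, any cue with positive overlap is corrected in one synchronous step; with many patterns and structured couplings the attractor landscape becomes much richer, which is where robust heteroclinic cycles enter.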

This work has been published in Journal of Nonlinear Science and is available as .

Mean-field theories in neuroscience are usually understood as ways to bridge spatial and temporal scales by lumping together the activities of many single neurons, and then explaining or predicting the spatio-temporal variations of mesoscopic or macroscopic quantities measurable with current technologies: EEG, MEG, fMRI, optical imaging, etc. This is very much like the situation in statistical physics, where macroscopic quantities such as pressure, conductivity and so on are explained by the interactions between “microscopic” entities like atoms or molecules.

The situation in neuroscience is different however: the laws governing the microscopic dynamics in physics do not have the same structure as the laws governing neuronal dynamics; for example, interactions between neurons are not symmetric. Moreover, it is yet unclear what the relevant macroscopic quantities are in order to account for, say, visual perception. At the present stage of research, these quantities are considered to be what is measurable with currently available technologies, whereas better theories could reveal new types of phenomenological observables with a higher explanatory power.

We review mean-field methods coming from physics and their consequences on neuronal dynamics predictions.

We introduce a new formalism for evaluating analytically the cross-correlation structure of a finite size firing-rate network with recurrent connections. The analysis performs a first-order perturbative expansion of neural activity equations that include three different sources of randomness: the background noise of the membrane potentials, their initial conditions, and the distribution of the recurrent synaptic weights. This allows the analytical quantification of the relationship between anatomical and functional connectivity, i.e. of how the synaptic connections determine the statistical dependencies at any order among different neurons. The technique we develop is general, but for simplicity and clarity we demonstrate its efficacy by applying it to the case of synaptic connections described by regular graphs. The analytical equations obtained in this way reveal previously unknown behaviors of recurrent firing-rate networks, especially on how correlations are modified by the external input, by the finite size of the network, by the density of the anatomical connections and by correlation in sources of randomness. In particular, we show that a strong input can make the neurons almost independent, suggesting that functional connectivity does not depend only on the static anatomical connectivity, but also on the external inputs. Moreover we prove that in general it is not possible to find a mean-field description à la Sznitman of the network, if the anatomical connections are too sparse or our three sources of variability are correlated. To conclude, we show a very counterintuitive phenomenon, which we call stochastic synchronization, through which neurons become almost perfectly correlated even if the sources of randomness are independent. 
Due to its ability to quantify how activity of individual neurons and the correlation among them depends upon external inputs, the formalism introduced here can serve as a basis for exploring analytically the computational capability of population codes expressed by recurrent neural networks.

This work is available as .

We study the asymptotic law of a network of interacting neurons when the number of neurons becomes infinite. Given a completely connected network of neurons in which the synaptic weights are Gaussian correlated random variables, we describe the asymptotic law of the network when the number of neurons goes to infinity. We introduce the process-level empirical measure of the trajectories of the solutions to the equations of the finite network of neurons and the averaged law (with respect to the synaptic weights) of the trajectories of the solutions to the equations of the network of neurons. The main result of this article is that the image law through the empirical measure satisfies a large deviation principle with a good rate function which is shown to have a unique global minimum. Our analysis of the rate function allows us also to characterize the limit measure as the image of a stationary Gaussian measure defined on a transformed set of trajectories.

This work is available as .

In this work, we clarify the well-posedness of the limit equations to the mean-field N-neuron models proposed in and we prove the associated propagation of chaos property. We also complete the modeling issue in by discussing the well-posedness of the stochastic differential equations which govern the behavior of the ion channels and the amount of available neurotransmitters.

This work is available as .

This work has been published in SIAM J. Math. Anal. and is available as .

In this work we present a general framework in which to rigorously study the effect of spatio-temporal noise on traveling waves and stationary patterns. In particular, the framework can incorporate versions of the stochastic neural field equation that may exhibit traveling fronts, pulses or stationary patterns. To do this, we first formulate a local SDE that describes the position of the stochastic wave up until a discontinuity time, at which point the position of the wave may jump. We then study the local stability of this stochastic front, obtaining a result that recovers a well-known deterministic result in the small-noise limit. We finish with a study of the long-time behavior of the stochastic wave.

This work has appeared in SIAM J. on Applied Dynamical Systems (SIADS) .

In this work, we study canard solutions of the forced van der Pol equation in the relaxation limit for low-, intermediate-, and high-frequency periodic forcing. A central numerical observation made herein is that there are two branches of canards in parameter space which extend across all positive forcing frequencies. In the low-frequency forcing regime, we demonstrate the existence of primary maximal canards induced by folded saddle nodes of type I and establish explicit formulas for the parameter values at which the primary maximal canards and their folds exist. Then, we turn to the intermediate- and high-frequency forcing regimes and show that the forced van der Pol possesses torus canards instead. These torus canards consist of long segments near families of attracting and repelling limit cycles of the fast system, in alternation. We also derive explicit formulas for the parameter values at which the maximal torus canards and their folds exist. Primary maximal canards and maximal torus canards correspond geometrically to the situation in which the persistent manifolds near the family of attracting limit cycles coincide to all orders with the persistent manifolds that lie near the family of repelling limit cycles. The formulas derived for the folds of maximal canards in all three frequency regimes turn out to be representations of a single formula in the appropriate parameter regimes, and this unification confirms the central numerical observation that the folds of the maximal canards created in the low-frequency regime continue directly into the folds of the maximal torus canards that exist in the intermediate- and high-frequency regimes. In addition, we study the secondary canards induced by the folded singularities in the low-frequency regime and find that the fold curves of the secondary canards turn around in the intermediate-frequency regime, instead of continuing into the high-frequency regime. Also, we identify the mechanism responsible for this turning. 
Finally, we show that the forced van der Pol equation is a normal form-type equation for a class of single-frequency periodically driven slow/fast systems with two fast variables and one slow variable which possess a non-degenerate fold of limit cycles. The analytic techniques used herein rely on geometric desingularisation, invariant manifold theory, Melnikov theory, and normal form methods. The numerical methods used herein were developed in Desroches et al. (SIAM J Appl Dyn Syst 7:1131–1162, 2008, Nonlinearity 23:739–765 2010).
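For reference, the three-dimensional (slow-time) form of the forced van der Pol equation commonly used in this line of work is the following; the notation is indicative and may differ slightly from the paper:

```latex
\begin{aligned}
\varepsilon\,\dot{x} &= y + x - \tfrac{1}{3}x^{3},\\
\dot{y} &= -x + a\sin(2\pi\theta),\\
\dot{\theta} &= \omega,
\end{aligned}
```

where $0 < \varepsilon \ll 1$ is the timescale-separation parameter, $a$ is the forcing amplitude and $\omega$ the forcing frequency; the low-, intermediate- and high-frequency regimes then correspond to different scalings of $\omega$ with respect to $\varepsilon$.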

This work has been published in J. Nonlinear Sci. and is available as .

Slow-fast systems often possess slow manifolds, that is, invariant or locally invariant submanifolds on which the dynamics evolves on the slow time scale. For systems with explicit timescale separation, the existence of slow manifolds is guaranteed by Fenichel theory, and asymptotic expansions of such manifolds are easily obtained. In this work, we discuss methods of approximating slow manifolds using the so-called zero-derivative principle. We demonstrate several test functions that work for systems with explicit timescale separation, including ones that can be generalized to systems without explicit timescale separation. We also discuss possible spurious solutions, known as ghosts, and treat the Templator system as an example.
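The zero-derivative principle can be illustrated on a toy example (a hypothetical linear slow-fast system, not the Templator model from the paper): one locates zeros of successive time derivatives of the fast variable, and each additional derivative improves the slow-manifold approximation by one order in the small parameter.

```python
# Zero-derivative principle (ZDP) on a toy linear slow-fast system
# (a hypothetical example, not the Templator model):
#     eps * x' = y - x   (fast variable x)
#           y' = -y      (slow variable y)
# The exact slow manifold is x = y / (1 - eps) = y (1 + eps + eps^2 + ...).

def bisect(f, a, b, tol=1e-12):
    """Basic bisection root finder; assumes a sign change of f on [a, b]."""
    fa = f(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = f(m)
        if abs(fm) < tol:
            return m
        if fa * fm < 0.0:
            b = m
        else:
            a, fa = m, fm
    return 0.5 * (a + b)

eps = 0.05
y = 1.0   # fix a value of the slow variable and solve for x

xdot = lambda x: (y - x) / eps                   # first time derivative of x
xddot = lambda x: (-y - (y - x) / eps) / eps     # second time derivative of x

x1 = bisect(xdot, -10.0, 10.0)    # order 1: critical manifold x = y
x2 = bisect(xddot, -10.0, 10.0)   # order 2: x = y (1 + eps)

x_exact = y / (1.0 - eps)
print(abs(x1 - x_exact))   # O(eps) error
print(abs(x2 - x_exact))   # O(eps^2) error
```

Here the zero of the first derivative recovers the critical manifold, while the zero of the second derivative already captures the first-order correction in eps.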

This work has been published in ZAMP and is available as .

Canard-induced phenomena have been extensively studied over the last three decades, both from the mathematical and from the application viewpoints. Canards in slow-fast systems with (at least) two slow variables, especially near folded-node singularities, provide an essential generating mechanism for mixed-mode oscillations (MMOs) in the framework of smooth multiple-timescale systems. There is a wealth of literature on such slow-fast dynamical systems and many models displaying canard-induced MMOs, in particular in neuroscience. In parallel, since the late 1990s several papers have shown that the canard phenomenon can be faithfully reproduced with piecewise-linear (PWL) systems in two dimensions, although very few results are available in the three-dimensional case. This work aims to bridge this gap by analyzing canonical PWL systems that display folded singularities, primary and secondary canards, with a similar control of the maximal winding number as in the smooth case. We also show that the singular phase portraits are compatible in both frameworks. Finally, we show on an example how to construct a (linear) global return and obtain robust PWL MMOs.

This work has been accepted for publication in SIAM Review and is available as .

In this work, we analyze the existence and stability of canard solutions in a class of planar piecewise linear systems with three zones, using a singular perturbation theory approach. To this aim, we follow the analysis of the classical canard phenomenon in smooth planar slow-fast systems and adapt it to the piecewise-linear framework. We first prove the existence of an intersection between repelling and attracting slow manifolds, which defines a maximal canard, in a non-generic system of the class having a continuum of periodic orbits. Then, we perturb this situation and prove the persistence of the maximal canard solution, as well as the existence of a family of canard limit cycles in this class of systems. Similarities and differences between the piecewise linear case and the smooth one are highlighted.

This work has been published in Dynam. Syst. and is available as .

The present work develops a new approach to studying parabolic bursting, and also proposes a novel four-dimensional canonical and polynomial-based parabolic burster. In addition to this new polynomial system, we also consider the conductance-based model of the Aplysia R15 neuron known as Plant's model, and a reduction of this prototypical biophysical parabolic burster to three variables, including one phase variable, namely Rinzel's theta model. Revisiting these models from the perspective of slow-fast dynamics reveals that the number of spikes per burst may vary with parameter changes; however, the spike-adding process occurs in a brutal (explosive) fashion that involves special solutions called canards. This spike-adding canard explosion phenomenon is analysed using tools from geometric singular perturbation theory in tandem with numerical bifurcation techniques. We find that the bifurcation structure persists across both parabolic bursters; that is, spikes within the burst are incremented via the crossing of an excitability threshold given by a particular type of canard orbit, namely the strong canard of a folded-saddle singularity. Using these findings, we construct a new polynomial approximation of Plant's model, which retains all the key elements for parabolic bursting, including the canard-mediated spike-adding transitions. Finally, we briefly investigate the presence of spike-adding via canards in planar phase models of parabolic bursting, namely the theta model by Ermentrout and Kopell.

This work has been submitted for publication and is available as .

We construct a piecewise-linear (PWL) approximation of the Hindmarsh-Rose (HR) neuron model that is minimal, in the sense that the vector field has the least number of pieces, in order to reproduce all the dynamics present in the original HR model with the classical parameter values. This includes spiking, square-wave bursting, and also special trajectories called canards, which possess long repelling segments and organise the transitions between stable bursting patterns with different numbers of spikes per burst.

This work has been submitted for publication and is available as .

A subcritical pattern-forming system with nonlinear advection in a bounded domain is recast as a slow-fast system in space and studied using a combination of geometric singular perturbation theory and numerical continuation. Two types of solutions describing the possible location of stationary fronts are identified, one of which is present for all values of the bifurcation parameter while the other is present for zero or sufficiently small inlet boundary conditions but only when the bifurcation parameter is large enough. For slightly larger inlet boundary conditions, a continuous transition from one type to the other takes place as the bifurcation parameter increases. The origin of the two solution types is traced to the onset of convective and absolute instability on the real line. The role of canard trajectories in the transitions between these states is clarified and the stability properties of the resulting spatial structures are determined. Front location in the convective regime is highly sensitive to the upstream boundary condition, and its dependence on this boundary condition is studied using a combination of numerical continuation and Monte Carlo simulations of the partial differential equation. Statistical properties of the system subjected to random or stochastic boundary conditions are interpreted using the deterministic slow-fast spatial-dynamical system.

This work has been submitted for publication and is available as .

Recent advances in multi-electrode array acquisition have made it possible to record the activity of up to several hundred neurons at the same time and to register their collective activity (spike trains). For the retina, this opens up new perspectives in understanding how the retinal structure and ganglion cells encode information about a visual scene and what is transmitted to the brain. In particular, two paradigms can be confronted: in the first one, ganglion cells encode information independently of each other; in the second one, nonlinear dynamics and connectivity contribute to produce a population code in which spatio-temporal correlations, although weak, play a significant role in spike coding. Confronting these two paradigms can be done at an experimental and at a theoretical level. On experimental grounds, new methods to analyse the role of weak correlations in spike train statistics are required. On theoretical grounds, mathematical results have been established, in neuronal models, showing how nonlinear dynamics and connectivity contribute to produce a correlated spike response to stimuli. In the context of the ANR KEOPS project, we have been working on these two aspects and we present our main results.

This work is available as .

It has been shown that the neurons of the visual system exhibit correlated activity in response to different stimuli. The role of these correlations remains unresolved, and they vary with the stimulus, especially with natural images. To uncover the role of these correlations and characterize the population code, it is necessary to measure the simultaneous activity of large neural populations. This has been achieved thanks to the advent of Multi-Electrode Array technology, opening up a way to better characterize how the brain encodes information in the concerted activity of neurons. In parallel, powerful statistical tools have been developed to accurately characterize spatio-temporal correlations between neurons. Methods based on the *Maximum Entropy Principle*, where statistical entropy is maximized under a set of constraints corresponding to specific assumptions on the relevant statistical quantities, have proved successful, especially when they take *spatio-temporal* correlations into account. They are, however, limited by (i) **the assumption of stationarity**, (ii) **the many possible choices of constraints**, and (iii) **the huge number of free parameters**.
We present our results on these aspects, obtained in the context of the ANR KEOPS project.
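As a minimal illustration of the Maximum Entropy Principle (a toy sketch, not the team's software), one can fit an Ising-like model over a few binary neurons by matching moments via gradient ascent on the log-likelihood; the target statistics below are made up for the example.

```python
# Maximum entropy sketch (illustrative toy, not the team's software): fit an
# Ising-like model P(s) ∝ exp(Σ_i h_i s_i + J s_0 s_1) over N = 3 binary
# neurons so that its moments match assumed "empirical" statistics.
import itertools
import math

N = 3
states = list(itertools.product([0, 1], repeat=N))

target_rates = [0.3, 0.5, 0.2]   # target firing rates <s_i> (made up)
target_corr = 0.2                # target pairwise moment <s_0 s_1> (made up)

h = [0.0] * N                    # Lagrange multipliers for the rates
J = 0.0                          # Lagrange multiplier for the pair constraint

def model_moments(h, J):
    """Exact moments of the Gibbs distribution by enumerating all 2^N states."""
    Z = 0.0
    m = [0.0] * N
    c = 0.0
    for s in states:
        wgt = math.exp(sum(h[i] * s[i] for i in range(N)) + J * s[0] * s[1])
        Z += wgt
        for i in range(N):
            m[i] += wgt * s[i]
        c += wgt * s[0] * s[1]
    return [mi / Z for mi in m], c / Z

# Gradient ascent on the log-likelihood: the gradient in each multiplier is
# (target moment - model moment), so convergence means moment matching.
lr = 0.5
for _ in range(5000):
    m, c = model_moments(h, J)
    for i in range(N):
        h[i] += lr * (target_rates[i] - m[i])
    J += lr * (target_corr - c)

m, c = model_moments(h, J)
print(m, c)
```

For realistic population sizes the partition function can no longer be enumerated, which is precisely where the free-parameter and constraint-choice limitations mentioned above become critical.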

This work is available as .

The Spike Triggered Average (STA) is a classical technique to find a discrete approximation of the Receptive Fields (RFs) of sensory neurons, a required analysis in most experimental studies. One important parameter of the STA is the spatial resolution of the estimation, corresponding to the size of the blocks of the checkerboard stimulus images. In general, it is experimentally fixed to reach a compromise: if the blocks are too small, neuronal responses might be too weak, leading to RFs with a low signal-to-noise ratio; if they are too large, small RFs will be lost, or not described in enough detail, because of the coarse approximation. Other solutions have been proposed, consisting in starting from a small block size and updating it following the neuron's response in a closed loop to increase its response. However, these solutions were designed for single cells and cannot be applied to simultaneous recordings of ensembles of neurons (since each RF has its own size and preferred stimulus).

To solve this problem, we introduced a modified checkerboard stimulus where blocks are shifted randomly in space at fixed time steps. This idea is inspired by super-resolution techniques developed in image processing. The main interest is that the block size can be large, enabling strong responses, while the resolution can be finer since it depends on the minimum shift size. In , we show that the STA remains an unbiased RF estimator and, using simulated spike trains from an ensemble of Linear-Nonlinear-Poisson cascade neurons, we predicted that this approach improves RF estimation over the neuron ensemble, in terms of resolution and convergence. In , we test these predictions experimentally on the RF estimation of 8460 ganglion cells from two mouse retinas, using recordings performed with a large-scale high-density multielectrode array. We compare RFs obtained using (i) the classical checkerboard stimulus with block size of 160

This work was presented in , and it is being used in current experimental protocols by E. Sernagor (Newcastle University), partner of the EC IP project FP7-ICT-2011-9 no. 600847 (RENVISION).
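As background to the method above, the classical STA computation can be sketched on simulated data (a toy 1D Linear-Nonlinear-Poisson neuron with an assumed temporal receptive field, not the experimental pipeline):

```python
# Spike-triggered average on simulated data: a toy 1D Linear-Nonlinear-
# Poisson (LNP) neuron with an assumed temporal receptive field (RF).
import random
random.seed(0)

T = 20000   # number of stimulus frames
D = 8       # temporal extent of the RF (in frames)
rf = [0.0, 0.1, 0.3, 0.8, 1.0, 0.5, -0.3, -0.5]   # assumed "true" RF

stim = [random.choice([-1.0, 1.0]) for _ in range(T)]   # binary white noise

# LNP neuron: firing probability is a rectified linear function of the
# projection of the last D stimulus frames onto the RF.
spikes = []
for t in range(D, T):
    drive = sum(rf[i] * stim[t - D + 1 + i] for i in range(D))
    p = min(1.0, max(0.0, 0.2 * drive))   # rectifying nonlinearity
    if random.random() < p:
        spikes.append(t)

# STA: average the D-frame stimulus window preceding each spike.
sta = [0.0] * D
for t in spikes:
    for i in range(D):
        sta[i] += stim[t - D + 1 + i]
sta = [s / len(spikes) for s in sta]

print(sta)   # proportional to rf up to noise, for white-noise stimuli
```

The shifted-checkerboard idea keeps this estimator unchanged but lets the effective spatial resolution be set by the shift size rather than the block size.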

We explore how motion information, also called optical flow, is estimated from natural moving sequences. Owing to its application potential, optical flow estimation has been studied extensively in computer vision. On the other hand, the neural mechanisms underlying motion analysis in the visual cortex have been extensively studied with little interaction with the computer vision community, resulting in few mathematical models. Even though there was some early interaction between the two communities, for example the methods of Heeger et al. and Sejnowski et al., comparatively little work has been done to examine or extend the mathematical models proposed in biology in terms of their engineering efficacy on modern optical flow estimation datasets.

Pursuing this idea, in , we proposed a neural model inspired by the ones presented in , which are popular models of primate velocity encoding. We started from a classical V1-MT feedforward architecture. We modeled V1 cells by motion energy (based on spatio-temporal filtering) and MT pattern cells by pooling V1 cell responses. The efficacy of this architecture and its inherent limitations in the case of real videos were not known. To answer this question, we proposed a velocity-space sampling of MT neurons (using a decoding scheme to obtain the local velocity from their activity) coupled with a multi-scale approach. We then evaluated the performance of our model on the Middlebury dataset. To the best of our knowledge, this is the only neural model evaluated on this dataset. The results were promising and suggested several possible improvements, in particular to better deal with discontinuities. An extension was proposed in .
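The V1 motion-energy stage can be illustrated with a toy 1D example (a sketch in which idealized full-field Fourier filters stand in for Gabor quadrature pairs; this is not the model of the paper): the phase-invariant energy of a quadrature pair tuned to a given space-time frequency responds selectively to the matching direction of motion.

```python
# Motion-energy sketch: toy 1D version of the V1 stage, with idealized
# full-field space-time filters standing in for Gabor quadrature pairs.
import math

X, T = 32, 32
k = 2 * math.pi * 4 / X   # stimulus spatial frequency (4 cycles per frame)
w = 2 * math.pi * 4 / T   # stimulus temporal frequency (4 cycles per trial)

# Rightward-drifting grating s(x, t) = cos(k x - w t).
stim = [[math.cos(k * x - w * t) for x in range(X)] for t in range(T)]

def energy(vk, vw):
    """Motion energy of a quadrature pair of space-time filters tuned to
    spatial frequency vk and temporal frequency vw (sum of squared
    responses of the even and odd filters, hence phase-invariant)."""
    re = im = 0.0
    for t in range(T):
        for x in range(X):
            ph = vk * x - vw * t
            re += stim[t][x] * math.cos(ph)
            im += stim[t][x] * math.sin(ph)
    return re * re + im * im

e_right = energy(k, w)    # filter matched to rightward motion
e_left = energy(k, -w)    # filter matched to leftward motion

# Opponent energy: positive means rightward motion detected.
opponent = (e_right - e_left) / (e_right + e_left)
print(opponent)   # close to 1.0 for this purely rightward-drifting grating
```

MT pattern cells are then modeled by pooling such V1 energies across orientations and frequencies, and a decoding stage turns the pooled responses into a velocity estimate.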

We also focused on decoding the motion energies, which is of natural interest for developing biologically inspired computer vision algorithms for dense optical flow estimation. In , we addressed this problem by evaluating four strategies for motion decoding: intersection of constraints, maximum likelihood, linear regression on MT responses, and neural-network-based regression using multi-scale features. We characterized the performance and the current limitations of the different strategies in terms of recovering dense flow estimates, using the Middlebury benchmark dataset widely used in computer vision, and we highlighted key aspects for future developments.

This work was partially funded by the EC IP project FP7-ICT-2011-8 no. 318723 (MatheMACS).

Studies in biological vision have always been a great source of inspiration for the design of computer vision algorithms. In the past, several successful methods were designed with varying degrees of correspondence with biological vision studies, ranging from purely functional inspiration to methods that utilise models primarily developed for explaining biological observations. Even though it seems well recognised that computational models of biological vision can help in the design of computer vision algorithms, it is a non-trivial exercise for a computer vision researcher to mine relevant information from the biological vision literature, as very few studies in biology are organised at the task level.

Olivier Faugeras is a member of the scientific committee of the "Axe Interdisciplinaire de Recherche de l'Université de Nice Sophia Antipolis" entitled "Modélisation Théorique et Computationnelle en Neurosciences et Sciences Cognitives".

See section “International Initiatives” below.

Title: MATHEmatics of Multi-level Anticipatory Complex Systems

Programme: FP7

Duration: October 2012 - September 2015

Coordinator: Max Planck Institute for Mathematics in the Sciences

Partners:

see the webpage of the project.

Inria contact: Olivier Faugeras

The MATHEMACS project aims to develop a mathematical theory of complex multi-level systems and their dynamics. In addition to considering systems with respect to a given level structure, as is natural in certain applications or dictated by available data, the project has the unique goal of identifying additional meaningful levels for understanding multi-level systems. This is done through a general formulation based on the mathematical tools of information and dynamical systems theories.

To ensure that the theoretical framework is at the same time practically applicable, three key application areas are represented within the project, namely neurobiology, human communication, and economics. These areas not only provide us with some of the best-known epitomes of complex multi-level systems, but also constitute a challenging test bed for validating the generality of the theory since they span a vast range of spatial and temporal scales.

Furthermore, they have an important common aspect; namely, their complexity and self-organizational character is partly due to the anticipatory and predictive actions of their constituent units. The MATHEMACS project contends that the concepts of anticipation and prediction are particularly relevant for multi-level systems since they often involve different levels. Thus, as a further unique feature, the project includes the mathematical representation and modeling of anticipation in its agenda for understanding complex multi-level systems.

For validating the theory on large heterogeneous data sets, the project has a specific component with exclusive access to a wide range of data from human movement patterns to complex urban environments.

In this way, MATHEMACS provides a complete and well-rounded approach to lay the foundations of a mathematical theory of the dynamics of complex multi-level systems.

Title: Retina-inspired ENcoding for advanced VISION tasks

Programme: FP7

Duration: March 2013 - February 2016

Coordinator: Istituto Italiano di Tecnologia (Pattern Analysis and Computer Vision), Vittorio Murino

Partners:

PAVIS and NET3, Fondazione Istituto Italiano di Tecnologia (Italy)

Institute for Adaptive and Neural Computation, The University of Edinburgh (United Kingdom)

Institute of Neuroscience, University of Newcastle Upon Tyne (United Kingdom)

Inria contact: Bruno Cessac

The retina is a sophisticated distributed processing unit of the central nervous system encoding visual stimuli in a highly parallel, adaptive and computationally efficient way. Recent studies show that rather than being a simple spatiotemporal filter that encodes visual information, the retina performs sophisticated non-linear computations extracting specific spatio-temporal stimulus features in a highly selective manner (e.g. motion selectivity). Understanding the neurobiological principles behind retinal functionality is essential to develop successful artificial computer vision architectures. RENVISION's goal is, therefore, twofold: i) to achieve a comprehensive understanding of how the retina encodes visual information through the different cellular layers; ii) to use such insights to develop a retina-inspired computational approach to high-level computer vision tasks. To this aim, exploiting the recent advances in high-resolution light microscopy 3D imaging and high-density multielectrode array technologies, RENVISION will be in an unprecedented position to investigate pan-retinal signal processing at high spatio-temporal resolution, integrating these two technologies in a novel experimental setup. This will allow for simultaneous recording from the entire population of ganglion cells and functional imaging of inner retinal layers at near-cellular resolution, combined with 3D structural imaging of the whole inner retina. The combined analysis of these complex datasets will require the development of novel multimodal analysis methods. Resting on these neuroscientific and computational grounds, RENVISION will generate new knowledge on retinal processing. It will provide advanced pattern recognition and machine learning technologies to ICTs by shedding new light on how the output of retinal processing (natural or modelled) allows solving complex vision tasks such as automated scene categorization and human action recognition.

Title: The Human Brain Project

Programme: FP7

Duration: October 2013 - March 2016

Coordinator: EPFL

Partners:

see the webpage of the project.

Inria contact: Olivier Faugeras

Understanding the human brain is one of the greatest challenges facing 21st century science. If we can rise to the challenge, we can gain profound insights into what makes us human, develop new treatments for brain diseases and build revolutionary new computing technologies. Today, for the first time, modern ICT has brought these goals within sight. The goal of the Human Brain Project, part of the FET Flagship Programme, is to translate this vision into reality, using ICT as a catalyst for a global collaborative effort to understand the human brain and its diseases and ultimately to emulate its computational capabilities. The Human Brain Project will last ten years and will consist of a ramp-up phase (from month 1 to month 36) and subsequent operational phases.

This Grant Agreement covers the ramp-up phase. During this phase the strategic goals of the project will be to: design, develop and deploy the first versions of six ICT platforms dedicated to Neuroinformatics, Brain Simulation, High Performance Computing, Medical Informatics, Neuromorphic Computing and Neurorobotics, and create a user community of research groups from within and outside the HBP; set up a European Institute for Theoretical Neuroscience; complete a set of pilot projects providing a first demonstration of the scientific value of the platforms and the Institute; develop the scientific and technological capabilities required by future versions of the platforms; implement a policy of Responsible Innovation and a programme of transdisciplinary education; and develop a framework for collaboration that links the partners under strong scientific leadership and professional project management, providing a coherent European approach and ensuring effective alignment of regional, national and European research programmes. The project work plan is organized in the form of thirteen subprojects, each dedicated to a specific area of activity.

A significant part of the budget will be used for competitive calls to complement the collective skills of the Consortium with additional expertise.

Paul Bressloff, a Professor of Applied Mathematics at the University of Utah, visited the team in June-July as part of his Inria International Chair.

Ruben Herzog, Master student in Valparaíso, with A. Palacios, Centro Interdisciplinario de Neurociencia de Valparaíso, Universidad de Valparaíso, visited from May 4 to May 29, 2015.

**Roberta Evangelista**

During her internship (May 2015-September 2015, funded by *Action Transverse*) supervised by E. Tanré (Tosca) and R. Veltz (Neuromathcomp), Roberta Evangelista worked on “A stochastic model of gamma phase modulated orientation selectivity”.

Neurons in primary visual cortex (V1) are known to be highly selective for stimulus orientation. Recent experimental evidence has shown that, in awake monkeys, the orientation selectivity of V1 neurons is modulated by gamma oscillations. In particular, neurons’ firing rate in response to the preferred orientation changes as a function of the gamma phase of spiking. The effect is drastically reduced for non-preferred orientations. We have introduced a stochastic model of a network of orientation-dependent excitatory and inhibitory spiking neurons. We have found conditions on the parameters such that the solutions of the mathematical model reproduce the experimental behavior.

**Quentin Cormier**

Quentin is co-supervised by E. Tanré (Tosca) and R. Veltz (Neuromathcomp). He is a Master 1 student from ENS Lyon.

We study numerically and theoretically a model of spiking neurons in interaction with plasticity. The synaptic weights evolve according to a biological plasticity rule. We study the existence of separable time scales. We are also interested in characterizing the invariant distribution of the network activity and the distribution of the synaptic weights. During his internship, Quentin Cormier also developed a numerical code to simulate large networks of neurons evolving according to these dynamics.
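A minimal sketch of the kind of simulation involved (illustrative only; the internship's actual model and plasticity rule are not specified here): two leaky integrate-and-fire neurons coupled by one plastic synapse whose weight evolves under a pair-based STDP rule.

```python
# Toy plastic spiking network (illustrative sketch, not the internship code):
# two leaky integrate-and-fire neurons, one plastic synapse, pair-based STDP.
import random
random.seed(1)

dt = 0.1                        # time step (ms)
tau_m, v_th = 10.0, 1.0         # membrane time constant (ms), threshold
tau_s = 20.0                    # STDP trace time constant (ms)
a_plus, a_minus = 0.02, 0.021   # potentiation / depression amplitudes

v_pre = v_post = 0.0            # membrane potentials
w = 0.8                         # plastic synaptic weight (pre -> post)
x_pre = x_post = 0.0            # STDP eligibility traces
n_pre = n_post = 0              # spike counts

for step in range(50000):       # 5 seconds of simulated time
    # exponential decay of traces and membrane potentials (Euler scheme)
    x_pre *= 1.0 - dt / tau_s
    x_post *= 1.0 - dt / tau_s
    v_pre += dt * (-v_pre / tau_m)
    v_post += dt * (-v_post / tau_m)
    # random excitatory kicks drive the presynaptic neuron
    if random.random() < 0.02:
        v_pre += 0.6
    if v_pre >= v_th:           # presynaptic spike
        v_pre = 0.0
        n_pre += 1
        v_post += w             # synaptic transmission
        x_pre += 1.0
        w = max(0.0, w - a_minus * x_post)  # post-before-pre: depression
    if v_post >= v_th:          # postsynaptic spike
        v_post = 0.0
        n_post += 1
        x_post += 1.0
        w = min(1.0, w + a_plus * x_pre)    # pre-before-post: potentiation

print(n_pre, n_post, w)
```

In this caricature the separation between the fast membrane dynamics and the slow weight dynamics is already visible: spikes occur on the millisecond scale while w drifts over seconds.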

Olivier Faugeras was the General Chair of the 1st International Conference on Mathematical Neuroscience, held in Antibes Juan-les-Pins, June 8-10, 2015.

Romain Veltz and James Inglis were members of the organizing committee of the 1st International Conference on Mathematical Neuroscience, held in Antibes Juan-les-Pins, June 8-10, 2015.

We have co-organized the conferences:

Confronting mean-field theories to measurements: a perspective from neuroscience, Paris, January 14-15, 2015 (Bruno Cessac, Olivier Faugeras).

Modeling the early visual system: From natural vision to numerical applications. Satellite of the 12th colloquium of the Société des Neurosciences, Montpellier, May 19-22, 2015 (Bruno Cessac).

Mathematical Modeling and Statistical Analysis in Neuroscience workshop, Nice, September 8-10, 2015 (Bruno Cessac).

Workshop on Heteroclinic Dynamics in Neuroscience, Nice, December 17-18, 2015 (Pascal Chossat, Mathieu Desroches, Maciej Krupa).

Pierre Kornprobst was an area chair of the 23rd European Signal Processing Conference (EUSIPCO 2015) for the theme of bio-inspired image and signal processing.

Pierre Kornprobst was a member of the program committee of the Conference of "Groupement de Recherche en Traitement du Signal et des Images" (GRETSI 2015) and of the Conference on Computer Vision and Pattern Recognition (CVPR 2015).

Olivier Faugeras is the co-editor in chief of the open access Journal of Mathematical Neuroscience.

Olivier Faugeras acts as a reviewer for the Journal of Mathematical Neuroscience, the Journal of Computational Neuroscience, the SIAM Journal on Applied Dynamical Systems (SIADS).

Maciej Krupa acts as a reviewer for Nonlinearity, the Proceedings of the National Academy of Sciences of the USA (PNAS), and the SIAM Journal on Applied Dynamical Systems (SIADS).

Mathieu Desroches acts as a reviewer for the Journal of Nonlinear Science, Physica D, and the SIAM Journal on Applied Dynamical Systems (SIADS).

Pierre Kornprobst gave an invited talk at the workshop entitled “From Retina to Robots – Connecting the Neural Computations of Early Vision to Neuromorphic Engineering and Artificial Vision” at the Bernstein Conference for Computational Neuroscience in Heidelberg, Germany, on September 14, 2015 (organized by Tim Gollisch and Stefano Panzeri).

Pierre Kornprobst is an elected member of the Conseil Académique de l'UCA, a member of the editorial committee of the Sophia Antipolis internal letter, and a substitute administration representative.

**E-learning**

Master 2: Bruno Cessac, *Neuronal dynamics*, 36 hours, Master of Computational Biology and Biomedicine, Université Nice Sophia Antipolis, France.

Summer school: Bruno Cessac, *Mean Field Methods in Neuroscience*, 3 hours, lecture given at the conference Dynamics of Multi-Level Systems, June 2015, Dresden, Germany.

**Chalk-learning**

Master 2 MVA/UPMC: Romain Veltz, Mathematical Methods for Neurosciences, 20 hours, Paris, France.

PhD in progress: Kartheek Medathati, "Motion perception: from neuroscience to computer vision", started in September 2013, co-supervised by Pierre Kornprobst and Guillaume S. Masson (Institut de Neurosciences de la Timone, Marseille, France).

PhD in progress: Theodora Karvouniari, "Retinal waves in the retina: theory and experiments", defence planned in October 2017, supervised by Bruno Cessac.

Bruno Cessac. Reviewer of Luis Garcia Del Molino's thesis, "Non-Hermitian random matrices and applications to randomly connected firing rate neuronal network" (supervised by Jonathan Touboul and Khashayar Pakdaman), Paris, October 1, 2015.