Team PARIETAL


Section: Contracts and Grants with Industry

National Initiatives

Vimagine

Participants: Bertrand Thirion [Correspondant], Vincent Michel, Alexandre Gramfort, Gaël Varoquaux, Alan Tucholka.

Vimagine is an accepted ANR blanc project (2008-2012) that aims at building a novel view of the retinotopic organization of the visual cortex, based on MEG and MRI. Vimagine should open the way to understanding the dynamics of brain processes for low-level vision, with an emphasis on neuropathologies. The project is led by S. Baillet (MMiXT, CNRS UPR640 LENA, Pitié-Salpêtrière), in collaboration with M. Clerc and T. Papadopoulos (INRIA Sophia-Antipolis, Odyssée) and J. Lorenceau (LPPA, CNRS, Collège de France). The fMRI part of the project will be carried out by PARIETAL and will consist of a study of spatially resolved retinotopic maps at the mm scale, the decoding of retinotopic information, and the comparison of retinotopy with sulco-gyral anatomy.

KaraMetria

Participants: Pierre Fillard [Correspondant], Viviana Siless, Bertrand Thirion.

KaraMetria is an ANR project led by Alexis Roche (LNAO) and Pierre Fillard (Parietal) whose goal is to develop new methods for feature-based morphometry (FBM), as opposed to voxel-based morphometry (VBM). In VBM, a subject or group of subjects is compared to another group of subjects based on the grey values of their MR images only. The drawback is that the interpretation of a change in grey value is rather unclear (what are we detecting?). Conversely, in KaraMetria we propose to rely on anatomically well-defined features such as the gyri and sulci, the white matter fibers, or other internal brain structures such as the grey nuclei, for which a detected change of shape is easier to interpret. Practically, our aim is to develop a registration framework able to produce a spatial transformation that maps all anatomical features of one subject onto those of another at the same time. This transformation can then be used to build atlases of features, such as sulci or fibers, which are not yet available. Those atlases, in turn, can be used as a reference to compare individuals and determine whether they differ statistically from a normal population and, if so, where and how. A study on depressed teenagers led by a clinical partner (INSERM UMR 797) will serve as a proof of concept for the proposed framework. The actors of KaraMetria are the INRIA teams Parietal and Asclepios, the LNAO, the MAP5 (University Paris 5) and the INSERM UMR 797. The project started in January 2010 for a period of three years.
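
The comparison step mentioned above can be illustrated with a deliberately small sketch: once a feature (for instance the shape descriptor of one sulcus) has been mapped into a common space, a single subject can be tested against the normal population with a Mahalanobis distance. The descriptors, dimensions and values below are hypothetical and stand in for the feature atlases targeted by KaraMetria, not for its actual implementation.

```python
# Illustrative sketch, not the KaraMetria implementation: compare one
# subject's feature descriptor to a normal-population atlas.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical atlas: descriptors of one sulcus for 40 control subjects,
# each summarized by a 5-dimensional shape vector after registration.
controls = rng.normal(size=(40, 5))

# Population statistics defining the atlas for this feature.
mean = controls.mean(axis=0)
cov = np.cov(controls, rowvar=False)
cov_inv = np.linalg.inv(cov)

def mahalanobis_test(subject, mean, cov_inv, dim):
    """Squared Mahalanobis distance of one subject to the population,
    with a chi-squared p-value (valid under a Gaussian population model)."""
    delta = subject - mean
    d2 = float(delta @ cov_inv @ delta)
    p_value = stats.chi2.sf(d2, df=dim)
    return d2, p_value

# A new subject whose sulcal shape deviates on two components.
subject = mean + np.array([0.0, 2.5, 0.0, -2.0, 0.0])
d2, p = mahalanobis_test(subject, mean, cov_inv, dim=5)
print(f"squared Mahalanobis distance = {d2:.2f}, p = {p:.3f}")
```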

Digiteo: Hidinim Project

Participants: Bertrand Thirion, Virgile Fritsch, Jean-Baptiste Poline.

High-Dimensional Neuroimaging: Statistical Models of Brain Variability Observed in Neuroimaging

This is a joint project with the Select project-team and with SUPELEC Sciences des Systèmes (E3S), Département Signaux & Systèmes Électroniques (A. Tennenhaus).

Statistical inference in a group of subjects is fundamental to drawing valid neuroscientific conclusions that generalize to the whole population from a finite number of experimental observations. Crucially, this generalization holds under the hypothesis that the population-level distribution of effects is estimated accurately. However, there is growing evidence that standard models, based on Gaussian distributions, do not fit empirical data well in neuroimaging studies.

In particular, Hidinim is motivated by the analysis of new databases hosted and analyzed at Neurospin, which contain neuroimaging data from hundreds of subjects in addition to genetic and behavioral data. We propose to investigate the statistical structure of large populations observed in neuroimaging. In particular, we will investigate the use of region-level averages of brain activity, which we plan to co-analyze with genetic and behavioral information in order to understand the sources of the observed variability. This entails a series of modeling problems that we will address in this project: i) assessment of distribution normality and estimation of the covariance between variables, ii) model selection for mixture models, and iii) the design of classification models for heterogeneous data, in particular for mixed continuous/discrete distributions.
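
As a rough illustration of points i) and ii), the toy sketch below checks per-region normality and selects the number of Gaussian mixture components by BIC on a simulated subject-by-region activity matrix. The data, region count and subgroup structure are synthetic, and the tools used (scipy, scikit-learn) are only one possible choice, not necessarily those adopted in Hidinim.

```python
# Toy sketch: (i) per-region normality assessment, (ii) BIC-based model
# selection for a Gaussian mixture on simulated region-level activity.
import numpy as np
from scipy import stats
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(42)
n_subjects, n_regions = 200, 6

# Simulated population: most regions Gaussian, one region bimodal,
# mimicking a hidden subgroup structure.
activity = rng.normal(size=(n_subjects, n_regions))
activity[:100, 0] += 3.0  # half of the subjects shifted in region 0

# (i) Normality assessment per region (Shapiro-Wilk test).
for r in range(n_regions):
    stat, p = stats.shapiro(activity[:, r])
    print(f"region {r}: Shapiro-Wilk p = {p:.3g}")

# (ii) Model selection: number of mixture components chosen by BIC.
bics = []
for k in range(1, 5):
    gmm = GaussianMixture(n_components=k, covariance_type="full",
                          random_state=0).fit(activity)
    bics.append(gmm.bic(activity))
best_k = int(np.argmin(bics)) + 1
print("BIC per number of components:", np.round(bics, 1))
print("selected number of components:", best_k)
```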

ANR IRMGroup

Participants: Bertrand Thirion, Alexandre Gramfort.

This is a joint project (2010-2013) with Polytechnique/CMAP (http://www.cmap.polytechnique.fr/): Stéphanie Allassonnière and Stéphane Mallat.

Much of the visual cortex is organized into visual field maps, which means that nearby neurons have receptive fields at nearby locations in the image. The introduction of functional magnetic resonance imaging (fMRI) has made it possible to identify visual field maps in the human cortex, the most important ones lying in the medial occipital cortex (V1, V2, V3). It is also possible to directly relate the activity of simple cells to an fMRI activation pattern, and Parietal has developed some of the most effective methods for doing so. However, the simple-cell model is not sufficient to account for high-level information about visual scenes, which requires the introduction of specific semantic features. While the brain regions related to semantic information processing are now well understood, little is known about the flow of visual information processing between the primary visual cortex and the specialized regions in the infero-temporal cortex. A central issue is to better understand the behavior of intermediate cortical areas.

We propose to use our mathematical approach to formulate explicit generative models of information processing, such as those that characterize complex cells in the visual cortex, and then to identify the brain substrate of the corresponding processing units from fMRI data. While fMRI resolution is still too coarse for a very detailed mapping of cortical functional organization, we conjecture that some of the functional mechanisms that characterize biological vision can be captured through fMRI; in parallel, we will push fMRI resolution to increase our chances of obtaining a detailed mapping of visual cortical regions.
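
To make the notion of a generative model of a complex cell concrete, the sketch below implements the classical energy model: a quadrature pair of Gabor filters whose squared, summed responses are invariant to stimulus phase, unlike the linear (simple-cell) response. In an encoding analysis such features would then be regressed against voxel responses; the stimulus, filter parameters and grid size here are a toy illustration, not the project's actual model.

```python
# Minimal sketch of a complex-cell "energy model" from a quadrature Gabor pair.
import numpy as np

def gabor_pair(size=32, wavelength=8.0, sigma=5.0, theta=0.0):
    """Return an even (cosine) and odd (sine) Gabor filter in quadrature."""
    half = size // 2
    y, x = np.mgrid[-half:half, -half:half].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    even = envelope * np.cos(2 * np.pi * xr / wavelength)
    odd = envelope * np.sin(2 * np.pi * xr / wavelength)
    return even, odd

def complex_cell_energy(patch, even, odd):
    """Phase-invariant energy: root of summed squared quadrature responses."""
    r_even = float((patch * even).sum())
    r_odd = float((patch * odd).sum())
    return np.sqrt(r_even**2 + r_odd**2)

# Toy stimulus: gratings of varying phase; the energy response stays nearly
# constant across phase, unlike the simple-cell (linear) response.
even, odd = gabor_pair()
half = even.shape[0] // 2
_, x = np.mgrid[-half:half, -half:half].astype(float)
for phase in (0.0, np.pi / 2, np.pi):
    grating = np.cos(2 * np.pi * x / 8.0 + phase)
    linear = float((grating * even).sum())
    energy = complex_cell_energy(grating, even, odd)
    print(f"phase {phase:.2f}: simple (linear) = {linear:8.1f}, "
          f"complex (energy) = {energy:8.1f}")
```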

Graph-based decoding CNRS project

Participants: Bertrand Thirion, Gaël Varoquaux.

This is a joint project with Sylvain Takerkart (CNRS/UMR 6193), Daniele Schon (CNRS/UMR 6193), and Liva Ralaivola (CNRS UMR 6166). The time span of the project is 2010-2011.

In this project, we develop new tools for fMRI decoding that explicitly use spatial information, thus addressing a well-known pitfall of decoders that treat voxels as independent features. These tools should broaden the range of applications of the technique and help improve our understanding of brain function. Two specific goals are set:

The first goal is methodological. We will demonstrate that we can integrate information about the spatial locations of the voxels and their neighboring links into the fMRI decoding framework. For that purpose, we will represent spatial patterns of activation as graphs and develop graph-based kernels within an SVM framework to perform the classification.

The second goal is application-oriented. We will demonstrate that the outputs of the decoder can provide estimates of the robustness of a cortical representation. We will therefore scan two populations with fMRI, and show, using our graph-based decoding technique, that the anatomo-functional representation associated with the task is “stronger” in one population than in the other, thus allowing for finer discrimination.
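
A minimal sketch of the methodological goal is given below: activation patterns are turned into graphs over a voxel grid with neighborhood links, a simple edge-based kernel is computed, and an SVM with a precomputed kernel is cross-validated. The edge-product kernel used here is a deliberately simple stand-in for the richer graph kernels targeted in the project, and all data, grid dimensions and labels are simulated.

```python
# Hedged sketch: graph-kernel SVM decoding on a toy voxel grid.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
grid = (6, 6)                       # toy 2D patch of 36 voxels
n_trials = 80

# Simulated activation patterns for two conditions; condition 1 carries a
# small spatially coherent signal in one corner of the grid.
X = rng.normal(size=(n_trials, *grid))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :3, :3] += 0.8

def edge_features(pattern):
    """Products of activations over 4-neighbor voxel pairs (graph edges)."""
    horiz = (pattern[:, :-1] * pattern[:, 1:]).ravel()
    vert = (pattern[:-1, :] * pattern[1:, :]).ravel()
    return np.concatenate([horiz, vert])

# Gram matrix of the graph kernel = linear kernel on edge features.
F = np.array([edge_features(x) for x in X])
K = F @ F.T

clf = SVC(kernel="precomputed", C=1.0)
scores = cross_val_score(clf, K, y, cv=5)
print("cross-validated decoding accuracy:", scores.mean())
```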

MMoVNI

Participants: Bertrand Thirion, Pierre Fillard.

This is a joint project with S. Allassonnière (CMAP, http://www.cmapx.polytechnique.fr/~allassonniere/) for the 2010-2013 period.

Modeling and understanding brain structure is a great challenge, given the anatomical and functional complexity of the brain. In addition, these characteristics vary widely across the population. As a possible answer to these issues, medical imaging researchers have proposed to construct template images. Most of the time, these analyses focus on only one category of signal (called a modality); in particular, anatomy has been the main focus of research in recent years. Moreover, these techniques are often dedicated to a particular problem and raise the question of their mathematical foundations.

The MMoVNI project aims at building atlases from multi-modal image databases (anatomical, diffusion and functional) for given populations. An atlas is not only a template image but also a set of admissible deformations that characterize the observed population of images. The estimation of these atlases will be based on a new generation of deformation and template estimation procedures that build an explicit statistical generative model of the observed data. These procedures make it possible to infer all the relevant variables (the parameters of the atlases) through stochastic algorithms. Lastly, this modeling also makes it possible to prove the convergence of both the estimators and the algorithms, which provides a theoretical guarantee for the results. The models will first be proposed independently for each modality and then merged to take into account, in a correlated way, the anatomy, the local connectivity through the cortical fibers and the functional response to a given cognitive task. The model will then be generalized to enable the unsupervised clustering of a population, leading to a finer representation of the population and a better basis for comparison, for example for classification purposes.

The Neurospin center, a partner of this project, will give us access to high-quality, high-resolution image databases for the three modalities: anatomical, diffusion and functional imaging. This project is expected to contribute to making neuroimaging a more reliable tool for understanding inter-subject differences, which will eventually benefit the understanding and diagnosis of various brain diseases such as Alzheimer's disease, autism or schizophrenia.
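
The atlas idea above (a template plus a set of admissible deformations, estimated from a generative model) can be caricatured in one dimension with pure shifts in place of diffeomorphic deformations. The sketch below alternates deformation estimation (alignment by correlation) and template averaging on synthetic signals; it only illustrates the generative viewpoint, not the stochastic estimation algorithms developed in MMoVNI.

```python
# Toy analogy: template estimation under unknown shifts, by alternating
# alignment (deformation step) and averaging (template update).
import numpy as np

rng = np.random.default_rng(3)
n_samples, length = 30, 128
t = np.linspace(0, 1, length)
true_template = np.exp(-((t - 0.5) ** 2) / 0.005)   # unknown "anatomy"

# Generative model: y_i = shift(template, s_i) + noise, with s_i unknown.
true_shifts = rng.integers(-10, 11, size=n_samples)
observations = np.array([np.roll(true_template, s) for s in true_shifts])
observations += 0.05 * rng.normal(size=observations.shape)

def best_shift(signal, template):
    """Deformation step: shift maximizing correlation with the template."""
    candidates = np.arange(-15, 16)
    scores = [np.dot(np.roll(signal, -s), template) for s in candidates]
    return int(candidates[np.argmax(scores)])

# Alternate deformation estimation and template averaging.
template = observations.mean(axis=0)          # crude initialization
for _ in range(5):
    estimated = [best_shift(obs, template) for obs in observations]
    aligned = np.array([np.roll(obs, -s)
                        for obs, s in zip(observations, estimated)])
    template = aligned.mean(axis=0)           # template update

print("true shifts     :", true_shifts[:8])
print("estimated shifts:", np.array(estimated[:8]))
```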

