Section: New Results
A Fully Bayesian Joint Model for coupling Atlas registration with robust brain tissue and structure segmentation
Participant : Florence Forbes.
Joint work with: Scherrer, B. and Dojat, M. (Grenoble Institute of Neuroscience).
The analysis of MR brain scans is a complex task that requires several sources of information to be taken into account and combined. The analysis is frequently based on segmentations of tissues and of subcortical structures performed by human experts. For automatic segmentation, difficulties arise from the presence of various artifacts such as noise or intensity nonuniformities. For structures, segmentation additionally requires prior information, usually encoded via a pre-registered atlas. Recently, growing interest has focused on tackling this complexity by combining different approaches. As an illustration, some authors propose to use a region-based tissue classification approach followed by a watershed algorithm to label brain sulci, while others combine region-based bias field estimation and a level set method to segment the cortex. A step beyond the combination of methods is coupling, which makes it possible to introduce mutual interactions between components of a model. Such a coupling can be naturally expressed in a statistical framework via the definition of joint distributions. In this vein, Ashburner and Friston  couple a global statistical tissue segmentation approach with the estimation of a bias field and a global registration of an atlas of tissue probability maps. Another growing trend in the literature is to estimate model parameters locally on the image in order to better fit local image properties. For instance, our previous work  couples a local tissue segmentation approach with a structure segmentation approach; Pohl et al. ,  couple structure segmentation with the local affine registration of an atlas.
In this work, we propose to go further towards coupling methods by constructing a Conditional Random Field (CRF) model that performs a number of essential tasks. We focus on developing a statistical framework that allows 1) tissue segmentation using local Markov Random Field (MRF) models, 2) MRF segmentation of structures and 3) local affine registration of an atlas. All tasks are linked, and completing each of them can help in refining the others. The idea is to capture in a single model all the relationships that can be formalized between these tasks. Our basis toward a solution is similar to that in  , with the major difference that therein a joint model was not explicitly given but defined through the specification of a number of compatible conditional MRF models. In this work, we specify the joint model directly and derive the conditional models from it. As a result, the cooperation between tissues and structures is treated in a more symmetric way, which results in new, more consistent conditional models. In addition, interactions between the segmentation and registration steps are easily introduced. An explicit joint formulation has the advantage of providing a strategy to construct more consistent or complete models that are open to the incorporation of new tasks. For estimation, we provide an appropriate variational EM framework allowing a Bayesian treatment of the parameters. The evaluation, performed on both phantoms and real 3T brain scans, shows good results and demonstrates the clear improvement provided by coupling the registration step with tissue and structure segmentation. See Figure 3 for an illustration and  for more details.
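The variational (mean-field) EM treatment of an MRF segmentation model can be sketched in simplified form. The toy implementation below assumes a single 2-D intensity channel, a Potts prior with a fixed interaction strength beta, and global Gaussian class intensities; the actual model is far richer (local parameters, structures, atlas registration), so this only illustrates the mean-field E-step and the weighted Gaussian M-step:

```python
import numpy as np

def mean_field_em(image, K=3, beta=1.0, n_iter=20, seed=0):
    """Mean-field variational EM for a hidden Potts / Gaussian model.

    image : 2-D array of intensities; K : number of tissue classes;
    beta  : spatial interaction strength of the Potts prior.
    Returns per-pixel posterior probabilities q (H, W, K) and class means.
    """
    rng = np.random.default_rng(seed)
    H, W = image.shape
    # initialise responsibilities at random, then normalise
    q = rng.random((H, W, K))
    q /= q.sum(axis=-1, keepdims=True)
    mu = np.linspace(image.min(), image.max(), K)
    sigma2 = np.full(K, image.var())

    for _ in range(n_iter):
        # E-step (mean field): each pixel sees the summed responsibilities
        # of its 4-connected neighbours in place of their hard labels
        nb = np.zeros_like(q)
        nb[1:, :] += q[:-1, :]; nb[:-1, :] += q[1:, :]
        nb[:, 1:] += q[:, :-1]; nb[:, :-1] += q[:, 1:]
        log_lik = (-0.5 * (image[..., None] - mu) ** 2 / sigma2
                   - 0.5 * np.log(2 * np.pi * sigma2))
        log_q = log_lik + beta * nb
        log_q -= log_q.max(axis=-1, keepdims=True)  # numerical stability
        q = np.exp(log_q)
        q /= q.sum(axis=-1, keepdims=True)
        # M-step: responsibility-weighted Gaussian updates
        Nk = q.sum(axis=(0, 1))
        mu = (q * image[..., None]).sum(axis=(0, 1)) / Nk
        sigma2 = (q * (image[..., None] - mu) ** 2).sum(axis=(0, 1)) / Nk
        sigma2 = np.maximum(sigma2, 1e-6)
    return q, mu
```

On a noisy two-region image this recovers a spatially smooth segmentation, which is the qualitative effect the Potts term is meant to produce.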
Bayesian Weighting of Multiple MR Sequences for Brain Lesion Segmentation
Joint work with: Michel Dojat (Grenoble Institute of Neuroscience), Daniel Garcia-Lorenzo and Christian Barillot (INRIA Team Visages).
A healthy brain is generally segmented into three tissues: cerebrospinal fluid, grey matter and white matter. Statistical approaches usually aim to model the probability distributions of voxel intensities, with the idea that such distributions are tissue-dependent. The delineation and quantification of brain lesions is critical for establishing patient prognosis and for charting the development of pathology over time. Typically, this is performed manually by a medical expert; however, automatic methods have been proposed (see  for a review) to alleviate the tedious, time-consuming and subjective nature of manual delineation. Automated or semi-automated brain lesion detection methods can be classified according to their use of multiple sequences, of a priori knowledge about the structure of the normal brain, of tissue segmentation models, and according to whether or not specific lesion types are targeted. A common feature is that most methods are based on the initial identification of candidate regions for lesions. In most approaches, a priori maps of normal brain tissue are used to help identify the regions where the damaged brain differs, and the lesion is identified as an outlier. Existing methods frequently draw on complementary information from multiple sequences. For example, lesion voxels may appear atypical in one modality and normal in another. This is well known and implicitly used by neuroradiologists when examining data. Within a mathematical framework, multiple sequences enable better estimation of tissue classes in a higher-dimensional space.
For multiple MRI volumes, intensity distributions are commonly modelled as multi-dimensional Gaussian distributions. This provides a way to combine the multiple sequences in a single segmentation task but with all the sequences having equal importance. However, given that the information content and discriminative power to detect lesions varies between different MR sequences, the question remains as to how to best combine the multiple channels. Depending on the task at hand, it might be beneficial to weight the various sequences differently.
In this work, rather than trying to detect lesion voxels as outliers from a normal tissue model, we adopt an incorporation strategy whose goal is to identify lesion voxels as an additional, fourth component. Such explicit modelling of the lesions is usually avoided. It is difficult for at least two reasons: 1) most lesions have a widely varying and inhomogeneous appearance (e.g. tumors or stroke lesions) and 2) lesion sizes can be small (e.g. multiple sclerosis lesions). In a standard tissue segmentation approach, both reasons usually prevent accurate model parameter estimation, resulting in poor lesion delineation. Our approach aims to make this estimation possible by modifying the segmentation model with an additional weight field. We propose to modify the tissue segmentation model so that lesion voxels become inliers for the modified model and can be identified as a genuine model component. Compared to robust estimation approaches (e.g.  ) that consist of down-weighting the effect of outliers on the estimation of the main model, we aim to increase the weight of candidate lesion voxels to overcome the under-representation of the lesion class.
We introduce weight parameters into the segmentation model and then solve the issue of prescribing values for these weights by developing a Bayesian framework. This has the advantage of avoiding the specification of ad hoc weight values and of allowing the incorporation of expert knowledge through a prior distribution on the weights. We provide an estimation procedure based on a variational Expectation Maximization (EM) algorithm to produce the corresponding segmentation. Furthermore, in the absence of explicit expert knowledge, we show how the weight prior can be specified to guide the model toward lesion identification. Experiments on artificial (Table 1) and real lesions (Table 2, Figures 4, 5, 6) of various sizes are reported and demonstrate the good performance of our approach.
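As a rough sketch of how per-voxel weights can enter such a segmentation model, the toy EM below assumes fixed, user-supplied weights that rescale each voxel's precision, i.e. voxel i under class k is modelled as N(y_i; mu_k, sigma_k^2 / w_i); up-weighting candidate lesion voxels (w_i > 1) then counters the under-representation of the lesion class. The actual method instead places a prior on the weights and estimates them within a variational EM, so this is only an illustration of the weighted likelihood and the weighted M-step:

```python
import numpy as np

def weighted_gaussian_em(y, w, K=4, n_iter=50):
    """EM for a K-class Gaussian mixture on 1-D intensities y, where each
    voxel i carries a fixed weight w[i] rescaling its precision:
    p(y_i | class k) = N(y_i; mu_k, sigma_k^2 / w_i).
    Returns responsibilities, means, variances and mixing proportions."""
    y = np.asarray(y, float)
    w = np.asarray(w, float)
    mu = np.quantile(y, np.linspace(0.1, 0.9, K))   # spread initial means
    s2 = np.full(K, y.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities under the weight-rescaled Gaussians
        prec = w[:, None] / s2[None, :]
        logp = (np.log(pi)[None, :] + 0.5 * np.log(prec)
                - 0.5 * prec * (y[:, None] - mu[None, :]) ** 2)
        logp -= logp.max(axis=1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: the weights enter the sufficient statistics
        rw = r * w[:, None]
        mu = (rw * y[:, None]).sum(0) / rw.sum(0)
        s2 = (rw * (y[:, None] - mu[None, :]) ** 2).sum(0) / r.sum(0)
        s2 = np.maximum(s2, 1e-8)
        pi = r.mean(0)
    return r, mu, s2, pi
```

With all weights equal to one this reduces to standard EM for a Gaussian mixture, which is the sanity check the sketch is built around.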
|AWEM, mild lesions (0.02% of the voxels)|68 (+1)|49 (-21)|36 (+2)|12 (+8)|
|AWEM, moderate lesions (0.18% of the voxels)|86 (+7)|80 (-1)|73 (+14)|64 (+27)|
|AWEM, severe lesions (0.52% of the voxels)|92 (+7)|86 (-2)|78 (+6)|68 (+27)|
|AWEM, mild lesions (0.02% of the voxels)|0 (-75)|0 (-65)|0 (-20)|0 (-30)|
|AWEM, moderate lesions (0.18% of the voxels)|52 (-24)|51 (-25)|52 (-15)|27 (-21)|
|AWEM, severe lesions (0.52% of the voxels)|87 (+1)|70 (-13)|61 (-13)|50 (-8)|
|Average|55 +/-8|60 +/-16|
Variational approach for the joint estimation-detection of Brain activity from functional MRI data
Joint work with: Michel Dojat (Grenoble Institute of Neuroscience).
The goal is to investigate the possibility of using variational approximation techniques as an alternative to MCMC-based methods for the joint estimation-detection of brain activity in functional MRI data  . The 5-month internship of Alexandre Janon enabled us to initiate this activity, which will be pursued in 2010 through a new collaboration with Philippe Ciuciu from Neurospin, CEA, Saclay.
A Joint Framework for Disparity and Surface Normal Estimation
Participant : Florence Forbes.
Joint work with: Elise Arnaud, Radu Horaud and Ramya Narasimha from team Perception.
This work deals with the stereo matching problem. Stereo matching has been one of the core challenges in computer vision for decades. The most recent algorithms show very good performance, but most existing stereo algorithms embed an inherent fronto-parallel assumption in their modelling of the stereo correspondence problem. Such an assumption supposes that the scene under consideration can be approximated by a set of fronto-parallel planes (on which the disparity is constant) and thus biases the results towards staircase solutions. As described in our paper  , we propose a novel algorithm that provides surface-consistent solutions. To move away from the traditional fronto-parallel assumption, we propose an algorithm that provides disparities in accordance with the surface properties of the scene under consideration. To do so, we carry out disparity and surface normal estimation cooperatively, by setting the two tasks in a unified Markovian framework. We define a new joint probabilistic model based on two MRFs that are linked so as to encode consistency between disparities and surface properties. Considering the normal and disparity maps as two separate random fields increases the model's flexibility. For both MRFs, we include geometric contextual information in the pair-wise regularizing term, thus favoring a disparity solution consistent with the scene surfaces, possibly slanted and/or curved. The respective MRF data terms are designed to extract the data information that specifically impacts the disparity and normal fields. In particular, for normals, we propose a data term favoring proximity to a set of observed normals derived from an over-segmentation of the image into small regions followed by a plane fitting procedure, thus including these steps explicitly within the model. The surface properties are then approximated using surface normals in disparity space. These normals provide a reasonable approximation of the true surface.
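The plane-fitting step that produces the observed normals can be illustrated as follows. This sketch assumes each over-segmented region is summarized by the pixel coordinates and disparities of its members, and fits d = a*x + b*y + c by least squares:

```python
import numpy as np

def region_disparity_normal(xs, ys, ds):
    """Least-squares fit of a plane d = a*x + b*y + c to the disparities
    of one over-segmented region, returning the unit normal of that plane
    in (x, y, disparity) space.  Such per-region normals play the role of
    the 'observed normals' entering the data term of the normal field."""
    A = np.column_stack([xs, ys, np.ones_like(xs)])
    (a, b, c), *_ = np.linalg.lstsq(A, ds, rcond=None)
    n = np.array([a, b, -1.0])     # gradient form of the fitted plane
    return n / np.linalg.norm(n)
```

For a fronto-parallel region the fitted a and b vanish and the normal aligns with the disparity axis; slanted regions yield tilted normals, which is exactly the information the fronto-parallel assumption discards.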
The proposed joint model results in a posterior distribution, for both the disparity and normal fields, which is used for their estimation according to a Maximum A Posteriori (MAP) principle. The alternating maximization procedure used for the MAP search is based on belief propagation and leads to cooperative estimation and mutual improvement of disparity and normal accuracies. Moreover, our approach has the following advantages:
(i) it does not require the computation of high-order disparity derivatives, (ii) it embeds the estimation of surface properties in the Markovian model rather than refining the results using a post-processing step, and (iii) it does not require knowledge of the intrinsic camera calibration parameters.
We illustrate the performance of our approach on synthetic and real data. The results obtained are comparable to the state-of-the-art and show improvement in many cases.
Consistent detection, localization and tracking of Audio-Visual Objects with Variational EM
Joint work with: Radu Horaud from team Perception.
This work addresses the issue of detecting, localizing and tracking objects in a scene that are both seen and heard. We give this problem an interpretation within an unsupervised clustering framework and propose a novel approach based on feature consistency. The model is able to identify observations that are due to detector errors, thus improving the estimation accuracy. We formulate the task as a maximum likelihood estimation problem and perform the inference with a version of the expectation-maximization algorithm, which is formally derived and which provides cooperative estimates of the observation errors, the observation assignments and the object tracks. We describe several experiments with single- and multiple-person detection, localization and tracking.
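One standard device for absorbing spurious detections in a clustering framework, shown here purely as an illustration and not as the paper's exact model, is to add a uniform "outlier" component to a Gaussian mixture and let EM assign detector errors to it:

```python
import numpy as np

def em_with_outlier(X, K=2, n_iter=60):
    """Gaussian mixture EM augmented with a uniform 'outlier' component.
    Observations better explained by the uniform density than by any
    cluster are treated as detector errors.  Isotropic covariances and a
    deterministic data-range initialisation are assumed for brevity."""
    n, d = X.shape
    lo, hi = X.min(0), X.max(0)
    vol = float(np.prod(hi - lo))            # support volume of the uniform
    frac = np.linspace(0.25, 0.75, K)
    mu = lo + frac[:, None] * (hi - lo)      # spread means over the data range
    s2 = np.full(K, X.var())
    pi = np.full(K + 1, 1.0 / (K + 1))       # last entry: outlier proportion
    r = None
    for _ in range(n_iter):
        logp = np.empty((n, K + 1))
        for k in range(K):
            logp[:, k] = (np.log(pi[k]) - 0.5 * d * np.log(2 * np.pi * s2[k])
                          - 0.5 * ((X - mu[k]) ** 2).sum(1) / s2[k])
        logp[:, K] = np.log(pi[K]) - np.log(vol)   # uniform outlier density
        logp -= logp.max(1, keepdims=True)
        r = np.exp(logp)
        r /= r.sum(1, keepdims=True)
        for k in range(K):
            Nk = r[:, k].sum()
            mu[k] = (r[:, k, None] * X).sum(0) / Nk
            s2[k] = max(((r[:, k, None] * (X - mu[k]) ** 2).sum()) / (d * Nk), 1e-8)
        pi = r.mean(0)
    return r, mu
```

The last responsibility column then flags the observations attributed to detector errors, while the cluster estimates remain uncontaminated by them.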
Hidden Markov random fields for disease risk mapping
Joint work with: David Abrial, Christian Ducrot and Myriam Garrido from INRA Clermont-Ferrand-Theix.
Risk mapping in epidemiology makes it possible to identify areas with a low or high risk of contamination and provides a measure of the risk differences between regions. Most risk mapping methods for pooled data used by epidemiologists are based on hierarchical Bayesian approaches designed to estimate the risk at each geographical unit. They rely on Gaussian auto-regressive spatial smoothing. The risk classification, i.e. the grouping of geographical units with similar risk, is necessary to draw easily interpretable maps, but it must then be performed in a second step. By analogy with methods used in image segmentation, we investigate, in the context of Lamiae Azizi's PhD thesis, alternative risk mapping methods based on the introduction of a hidden discrete random field representing the assignment of each spatial unit to a risk class (partition approaches). The most standard such case is the Hidden Markov Random Field (HMRF) model, where the hidden field is defined as a Potts model. Other possibilities consist in modelling spatial dependencies at different levels of the hierarchy and in mixing auto-regressive modelling with partition approaches. In the hidden Potts model case, the risk value attached to a class is represented as a parameter of the model and is estimated during the classification procedure. The conditional distribution of the observed field given the class assignments is a product of Poisson distributions. To estimate the model parameters and determine the risk classes, we investigate the use of EM variants based on mean-field-like approximations, as implemented in the SpaCEM3 software. Preliminary experiments on realistic synthetic data sets raise a number of questions.
Difficulties arise from several sources: the large number of zeros in the data, which may make the Poisson distribution inappropriate; the particularity of the data sets we consider, which sometimes correspond to very small populations (rare diseases) and to inhomogeneous population sizes; and the strong impact of the EM initialization on the final mapping result. Further investigation is therefore needed before we address our second goal, the addition of a time component to the analysis.
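A minimal sketch of the hidden Potts / Poisson model in this setting can be written as follows, assuming known neighbourhood lists and an asynchronous mean-field E-step; the actual software implementation is considerably more general:

```python
import numpy as np

def potts_poisson_mean_field(y, pop, adj, K=3, beta=0.5, n_iter=30):
    """Mean-field EM sketch for disease risk mapping: a hidden Potts field
    assigns each geographical unit to one of K risk classes, and counts
    follow y_i ~ Poisson(pop_i * r_k) given class k.  `adj` lists the
    neighbours of each unit.  The log y_i! term is omitted because it is
    constant across classes.  Returns soft assignments q and class risks r."""
    y = np.asarray(y, float)
    pop = np.asarray(pop, float)
    n = len(y)
    rates = y / pop
    r = np.quantile(rates, np.linspace(0.2, 0.8, K)) + 1e-6  # spread initial risks
    q = np.full((n, K), 1.0 / K)
    for _ in range(n_iter):
        lam = pop[:, None] * r[None, :]
        loglik = y[:, None] * np.log(lam) - lam
        for i in range(n):        # asynchronous mean-field E-step
            msg = q[adj[i]].sum(0) if adj[i] else 0.0
            lq = loglik[i] + beta * msg
            lq = lq - lq.max()
            q[i] = np.exp(lq)
            q[i] /= q[i].sum()
        # M-step: weighted Poisson maximum likelihood for each class risk
        r = (q * y[:, None]).sum(0) / np.maximum((q * pop[:, None]).sum(0), 1e-9)
        r = np.maximum(r, 1e-6)
    return q, r
```

The difficulties noted above surface directly in this sketch: units with y_i = 0 or very small pop_i carry almost no likelihood information, so the result leans heavily on the Potts smoothing and on the initial risk values.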
Optimization of the consumption of printers using Markov decision processes
Joint work with: Ciriza, V. and Bouchard, G. (Xerox XRCE, Meylan).
In the context of the PhD thesis of Laurent Donini, we have proposed several approaches to optimize the consumption of printers. The first aim of this work is to determine the optimal timeout value of an isolated printer, so as to minimize its electrical consumption. This optimal timeout is obtained by modelling the stochastic process of print requests, computing the expected consumption under this model according to the characteristics of the printer, and then minimizing this expectation with respect to the timeout. Two models are considered for the request process: a renewal process and a hidden Markov chain. In  , explicit values of the optimal timeout are provided when possible. In the other cases, we provide a simple equation satisfied by the optimal timeout. It is also shown that the model based on a renewal process yields results as good as an empirical minimization of the consumption based on an exhaustive search over timeout values, at a much lower computational cost. This work has been extended to take into account the discomfort of users caused by frequent shutdowns of the printer, which increase waiting times. It has also been extended to printers with several sleep states, or with separate reservoirs of solid ink.
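The empirical-minimization baseline mentioned above can be sketched as follows, under an assumed consumption model with awake power p_on, sleep power p_sleep and a fixed wake-up energy e_wake (illustrative parameters, not those of the paper):

```python
import numpy as np

def empirical_best_timeout(gaps, p_on, p_sleep, e_wake, grid):
    """For each candidate timeout T, average over the observed
    inter-request gaps X the energy spent between two requests:
        p_on * X                            if X <= T (printer stays awake)
        p_on * T + p_sleep*(X - T) + e_wake otherwise (sleep, then wake up)
    and return the T in `grid` minimising this empirical average."""
    gaps = np.asarray(gaps, dtype=float)
    costs = []
    for T in grid:
        sleeps = gaps > T
        c = np.where(sleeps,
                     p_on * T + p_sleep * (gaps - T) + e_wake,
                     p_on * gaps)
        costs.append(c.mean())
    costs = np.asarray(costs)
    return grid[int(costs.argmin())], costs
```

This exhaustive search is exactly what the renewal-process model avoids: under that model the same expectation can be written analytically and minimized directly, at a much lower computational cost.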
As a second step, the case of a network of printers has been considered. The aim is to decide on which printer each print request should be processed, so as to minimize the total consumption of the network while taking user discomfort into account. Our approach is based on Markov Decision Processes (MDPs), and explicit solutions for the optimal decision are no longer available. To simplify the problem, the timeout values are considered fixed. The state space is continuous, and its dimension increases linearly with the number of printers, which quickly renders the usual algorithms (i.e. value or policy iteration) intractable. Different variants have therefore been considered, among them the SARSA algorithm.
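For reference, the SARSA update has the following tabular form. The printer-network problem has a continuous state space and therefore requires function approximation, so this is only a toy illustration on a discrete MDP:

```python
import numpy as np

def sarsa(env_step, n_states, n_actions, episodes=300, alpha=0.1,
          gamma=0.95, eps=0.1, seed=0):
    """Tabular SARSA: on-policy TD control with an epsilon-greedy policy.
    env_step(s, a, rng) -> (reward, next_state, done)."""
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, n_actions))

    def policy(s):
        if rng.random() < eps:                  # explore
            return int(rng.integers(n_actions))
        return int(Q[s].argmax())               # exploit

    for _ in range(episodes):
        s = 0
        a = policy(s)
        done = False
        while not done:
            reward, s2, done = env_step(s, a, rng)
            a2 = policy(s2)
            target = reward + (0.0 if done else gamma * Q[s2, a2])
            Q[s, a] += alpha * (target - Q[s, a])   # SARSA update
            s, a = s2, a2
    return Q
```

Being on-policy, SARSA bootstraps on the action actually chosen next (a2), not on the greedy action, which is the defining difference from Q-learning.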
Validation of hidden Markov tree models by comparison of empirical and predicted distributions
This study consists of validating biological models of tree growth based on hidden Markov trees, by comparing empirical characteristics of the trees with their theoretical counterparts as predicted by the model. As a first step, we focused on trees with a discrete univariate variable associated with each vertex. In this case, the characteristics can be the size of the homogeneous zones (connected vertices sharing the same value of the variable), their number, the tree depth before the first occurrence of a given value, or the path length separating homogeneous zones.
Since no explicit formula is available for the predicted distributions, they have been approximated by Monte Carlo simulations. This work was carried out by Inga Paukner-Stojkov during a 5-month internship.
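The Monte Carlo approximation can be sketched as follows for a complete binary hidden Markov tree with discrete states, computing one example characteristic (the size of the largest homogeneous zone); repeating the simulation many times then yields the predicted distribution of that characteristic. The complete-binary-tree layout is an assumption made for brevity:

```python
import numpy as np

def simulate_hmt(depth, P, pi0, rng):
    """Simulate the hidden states of a complete binary hidden Markov tree:
    the root state follows pi0 and each child draws its state from the row
    P[parent_state].  States are stored in heap order, so the children of
    vertex i are 2*i+1 and 2*i+2."""
    n = 2 ** (depth + 1) - 1
    states = np.empty(n, dtype=int)
    states[0] = rng.choice(len(pi0), p=pi0)
    for i in range(1, n):
        parent = (i - 1) // 2
        states[i] = rng.choice(P.shape[1], p=P[states[parent]])
    return states

def largest_zone(states):
    """Size of the largest homogeneous zone: a maximal set of vertices
    connected by parent-child edges and sharing the same state."""
    n = len(states)
    size = np.ones(n, dtype=int)
    for i in range(n - 1, 0, -1):      # accumulate sizes bottom-up
        parent = (i - 1) // 2
        if states[i] == states[parent]:
            size[parent] += size[i]
    best = int(size[0])                # the root always starts a zone
    for i in range(1, n):              # other zone roots: state differs from parent
        if states[i] != states[(i - 1) // 2]:
            best = max(best, int(size[i]))
    return best

def mc_zone_distribution(depth, P, pi0, n_sim=1000, seed=0):
    """Monte Carlo estimate of the distribution of the largest-zone size."""
    rng = np.random.default_rng(seed)
    sizes = [largest_zone(simulate_hmt(depth, P, pi0, rng)) for _ in range(n_sim)]
    values, counts = np.unique(sizes, return_counts=True)
    return values, counts / n_sim
```

The empirical counterpart of the same characteristic, measured on the observed trees, can then be compared with this simulated distribution to assess the fit of the model.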