Team mistis

Section: New Results

Markov models

Bayesian Weighting of Multiple MR Sequences for Brain Lesion Segmentation

Participants : Florence Forbes, Senan James Doyle, Eric Frichot, Darren Wraith.

Joint work with: Michel Dojat (Grenoble Institute of Neuroscience).

A healthy brain is generally segmented into three tissues: cerebrospinal fluid, grey matter and white matter. Statistically based approaches usually aim to model the probability distributions of voxel intensities, with the idea that such distributions are tissue-dependent. The delineation and quantification of brain lesions is critical to establishing patient prognosis and to charting the development of pathology over time. Typically, this is performed manually by a medical expert; however, automatic methods have been proposed (see [59] for a review) to alleviate the tedious, time-consuming and subjective nature of manual delineation. Automated or semi-automated brain lesion detection methods can be classified according to their use of multiple sequences, a priori knowledge about the structure of normal brain, tissue segmentation models, and whether or not specific lesion types are targeted. A common feature is that most methods are based on the initial identification of candidate regions for lesions. In most approaches, a priori maps of normal brain tissue are used to help identify regions where the damaged brain differs, and the lesion is identified as an outlier. Existing methods frequently make use of complementary information from multiple sequences. For example, lesion voxels may appear atypical in one modality and normal in another. This is well known and implicitly used by neuroradiologists when examining data. Within a mathematical framework, multiple sequences enable the superior estimation of tissue classes in a higher-dimensional space.

For multiple MRI volumes, intensity distributions are commonly modelled as multi-dimensional Gaussian distributions. This provides a way to combine the multiple sequences in a single segmentation task but with all the sequences having equal importance. However, given that the information content and discriminative power to detect lesions vary between different MR sequences, the question remains as to how to best combine the multiple channels. Depending on the task at hand, it might be beneficial to weight the various sequences differently.

In this work, rather than trying to detect lesion voxels as outliers from a normal tissue model, we adopt an incorporation strategy whose goal is to identify lesion voxels as an additional, fourth component. Such an explicit modelling of the lesions is usually avoided. It is difficult for at least two reasons: 1) most lesions have a widely varying and inhomogeneous appearance (e.g. tumors or stroke lesions) and 2) lesion sizes can be small (e.g. multiple sclerosis lesions). In a standard tissue segmentation approach, both reasons usually prevent accurate model parameter estimation, resulting in poor lesion delineation. Our approach aims to make this estimation possible by modifying the segmentation model with an additional weight field. We propose to modify the tissue segmentation model so that lesion voxels become inliers for the modified model and can be identified as genuine model components. Compared to robust estimation approaches (e.g. [60]) that down-weight the effect of outliers on the main model estimation, we aim to increase the weight of candidate lesion voxels to overcome the problem of under-representation of the lesion class.

We introduce weight parameters in the segmentation model and then solve the issue of prescribing values for these weights by developing a Bayesian framework. This has the advantage of avoiding the specification of ad hoc weight values and of enabling the incorporation of expert knowledge through a weight prior distribution. We provide an estimation procedure based on a variational Expectation Maximization (EM) algorithm to produce the corresponding segmentation. Furthermore, in the absence of explicit expert knowledge, we show how the weight prior can be specified to guide the model toward lesion identification. Experiments on artificial and real lesions of various sizes demonstrate the good performance of our approach.
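To illustrate how weighting can rebalance an under-represented class, the following sketch runs EM on a one-dimensional two-component Gaussian mixture in which each observation carries a weight that scales its likelihood precision. It is a minimal, hypothetical simplification: the reported method works on multi-dimensional MR intensities and places a prior on the weight field estimated by variational EM, whereas the weights are fixed here.

```python
import numpy as np

def weighted_em(y, w, K, n_iter=100):
    """EM for a 1-D Gaussian mixture in which observation y[i] carries
    a weight w[i] scaling its likelihood precision, i.e. given class k,
    y[i] ~ N(mu_k, var_k / w[i]).  Upweighting candidate lesion voxels
    counters the under-representation of the lesion class.
    Hypothetical simplification: the weights are fixed here, whereas
    the reported method puts a prior on a weight field."""
    n = len(y)
    mu = np.quantile(y, np.linspace(0.1, 0.9, K))   # spread-out initial means
    var = np.full(K, y.var())
    pi = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: responsibilities under precision-scaled Gaussians
        log_r = (np.log(pi)
                 + 0.5 * np.log(w[:, None] / (2.0 * np.pi * var))
                 - 0.5 * w[:, None] * (y[:, None] - mu) ** 2 / var)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weight-aware updates of the Gaussian parameters
        sw = (r * w[:, None]).sum(axis=0)
        mu = (r * w[:, None] * y[:, None]).sum(axis=0) / sw
        var = (r * w[:, None] * (y[:, None] - mu) ** 2).sum(axis=0) / r.sum(axis=0)
        var = np.maximum(var, 1e-6)
        pi = r.sum(axis=0) / n
    return mu, var, pi, r
```

With a dominant "tissue" component and a small, upweighted "lesion" component, the second mean is estimated reliably instead of being absorbed by the dominant class.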

These experiments were carried out with a first version of the method that uses diagonal covariance matrices in the Gaussian parts of the model [25], [26]. We recently extended the method to non-diagonal covariance matrices, yielding a more general formulation that is still under validation.

Variational approach for the joint estimation-detection of Brain activity from functional MRI data

Participants : Florence Forbes, Lotfi Chaari.

Joint work with: Michel Dojat (Grenoble Institute of Neuroscience), and Philippe Ciuciu and Thomas Vincent (NeuroSpin, CEA Saclay).

The goal is to investigate the possibility of using variational approximation techniques as an alternative to MCMC-based methods for the joint estimation-detection of brain activity in functional MRI data [56]. We investigated the so-called Joint Detection Estimation (JDE) framework developed by P. Ciuciu and collaborators at NeuroSpin [56], [23], [28] and derived a variational version of it. This new formulation is under validation.

Disparity and normal estimation through alternating maximization

Participant : Florence Forbes.

Joint work with: Elise Arnaud, Radu Horaud and Ramya Narasimha from the INRIA Perception team.

In this work [27], we propose an algorithm that recovers binocular disparities in accordance with the surface properties of the scene under consideration. To do so, we estimate the disparities as well as the normals in the disparity space, by setting the two tasks in a unified framework. A novel joint probabilistic model is defined through two random fields to favor both intra-field (between neighboring disparities and between neighboring normals) and inter-field (between disparities and normals) consistency. Geometric contextual information is introduced into the models for both normals and disparities. The models are optimized using an appropriate alternating maximization procedure. We illustrate the performance of our approach on synthetic and real data.
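The alternating scheme can be illustrated on a one-dimensional toy analogue, where d plays the role of the disparity profile and s the role of the normals (here, local slopes); each field is re-estimated in turn with the other held fixed. This is a sketch under simplified quadratic energies, not the actual two-field Markov model of [27].

```python
import numpy as np

def alternating_estimation(obs, lam=1.0, mu=1.0, n_iter=200):
    """Toy 1-D analogue of the joint disparity/normal model: d is a
    'disparity' profile and s its local slope, standing in for the
    surface normals.  The fields are coupled through the term
    lam * (d[i+1] - d[i] - s[i])^2 (inter-field consistency), while
    mu * (s[i+1] - s[i])^2 enforces intra-field smoothness; each field
    is re-estimated with the other held fixed (alternating maximization
    of the joint posterior, i.e. minimization of the energy).
    Illustrative sketch only -- the actual model is a pair of Markov
    random fields over 2-D images."""
    n = len(obs)
    d = obs.copy()
    s = np.zeros(n - 1)
    D = np.diff(np.eye(n), axis=0)        # finite differences of d
    S = np.diff(np.eye(n - 1), axis=0)    # smoothness operator on s
    for _ in range(n_iter):
        # d-step: minimize ||d - obs||^2 + lam * ||D d - s||^2
        d = np.linalg.solve(np.eye(n) + lam * D.T @ D, obs + lam * D.T @ s)
        # s-step: minimize lam * ||D d - s||^2 + mu * ||S s||^2
        s = np.linalg.solve(lam * np.eye(n - 1) + mu * S.T @ S, lam * D @ d)
    return d, s
```

On a noiseless ramp the scheme recovers the constant slope exactly, since a ramp together with its true slope makes every energy term vanish.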

Consistent detection, localization and tracking of Audio-Visual Objects with Variational EM

Participants : Florence Forbes, Vasil Khalidov.

Joint work with: Radu Horaud from the INRIA Perception team.

This work addresses the issue of detecting, locating and tracking objects that are both seen and heard in a scene. We give this problem an interpretation within an unsupervised clustering framework and propose a novel approach based on feature consistency. The model is capable of identifying observations that are due to detector errors, thus improving the estimation accuracy. We formulate the task as a maximum likelihood estimation problem and perform the inference with a formally derived version of the expectation-maximization algorithm, which provides cooperative estimates of observation errors, observation assignments, and object tracks. We describe several experiments with single- and multiple-person detection, localization and tracking.
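A much-simplified sketch of an error-aware clustering model: alongside the Gaussian clusters, an extra uniform component absorbs spurious detections, so the EM assignments naturally separate genuine objects from detector errors. The actual model clusters multimodal audio-visual features; here a single feature dimension is used for illustration.

```python
import numpy as np

def em_with_error_class(x, K, lo, hi, n_iter=40):
    """EM for 1-D clustering with an extra uniform component over
    [lo, hi] that absorbs spurious detections, so that the genuine
    clusters are not corrupted by detector errors.  Hypothetical,
    much-simplified analogue of the audio-visual model, which
    clusters visual and auditory features jointly."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.2, 0.8, K))   # spread-out initial means
    var = np.full(K, x.var())
    pi = np.full(K + 1, 1.0 / (K + 1))              # last entry: error class
    u = 1.0 / (hi - lo)                             # uniform error density
    for _ in range(n_iter):
        # E-step: posterior class probabilities, error class included
        gauss = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                 / np.sqrt(2.0 * np.pi * var))
        dens = np.hstack([gauss, np.full((n, 1), u)]) * pi
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: Gaussian parameters use only non-error responsibilities
        nk = r[:, :K].sum(axis=0)
        mu = (r[:, :K] * x[:, None]).sum(axis=0) / nk
        var = (r[:, :K] * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        var = np.maximum(var, 1e-6)
        pi = r.sum(axis=0) / n
    return mu, var, pi, r
```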

Spatial risk mapping for rare disease with hidden Markov fields and variational EM

Participants : Lamiae Azizi, Florence Forbes, Senan James Doyle.

Joint work with: David Abrial, Christian Ducrot and Myriam Garrido from INRA Clermont-Ferrand-Theix.

The analysis of the geographical variations of a disease and their representation on a map is an important step in epidemiology. The goal is to identify homogeneous regions in terms of disease risk and to gain better insight into the mechanisms underlying the spread of the disease. Traditionally, the region under study is partitioned into a number of areas, on each of which the observed cases of a given disease are counted and compared to the population size of the area. It has also become clear that spatial dependencies between counts have to be taken into account when analyzing such location-dependent data. One of the most popular approaches in this context is the so-called BYM model introduced by Besag, York and Mollié in 1991. This model corresponds to a Bayesian hierarchical modelling approach. It is based on a Hidden Markov Random Field (HMRF) model in which the latent intrinsic risk field is modelled by a Markov field with continuous state space, namely a Gaussian Conditionally Auto-Regressive (CAR) model. The model inference therefore results in a real-valued estimate of the risk at each location, and one of the main reported limitations is that local discontinuities in the risk field are not modelled, potentially leading to risk maps that are too smooth. In some cases, coarser representations, where areas with similar risk values are grouped, are desirable. Grouped representations have the advantage of providing clearly delimited areas for different risk levels, which helps decision-makers interpret the risk structure and determine protection measures. Using the BYM model, it is possible to derive such a grouping from the model output, using either fixed risk ranges (usually difficult to choose in practice) or more automated clustering techniques. In any case, this post-processing step is likely to be sub-optimal. In this work, we investigate procedures that include such a risk classification.

There have been several attempts to take the presence of discontinuities in the spatial structure of the risk into account. Within hierarchical approaches, one possibility is to move the spatial dependence one level higher in the hierarchy. Green and Richardson in 2002 proposed to replace the continuous risk field by a partition model involving a finite number of risk levels and allocation variables that assign each area under study to one of these levels. Spatial dependencies are then taken into account by modelling the allocation variables as a discrete state-space Markov field, namely a spatial Potts model. This results in a discrete HMRF model. The general effect is to recast the disease mapping issue as a clustering task using spatial finite Poisson mixtures. In the same spirit, Fernandez and Green proposed another class of spatial mixture models, in which the spatial dependence is pushed yet one level higher. Of course, the higher the spatial dependencies sit in the hierarchy, the more flexible the model, but also the more difficult the parameter estimation. As regards inference, these various attempts have in common the use of simulation-intensive Markov chain Monte Carlo (MCMC) techniques, which can be difficult to apply to large data sets in a reasonable time.

Following the idea of using a discrete HMRF model for disease mapping, we propose to use an Expectation Maximization (EM) framework for inference, as an alternative to simulation-based techniques. This framework is commonly used to solve clustering tasks but leads to intractable computations when non-trivial Markov dependencies are considered. However, approximation techniques are available, and among them we propose to investigate variational approximations for their computational efficiency and good performance in practice. In particular, we consider the so-called mean field principle [6], which provides a deterministic way to deal with intractable MRF models and has proven to perform well in a number of applications.
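The mean field principle can be sketched as follows for a discrete HMRF with Poisson counts: each area's class probabilities are updated given its neighbours' current probabilities rather than their unknown labels, which makes the E-step tractable. This is a minimal illustration, assuming Poisson emissions y_i ~ Poisson(r_k E_i) and a simultaneous mean-field update; it is not the full variational EM of the reported work.

```python
import numpy as np

def mean_field_potts_poisson(y, expected, adj, K, beta=1.0, n_iter=30):
    """Mean-field EM sketch for disease mapping with a discrete hidden
    Markov random field: area i has observed count y[i], expected count
    expected[i] and a hidden risk class among K levels, with Poisson
    emission y[i] ~ Poisson(r_k * expected[i]).  The Potts spatial
    prior (interaction strength beta) is handled by the mean field
    principle: each area sees its neighbours' current class
    probabilities q instead of their unknown labels.  'adj' lists the
    neighbour indices of each area.  Minimal sketch with a
    simultaneous mean-field update, not the full variational EM of the
    reported work."""
    n = len(y)
    rates = y / expected
    r_levels = np.quantile(rates, np.linspace(0.1, 0.9, K))  # initial risk levels
    q = np.full((n, K), 1.0 / K)          # mean-field class probabilities
    for _ in range(n_iter):
        # E-step (mean field): Poisson log-likelihood + neighbour support
        nb = np.array([q[idx].sum(axis=0) for idx in adj])
        log_q = (y[:, None] * np.log(r_levels * expected[:, None])
                 - r_levels * expected[:, None]
                 + beta * nb)
        log_q -= log_q.max(axis=1, keepdims=True)
        q = np.exp(log_q)
        q /= q.sum(axis=1, keepdims=True)
        # M-step: risk level of each class from its soft-assigned areas
        r_levels = (q * y[:, None]).sum(axis=0) / (q * expected[:, None]).sum(axis=0)
        r_levels = np.maximum(r_levels, 1e-8)
    return r_levels, q
```

The neighbour term beta * nb plays the role of the Potts interaction: an area is pulled toward the risk class that dominates among its neighbours, which smooths the map while keeping the risk levels discrete.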

A particularity of human disease data is that the populations under consideration are usually large and the risk values relatively high, say between 0.5 and 1.5. This is not fully representative of epidemiological studies, especially studies of non-contagious diseases in animals. In animal epidemiology, we may instead have to face small populations and risk levels much smaller than 1, typically 10⁻⁵ to 10⁻³. The difficulties in applying techniques that work in the first (human) case to data sets of the second (animal) kind have not been investigated. In addition, no particular difficulties regarding initialization and model selection are usually reported, which is far from being the case in all practical problems. In this work we propose to go further and to address a number of related issues. More specifically, we investigate the model behavior in more detail. We pay special attention to the two main inherent issues when using EM procedures, namely algorithm initialization and model selection. The EM solution can depend strongly on its starting position. We show that simple initializations do not always work, especially for rare diseases for which the risks are small. We then propose and compare different initialization strategies in order to obtain a robust way of initializing for most situations arising in practice.

In addition, we build on the standard hidden Markov field model by considering a more general formulation that is able to encode more complex interactions than the standard Potts model. In particular, we are able to encode the fact that risk levels in neighboring regions cannot be too different, whereas the standard Potts model penalizes differing neighboring risks in the same way whatever the amplitude of their difference.

Optimization of the consumption of printers using Markov decision processes

Participants : Laurent Donini, Jean-Baptiste Durand, Stéphane Girard.

Joint work with: Ciriza, V. and Bouchard, G. (Xerox XRCE, Meylan).

In the context of the PhD thesis of Laurent Donini, we have proposed several approaches to optimize the resources consumed by printers. The first aim of this work is to determine an optimal value of the timeout of an isolated printer, so as to minimize its electrical consumption. This optimal timeout is obtained by modeling the stochastic process of the print requests, computing the expected consumption under this model according to the characteristics of the printer, and then minimizing this expectation with respect to the timeout. Two models are considered for the request process: a renewal process, and a hidden Markov chain. Explicit values of the optimal timeout are provided when possible; in other cases, we provide a simple equation satisfied by the optimal timeout. It is also shown that a model based on a renewal process yields results as good as an empirical minimization of the consumption based on an exhaustive search over timeout values, at a much lower computational cost. This work has been extended to take into account the users' discomfort resulting from numerous shutdowns of the printers, which yield increased waiting times. It has also been extended to printers with several sleep states, or with separate reservoirs of solid ink. The results have been submitted for publication [41].
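The quantities involved in the timeout optimization can be sketched as follows: the expected consumption per inter-request gap decomposes into ready-state energy, sleep-state energy and wake-up energy, and the empirical version can be minimized by exhaustive search, which is the baseline the renewal-process model is compared against. All power and energy figures below are invented for illustration, not measurements from the cited work.

```python
import numpy as np

def expected_cost(tau, gaps, p_on=30.0, p_sleep=2.0, wake_cost=200.0):
    """Expected energy per inter-request gap for a printer that stays
    ready (power p_on) during the timeout tau, then sleeps (power
    p_sleep) and pays a fixed wake-up energy wake_cost when the next
    request arrives.  The gap distribution is taken empirically from
    observed inter-arrival times; the renewal-process model in the
    text instead treats the gap distribution analytically.  All
    power and energy figures are invented illustration values."""
    on_time = np.minimum(gaps, tau)            # time spent ready
    sleep_time = np.maximum(gaps - tau, 0.0)   # time spent asleep
    woke = (gaps > tau).astype(float)          # wake-up indicator
    return np.mean(p_on * on_time + p_sleep * sleep_time + wake_cost * woke)

def best_timeout(gaps, grid, **kwargs):
    """Exhaustive search over candidate timeouts -- the empirical
    baseline the renewal model is compared against in the text."""
    costs = [expected_cost(t, gaps, **kwargs) for t in grid]
    return float(grid[int(np.argmin(costs))])
```

With a bimodal gap distribution (frequent short gaps within print bursts, occasional long idle periods), the optimum sits just above the burst gap length: the printer stays ready through bursts and sleeps before long idle periods.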

As a second step, the case of a network of printers has been considered. The aim is to decide on which printer a print request should be processed, so as to minimize the total power consumption of the network of printers while taking user discomfort into account. Our approach is based on Markov Decision Processes (MDPs), and explicit solutions for the optimal decision are no longer available. Furthermore, to simplify the problem, the timeout values are considered fixed. The state space is continuous, and its dimension increases linearly with the number of printers, which quickly makes the usual algorithms (i.e. value or policy iteration) intractable. This is why different variants have been considered, among them the SARSA algorithm.
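A tabular SARSA update on a deliberately tiny, hypothetical version of the routing problem (two printers, discrete awake/asleep states) illustrates the algorithm named above; the actual problem has a continuous state space that rules out such exact tabular methods, hence the approximate variants mentioned in the text.

```python
import numpy as np

def sarsa_printer_toy(n_episodes=2000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular SARSA on a toy two-printer routing problem.  The state
    encodes which printers are awake (two bits), the action is which
    printer receives the next job, and the reward is the negative
    energy cost: waking a sleeping printer costs extra, standing in
    for the wake-up energy and the user's waiting time.  All costs
    are invented illustration values; this small discrete version
    only illustrates the SARSA update rule."""
    rng = np.random.default_rng(seed)
    wake_cost, print_cost = 5.0, 1.0
    n_states, n_actions = 4, 2
    Q = np.zeros((n_states, n_actions))

    def step(state, action):
        awake = [(state >> i) & 1 for i in range(2)]
        reward = -print_cost - (0.0 if awake[action] else wake_cost)
        awake[action] = 1                           # printing wakes the printer
        # each printer may time out and fall asleep before the next job
        awake = [int(a and rng.random() > 0.5) for a in awake]
        return sum(a << i for i, a in enumerate(awake)), reward

    def policy(s):
        if rng.random() < eps:
            return int(rng.integers(n_actions))     # explore
        return int(Q[s].argmax())                   # exploit

    for _ in range(n_episodes):
        s = 3                                       # both printers awake
        a = policy(s)
        for _ in range(20):                         # 20 print jobs per episode
            s2, r = step(s, a)
            a2 = policy(s2)
            # SARSA update: on-policy temporal-difference target
            Q[s, a] += alpha * (r + gamma * Q[s2, a2] - Q[s, a])
            s, a = s2, a2
    return Q
```

The learned Q-values prefer routing a job to a printer that is already awake, which is the qualitative behavior an energy-aware routing policy should exhibit.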

