Team MISTIS

Section: New Results

Markov models

Triplet Markov fields for the classification of complex structure data

Participants : Florence Forbes, Juliette Blanchet.

We address the issue of classifying complex data. We focus on three main sources of complexity, namely the high dimensionality of the observed data, the dependencies between these observations and the general nature of the noise model underlying their distribution. We investigate the recent Triplet Markov Fields and propose new models in this class designed for such data and in particular allowing very general noise models. In addition, our models can handle the inclusion of a learning step in a consistent way so that they can be used in a supervised framework. Another advantage of our models is that, whatever the initial complexity of the noise model, parameter estimation can be carried out using state-of-the-art Bayesian clustering techniques under the usual simplifying assumptions (typically, a non-correlated noise condition). As generative models, they can be seen as an alternative, in the supervised case, to discriminative Conditional Random Fields. In the unsupervised case, identifiability issues underlying the models can occur. We also consider the issue of selecting the best model with regard to the observed data using a criterion (referred to as BICMF) based on the Bayesian Information Criterion (BIC).
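
As a rough illustration of the kind of BIC-based selection involved (not the BICMF criterion itself, which integrates the Markov field structure), the following Python sketch selects the number of Gaussian noise components K by minimizing the standard BIC on independent data; the simulated data, the range of K and the use of scikit-learn are our own choices for illustration.

```python
# Minimal sketch: choose the number of Gaussian components K for a
# class-conditional noise model by minimising BIC, here on i.i.d. data,
# i.e. ignoring the Markov field part of the model.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical 1D observations from a two-component noise model (cf. Figure 2 (a))
x = np.concatenate([rng.normal(0.0, 0.25, 500),
                    rng.normal(1.0, 0.25, 500)]).reshape(-1, 1)

bics = {}
for k in range(1, 6):
    gm = GaussianMixture(n_components=k, n_init=5, random_state=0).fit(x)
    bics[k] = gm.bic(x)           # BIC = -2 log L + (#params) log n

k_best = min(bics, key=bics.get)  # smallest BIC wins
print(k_best, bics)
```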

The performance of the models is illustrated in the associated publication on simulated and real data exhibiting the various sources of complexity mentioned above. See also Figure 2 for an illustration on synthetic data.

Figure 2. Synthetic image segmentations using a standard Hidden Markov Field (HMF-IN) model (second row) and our Triplet Markov Field (TMF) model (third row): the true 2-class segmentation is the image in the upper left corner and four different noise models are considered. In (a), class distributions are mixtures of two Gaussians. In (c), observations from class 1 are generated from a Gamma(1,2) distribution and observations from class 2 are obtained by adding 1 to realizations of an Exponential distribution with parameter 1. In (b) and (d), the noisy images are obtained by replacing each pixel value respectively in (a) and (c) by its average with its four nearest neighbors. Classification rates are given below each segmentation result. In the TMF model case, Gaussian components are used to approximate the noise model. The last row gives the number of components K selected using our BICMF criterion.
[Figure images omitted: true segmentation, noisy inputs (a)-(d), and the corresponding HMF-IN and TMF segmentations.]

                              (a)      (b)      (c)      (d)
HMF-IN classification rate    51.2%    80.7%    66.3%    74.5%
TMF classification rate       96.6%    91.7%    95.8%    88.4%
Selected K (TMF)              2        3        4        4
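
For reference, the noise models (c) and (d) described in the caption can be reproduced along the following lines; the ground-truth image used here is a random stand-in for the true 2-class map, and the Gamma(1,2) distribution is read as shape 1, scale 2 (our assumption).

```python
# Sketch of the Figure 2 noise models (c) and (d), applied to a hypothetical
# binary ground-truth image `truth` (values 0 and 1).
import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(0)
truth = (rng.random((128, 128)) < 0.5).astype(int)   # stand-in for the true 2-class map

# (c): class 1 ~ Gamma(1, 2), class 2 ~ 1 + Exponential(1)
obs_c = np.where(truth == 0,
                 rng.gamma(shape=1.0, scale=2.0, size=truth.shape),
                 1.0 + rng.exponential(scale=1.0, size=truth.shape))

# (d): replace each pixel value by its average with its four nearest neighbors
kernel = np.array([[0, 1, 0], [1, 1, 1], [0, 1, 0]]) / 5.0
obs_d = convolve(obs_c, kernel, mode='nearest')
```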

Integrated Markov models for clustering genes: combining expression data with missing values and gene interaction network analysis

Participants : Juliette Blanchet, Florence Forbes, Matthieu Vignes.

DNA microarray technologies provide the means to monitor tens of thousands of gene expression levels quantitatively and simultaneously. However, data generated in these experiments can be noisy and contain missing values. When it is not simply ignored, the missing-value issue is usually handled by imputing the expression matrix so that traditional analysis methods can still be applied. Although this is a useful first step, value imputation is not a recommended way to deal with missing data. Moreover, appropriate tools are needed to cope with the noisy background in expression levels and to take into account a dependency structure among the genes under study. Various approaches have been proposed but, to our knowledge, none of them fulfils all these requirements. We therefore propose a clustering algorithm that explicitly accounts for dependencies within a biological network and for the missing-value mechanism when analyzing microarray data. We tackle these issues in a unified statistical framework and take advantage of many features of the probabilistic nature of the model. In a previous work, we mentioned that a straightforward extension of the model therein could deal with missing values. This extension is now implemented and we show it to be successful at dealing with different missingness patterns on both simulated and real biological data sets. We emphasize that our model can be useful in a wide range of applications for clustering entities of interest (such as genes, proteins or metabolites in post-genomics studies). It requires individual, possibly incomplete, measurements taken on these entities, related by a relevant interaction network. Hence our method is neither organism- nor data-specific. The method is also of interest in a wide variety of fields where missing data are a common feature: social sciences, computer vision, remote sensing, speech recognition and, of course, biological systems. In experiments on synthetic and real biological data, reported in the associated publication, our method demonstrates enhanced results over existing approaches.
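
To illustrate one ingredient of this strategy, namely handling missing entries without imputation, the sketch below evaluates the likelihood of a partially observed profile under a Gaussian class model by marginalizing out the missing coordinates. The toy profile, covariance and function name are hypothetical, and the gene interaction network (the Markov field part of the model) is deliberately left out.

```python
# Minimal sketch: under a multivariate Gaussian class model, the likelihood of
# a partially observed profile is obtained by restricting the mean and
# covariance to the observed dimensions, rather than by imputing values.
import numpy as np
from scipy.stats import multivariate_normal

def observed_loglik(x, mean, cov):
    """Log-likelihood of x (with NaNs for missing entries) under N(mean, cov)."""
    obs = ~np.isnan(x)
    if not obs.any():
        return 0.0  # a fully missing profile carries no information
    return multivariate_normal.logpdf(x[obs], mean[obs], cov[np.ix_(obs, obs)])

# Hypothetical expression profile over 4 conditions, 2 of them missing
x = np.array([0.8, np.nan, -0.3, np.nan])
mean = np.zeros(4)
cov = 0.5 * np.eye(4) + 0.5          # equicorrelated toy covariance
print(observed_loglik(x, mean, cov))
```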

LOCUS: LOcal Cooperative Unified Segmentation of MRI Brain Scans

Participant : Florence Forbes.

Joint work with: Benoit Scherrer, Michel Dojat (Grenoble Institute of Neuroscience) and Christine Garbay (LIG).

MRI brain scan segmentation is a challenging task that has been widely addressed over the last 15 years. Difficulties in automatic segmentation arise from various sources including the size of the data, the low contrast between tissues, the limitations of available prior knowledge, local perturbations such as noise, and global perturbations such as intensity nonuniformity. Current approaches share three main characteristics. First, tissue and structure segmentations are considered as two separate tasks whereas they are clearly linked. Second, to obtain a segmentation that is robust to noise, the Markov Random Field (MRF) probabilistic framework is classically used to introduce spatial dependencies between voxels. Third, tissue models are generally estimated globally over the entire volume and do not reflect spatial intensity variations within each tissue, due mainly to biological tissue properties and to MRI hardware imperfections. Only the latter is generally addressed, modeled by the introduction of an explicit so-called "bias field" to be estimated. Local segmentation is an attractive alternative. The principle is to compute models in various subvolumes so as to better fit local image properties. However, the few local approaches proposed to date are clearly limited: they use local estimation only as a preprocessing step to estimate a bias field model, a training set for statistical local shape modelling, redundant information to ensure consistency and smoothness between locally estimated models, or an atlas providing a priori local spatial information, at a greedily increasing computational cost. We present in this work an original LOcal Cooperative Unified Segmentation (LOCUS) approach which 1) performs tissue and structure segmentation by distributing a set of cooperating local MRF models through the volume, 2) segments structures by introducing prior localization constraints in an MRF framework and 3) ensures local model consistency and tractable computational time via specific cooperation and coordination mechanisms.
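
The sketch below illustrates only the local estimation principle (one intensity model per subvolume instead of a single global one); the grid size, the synthetic volume and the use of Gaussian mixtures via scikit-learn are our own assumptions, and the cooperation and coordination mechanisms that are central to LOCUS are not reproduced.

```python
# Minimal sketch of local estimation: fit one intensity model per subvolume
# instead of a single global model over the whole volume.
import numpy as np
from sklearn.mixture import GaussianMixture

def local_tissue_models(volume, grid=(4, 4, 4), n_tissues=3):
    """Fit a small Gaussian mixture (one component per tissue) in each subvolume."""
    models = {}
    splits = [np.array_split(np.arange(s), g) for s, g in zip(volume.shape, grid)]
    for i, xi in enumerate(splits[0]):
        for j, yj in enumerate(splits[1]):
            for k, zk in enumerate(splits[2]):
                block = volume[np.ix_(xi, yj, zk)].reshape(-1, 1)
                models[(i, j, k)] = GaussianMixture(n_components=n_tissues,
                                                    random_state=0).fit(block)
    return models

# Hypothetical synthetic "MRI" volume with a smooth intensity nonuniformity
rng = np.random.default_rng(1)
vol = rng.choice([0.2, 0.5, 0.8], size=(32, 32, 32)) + \
      np.linspace(0, 0.3, 32)[:, None, None] + rng.normal(0, 0.02, (32, 32, 32))
models = local_tissue_models(vol)
print(len(models), "local models fitted")
```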

The evaluation was performed using phantoms and real 3T brain scans. It shows good results, in particular robustness to nonuniformity and noise at a low computational cost. Figure 3 shows a visual comparison with two well-known approaches, FSL and SPM5, on a real 3T brain scan with a very high bias field. This image was acquired with a surface coil, which provides a high sensitivity in a small region (here the occipital lobe) for functional imaging applications.

Figure 3. Tissue segmentation of a real 3T brain scan with a very high bias field (a): segmentations provided by SPM5 (b), FSL (c) and LOCUS (d).

Multimodal MRI segmentation of ischemic stroke lesions

Participant : Florence Forbes.

Joint work with: Benoit Scherrer, Michel Dojat, Yacine Kabir (Grenoble Institute of Neuroscience) and Christine Garbay (LIG).

The problem addressed is the automatic segmentation of stroke lesions from multiple MR sequences. Lesions enhance differently depending on the MR modality and there is an obvious gain in trying to account for the various sources of information in a single procedure. To this aim, we propose a multimodal Markov random field model which includes all MR modalities simultaneously. The results of the proposed multimodal method are compared with those obtained with a mono-dimensional segmentation applied to each MRI sequence separately. We also constructed an atlas of blood supply territories to help clinicians in the determination of stroke subtypes. Single-modality segmentations show, as expected, that some of the modalities are less informative, or not informative at all, in terms of lesion detection and therefore cannot be considered alone. In addition, the information carried by the modalities varies with the session. The multimodal approach has the advantage of intrinsically taking this into account and of providing satisfactory results in all cases. Further analysis is required. In particular, we propose to use the blood supply territories atlas to further assess the performance of the approach.
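
As a simplified illustration of what including all MR modalities simultaneously can mean, the sketch below stacks co-registered modalities into one feature vector per voxel, uses multivariate Gaussian class models and adds a Potts-like spatial penalty optimized by ICM. The class parameters, the ICM optimizer and the toy data are our own choices and do not reproduce the reported model.

```python
# Minimal multimodal sketch: joint classification of voxels described by M
# co-registered modalities, with a simple spatial regularisation via ICM.
import numpy as np
from scipy.stats import multivariate_normal

def icm_multimodal(stack, means, covs, beta=1.0, n_iter=5):
    """stack: (H, W, M) array of M co-registered modalities; returns labels (H, W)."""
    H, W, M = stack.shape
    K = len(means)
    # data term: log-likelihood of each voxel under each class model
    ll = np.stack([multivariate_normal.logpdf(stack.reshape(-1, M), means[k], covs[k])
                   for k in range(K)], axis=1).reshape(H, W, K)
    labels = ll.argmax(axis=2)
    for _ in range(n_iter):
        for i in range(H):
            for j in range(W):
                nbrs = [labels[a, b] for a, b in ((i-1, j), (i+1, j), (i, j-1), (i, j+1))
                        if 0 <= a < H and 0 <= b < W]
                # energy = -loglik - beta * (number of agreeing neighbours)
                energy = [-ll[i, j, k] - beta * sum(n == k for n in nbrs) for k in range(K)]
                labels[i, j] = int(np.argmin(energy))
    return labels

# Hypothetical 2-modality toy image with two classes
rng = np.random.default_rng(2)
img = np.where(rng.random((40, 40, 1)) < 0.5, 0.0, 1.0) + rng.normal(0, 0.3, (40, 40, 2))
seg = icm_multimodal(img, means=[np.zeros(2), np.ones(2)], covs=[0.09 * np.eye(2)] * 2)
```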

Joint Markov model for cooperative disparity estimation and object boundary extraction

Participant : Florence Forbes.

Joint work with: Ramya Narasimha, Elise Arnaud, Miles Hansard and Radu Horaud from team Perception, INRIA.

Accurate disparity and object boundary estimation is critical in several applications. In most approaches, these processes are considered as two separate tasks although they are clearly linked: disparity discontinuities (which are also 3D depth discontinuities) usually occur at object boundaries. However, most disparity estimation algorithms result in disparity discontinuities occurring at improper locations. By “improper” we mean locations which are not at the actual depth discontinuities.

In this work, we build on standard approaches to dense disparity estimation and propose an original approach which simultaneously corrects the disparity and finds the object boundaries. These two tasks are dealt with cooperatively, i.e. the presence of a disparity discontinuity aids the detection of object boundaries and vice versa. Our approach relies on two assumptions: (i) that the discontinuities in depth usually lie at object boundaries (which is true for natural images) and (ii) that the disparity discontinuities obtained from naive disparity estimation usually lie in the vicinity of the actual depth discontinuities. Thus, if we locate the object boundaries which are in the vicinity of the disparity discontinuities, using the gradient map of the image as evidence, we can correct the disparity values so that they fit closer to the object boundaries. The feedback of boundary estimation on disparity estimation is made through the use of an additional auxiliary field referred to as a displacement field. This field suggests the corrections that need to be applied at disparity discontinuities in order for them to align with object boundaries, so that disparity discontinuities can then be taken as representing the object boundaries. The displacement model allows us to estimate the directions in which the discontinuities have to be moved. This information is incorporated in the disparity model so that the disparity values at discontinuities are influenced only by the neighbors in the direction opposite to the displacement. The resulting procedure alternates between the estimation of the disparity and displacement fields in an iterative framework at various scales. When the observation is a pair of stereo images (right and left), we propose a joint probabilistic model of both the disparity and displacement fields. Considering the resulting conditional distributions, the formulation reduces to a Markov Random Field (MRF) model on disparities while it reduces to a Markov chain for the displacement variables. The disparity MRF is then optimized using variational mean field and the exact optimization of the Markov chain is carried out using the Viterbi algorithm.
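
The exact optimization of the displacement chain can be carried out with a generic Viterbi recursion, sketched below; the unary and pairwise costs are placeholders of our own and do not reproduce the actual conditional distributions of the joint model.

```python
# Generic Viterbi over a 1D chain of discrete displacement states.
import numpy as np

def viterbi(unary, pairwise):
    """unary: (T, S) costs; pairwise: (S, S) transition costs; returns the best path."""
    T, S = unary.shape
    cost = unary[0].copy()
    back = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        total = cost[:, None] + pairwise + unary[t][None, :]   # (S_prev, S_cur)
        back[t] = total.argmin(axis=0)
        cost = total.min(axis=0)
    path = [int(cost.argmin())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Hypothetical example: 3 displacement states (move left, stay, move right)
# along a chain of 6 boundary pixels, with a smoothness-favouring transition cost.
unary = np.array([[0.1, 0.9, 0.9], [0.8, 0.2, 0.9], [0.9, 0.1, 0.9],
                  [0.9, 0.3, 0.7], [0.9, 0.8, 0.1], [0.9, 0.9, 0.2]])
pairwise = 0.5 * (1 - np.eye(3))   # penalise changing the displacement direction
print(viterbi(unary, pairwise))
```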

The main originality is to define such a model through conditional distributions that explicitly model the relationships between disparity and object boundaries. As a result, we observe a significant gain in disparity and boundary estimation in experiments. The latter already show good results when only basic image information, such as gradient maps, is used. Other monocular cues could easily be incorporated.

As regards the probabilistic setting itself, we chose to first set aside the parameter estimation issue by fixing the parameters manually. However, a natural future direction of research is to investigate the possibility of incorporating this kind of model in an EM (Expectation Maximization) framework or one of its variants. Besides providing theoretically grounded parameter estimation, this would also have the advantage of providing a richer framework in which the iterative estimation of realizations of the displacement and disparity fields would be replaced by the iterative estimation of full distributions for these fields.

