Team MISTIS


Section: Scientific Foundations

Keywords: mixture of distributions, EM algorithm, missing data, conditional independence, statistical pattern recognition, clustering, unsupervised learning, partially supervised learning.

Mixture models

Participants: Juliette Blanchet, Jean-Baptiste Durand, Florence Forbes, Gersende Fort, Stéphane Girard, Matthieu Vignes.

As a first approach, we consider statistical parametric models with parameter $\theta$, possibly multi-dimensional, usually unknown and to be estimated. We consider cases where the data naturally divide into observed data $y = (y_1, \ldots, y_n)$ and unobserved or missing data $z = (z_1, \ldots, z_n)$. The missing datum $z_i$ represents, for instance, the membership of observation $y_i$ in one of a set of $K$ alternative categories. The distribution of an observed $y_i$ can then be written as a finite mixture of distributions:

$$ P(y_i \mid \theta) \;=\; \sum_{k=1}^{K} P(z_i = k \mid \theta)\, P(y_i \mid z_i = k, \theta). \qquad (1) $$

These models are interesting in that they may point to a hidden variable responsible for most of the observed variability, such that the observed variables are conditionally independent given this variable. Their estimation is often difficult due to the missing data. The Expectation-Maximization (EM) algorithm is a general and now standard approach to maximization of the likelihood in missing data problems. It provides parameter estimates and also values for the missing data.
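To make the EM mechanics concrete, here is a minimal sketch for a one-dimensional Gaussian mixture with $K$ components, written in Python with NumPy. It is only an illustration of the model in Equation (1), not code from the team's software; the function name em_gaussian_mixture and all variable names are ours, and the Gaussian form of the component densities is an assumption for the example.

    import numpy as np

    def em_gaussian_mixture(y, K, n_iter=100, seed=0):
        """Fit a K-component 1-D Gaussian mixture to observed data y via EM (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        n = len(y)
        # Initialise theta = (weights pi_k, means mu_k, variances var_k).
        pi = np.full(K, 1.0 / K)
        mu = rng.choice(y, size=K, replace=False)
        var = np.full(K, np.var(y))
        for _ in range(n_iter):
            # E step: responsibilities t_ik = P(z_i = k | y_i, theta),
            # i.e. estimated values for the missing data z_i.
            dens = np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
            t = pi * dens
            t /= t.sum(axis=1, keepdims=True)
            # M step: update theta to maximise the expected
            # complete-data log-likelihood.
            nk = t.sum(axis=0)
            pi = nk / n
            mu = (t * y[:, None]).sum(axis=0) / nk
            var = (t * (y[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var, t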

Mixture models correspond to independent $z_i$'s. They are increasingly used in statistical pattern recognition. They allow a formal (model-based) approach to (unsupervised) clustering, as the usage sketch below illustrates.
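Since the $z_i$'s are independent here, the fitted responsibilities yield a model-based clustering directly: each observation is assigned to its most probable component. A hypothetical usage of the sketch above (the simulated data and the two-component setting are ours):

    # Simulate two well-separated Gaussian groups, then cluster them.
    rng = np.random.default_rng(1)
    y = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(4.0, 1.0, 200)])
    pi, mu, var, t = em_gaussian_mixture(y, K=2)
    labels = t.argmax(axis=1)  # maximum a posteriori cluster assignment for each y_i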

