Team MISTIS

Section: New Results

Mixture models

Taking into account the curse of dimensionality.

Participants : Charles Bouveyron, Stéphane Girard.

Joint work with Serge Iovleff (Université Lille 3) and Cordelia Schmid (LEAR, INRIA).

In the PhD work of Charles Bouveyron (co-advised by Cordelia Schmid from the INRIA team LEAR) [11], we propose new Gaussian models of high-dimensional data for classification purposes. We assume that the data live in several groups located in subspaces of lower dimension. Two different strategies arise from this assumption.

This modelling yields new supervised classification methods called HDDA, for High Dimensional Discriminant Analysis [12], [13]. Some versions of this method have been tested on the supervised classification of objects in images. The approach has also been adapted to the unsupervised classification framework; the related method is named HDDC, for High Dimensional Data Clustering [26]. In collaboration with Gilles Celeux and Charles Bouveyron, we are currently working on the automatic selection of the discrete parameters of the model. In the context of Juliette Blanchet's PhD work (also co-advised with C. Schmid), we also combined the method with our Markov-model-based approach to learning and classification and obtained significant improvements in applications such as texture recognition [24], [25], where the observations are high-dimensional.
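To make the idea concrete, the following is a minimal, illustrative sketch (not the team's implementation) of the simplest such subspace Gaussian model: each class is given a common variance a_k on its d leading eigen-directions and a single noise variance b_k on the remaining directions, and a point is assigned to the class minimising the resulting Gaussian cost. The function names and the choice of a common intrinsic dimension d across classes are assumptions of the sketch.

import numpy as np

def fit_hdda(X, y, d):
    """Fit one subspace Gaussian per class (common intrinsic dimension d < p)."""
    params = {}
    for k in np.unique(y):
        Xk = X[y == k]
        mu = Xk.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(Xk, rowvar=False))
        order = np.argsort(evals)[::-1]           # descending eigenvalues
        evals, evecs = evals[order], evecs[:, order]
        a = evals[:d].mean()                      # variance inside the class subspace
        b = evals[d:].mean()                      # averaged noise variance outside it
        params[k] = (mu, evecs[:, :d], a, b, len(Xk) / len(y))
    return params

def predict_hdda(X, params):
    """Assign each point to the class minimising the Gaussian cost K_k(x)."""
    p = X.shape[1]
    costs = []
    for mu, Q, a, b, prior in params.values():
        Xc = X - mu
        in_sub = ((Xc @ Q) ** 2).sum(axis=1)      # squared norm inside the subspace
        out_sub = (Xc ** 2).sum(axis=1) - in_sub  # squared norm of the residual
        d = Q.shape[1]
        costs.append(in_sub / a + out_sub / b
                     + d * np.log(a) + (p - d) * np.log(b) - 2 * np.log(prior))
    return np.array(list(params.keys()))[np.argmin(np.vstack(costs), axis=0)]

Constraining the covariance this way keeps the number of parameters linear in the data dimension, which is what makes the model tractable when descriptors are high-dimensional.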

We also wish to move beyond the Gaussian assumption. To this end, non-linear models and semi-parametric methods are necessary.

Supervised and unsupervised classification of objects in images

Participants : Charles Bouveyron, Stéphane Girard.

This is joint work with Cordelia Schmid (LEAR, INRIA Rhône-Alpes).

Supervised framework. In this framework, small scale-invariant regions are detected in a set of learning images and are then characterized by the SIFT local descriptor [61]. The object is recognized in a test image if a sufficient number of matches with the learning set is found. The recognition step relies on supervised classification methods; frequently used methods are Linear Discriminant Analysis (LDA) and, more recently, kernel methods (SVM) [59]. In our approach, the object is represented as a set of object parts. For example, for a motorbike, we consider three parts: wheels, seat and handlebars.
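As a hedged illustration of this supervised step, the sketch below classifies 128-dimensional descriptors into object parts with the LDA baseline and declares the object recognized when enough descriptors match a part with high probability. The synthetic data, the class layout (labels 0-2 for parts, 3 for background) and both thresholds are placeholders, not values from the report.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
train_desc = rng.normal(size=(400, 128))   # stand-ins for 128-D SIFT descriptors
train_part = rng.integers(0, 4, size=400)  # labels 0-2: object parts, 3: background

lda = LinearDiscriminantAnalysis().fit(train_desc, train_part)
test_desc = rng.normal(size=(50, 128))     # descriptors of a test image
probs = lda.predict_proba(test_desc)       # P(part | descriptor), columns 0..3

# The object is declared recognized when enough descriptors match an object
# part with high confidence; both thresholds here are illustrative.
matches = int((probs[:, :3].max(axis=1) > 0.9).sum())
recognized = matches >= 10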

The results obtained showed that the HDDA method described in Section 6.1.1 gives better recognition results than SVM and other generative methods. In particular, classification errors are significantly lower for HDDA than for SVM. In addition, HDDA is as fast as standard discriminant analysis (computation time ≃ 1 sec. for 1000 descriptors) and much faster than SVM (≃ 7 sec.).

Unsupervised framework. Our approach automatically learns discriminative object parts and then identifies the local descriptors belonging to the object. It first extracts a set of scale-invariant descriptors and then learns a set of discriminative object parts from a set of positive and negative images. Learning is "weakly supervised" since objects are not segmented in the positive images. Recognition matches the descriptors of an unknown image to the discriminative object parts.

Object localization is a challenging problem since it requires a very precise classification of descriptors. For this, it is necessary to identify the descriptors of an image which have a high probability of belonging to the object. The adaptation of HDDA to the unsupervised framework, called HDDC, makes it possible to compute, for each interest point, the posterior probability that it belongs to the object. Finally, the object can be located in a test image by considering the points with the highest probabilities. In practice, 5 to 10 percent of all detected interest points are enough to locate the object reliably. See the illustration in Figure 2; a sketch of this selection rule follows the figure.

Figure 2. Object localization: from left to right, original image, all detected points, final localization.
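A minimal sketch of the selection rule, assuming per-point posterior probabilities (e.g. HDDC responsibilities) are already available; the bounding-box localization and the keep_fraction default are illustrative choices, not the authors' exact procedure.

import numpy as np

def locate(points_xy, posteriors, keep_fraction=0.10):
    """Keep the most object-like interest points and derive a rough location."""
    n_keep = max(1, int(keep_fraction * len(posteriors)))
    best = np.argsort(posteriors)[::-1][:n_keep]   # highest posteriors first
    kept = points_xy[best]
    bbox = (kept.min(axis=0), kept.max(axis=0))    # crude bounding box
    return kept, bbox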

We also consider the application to image classification [42]. This step decides whether the object is present in the image, i.e. it classifies the image as positive (containing the object) or negative (not containing the object). We use our decision rule to assign a posterior probability to each descriptor and each cluster, and then decide, based on these probabilities, whether a test image contains the object. Previous approaches [45] used a simple empirical technique to classify a test image. We introduce a probabilistic technique which uses the posterior probabilities: for a test image I, we obtain a score S ∈ [0, 1] that I contains the object, and decide that the image contains the object if S is larger than a given threshold. This probabilistic decision has the advantage of not introducing an additional parameter and of using the posterior probability to reject (assign a low weight to) dubious points [27].
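A small sketch of such a probabilistic decision follows; aggregating the per-descriptor posteriors by their mean and the 0.5 default threshold are assumptions of the sketch, since the report does not spell out the exact aggregation.

import numpy as np

def image_score(posteriors):
    """Score S in [0, 1] that image I contains the object."""
    return float(np.mean(posteriors))    # dubious points get low weight in the mean

def contains_object(posteriors, threshold=0.5):
    return image_score(posteriors) > threshold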

