Section: New Results
Mixture models
Parameter estimation in the heterogeneity linear mixed model
Participant : Marie-José Martinez.
Joint work with: Emma Holian (National University of Ireland, Galway)
In studies where subjects contribute more than one observation, such as longitudinal studies, linear mixed models have become one of the most widely used techniques for taking into account the correlation between these observations. By introducing random effects, mixed models capture both the within-subject correlation and the variability of the response among different subjects. However, such models rest on a normality assumption for the random effects and thus reflect a prior belief of homogeneity among all subjects. To relax this strong assumption, Verbeke and Lesaffre (1996) extended the classical linear mixed model by allowing the random effects to be sampled from a finite mixture of normal distributions with a common covariance matrix. This extension arises naturally from the prior belief that there is unobserved heterogeneity in the random-effects population; the model is therefore called the heterogeneity linear mixed model. Note that this model does more than relax the assumption on the random-effects distribution: each component of the mixture can be viewed as a cluster containing a proportion of the total population, so the model is also suitable for classification purposes.
Concerning parameter estimation in the heterogeneity model, the use of the EM algorithm, which takes the incomplete structure of the data into account, has been considered in the literature. Unfortunately, the M-step of the estimation process is not available in analytic form and a numerical maximisation procedure such as Newton-Raphson is needed. Because deriving such a procedure is a non-trivial task, Komarek et al. (2002) proposed an approximate optimization. This procedure, however, proved to be very slow and limited to small samples, since it requires the manipulation of very large matrices and prohibitive computation.
To overcome this problem, we proposed in [28] , [52] an alternative approach which consists of directly fitting an equivalent mixture of linear mixed models. Contrary to the heterogeneity model, the M-step of the EM algorithm is analytically tractable in this case. From the resulting parameter estimates, the parameter estimates of the heterogeneity model are then easily recovered.
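To make the generative structure of the heterogeneity model concrete, the following minimal sketch simulates one subject's data: the random effect is drawn from a finite mixture of normals with component-specific means and a common covariance, then combined with fixed effects and residual noise. All parameter values and variable names here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters for a random-intercept heterogeneity model.
K = 2                            # number of mixture components (clusters)
pis = np.array([0.4, 0.6])       # mixing proportions
mus = np.array([[-2.0], [2.0]])  # component means of the random intercept
D = np.array([[0.25]])           # common random-effect covariance matrix
beta = np.array([1.0, 0.5])      # fixed effects (intercept, slope)
sigma2 = 0.1                     # residual variance

def simulate_subject(n_obs):
    """Simulate one subject: draw a cluster, a random effect, then the response."""
    t = np.linspace(0.0, 1.0, n_obs)
    X = np.column_stack([np.ones(n_obs), t])   # fixed-effect design matrix
    Z = np.ones((n_obs, 1))                    # random-intercept design matrix
    k = rng.choice(K, p=pis)                   # latent cluster label
    b = rng.multivariate_normal(mus[k], D)     # random effect from component k
    y = X @ beta + Z @ b + rng.normal(0.0, np.sqrt(sigma2), n_obs)
    return y, k

y, k = simulate_subject(5)
```

Fitting proceeds by treating both the cluster labels and the random effects as missing data, which is exactly the incomplete-data structure that EM exploits.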
Taking into account the curse of dimensionality
Participants : Stéphane Girard, Alessandro Chiancone, Seydou-Nourou Sylla.
Joint work with: C. Bouveyron (Univ. Paris 1), M. Fauvel (ENSAT Toulouse) and J. Chanussot (Gipsa-lab and Grenoble-INP)
In the PhD work of Charles Bouveyron (co-advised by Cordelia Schmid from the Inria LEAR team) [64] , we proposed new Gaussian models of high-dimensional data for classification purposes. We assume that the data live in several groups located in subspaces of lower dimension. Two different strategies arise:
- the introduction in the model of a dimension reduction constraint for each group;
- the use of parsimonious models obtained by requiring different groups to share the same values of some parameters.
This modelling yields a new supervised classification method called High Dimensional Discriminant Analysis (HDDA) [4] . Some versions of this method have been tested on the supervised classification of objects in images. The approach has also been adapted to the unsupervised classification framework, yielding a method named High Dimensional Data Clustering (HDDC) [3] . Our recent work consists of adding a kernel to these methods in order to deal with non-linear data classification.
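The subspace idea behind HDDA can be sketched as follows: each class is modelled by a Gaussian whose covariance concentrates most of its variance in a low-dimensional subspace estimated by a per-class eigendecomposition, with the remaining directions pooled into a single noise variance. This is a simplified illustration under assumed names and a reduced parameterisation, not the exact family of models in [4].

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_class(X, d):
    """Fit a constrained Gaussian: d signal directions plus pooled noise."""
    mu = X.mean(axis=0)
    vals, vecs = np.linalg.eigh(np.cov(X.T))
    order = np.argsort(vals)[::-1]
    a = vals[order[:d]]            # signal variances (within the subspace)
    b = vals[order[d:]].mean()     # pooled noise variance (outside it)
    Q = vecs[:, order[:d]]         # orthonormal subspace basis
    return mu, a, b, Q

def log_density(x, params):
    """Unnormalised Gaussian log-density under Q diag(a) Q^T + b (I - Q Q^T)."""
    mu, a, b, Q = params
    u = x - mu
    proj = Q.T @ u
    quad = np.sum(proj**2 / a) + (u @ u - proj @ proj) / b
    logdet = np.sum(np.log(a)) + (len(x) - len(a)) * np.log(b)
    return -0.5 * (quad + logdet)

# Toy data: two classes living near different 1-D subspaces of R^5.
X0 = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(200, 5))
X1 = 3.0 + rng.normal(size=(200, 1)) @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(200, 5))
p0, p1 = fit_class(X0, d=1), fit_class(X1, d=1)
pred = 0 if log_density(X0[0], p0) > log_density(X0[0], p1) else 1
```

Because only d eigenvalues, a pooled noise term, and a d-dimensional basis are estimated per class, the number of free covariance parameters grows far more slowly with the ambient dimension than a full covariance matrix would.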
Mixture modelling using skewed multivariate heavy-tailed distributions with variable amounts of tail weight
Participants : Florence Forbes, Darren Wraith.
Clustering concerns the assignment of each of $N$, possibly multidimensional, observations ${y}_{1},...,{y}_{N}$ to one of $K$ groups. A popular way to approach this task is via a parametric finite mixture model. While the vast majority of the work on such mixtures has been based on Gaussian mixture models, in many applications the tails of normal distributions are shorter than appropriate, or parameter estimates are affected by atypical observations (outliers). The family of location and scale mixtures of Gaussians can generate a number of flexible distributional forms. It nests as particular cases several important asymmetric distributions, such as the Generalised Hyperbolic distribution, which in turn nests many other well-known distributions such as the Normal Inverse Gaussian (NIG), whose practical relevance has been widely documented in the literature. In a multivariate setting, we propose to extend the standard location and scale mixture concept into a so-called multiple scaled framework, which has the advantage of allowing different tail and skewness behaviour in each dimension of the variable space, with arbitrary correlation between dimensions. The approach builds upon, and develops further, previous work on scale mixtures of Gaussians [25] . Estimation of the parameters is carried out via an EM algorithm, with a particular focus on NIG distributions. Inference is then extended to mixtures of such multiple scaled distributions for application to clustering. Assessments on simulated and real data confirm the gain in degrees of freedom and the flexibility in modelling data of varying tail behaviour and directional shape.
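The multiple scaled construction can be illustrated by contrast with a standard scale mixture: instead of a single scalar mixing weight shared by all coordinates, each dimension gets its own weight, giving dimension-specific tail behaviour. The sketch below uses Gamma weights, which yields a multivariate t-like law with per-dimension degrees of freedom (the paper's focus is on NIG, obtained with inverse-Gaussian weights instead); parameter names and values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def multiple_scaled_sample(n, mu, A, nu):
    """Draw n samples: y = mu + A diag(1/sqrt(w)) z, with per-dimension
    weights w_j ~ Gamma(nu_j / 2, rate nu_j / 2) and z standard normal."""
    d = len(mu)
    z = rng.standard_normal((n, d))
    w = rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=(n, d))
    return mu + (z / np.sqrt(w)) @ A.T

mu = np.zeros(2)
A = np.array([[1.0, 0.3],          # mixing matrix: introduces correlation
              [0.0, 1.0]])
nu = np.array([3.0, 50.0])         # heavy tail in dim 0, near-Gaussian in dim 1
Y = multiple_scaled_sample(20000, mu, A, nu)
```

With a single shared weight the tails would be equally heavy in every direction; here the first coordinate inherits the heavy tail of its small nu while the second stays close to Gaussian, which is the extra flexibility the multiple scaled framework buys.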
High-Dimensional Regression with Gaussian Mixtures and Partially-Latent Response Variables
Participant : Florence Forbes.
Joint work with: Antoine Deleforge and Radu Horaud from the Inria Perception team.
In this work we address the problem of approximating high-dimensional data with a low-dimensional representation. We make the following contributions. We propose a tractable inverse regression method which exchanges the roles of input and response, so that the low-dimensional variable becomes the regressor. We introduce a mixture of locally-linear probabilistic mapping models that starts by estimating the parameters of the inverse regression, and then infers closed-form solutions for the forward parameters of the high-dimensional regression problem of interest. Moreover, we introduce a partially-latent paradigm, in which the vector-valued response variable is composed of both observed and latent entries, and which can therefore deal with data contaminated by experimental artifacts that cannot be explained by noise models. The proposed probabilistic formulation can be viewed as a latent-variable augmentation of regression. We devise expectation-maximization (EM) procedures based on a data-augmentation strategy which facilitates the maximum-likelihood search over the model parameters. We propose two augmentation schemes and describe in detail the associated EM inference procedures, which may well be viewed as generalizations of a number of EM regression, dimension reduction, and factor analysis algorithms. The proposed framework is validated with both synthetic and real data. We provide experimental evidence that our method outperforms several existing regression techniques.
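The "estimate the inverse, deduce the forward" step can be sketched with a single affine component (the actual model uses a mixture of locally-linear maps and partially-latent responses; this reduced example, with assumed priors and noise levels, only illustrates why fitting in the low-to-high direction is tractable and still yields a high-to-low predictor).

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic data: low-dimensional x mapped to high-dimensional y.
L, D, n = 2, 10, 500
A = rng.normal(size=(D, L))               # true low-to-high affine map
x = rng.normal(size=(n, L))               # low-dimensional regressor
y = x @ A.T + 0.01 * rng.normal(size=(n, D))

# Inverse regression: fit y = A x + noise. The regressor is
# low-dimensional, so this least-squares fit is cheap and well-posed.
A_hat, *_ = np.linalg.lstsq(x, y, rcond=None)
A_hat = A_hat.T                           # estimated D x L map

# Forward parameters by Gaussian conditioning of x given y, assuming
# a standard-normal prior on x and isotropic noise sigma^2 on y.
sigma2 = 1e-4
S = np.linalg.inv(np.eye(L) + A_hat.T @ A_hat / sigma2)  # posterior covariance

def predict_x(y_new):
    """Posterior mean of x given y_new: the closed-form forward predictor."""
    return (S @ A_hat.T @ y_new) / sigma2

x_rec = predict_x(y[0])
```

The forward predictor never required a regression with a high-dimensional input; it falls out of the inverse fit by Bayes inversion, which is the core of the tractability argument.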
Acoustic space learning via variational EM for Sound-Source Separation and Localization
Participant : Florence Forbes.
Joint work with: Antoine Deleforge and Radu Horaud from the Inria Perception team.
In this paper we address the problems of modeling the acoustic space generated by a full-spectrum sound source and of using the learned model for the localization and separation of multiple sources that simultaneously emit sparse-spectrum sounds. We lay theoretical and methodological grounds in order to introduce the binaural manifold paradigm. We perform an in-depth study of the latent low-dimensional structure of the high-dimensional interaural spectral data, based on a corpus recorded with a human-like audio-motor robot head. A non-linear dimensionality reduction technique is used to show that these data lie on a two-dimensional (2D) smooth manifold parameterized by the motor states of the listener or, equivalently, the sound-source directions. We propose a probabilistic piecewise affine mapping model (PPAM) specifically designed to deal with high-dimensional data exhibiting an intrinsic piecewise-linear structure. We derive a closed-form expectation-maximization (EM) procedure for estimating the model parameters, followed by Bayes inversion for obtaining the full posterior density function of a sound-source direction. We extend this solution to deal with missing data and redundancy in real-world spectrograms, and hence with 2D localization of natural sound sources such as speech. We further generalize the model to the challenging case of multiple sound sources and propose a variational EM framework. The associated algorithm, referred to as variational EM for source separation and localization (VESSL), yields a Bayesian estimation of the 2D locations and time-frequency masks of all the sources. Comparisons of the proposed approach with several existing methods reveal that the combination of acoustic-space learning with Bayesian inference enables our method to outperform state-of-the-art methods.
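The generative side of a piecewise affine mapping can be sketched as follows: the input space is softly partitioned into K regions by Gaussian gates, and each region applies its own affine map plus noise. All names, gate placements, and dimensions below are illustrative assumptions, not the exact PPAM parameterisation.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical setup: 1-D input (e.g. a source direction), 2-D output.
K, L, D = 3, 1, 2
centers = np.array([[-2.0], [0.0], [2.0]])  # gate means over the input space
A = rng.normal(size=(K, D, L))              # per-region linear maps
b = rng.normal(size=(K, D))                 # per-region offsets

def ppam_sample(x):
    """Pick a region with probability given by normalised Gaussian gates,
    then apply that region's affine map with additive noise."""
    logits = -0.5 * np.sum((x - centers) ** 2, axis=1)
    p = np.exp(logits - logits.max())
    p /= p.sum()
    k = rng.choice(K, p=p)
    return A[k] @ x + b[k] + 0.05 * rng.standard_normal(D)

y = ppam_sample(np.array([-2.1]))
```

Inference reverses this direction: given an observed high-dimensional y, Bayes inversion combines the gate posteriors with the per-region affine maps to produce a full posterior over the low-dimensional input, which is what localization requires.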