Inria / Raweb 2003
Project: METISS

Section: Scientific Foundations

Keywords: probability density function, Gaussian model, Gaussian mixture model, hidden Markov model, maximum likelihood, maximum a posteriori, EM algorithm, Viterbi algorithm, beam search, classification, hypothesis testing, acoustic parameterisation.

Probabilistic approach

For more than a decade, probabilistic approaches have been used successfully for various tasks in pattern recognition, and more particularly in speech recognition, whether for the recognition of isolated words, the transcription of continuous speech, speaker recognition tasks or language identification. Probabilistic models indeed make it possible to account effectively for the various factors of variability occurring in the signal, while lending themselves easily to the definition of metrics between an observation and the model of a sound class (phoneme, word, speaker, etc.).

Probabilistic formalism and modeling

The probabilistic approach to the representation of an (audio) class X relies on the assumption that this class can be described by a probability density function (PDF) P(·|X), which associates a probability P(Y|X) with any observation Y.

In the field of speech processing, the class X can represent a phoneme, a sequence of phonemes, a word from a vocabulary, a particular speaker, a type of speaker, a language, and so on. Class X can also correspond to other types of sound objects, for example a family of sounds (speech, music, applause), a sound event (a particular noise, a jingle), or a sound segment with stationary statistics (on either side of a rupture), etc.

In the case of audio signals, the observations Y are of an acoustical nature, for example vectors resulting from the analysis of the short-term spectrum of the signal (filter-bank coefficients, cepstral coefficients, time-frequency principal components, etc.) or any other representation accounting for the information that is required for an efficient separation of the various audio classes considered.
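As an illustrative sketch of such a short-term spectral representation, the fragment below computes the log-magnitude spectrum of one windowed signal frame with a naive DFT. It is a toy example only (real systems use FFTs, mel filter banks or cepstral transforms); the function name and parameter values are ours, not part of the METISS tools.

```python
import math

def frame_log_spectrum(frame, n_bins=8):
    """Log-magnitude spectrum of one frame via a naive DFT (illustrative only)."""
    N = len(frame)
    # Hamming window to reduce spectral leakage before the transform
    windowed = [x * (0.54 - 0.46 * math.cos(2 * math.pi * i / (N - 1)))
                for i, x in enumerate(frame)]
    feats = []
    for k in range(n_bins):
        re = sum(x * math.cos(2 * math.pi * k * n / N) for n, x in enumerate(windowed))
        im = sum(-x * math.sin(2 * math.pi * k * n / N) for n, x in enumerate(windowed))
        feats.append(math.log(re * re + im * im + 1e-10))
    return feats

# A pure tone at DFT bin 3: the feature vector should peak there
frame = [math.sin(2 * math.pi * 3 * n / 32) for n in range(32)]
feats = frame_log_spectrum(frame)
```

Such frame-level vectors, stacked over time, form the observation sequences Y on which the probabilistic models below operate.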

In practice, the PDF P is not accessible to measurement. It is therefore necessary to resort to an approximation P̂ of this function, which is usually referred to as the likelihood function. This function can be expressed in the form of a parametric model, and the models most used in the field of speech (and audio) processing are the Gaussian Model (GM), the Gaussian Mixture Model (GMM) and the Hidden Markov Model (HMM).

In the rest of this text, we will denote by Λ the set of parameters which define the model under consideration: a mean and a variance for a GM; p means, variances and weights for a GMM with p Gaussians; q states, q² transition probabilities and p × q means, variances and weights for an HMM with q states whose emission PDFs are GMMs with p Gaussians. Λ_X will denote the parameter vector for class X, and in this case the following notation will be used:

P̂(Y|X) = P(Y|Λ_X)

Choosing a particular family of models is based on a set of considerations: the general structure of the data, knowledge about the audio class making it possible to size the model (number of Gaussians p, number of states q, etc.), the cost of computing the likelihood function, the number of degrees of freedom of the model compared to the volume of training data available, etc.
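To make the likelihood function P̂(Y|Λ_X) concrete, the sketch below evaluates the log-likelihood of a single observation under a univariate GMM, using the log-sum-exp trick for numerical stability. The parameter values are arbitrary illustrations, not drawn from any actual system.

```python
import math

def gaussian_logpdf(y, mean, var):
    """Log-density of a univariate Gaussian N(mean, var) at point y."""
    return -0.5 * (math.log(2 * math.pi * var) + (y - mean) ** 2 / var)

def gmm_loglik(y, weights, means, variances):
    """Log-likelihood of observation y under a GMM: log sum_k w_k N(y; mu_k, v_k).
    Uses log-sum-exp to avoid underflow when component densities are tiny."""
    logs = [math.log(w) + gaussian_logpdf(y, m, v)
            for w, m, v in zip(weights, means, variances)]
    mx = max(logs)
    return mx + math.log(sum(math.exp(l - mx) for l in logs))

# Two equally weighted components centred at -1 and +1, unit variance
ll = gmm_loglik(0.0, [0.5, 0.5], [-1.0, 1.0], [1.0, 1.0])
```

Multivariate GMMs with diagonal covariances reduce to sums of such one-dimensional terms, which is one reason they dominate in speech processing.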

Statistical estimation

The determination of the model parameters for a given class X is generally based on a statistical estimation step consisting in determining the optimal value of the parameter vector Λ, i.e. the parameters that maximize a modeling criterion over a training set {Y}_tr of observations corresponding to class X.

In some cases, the Maximum Likelihood (ML) criterion can be used:

Λ*_ML = arg max_Λ P({Y}_tr | Λ)
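For a single Gaussian, this criterion has the well-known closed-form solution: the sample mean and the (biased) sample variance. A minimal sketch, with illustrative data of our own choosing:

```python
def gaussian_ml(samples):
    """Closed-form ML estimate of a univariate Gaussian:
    sample mean and biased (divide-by-n) sample variance."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((y - mean) ** 2 for y in samples) / n
    return mean, var

mean, var = gaussian_ml([1.0, 2.0, 3.0])
```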

This approach is generally satisfactory when the number of parameters to be estimated is small compared with the number of training observations. However, in many application contexts, other estimation criteria are necessary to make the learning process more robust with small quantities of training data. Let us mention in particular the Maximum a Posteriori (MAP) criterion:

Λ*_MAP = arg max_Λ P({Y}_tr | Λ) · p(Λ)

which relies on a prior probability p(Λ) for the vector Λ, expressing possible knowledge about the distribution of the estimated parameters for the class considered. Discriminative training is another alternative to these two criteria, though definitely more complex to implement than the ML and MAP criteria.

Besides the fact that the ML criterion is only a particular case of the MAP criterion (under the assumption of a uniform prior probability for Λ), the MAP criterion proves experimentally better suited to small volumes of training data and yields models with better generalization capabilities (as measured, for example, by improved classification and recognition performance on new data). Moreover, the same scheme can be used in the framework of incremental adaptation, i.e. for refining the parameters of a model using new data observed, for instance, in the course of use of the recognition system. In this case, the value of p(Λ) is given by the model before adaptation, and the MAP estimate uses the new data to update the model parameters.
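One common instance of such a MAP update (standard in GMM-based speaker recognition, though the document does not commit to this exact form) adapts a Gaussian mean with a conjugate prior, where a relevance factor tau makes the prior count as tau virtual observations:

```python
def map_adapt_mean(prior_mean, new_samples, tau=10.0):
    """MAP update of a Gaussian mean under a conjugate prior centred at
    prior_mean. tau (a relevance factor, value chosen for illustration)
    weights the prior like tau virtual training samples: with few new
    samples the estimate stays near the prior; with many it follows the data."""
    n = len(new_samples)
    sample_mean = sum(new_samples) / n
    return (tau * prior_mean + n * sample_mean) / (tau + n)

# Prior mean 0.0, ten new observations at 1.0, tau = 10: the update lands halfway
m = map_adapt_mean(0.0, [1.0] * 10, tau=10.0)
```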

Whichever criterion is considered (ML or MAP), the estimate of the parameters Λ is obtained with the EM (Expectation-Maximization) algorithm, which provides a solution corresponding to a local maximum of the training criterion.
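A minimal EM sketch for a two-component univariate GMM under the ML criterion, alternating posterior responsibilities (E-step) and parameter re-estimation (M-step); initialisation and data are illustrative choices of ours:

```python
import math

def em_gmm_1d(data, n_iter=50):
    """Minimal EM for a 2-component univariate GMM (ML criterion, toy sketch)."""
    w = [0.5, 0.5]
    mu = [min(data), max(data)]   # crude but effective initialisation
    var = [1.0, 1.0]
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for y in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-0.5 * (y - mu[k]) ** 2 / var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * y for r, y in zip(resp, data)) / nk
            var[k] = sum(r[k] * (y - mu[k]) ** 2
                         for r, y in zip(resp, data)) / nk + 1e-6
    return w, mu, var

# Two clear clusters around -2 and +2: EM should recover both centres
data = [-2.1, -1.9, -2.0, 1.9, 2.0, 2.1]
w, mu, var = em_gmm_1d(data)
```

Each iteration is guaranteed not to decrease the training likelihood, which is exactly the local-maximum behaviour mentioned above.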

Likelihood computation and state sequence decoding

During the recognition phase, it is necessary to evaluate the likelihood function for the various class hypotheses X_k. When the complexity of the model is high, i.e. when the number of classes is large and the observations to be recognized are multidimensional, it is generally necessary to implement fast algorithms to approximate the likelihood function.

In addition, when the class models are HMMs, evaluating the likelihood requires a decoding step to find the most probable sequence of hidden states. This is done with the Viterbi algorithm, a traditional tool in the field of speech recognition.
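The Viterbi recursion itself is short enough to sketch in full. The version below works in the log domain (the usual practice, to avoid underflow) over a toy two-state HMM whose probabilities we invent for illustration:

```python
import math

def viterbi(obs_loglik, log_trans, log_init):
    """Most probable hidden state sequence, in the log domain.
    obs_loglik[t][s] = log P(observation t | state s),
    log_trans[r][s]  = log P(state s | previous state r)."""
    T, S = len(obs_loglik), len(log_init)
    delta = [log_init[s] + obs_loglik[0][s] for s in range(S)]
    back = []
    for t in range(1, T):
        new_delta, ptr = [], []
        for s in range(S):
            best = max(range(S), key=lambda r: delta[r] + log_trans[r][s])
            ptr.append(best)
            new_delta.append(delta[best] + log_trans[best][s] + obs_loglik[t][s])
        delta = new_delta
        back.append(ptr)
    # Backtrack from the best final state
    path = [max(range(S), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

log = math.log
# Observations favour state 0 for three frames, then state 1 for three frames
obs = [[log(0.9), log(0.1)]] * 3 + [[log(0.1), log(0.9)]] * 3
trans = [[log(0.8), log(0.2)], [log(0.2), log(0.8)]]
path = viterbi(obs, trans, [log(0.5), log(0.5)])
```

The self-transition probabilities (0.8 here) act as a smoothing penalty against switching states, which is why a single noisy frame does not flip the decoded sequence.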

If, moreover, the observations consist of segments belonging to different classes, chained by transition probabilities between successive classes and without a priori knowledge of the boundaries between segments (which is for instance the case in a continuous speech utterance), it is necessary to resort to beam-search techniques to decode a (quasi-)optimal sequence of states over the whole utterance.

Bayesian decision

When the task consists in classifying an observation into one of several closed-set classes, the decision usually relies on the maximum a posteriori rule:

X̂ = arg max_{X_k} p(X_k) · P̂(Y | X_k)

where {X_k}_{1 ≤ k ≤ K} denotes the set of possible classes.
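This rule is a one-liner once priors and likelihood functions are available. In the sketch below the class names, priors and Gaussian likelihoods are hypothetical, chosen only to exercise the arg-max:

```python
import math

def map_classify(y, classes):
    """MAP rule: pick the class maximising log prior + log likelihood.
    classes maps a name to a (prior, log_likelihood_function) pair."""
    return max(classes,
               key=lambda c: math.log(classes[c][0]) + classes[c][1](y))

def gauss_loglik(mean, var):
    """Return a univariate Gaussian log-likelihood function."""
    return lambda y: -0.5 * (math.log(2 * math.pi * var) + (y - mean) ** 2 / var)

# Two equiprobable classes centred at 0 and 3; the observation 2.0 lies closer to B
classes = {"A": (0.5, gauss_loglik(0.0, 1.0)),
           "B": (0.5, gauss_loglik(3.0, 1.0))}
label = map_classify(2.0, classes)
```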

In other contexts (for instance speaker verification, word spotting or sound class detection), the classification problem can be formulated as a binary hypothesis test, consisting in deciding whether the tested observation is more likely to pertain to the class X (denoted as hypothesis X) or not to pertain to it (i.e. to pertain to the ``non-class'', denoted as hypothesis X̄). In this case, the decision consists in acceptance or rejection, respectively denoted X̂ and X̄̂ in the rest of this document.

This latter problem can be solved theoretically within the framework of Bayesian decision theory, by computing the ratio S_X of the PDFs for the class and non-class distributions and comparing this ratio to a decision threshold:

S_X(Y) = P(Y|X) / P(Y|X̄)    ≥ R  ⇒  hypothesis X̂
                             < R  ⇒  hypothesis X̄̂

where the optimal threshold R does not depend on the distribution of class X, but only on the operating conditions of the system, via the ratio of the prior probabilities of the two hypotheses and the ratio of the costs of false acceptance and false rejection.
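Concretely, the standard Bayes result gives this threshold in closed form from the target prior and the two error costs (the parameter values below are illustrative, not operating points from any evaluation):

```python
def bayes_threshold(p_target, cost_fa, cost_miss):
    """Optimal likelihood-ratio threshold R from the Bayes decision rule:
    accept hypothesis X when P(Y|X)/P(Y|non-class) >= R, where
    R = (cost of false acceptance * prior of non-class)
        / (cost of false rejection * prior of target class)."""
    return (cost_fa * (1.0 - p_target)) / (cost_miss * p_target)

# Rare target (prior 0.1) but costly misses: the threshold drops below 1
R = bayes_threshold(p_target=0.1, cost_fa=1.0, cost_miss=10.0)
```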

In practice, however, the Bayesian theory cannot be applied straightforwardly, because the quantities provided by the probabilistic models are not the true PDFs, but only likelihood functions which approximate the true PDFs more or less accurately, depending on the quality of the model of the class.

The optimal decision rule must then be rewritten:

Ŝ_X(Y) = P̂(Y|X) / P̂(Y|X̄)    ≥ Θ_X(R)  ⇒  hypothesis X̂
                               < Θ_X(R)  ⇒  hypothesis X̄̂

and the threshold Θ_X(R) must be adjusted for class X by modeling the behaviour of the ratio Ŝ_X on external (development) data.

The issue of estimating the optimal threshold Θ_X(R) in the case of the likelihood ratio test can be formulated equivalently as finding a normalisation of the likelihood ratio which brings the optimal decision threshold back to its theoretical value. Several such transformations are now well known within the framework of speaker verification, in particular the Z-norm and T-norm methods.
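The Z-norm, for instance, standardises each raw score using the mean and standard deviation of scores obtained from impostor (non-class) trials, so that a single global threshold becomes meaningful across classes. A minimal sketch with made-up impostor scores:

```python
def z_norm(score, impostor_scores):
    """Z-norm: standardise a raw log-likelihood-ratio score with the mean and
    standard deviation of scores from impostor (non-class) trials, estimated
    on external development data."""
    n = len(impostor_scores)
    mean = sum(impostor_scores) / n
    var = sum((s - mean) ** 2 for s in impostor_scores) / n
    return (score - mean) / (var ** 0.5)

# Impostor scores centred on 0; a genuine-trial score of 2.0 becomes ~2.83 sigma
s = z_norm(2.0, [0.0, 1.0, -1.0, 0.5, -0.5])
```

The T-norm is analogous but estimates the normalisation statistics from a cohort of impostor models scored on the test utterance itself, rather than from impostor utterances scored on the class model.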