Section: Scientific Foundations
Statistical modeling and machine learning for image analysis
We are interested in learning and statistics mainly as technologies for attacking difficult vision problems, so we take an eclectic approach, using a broad spectrum of techniques ranging from classical statistical generative and discriminative models to modern kernel-, margin- and boosting-based machines. Below we enumerate approaches that address some of the problems encountered in this context.

Parameter-rich models and limited training data are the norm in vision, so the risk of overfitting needs to be estimated (by cross-validation, information criteria or capacity bounds) and controlled (by regularization and by model and feature selection).
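
For instance, a regularization strength can be selected by k-fold cross-validation. The following sketch (synthetic data; `ridge_fit`, `cv_mse` and the candidate list `lams` are our own illustrative names, not a prescribed implementation) picks the ridge penalty with the lowest estimated test error:

```python
import numpy as np

def ridge_fit(X, y, lam):
    # Closed-form ridge regression: w = (X'X + lam I)^{-1} X'y.
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def cv_mse(X, y, lam, k=5):
    # k-fold cross-validation estimate of the test mean squared error.
    folds = np.array_split(np.arange(len(y)), k)
    errs = []
    for f in folds:
        train = np.setdiff1d(np.arange(len(y)), f)
        w = ridge_fit(X[train], y[train], lam)
        errs.append(np.mean((X[f] @ w - y[f]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 10))
w_true = np.zeros(10)
w_true[:2] = 1.0                      # only two informative features
y = X @ w_true + 0.5 * rng.normal(size=60)
lams = [1e-3, 1e-1, 1.0, 10.0, 100.0]
best = min(lams, key=lambda lam: cv_mse(X, y, lam))   # lowest estimated risk
```

The same loop works for any estimate-then-select choice (model order, feature subsets), with the cross-validated error standing in for the unavailable test error.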

Visual descriptors tend to be high-dimensional and redundant, so we often preprocess the data to reduce it to more manageable terms using dimensionality reduction techniques, including PCA and its nonlinear variants, latent structure methods such as Probabilistic Latent Semantic Analysis (PLSA) and Latent Dirichlet Allocation (LDA), and manifold methods such as Isomap and LLE.
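
A minimal PCA sketch on synthetic "descriptors" (the `pca` helper and the toy data are our own illustrative assumptions):

```python
import numpy as np

def pca(X, k):
    # Center the data, then project onto the top-k principal directions
    # obtained from the SVD of the centered data matrix.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k]       # projected data, components

rng = np.random.default_rng(1)
# 200 "descriptors" in 50-D that actually lie near a 3-D subspace.
X = (rng.normal(size=(200, 3)) @ rng.normal(size=(3, 50))
     + 0.01 * rng.normal(size=(200, 50)))
Y, components = pca(X, 3)              # 50-D reduced to 3-D
```

Because the data is nearly rank-3, the 3-D projection retains almost all of its variance, which is the redundancy-removal effect described above.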

To capture the shapes of complex probability distributions over high-dimensional descriptor spaces, we either fit mixture models and similar structured semi-parametric probability models, or reduce the distributions to histograms using vector quantization techniques such as K-means or latent semantic structure models.
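
The histogram route can be sketched as follows, with K-means as the quantizer (toy 2-D "descriptors" and our own helper names; a farthest-point initialization is used purely to keep the sketch deterministic):

```python
import numpy as np

def kmeans(X, k, iters=50):
    # Lloyd's algorithm with a simple farthest-point initialization.
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d.argmax()])
    centers = np.stack(centers)
    for _ in range(iters):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers, labels

def bow_histogram(descriptors, centers):
    # Quantize each descriptor to its nearest "visual word" and count.
    d = ((descriptors[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
    words = d.argmin(axis=1)
    hist = np.bincount(words, minlength=len(centers)).astype(float)
    return hist / hist.sum()           # normalized word histogram

rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-3, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
centers, _ = kmeans(X, 2)
hist = bow_histogram(X, centers)
```

The resulting histogram is a fixed-length summary of an image's descriptor distribution, directly usable as input to the classifiers discussed below.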

Missing data is common, owing to unknown class labels, feature detection failures, occlusions and intra-class variability, so we need data completion techniques based on variational methods, belief propagation or MCMC sampling.
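
As an illustration of completing missing data in expectation, the sketch below runs EM on a two-component 1-D Gaussian mixture, treating the unknown component labels as the missing data (synthetic data and our own function names, not a prescribed implementation):

```python
import numpy as np

def em_gmm_1d(x, iters=100):
    # EM for a two-component 1-D Gaussian mixture: the component labels
    # are the missing data, completed in expectation in the E-step.
    mu = np.array([x.min(), x.max()])
    sig = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component per point.
        lik = pi * np.exp(-0.5 * ((x[:, None] - mu) / sig) ** 2) / sig
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the completed data.
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        sig = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sig

rng = np.random.default_rng(2)
x = np.concatenate([rng.normal(-2, 0.5, 300), rng.normal(3, 0.5, 300)])
pi, mu, sig = em_gmm_1d(x)             # recovers the two components
```

The same expectation-completion idea underlies the variational and sampling-based methods mentioned above, which replace the exact E-step when it is intractable.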

Weakly labeled data is also common – for example, one may be told that a training image contains an object of some class, but not where the object is in the image – and variants of unsupervised, semi-supervised and co-training methods are useful for handling this. In general, it is expensive and tedious to label large numbers of training images, so less-supervised, data-mining-style methods are an area that needs to be developed.
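
One simple semi-supervised variant is self-training, sketched below with a nearest-centroid base classifier on toy data (the data, the confidence measure and all helper names are illustrative assumptions, not a prescribed method):

```python
import numpy as np

def nearest_centroid(X, y):
    # One centroid per class; classes assumed to be 0 and 1.
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def self_train(X_lab, y_lab, X_unlab, rounds=5):
    # Self-training: pseudo-label the most confident unlabeled points
    # with the current classifier, add them to the training set, repeat.
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    for _ in range(rounds):
        if len(pool) == 0:
            break
        cents = nearest_centroid(X, y)
        d = ((pool[:, None, :] - cents[None, :, :]) ** 2).sum(axis=-1)
        conf = np.abs(d[:, 0] - d[:, 1])        # distance margin = confidence
        take = conf.argsort()[-max(1, len(pool) // rounds):]
        X = np.vstack([X, pool[take]])
        y = np.concatenate([y, d[take].argmin(axis=1)])
        pool = np.delete(pool, take, axis=0)
    return nearest_centroid(X, y)

rng = np.random.default_rng(4)
X0 = rng.normal(-2, 0.5, (50, 2))
X1 = rng.normal(2, 0.5, (50, 2))
# Only one labeled example per class; the rest is unlabeled.
cents = self_train(np.vstack([X0[:1], X1[:1]]), np.array([0, 1]),
                   np.vstack([X0[1:], X1[1:]]))
```

Starting from a single labeled example per class, the unlabeled pool pulls the centroids toward the true class centers, which is the leverage that semi-supervised methods aim for.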

On the discriminative side, machine learning techniques such as Support Vector Machines, Relevance Vector Machines and boosting are used to produce flexible classifiers and regression methods based on visual descriptors.
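
As an illustration, a linear SVM can be trained by subgradient descent on the regularized hinge loss. This is a toy sketch with synthetic data and our own parameter choices, not a production solver:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    # Subgradient descent on the L2-regularized hinge loss:
    #   lam * ||w||^2 + mean(max(0, 1 - y * (X @ w + b))),  y in {-1, +1}.
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        active = y * (X @ w + b) < 1           # points violating the margin
        w -= lr * (2 * lam * w - (y[active, None] * X[active]).sum(axis=0) / n)
        b -= lr * (-y[active].sum() / n)
    return w, b

rng = np.random.default_rng(5)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.concatenate([-np.ones(50), np.ones(50)])
w, b = train_linear_svm(X, y)
acc = (np.sign(X @ w + b) == y).mean()
```

Only the margin-violating points contribute to the gradient, which is the SVM's characteristic sparsity; kernelizing the inner products extends the same machinery to nonlinear decision boundaries.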

Visual categories have a rich nested structure, so techniques that handle large numbers of classes and nested classes are especially interesting to us.

Images and videos contain huge amounts of data, so we need algorithms suited to large-scale learning problems.
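
A standard large-scale workhorse is mini-batch stochastic gradient descent, sketched here for logistic regression on synthetic data (function name, data and settings are illustrative): each update touches only a small batch, so its cost is independent of the total dataset size.

```python
import numpy as np

def sgd_logreg(X, y, lr=0.5, epochs=5, batch=32, seed=0):
    # Mini-batch stochastic gradient descent for logistic regression;
    # y is in {0, 1} and each step uses only `batch` examples.
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        order = rng.permutation(n)
        for start in range(0, n, batch):
            i = order[start:start + batch]
            p = 1.0 / (1.0 + np.exp(-X[i] @ w))   # predicted probabilities
            w -= lr * X[i].T @ (p - y[i]) / len(i)
    return w

rng = np.random.default_rng(6)
n = 2000
X = np.vstack([rng.normal(-1.5, 1.0, (n // 2, 2)),
               rng.normal(1.5, 1.0, (n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])
w = sgd_logreg(X, y)
acc = ((X @ w > 0) == (y == 1)).mean()
```

The same pattern, noisy gradients from small samples of the data, scales to datasets far too large to fit in memory, since batches can be streamed from disk.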