## Section: Research Program

### Data Dimensionality Reduction

Manifolds, graph-based transforms, compressive sensing

Dimensionality reduction encompasses a variety of methods for low-dimensional data embedding, such as sparse and low-rank models, random low-dimensional projections in a compressive sensing framework, and sparsifying transforms including graph-based transforms. These methods are cornerstones of many visual data processing tasks, such as compression and the solution of inverse problems.

*Sparse representations*, *compressive sensing*, and *dictionary learning* have been shown to be powerful tools for the efficient processing of visual data. The objective of *sparse representations* is to find a sparse approximation of given input data. In theory, given a dictionary matrix $A\in \mathbb{R}^{m\times n}$ with $m \ll n$ and full row rank, and a data vector $\mathbf{b}\in \mathbb{R}^{m}$, one seeks the solution of $\min\{\|\mathbf{x}\|_{0} \,:\, A\mathbf{x}=\mathbf{b}\},$ where $\|\mathbf{x}\|_{0}$ denotes the $\ell_{0}$ norm of $\mathbf{x}$, i.e. the number of non-zero components of $\mathbf{x}$.
$A$ is known as the dictionary; its columns $a_{j}$, called atoms, are assumed to be normalized in the Euclidean norm.
Since $A$ has full row rank, there exist infinitely many solutions $\mathbf{x}$ to $A\mathbf{x}=\mathbf{b}$; the problem is to find the sparsest one, i.e. the solution with the fewest nonzero components. In practice, one actually seeks an approximate, and thus even sparser, solution satisfying $\min\{\|\mathbf{x}\|_{0} \,:\, \|A\mathbf{x}-\mathbf{b}\|_{p}\le \rho\},$ for some $\rho \ge 0$ characterizing an admissible reconstruction error.
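As an illustration of the sparse approximation problem above, the following sketch uses greedy orthogonal matching pursuit, one common heuristic for the (NP-hard) $\ell_0$ problem. The dictionary, sparsity level, and random seed are illustrative choices, not taken from the text.

```python
import numpy as np

def omp(A, b, max_nonzeros, tol=1e-6):
    """Orthogonal matching pursuit: greedily approximate the sparsest x with A x ~ b."""
    m, n = A.shape
    residual = b.copy()
    support = []
    x = np.zeros(n)
    for _ in range(max_nonzeros):
        # Select the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of b on the atoms selected so far.
        coeffs, *_ = np.linalg.lstsq(A[:, support], b, rcond=None)
        x = np.zeros(n)
        x[support] = coeffs
        residual = b - A @ x
        if np.linalg.norm(residual) <= tol:
            break
    return x

# Toy example: recover a 2-sparse vector from m = 20 < n = 50 equations.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))
A /= np.linalg.norm(A, axis=0)          # atoms normalized in Euclidean norm
x_true = np.zeros(50)
x_true[[3, 27]] = [1.5, -2.0]
b = A @ x_true
x_hat = omp(A, b, max_nonzeros=2)
```

With an incoherent random dictionary and such a low sparsity level, the greedy method recovers the true support with high probability; in harder regimes one resorts to convex relaxations of the $\ell_0$ norm.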

The recent theory of *compressed sensing*, in the context of discrete signals, can be seen as an effective dimensionality reduction technique.
The idea behind compressive sensing is that
a signal can be accurately recovered from a small number of linear measurements, at a rate much lower than the one prescribed by the Shannon-Nyquist theorem, provided that it is sparse or compressible in a known basis. Compressed sensing has emerged as a powerful framework for signal acquisition and sensor design, with a number of open issues, such as learning the basis in which the signal is sparse (with the help of dictionary learning methods) or the design and optimization of the sensing matrix. The problem is investigated in particular in the context of light field acquisition, aiming at novel camera designs that offer a good trade-off between spatial and angular resolution.
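The recovery principle can be sketched as follows: a sparse signal is measured through a random sensing matrix with far fewer rows than the ambient dimension, and recovered by basis pursuit ($\ell_1$ minimization), written here as a linear program. The dimensions and the canonical sparsity basis are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
n, m, k = 64, 24, 3                      # ambient dimension, measurements, sparsity
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random Gaussian sensing matrix
y = Phi @ x_true                                  # m << n linear measurements

# Basis pursuit: min ||x||_1 s.t. Phi x = y, as an LP with x = u - v, u, v >= 0.
c = np.ones(2 * n)
A_eq = np.hstack([Phi, -Phi])
res = linprog(c, A_eq=A_eq, b_eq=y,
              bounds=[(0, None)] * (2 * n), method="highs")
x_hat = res.x[:n] - res.x[n:]
```

Here 24 measurements suffice to recover a 3-sparse signal of length 64, well below the Nyquist count; designing and optimizing `Phi` itself is one of the open issues mentioned above.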

While most image and video processing methods have been developed for Cartesian sampling grids, new imaging modalities (e.g. point clouds, light fields) call for representations on irregular supports that can be well described by *graphs*. Reducing the dimensionality of such signals requires designing novel transforms yielding compact signal representations.
One example of transform is the Graph Fourier transform
whose basis functions are given by the eigenvectors of the graph Laplacian matrix
$\mathbf{L}=\mathbf{D}-\mathbf{A}$, where $\mathbf{D}$ is a diagonal degree matrix whose ${i}^{th}$ diagonal element is equal to the sum of the weights of all edges incident to the node $i$, and $\mathbf{A}$ the adjacency matrix.
The eigenvectors of the graph Laplacian, also called the Laplacian eigenbasis, are analogous to the Fourier basis in the Euclidean domain and allow representing a signal residing on the graph as a linear combination of eigenfunctions, akin to Fourier analysis. This transform is particularly efficient at compacting the energy of signals that are smooth on the graph.
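A minimal numerical sketch of the Graph Fourier transform, on an assumed toy graph (a 6-node path): the Laplacian $\mathbf{L}=\mathbf{D}-\mathbf{A}$ is built from the adjacency and degree matrices, its eigenvectors give the transform basis, and a smooth graph signal concentrates its energy in the low-frequency coefficients.

```python
import numpy as np

# Path graph on 6 nodes: adjacency A, diagonal degree matrix D, Laplacian L = D - A.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A

# Graph Fourier basis: eigenvectors of L, ordered by eigenvalue ("graph frequency").
eigvals, U = np.linalg.eigh(L)

signal = np.linspace(0.0, 1.0, n)       # a smooth signal on the path graph
coeffs = U.T @ signal                    # forward GFT
reconstructed = U @ coeffs               # inverse GFT
energy = coeffs ** 2
low_freq_fraction = energy[:2].sum() / energy.sum()
```

For this smooth ramp, the two lowest-frequency coefficients already capture almost all of the signal energy, which is the compaction property exploited for compression on irregular supports.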
Two problems therefore need to be addressed: (i) defining, for each imaging modality, graph structures on which the corresponding signals are smooth, and (ii) designing transforms that compact the signal energy well while keeping the computational complexity tractable.