Section: Overall Objectives
Sparsity in Imaging
Sparse ${\ell}^{1}$ regularization.
Besides image warping and registration in medical image analysis, a key problem in nearly all imaging applications is the reconstruction of high-quality data from low-resolution observations. This field, commonly referred to as “inverse problems”, is very often concerned with the precise location of features such as point sources (modeled as Dirac masses) or sharp contours of objects (modeled as gradients that are Dirac masses along curves). The underlying intuition behind these ideas is the so-called sparsity model (either of the data itself, its gradient, or other more elaborate representations such as wavelets, curvelets, bandlets [143] and learned representations [179]).
The huge interest in these ideas started mostly from the introduction of convex methods serving as proxies for these sparse regularizations. The best known is the ${\ell}^{1}$ norm, introduced independently in imaging by Donoho and co-workers under the name “Basis Pursuit” [103] and in statistics by Tibshirani [170] under the name “Lasso”. A more recent resurgence of this interest came about a decade ago with the introduction of the so-called “compressed sensing” acquisition techniques [85], which make use of randomized forward operators and ${\ell}^{1}$-type reconstructions.
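As an illustration of this ${\ell}^{1}$ methodology, the following sketch solves a small Lasso problem with iterative soft-thresholding (ISTA). The Gaussian sensing matrix, sparsity level, and regularization weight are illustrative choices, not taken from the references above.

```python
import numpy as np

def soft_threshold(x, tau):
    # Proximal operator of tau * ||.||_1: shrinks each entry toward zero.
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def ista(A, y, lam, n_iter=500):
    # Iterative soft-thresholding for the Lasso:
    #   min_x  0.5 * ||A x - y||^2 + lam * ||x||_1
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)           # gradient of the data-fidelity term
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Random Gaussian measurements of a 3-sparse signal,
# mimicking a compressed-sensing acquisition.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 100)) / np.sqrt(40)
x0 = np.zeros(100)
x0[[5, 37, 80]] = [1.0, -2.0, 1.5]
y = A @ x0
x_hat = ista(A, y, lam=0.01)
```

With enough random measurements relative to the sparsity level, the recovered `x_hat` is supported on the same three spikes as `x0`, which is the phenomenon the compressed-sensing theory quantifies.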
Regularization over measure spaces.
However, the theoretical analysis of sparse reconstructions involving real-life acquisition operators (such as those found in seismic imaging, neuroimaging, astrophysical imaging, etc.) is still mostly an open problem. A recent research direction, triggered by a paper of Candès and Fernandez-Granda [87], is to study directly the infinite-dimensional problem of reconstructing sparse measures (i.e. sums of Dirac masses) using the total variation of measures (not to be mistaken for the total variation of 2D functions). Several works [86], [115], [112] have used this framework to provide theoretical performance guarantees, essentially by studying how the distance between neighboring spikes impacts noise stability.
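In this framework the reconstruction is posed directly over the space $\mathcal{M}(\mathbb{T})$ of Radon measures. A standard formulation, often called the BLASSO, written here under the assumption of a linear forward operator $\Phi$ and observations $y$, reads:

```latex
\min_{m \in \mathcal{M}(\mathbb{T})} \; \frac{1}{2} \| \Phi m - y \|^{2} + \lambda \, |m|(\mathbb{T}),
\qquad
|m|(\mathbb{T}) \stackrel{\text{def}}{=} \sup \Big\{ \int_{\mathbb{T}} \psi \, \mathrm{d}m \;:\; \psi \in C(\mathbb{T}), \ \|\psi\|_{\infty} \leq 1 \Big\}.
```

For a sparse measure $m = \sum_i a_i \delta_{x_i}$, the total variation $|m|(\mathbb{T})$ reduces to $\sum_i |a_i|$, so this program is the natural extension of ${\ell}^{1}$ regularization to the continuous (grid-free) setting.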

Low complexity regularization and partial smoothness.
In image processing, one of the most popular methods is total variation regularization [163], [79]. It favors low-complexity images that are piecewise constant; see Figure 3 for examples of how it is used to solve image processing problems. Besides applications in image processing, sparsity-related ideas have also had a deep impact in statistics [170] and machine learning [43]. As a typical example, for applications to recommendation systems, it makes sense to consider sparsity of the singular values of matrices, which can be relaxed using the so-called nuclear norm (a.k.a. trace norm) [44]. The underlying methodology is to make use of low-complexity regularization models, which turns out to be equivalent to the use of partly-smooth regularization functionals [136], [172] enforcing the solution to belong to a low-dimensional manifold.
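To make the nuclear-norm relaxation concrete, its proximal operator is singular value soft-thresholding, sketched below with NumPy on a synthetic low-rank matrix. The dimensions, rank, and threshold are arbitrary illustrative values, not tied to the cited works.

```python
import numpy as np

def svt(X, tau):
    # Proximal operator of tau * (nuclear norm):
    # soft-thresholds the singular values of X, leaving
    # the singular vectors unchanged.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# A rank-2 matrix corrupted by small noise: thresholding the
# singular values suppresses the noise directions and returns
# a low-rank estimate.
rng = np.random.default_rng(1)
L0 = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 20))
X = L0 + 0.01 * rng.standard_normal((30, 20))
X_hat = svt(X, tau=0.5)
```

Because the noise-induced singular values fall below the threshold, `X_hat` has rank at most 2: the solution is forced onto a low-dimensional manifold of low-rank matrices, exactly the mechanism the partly-smooth framework describes.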