## Section: New Results

### Sparse decompositions: theory and algorithms

#### Learning of shift-invariant atoms (MoTIF algorithm)

Keywords : Redundant dictionary learning, atom, sparsity, shift invariance, Principal Component Analysis.

Participants : Sylvain Lesage, Boris Mailhé, Rémi Gribonval.

Sparse approximation using redundant dictionaries is an efficient tool for many applications in the field of signal processing. Its performance largely depends on how well the dictionary is adapted to the signals to be decomposed. Since the statistical dependencies in natural high-dimensional data are most of the time not obvious, learning fundamental patterns is an alternative to the analytical design of bases and has become an active field of research. The underlying patterns of a class of signals can usually occur at any time, so this shift invariance property should be built into the design of a dictionary. We developed a new algorithm for learning short generating functions, each of which yields a set of atoms corresponding to all its translations. The resulting dictionary is highly redundant and shift invariant.

This algorithm, called MoTIF (Matching of Translation Invariant Features), learns the generating functions iteratively from a set of training signals. Each step alternates between two operations: starting from an initial function, we first find the location in each training signal where this function matches best; from these located patches, the function is then updated by a least-squares fit (Principal Component Analysis). This procedure is iterated; it is monotonic and converges in a finite number of iterations.

Estimating the subsequent generating functions requires an additional constraint that keeps each new atom as decorrelated as possible from the previous ones, so that no atom is selected multiple times. Note that, because of this constraint, if the signal contained two very similar underlying patterns, the algorithm would not retrieve both.
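The basic locate-then-update step described above can be sketched as follows. This is a minimal, hypothetical numpy illustration, not the actual MoTIF implementation; the decorrelation constraint for subsequent atoms is omitted.

```python
import numpy as np

def motif_step(signals, atom):
    """One (hypothetical) MoTIF-style update: locate the atom in each
    training signal, then refit it to the extracted patches by PCA."""
    k = len(atom)
    patches = []
    for x in signals:
        # Cross-correlate the current atom with the signal and pick the
        # offset where it matches best (largest absolute inner product).
        corr = np.correlate(x, atom, mode="valid")
        best = np.argmax(np.abs(corr))
        patches.append(x[best:best + k])
    # Least-squares update: the new atom is the principal right singular
    # vector of the patch matrix (rank-1 PCA of the located patches).
    P = np.array(patches)
    _, _, vt = np.linalg.svd(P, full_matrices=False)
    new_atom = vt[0]
    return new_atom / np.linalg.norm(new_atom)
```

Iterating this step on signals containing a common translated waveform drives the atom toward that waveform (up to a shift and a sign).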

On natural images, the learnt atoms are similar to those generally found in the literature. On other data, such as ECG or EEG, typical waveforms are retrieved. We also report results on audio data, where the approximation using a few learnt atoms is sparser than the one obtained with local cosines.

This work can be found in [41] . It was done in collaboration with Philippe Jost and Pierre Vandergheynst (EPFL, Lausanne).

#### The Matching Pursuit Toolkit : Matching Pursuit made tractable

Keywords : sparsity, Matching Pursuit.

Participants : Sacha Krstulovic, Rémi Gribonval.

Matching Pursuit (MP) aims at finding sparse decompositions of signals over redundant bases of elementary waveforms. Traditionally, MP has been considered too slow an algorithm to be applied to real-life problems with high-dimensional signals: in terms of floating point operations, its typical numerical implementations have a computational complexity that grows prohibitively with the signal dimension and are associated with impractical runtimes. In this work, we propose a new architecture which exploits the structure shared by many redundant MP dictionaries, and thus substantially decreases this complexity. This architecture is implemented in a new software toolkit, called MPTK (the Matching Pursuit Toolkit), which is able to reach, e.g., 0.25× real time for a typical MP analysis scenario applied to a 1 hour long audio track. This substantial acceleration makes it possible, from now on, to explore and apply MP in the framework of real-life, high-dimensional data processing problems.

This work is currently submitted for publication; the corresponding software is distributed at http://mptk.gforge.inria.fr.
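The kind of structure MPTK exploits can be illustrated on a toy shift-invariant dictionary, where every atom is a translation of a few short kernels and all correlations with the residual are computed at once by FFT. This sketch mirrors the idea only; it is not MPTK's actual architecture, and all names are illustrative.

```python
import numpy as np

def mp_shift_invariant(x, kernels, n_iter):
    """Toy Matching Pursuit over a shift-invariant dictionary (all
    translations of a few unit-norm kernels), with the correlations
    computed by FFT rather than atom by atom."""
    x = x.astype(float).copy()
    book = []
    for _ in range(n_iter):
        best = None
        for ki, g in enumerate(kernels):
            # Full cross-correlation of the residual with the kernel,
            # computed in one pass via the FFT convolution theorem.
            n = len(x) + len(g) - 1
            corr = np.fft.irfft(np.fft.rfft(x, n) * np.fft.rfft(g[::-1], n), n)
            j = np.argmax(np.abs(corr))
            if best is None or abs(corr[j]) > abs(best[0]):
                best = (corr[j], ki, j)
        amp, ki, j = best
        g = kernels[ki]
        t = j - (len(g) - 1)              # shift of the kernel in the signal
        lo, hi = max(t, 0), min(t + len(g), len(x))
        x[lo:hi] -= amp * g[lo - t:hi - t]  # subtract the selected atom
        book.append((ki, t, amp))
    return book, x
```

Each FFT costs O(N log N), instead of recomputing every inner product explicitly for each of the N possible shifts.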

#### Structured sound decomposition with Matching Pursuit

Keywords : sparsity, Matching Pursuit, harmonic structures, chirplets.

Participants : Sacha Krstulovic, Rémi Gribonval.

In the framework of audio signal analysis, it is desirable to obtain sparse representations that reflect harmonic structures, such as those produced by musical instruments. In this work, we compare two approaches which introduce explicit models of harmonic features into the Matching Pursuit analysis framework. The first approach is the Harmonic Matching Pursuit (HMP), where the harmonic structures are modeled by sets of harmonically related Gabor atoms which are directly optimized in the analysis loop. The second approach, called Meta-Molecular Matching Pursuit (M3P), is based on the a posteriori agglomeration of elementary features coming from a Short Time Fourier Transform. We discuss the pros and cons of each method through experiments involving different audio signals, and conclude on possible ways of combining the two. This work is published in [44].
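To make the HMP notion of "harmonically related Gabor atoms" concrete, one can build a set of real Gabor atoms sharing a single time location and scale, with frequencies at integer multiples of a fundamental. This is only an illustrative sketch; all parameter names are assumptions, not the paper's notation.

```python
import numpy as np

def harmonic_gabor_atoms(n, center, scale, f0, n_partials):
    """Illustrative harmonic set of real Gabor atoms: one Gaussian
    window, frequencies at h * f0 (cycles/sample) for h = 1..n_partials.
    Returns an (n_partials, n) array of unit-norm atoms."""
    t = np.arange(n)
    window = np.exp(-0.5 * ((t - center) / scale) ** 2)  # Gaussian window
    atoms = []
    for h in range(1, n_partials + 1):
        a = window * np.cos(2 * np.pi * h * f0 * t)
        atoms.append(a / np.linalg.norm(a))              # unit-norm atom
    return np.array(atoms)
```

In an HMP-style analysis, such a set would be matched against the residual jointly, so that one selection captures a whole harmonic comb instead of one partial at a time.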

By definition, the Matching Pursuit algorithm with constant-frequency (or ``flat'') Gabor atoms provides only a coarse estimate of the frequency-modulated sinusoids found in music and voice signals. Chirped Gabor atoms, closer to the nature of these signals, fit them in a finer and sparser way. Although a method for the direct analytic estimation of chirped Gabor atoms has been proposed in the past [65], we propose an alternative method where the chirp factor and the scale parameter are estimated through a regression over an iteratively selected chain of small-scale atoms defined by a Short Time Fourier Transform. This new technique suits the Matching Pursuit framework, and is compared with a ``flat atoms'' version of the algorithm. We also study the influence of various frequency interpolation techniques on the sparsity of the resulting representation.

This work is published in [47] . It was done in collaboration with Pierre Leveau and Laurent Daudet.
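The regression idea behind the chirp-factor estimation can be sketched in a very simplified form: follow the peak frequency of short STFT frames along a partial, then fit a line frequency(t) = f0 + c·t, whose slope c estimates the chirp rate. This is a toy reduction of the chained-atom selection described above, with assumed parameter names, not the paper's algorithm.

```python
import numpy as np

def estimate_chirp_rate(x, frame_len, hop):
    """Toy chirp-rate estimation: peak-pick each STFT frame, then do a
    least-squares regression of peak frequency against frame centre."""
    times, freqs = [], []
    win = np.hanning(frame_len)
    for start in range(0, len(x) - frame_len + 1, hop):
        frame = x[start:start + frame_len] * win
        spec = np.abs(np.fft.rfft(frame))
        k = np.argmax(spec)
        times.append(start + frame_len / 2)  # frame centre (samples)
        freqs.append(k / frame_len)          # peak frequency (cycles/sample)
    # Least-squares line fit: freqs ~ f0 + chirp_rate * times
    chirp_rate, f0 = np.polyfit(times, freqs, 1)
    return f0, chirp_rate
```

A chirped Gabor atom with this estimated (f0, c) then fits the modulated partial in one term where many flat atoms would be needed.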

#### A simple test to check the optimality of a sparse signal approximation

Participant : Rémi Gribonval.

Approximating a signal or an image with a sparse linear expansion from an overcomplete dictionary of atoms is an extremely useful tool for solving many signal processing problems. Finding the sparsest approximation of a signal from an arbitrary dictionary is an NP-hard problem. Despite this, several algorithms have been proposed that provide sub-optimal solutions. However, it is generally difficult to know how close the computed solution is to being ``optimal'', and whether another algorithm could provide a better result. In this work, we provide a simple test to check whether the output of a sparse approximation algorithm is nearly optimal, in the sense that no significantly different linear expansion from the dictionary can provide both a smaller approximation error and a better sparsity. As a by-product of our theorems, we obtain results on the identifiability of sparse overcomplete models in the presence of noise, for a fairly large class of sparse priors.

This work will appear in [19] . It was done in collaboration with Rosa Figueras and Pierre Vandergheynst, EPFL.
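The flavour of such an a posteriori check can be illustrated with a toy certificate: given a candidate sparse approximation, look at how correlated the residual still is with the dictionary. The paper's actual test is different; this sketch, with assumed names and threshold, only conveys the principle.

```python
import numpy as np

def near_optimality_certificate(D, y, c, tol_ratio=0.5):
    """Toy certificate: for a dictionary D with unit-norm columns, a
    signal y and a computed coefficient vector c, check whether the
    largest remaining correlation of the residual with the dictionary is
    small compared with the residual energy. If so, no very different
    expansion can reduce the error much further."""
    r = y - D @ c
    rnorm = np.linalg.norm(r)
    if rnorm == 0:
        return True                       # exact representation
    max_corr = np.max(np.abs(D.T @ r))
    return max_corr <= tol_ratio * rnorm  # illustrative threshold
```

The appeal of this kind of test is that it only uses the dictionary and the computed approximation, so it can be run after any sparse approximation algorithm.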

#### Beyond sparsity : recovering structured representations

Keywords : sparse decomposition, matching pursuit, basis pursuit, recovery analysis.

Participant : Rémi Gribonval.

In a series of recent results, several authors have shown that both L_{1} minimization (Basis Pursuit) and greedy algorithms (Matching Pursuit) can successfully recover a sparse representation of a signal provided that it is sparse enough, that is to say if its support (which indicates the locations of the nonzero coefficients) is of sufficiently small size. In this work we define more general identifiable structures that support signals which can be recovered exactly by L_{1} minimization and greedy algorithms. In addition, we show that if the output of an arbitrary decomposition algorithm is supported on an identifiable structure, then the representation is optimal within the class of signals supported by that structure. As an application of the theoretical results, we give a detailed study of a family of multichannel dictionaries with a special structure (corresponding to the representation problem X = AC) often used, e.g., in under-determined source separation problems or in multichannel signal processing. The theoretical results obtained in this framework have served as an inspiration for new source separation algorithms.

This work will appear in [21] . It was done in collaboration with Morten Nielsen, Aalborg University.
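The basic recovery phenomenon discussed above can be demonstrated with plain Matching Pursuit on a small incoherent dictionary: when the signal's support is small enough, the greedy selection picks exactly the right atoms. A minimal sketch, using a toy dictionary (identity plus a normalized Hadamard basis) chosen for this illustration:

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Plain Matching Pursuit over a dictionary D with unit-norm
    columns: repeatedly select the atom most correlated with the
    residual and subtract its contribution."""
    r = y.astype(float).copy()
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ r
        i = np.argmax(np.abs(corr))
        c[i] += corr[i]          # accumulate the coefficient
        r -= corr[i] * D[:, i]   # update the residual
    return c, r
```

For a 2-sparse signal over this 8x16 dictionary (coherence 1/sqrt(8)), two iterations recover the support exactly, in line with the sparsity conditions of the recovery results.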

#### An adaptive computational strategy for optimal sparse signal approximation

Keywords : sparse decomposition, matching pursuit, basis pursuit, recovery analysis.

Participant : Rémi Gribonval.

Sparse approximation using redundant signal dictionaries is most useful for compressing, denoising or separating mixtures of signals, images and other high-dimensional data, but it has the reputation of being a computationally intensive, almost intractable task. Yet, algorithms as simple as thresholding sometimes provide solutions that are surprisingly close to those obtained with more time-hungry techniques such as Matching Pursuit, Basis Pursuit or FOCUSS. We propose an adaptive computational strategy to compute, with the simplest possible techniques, sparse approximations nearly as good as those of substantially more complex methods. The strategy is based on a low-cost *a priori* prediction of the behaviour of complex algorithms, which serves as an *a posteriori* test of the ``success'' of simple algorithms. We discuss the tradeoff between the accuracy and the computational cost of the test with numerical examples, and we provide simple tests for the thresholding, Matching Pursuit and Basis Pursuit algorithms. Preliminary proposals for FOCUSS-type algorithms are also discussed. The proposed tests – which rely on combined properties of the signal dictionary and the computed sparse approximation – are more accurate and only slightly more complex than known tests based on the (cumulative) coherence of the dictionary and the number of terms of the sparse approximation. To our knowledge, our tests also provide, for the first time, practical tools that are applicable in real settings, for many standard dictionaries that are not necessarily ``quasi-incoherent''.

This work was done in collaboration with Morten Nielsen, Aalborg Univ. and Pierre Vandergheynst, EPFL. A paper is in preparation.
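The "simple" end of the spectrum mentioned above, thresholding, can be sketched in a few lines: keep the m atoms most correlated with the signal, then fit their coefficients by least squares on that support. A minimal illustration (not the paper's exact formulation):

```python
import numpy as np

def thresholding_approx(D, y, m):
    """Thresholding sketch: select the m atoms of D (unit-norm columns)
    most correlated with y, then solve least squares on that support."""
    corr = D.T @ y
    support = np.argsort(np.abs(corr))[-m:]  # indices of m largest |corr|
    coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
    c = np.zeros(D.shape[1])
    c[support] = coef
    return c
```

Its cost is one matrix-vector product plus a small least-squares solve, which is why an a posteriori success test for it, as proposed in this work, is so attractive: when the test passes, the far more expensive pursuit algorithms can be skipped.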