## Section: Scientific Foundations

### Sparsity and $\ell_1$-regularization

Many widely used approaches in modern nonparametric statistics, for problems of estimation, prediction, and model selection, are based on
*regularization*: the joint minimization of an empirical criterion and a penalty function should lead to a model that not only fits the data well but is also as simple as possible. For instance, the Lasso uses an $\ell_1$-regularization instead of an $\ell_0$ one; it is
popular mostly because it leads to *sparse* solutions (the estimate has only a few nonzero coordinates),
which usually have a clear interpretation in many settings (e.g., the influence or lack of influence of some variables).
In addition, unlike $\ell_0$-penalization, the Lasso is *computationally feasible* for high-dimensional data.
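As a minimal illustration of this sparsity, the sketch below solves the Lasso by cyclic coordinate descent on simulated data in which only the first three of twenty coordinates carry signal; the solver, the data, and the value of the tuning parameter are all illustrative choices, not part of the procedure studied here.

```python
import numpy as np

def soft_threshold(z, t):
    # Soft-thresholding operator: proximal map of t * |.|
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam, n_iter=200):
    """Lasso via cyclic coordinate descent.

    Minimizes (1 / (2n)) * ||y - X b||^2 + lam * ||b||_1.
    """
    n, p = X.shape
    b = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0) / n  # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            # Partial residual excluding coordinate j
            r = y - X @ b + X[:, j] * b[j]
            rho = X[:, j] @ r / n
            b[j] = soft_threshold(rho, lam) / col_sq[j]
    return b

# Toy data: only the first 3 of 20 coordinates matter
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
beta = np.zeros(20)
beta[:3] = [3.0, -2.0, 1.5]
y = X @ beta + 0.1 * rng.standard_normal(100)

b_hat = lasso_cd(X, y, lam=0.1)
# The estimate is sparse: only a few coordinates are nonzero
print(np.count_nonzero(np.abs(b_hat) > 1e-8))
```

Note that the surviving coordinates are shrunk toward zero (by roughly `lam` each), which is the price the $\ell_1$ penalty pays for setting the irrelevant coordinates exactly to zero.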

The Lasso algorithm, however, requires a tuning parameter that must be calibrated. The parameters that are good in theory (those used to derive sharp oracle inequalities) are in general too conservative for practical purposes. Our primary aim is to exhibit a calibration procedure for stochastic data that ensures both good practical and theoretical performance.
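For concreteness, the theoretically motivated choices alluded to above typically take the following form in the standard Gaussian linear model with $n$ observations, $p$ covariates, and noise level $\sigma$ (the constant $A$ and the exact model assumptions vary across results):

```latex
\lambda \;=\; A \,\sigma\, \sqrt{\frac{\log p}{n}},
\qquad A > 0 \text{ a numerical constant.}
```

Such choices yield sharp oracle inequalities but tend to over-penalize on moderate sample sizes, which is precisely why a data-driven calibration is sought.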

A secondary aim is to have a theoretical analysis of the Lasso in the context of individual sequences.