Section: Application Domains
Inverse problems in neuroimaging
Many problems in neuroimaging can be framed as forward and inverse problems. For instance, the neuroimaging inverse problem consists in predicting individual information (behavior, phenotype) from neuroimaging data, while an important forward problem consists in fitting neuroimaging data with high-dimensional (e.g. genetic) variables. Solving these problems entails the definition of two terms: a loss that quantifies the goodness of fit of the solution (does the model explain the data reasonably well?), and a regularization scheme that represents a prior on the expected solution of the problem. In particular, some priors enforce certain properties of the solution, such as sparsity, smoothness or being piecewise constant.
Let us detail the model used in the inverse problem. Let $\mathbf{X}$ be a neuroimaging dataset, written as an $({n}_{subj},{n}_{voxels})$ matrix, where ${n}_{subj}$ and ${n}_{voxels}$ are the number of subjects under study and the image size respectively; let $\mathbf{Y}$ be an array of values that represent characteristics of interest in the observed population, written as an $({n}_{subj},{n}_{f})$ matrix, where ${n}_{f}$ is the number of characteristics that are tested; and let $\beta $ be an array of shape $({n}_{voxels},{n}_{f})$ that represents a set of pattern-specific maps. In the first place, we may consider the columns ${\mathbf{Y}}_{1},\dots,{\mathbf{Y}}_{{n}_{f}}$ of $\mathbf{Y}$ independently, yielding ${n}_{f}$ problems to be solved in parallel:
${\mathbf{Y}}_{i}=\mathbf{X}{\beta}_{i}+{\epsilon}_{i},$
where the vector ${\beta}_{i}$ is the ${i}^{th}$ column of $\beta $. As the problem is clearly ill-posed, it is naturally handled in a regularized regression framework:
${\widehat{\beta}}_{i}={\text{argmin}}_{{\beta}_{i}}{\parallel {\mathbf{Y}}_{i}-\mathbf{X}{\beta}_{i}\parallel}^{2}+\Psi \left({\beta}_{i}\right),$  (1) 
where $\Psi $ is an adequate penalization used to regularize the solution:
$\Psi (\beta ;{\lambda}_{1},{\lambda}_{2},{\eta}_{1},{\eta}_{2})={\lambda}_{1}{\parallel \beta \parallel}_{1}+{\lambda}_{2}{\parallel \beta \parallel}_{2}^{2}+{\eta}_{1}{\parallel \nabla \beta \parallel}_{1}+{\eta}_{2}{\parallel \nabla \beta \parallel}_{2}^{2}$  (2) 
with ${\lambda}_{1},{\lambda}_{2},{\eta}_{1},{\eta}_{2}\ge 0$. In general, only one or two of these constraints are considered (hence enforced with a non-zero coefficient):

When ${\lambda}_{1}>0$ only (LASSO) and, to some extent, when ${\lambda}_{1},{\lambda}_{2}>0$ only (elastic net), the optimal solution $\beta $ is (possibly very) sparse, but may not exhibit a proper image structure; it does not fit well with the intuitive concept of a brain map.

Total Variation regularization (see Fig. 1) is obtained when ${\eta}_{1}>0$ only, and typically yields a piecewise constant solution.

Smooth lasso is obtained when ${\eta}_{2}>0$ and ${\lambda}_{1}>0$ only, and yields smooth, compactly supported spatial basis functions.
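The penalty $\Psi $ of Eq. (2) is simple to evaluate numerically. The sketch below uses NumPy on a 1D coefficient vector, with the spatial gradient approximated by finite differences; this 1D simplification is an assumption for illustration, since on a real brain image $\nabla $ acts along each spatial axis.

```python
import numpy as np

def penalty(beta, lambda1=0.0, lambda2=0.0, eta1=0.0, eta2=0.0):
    """Evaluate the combined penalty of Eq. (2) on a 1D coefficient map."""
    grad = np.diff(beta)  # finite-difference approximation of the gradient
    return (lambda1 * np.abs(beta).sum()     # l1 term: promotes sparsity
            + lambda2 * (beta ** 2).sum()    # l2 term: ridge shrinkage
            + eta1 * np.abs(grad).sum()      # TV term: piecewise constant maps
            + eta2 * (grad ** 2).sum())      # smoothness term

# A piecewise-constant map is cheap under TV but not under l1
beta = np.array([0., 0., 1., 1., 1., 0.])
print(penalty(beta, lambda1=1.0))  # l1 norm only -> 3.0
print(penalty(beta, eta1=1.0))     # TV only -> 2.0
```

Setting the four coefficients to the configurations listed above recovers LASSO, elastic net, Total Variation and smooth lasso respectively.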

The performance of the predictive model can simply be evaluated as the amount of variance in ${\mathbf{Y}}_{i}$ fitted by the model, for each $i\in \{1,\dots,{n}_{f}\}$. This can be computed through cross-validation, by learning ${\widehat{\beta}}_{i}$ on one part of the dataset, and then estimating the residual $({\mathbf{Y}}_{i}-\mathbf{X}{\widehat{\beta}}_{i})$ on the remainder of the dataset.
This framework is easily extended by considering

Grouped penalization, where the penalization explicitly includes a prior clustering of the features, i.e. voxel-related signals, into given groups. This is particularly important for including external anatomical priors on the expected solution.

Combined penalizations, i.e. a mixture of simple and group-wise penalizations, which allow some variability to fit the data in different populations of subjects, while keeping some common constraints.

Logistic regression, where a logistic nonlinearity is applied to the linear model so that it yields the probability of belonging to a given class in a binary classification problem.
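A minimal sketch of this variant, again with scikit-learn as an assumed solver: the linear model $\mathbf{X}\beta $ is passed through a sigmoid to yield class probabilities, and an $\ell_1$ penalty plays the role of the sparsity term of Eq. (2).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.randn(80, 100)                   # (n_subj, n_voxels) dataset
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # synthetic binary phenotype

# l1-penalized logistic regression yields a sparse discriminative map
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
proba = clf.predict_proba(X)[:, 1]       # probability of class 1 per subject
print((clf.coef_ != 0).sum())
```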

Robustness to between-subject variability is an important question, as it makes little sense for a learned model to depend dramatically on the particular observations used for learning. This is a difficult issue, as this kind of robustness is somewhat opposed to sparsity requirements.

Multi-task learning: if several target variables are thought to be related, it might be useful to constrain the estimated parameter vectors ${\beta}_{i}$ to have a shared support across all these variables.
For instance, when one of the variables ${\mathbf{Y}}_{i}$ is not well fitted by the model, the estimation of the other variables ${\mathbf{Y}}_{j}, j\ne i$, may provide constraints on the support of ${\beta}_{i}$ and thus improve the prediction of ${\mathbf{Y}}_{i}$. Yet this does not impose constraints on the values of the non-zero parameters of ${\beta}_{i}$:
$\widehat{\beta}={\text{argmin}}_{\beta =({\beta}_{i}),i=1..{n}_{f}}\sum_{i=1}^{{n}_{f}}{\parallel {\mathbf{Y}}_{i}-\mathbf{X}{\beta}_{i}\parallel}^{2}+\lambda \sum_{j=1}^{{n}_{voxels}}\sqrt{\sum_{i=1}^{{n}_{f}}{\beta}_{i,j}^{2}}$  (4)
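This objective is the multi-task ("group") LASSO: the $\ell_{2,1}$ penalty drives whole rows of $\beta $ (voxels) to zero jointly across targets. A hedged sketch with scikit-learn's `MultiTaskLasso`, which minimizes this objective up to scaling conventions, on synthetic data with a shared support:

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

rng = np.random.RandomState(0)
n_subj, n_voxels, n_f = 60, 100, 3
X = rng.randn(n_subj, n_voxels)
true_beta = np.zeros((n_voxels, n_f))
true_beta[:5, :] = rng.randn(5, n_f)         # 5 voxels shared by all targets
Y = X @ true_beta + 0.1 * rng.randn(n_subj, n_f)

# coef_ has shape (n_f, n_voxels); the l2,1 penalty zeroes voxels jointly
model = MultiTaskLasso(alpha=0.5).fit(X, Y)
support = np.any(model.coef_ != 0, axis=0)   # voxels active for any target
print(support.sum())
```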