Section: Research Program
Markov Chain Monte Carlo algorithms
While these Monte Carlo algorithms have become standard tools over the past decade, they still face difficulties in handling less regular problems, such as inference in high-dimensional models. One of the main problems encountered when using MCMC in these challenging settings is that it is difficult to design a Markov chain that efficiently samples the state space of interest.
The Metropolis-adjusted Langevin algorithm (MALA) is a Markov chain Monte Carlo (MCMC) method for obtaining random samples from a probability distribution for which direct sampling is difficult. As the name suggests, MALA uses a combination of two mechanisms to generate the states of a random walk that has the target probability distribution as an invariant measure:

new states are proposed using Langevin dynamics, which use evaluations of the gradient of the target probability density function;

these proposals are accepted or rejected using the Metropolis–Hastings algorithm, which uses evaluations of the target probability density (but not its gradient).
Informally, the Langevin dynamics drives the random walk towards regions of high probability in the manner of a gradient flow, while the Metropolis–Hastings accept/reject mechanism improves the mixing and convergence properties of this random walk.
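The two mechanisms above can be sketched in a few lines of code. The following is a minimal illustrative implementation, not the exact algorithm used in any of the works cited below: the target (a standard Gaussian), the step size, and all function names are assumptions made for the example.

```python
import numpy as np

def mala(log_density, grad_log_density, x0, step_size, n_samples, seed=0):
    """Minimal Metropolis-adjusted Langevin algorithm (MALA) sketch.

    Proposal: x' = x + (h/2) * grad log pi(x) + sqrt(h) * N(0, I),
    followed by a Metropolis-Hastings accept/reject step that makes
    the target density pi an invariant measure of the chain.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = np.empty((n_samples, x.size))
    accepted = 0
    for i in range(n_samples):
        # Langevin proposal: deterministic gradient drift plus Gaussian noise.
        mean_fwd = x + 0.5 * step_size * grad_log_density(x)
        prop = mean_fwd + np.sqrt(step_size) * rng.standard_normal(x.size)
        # Log-densities of the forward kernel q(x'|x) and reverse kernel q(x|x')
        # (up to a common normalizing constant, which cancels in the ratio).
        mean_rev = prop + 0.5 * step_size * grad_log_density(prop)
        log_q_fwd = -np.sum((prop - mean_fwd) ** 2) / (2.0 * step_size)
        log_q_rev = -np.sum((x - mean_rev) ** 2) / (2.0 * step_size)
        # Metropolis-Hastings correction: accept with probability min(1, alpha).
        log_alpha = log_density(prop) - log_density(x) + log_q_rev - log_q_fwd
        if np.log(rng.uniform()) < log_alpha:
            x = prop
            accepted += 1
        samples[i] = x
    return samples, accepted / n_samples

# Example target: standard 2-D Gaussian, where grad log pi(x) = -x.
samples, acc_rate = mala(
    log_density=lambda x: -0.5 * np.sum(x ** 2),
    grad_log_density=lambda x: -x,
    x0=np.zeros(2),
    step_size=0.5,
    n_samples=5000,
)
```

Note that the acceptance ratio requires both the forward and reverse proposal densities, since the Langevin proposal is not symmetric; omitting this correction would give an unadjusted Langevin sampler whose invariant measure is only approximately the target.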
Several extensions of MALA have been proposed recently, including fMALA (fast MALA), AMALA (anisotropic MALA), MMALA (manifold MALA), and position-dependent MALA (PMALA), among others.
MALA and these extensions have been shown to be very efficient alternatives for sampling from high-dimensional distributions. Our aim is therefore to adapt these methods to general mixed-effects models.