Section: Research Program
Computational Statistical Methods
Central to modern statistics is the use of probabilistic models. Relating these models to data requires the ability to compute the probability of the observed data: the likelihood function, which is central to most statistical methods and provides a principled framework for handling uncertainty.
The emergence of computational statistics as a collection of powerful and general methodologies for likelihood-based inference has made complex models and nonstandard data amenable to likelihood methods, including hierarchical models, models with intricate latent structure, and missing data.
In particular, the algorithms previously developed by Popix for mixed effects models, and today implemented in several software tools (most notably Monolix), belong to this family of methods:

the adaptive Metropolis-Hastings algorithm allows one to sample from the conditional distribution of the individual parameters $p(\psi_i \mid y_i; c_i, \theta)$,

the SAEM algorithm is used to maximize the observed likelihood $\mathcal{L}(\theta; y) = p(y; \theta)$,

Importance Sampling Monte Carlo simulations provide an accurate estimate of the observed log-likelihood $\log \mathcal{L}(\theta; y)$.
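To illustrate the first ingredient, here is a minimal random-walk Metropolis-Hastings sampler in Python. This is a generic sketch, not the adaptive kernel implemented in Monolix; the standard-normal target is a hypothetical stand-in for a conditional distribution $p(\psi_i \mid y_i; c_i, \theta)$ known only up to a normalising constant.

```python
import math
import random

def metropolis_hastings(log_target, x0, n_iter=5000, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings: samples from a distribution known
    only up to a normalising constant, via its log-density `log_target`."""
    rng = random.Random(seed)
    x, lp = x0, log_target(x0)
    draws = []
    for _ in range(n_iter):
        y = x + rng.gauss(0.0, step)  # symmetric Gaussian proposal
        lp_y = log_target(y)
        # Accept the move with probability min(1, p(y) / p(x)).
        if math.log(rng.random()) < lp_y - lp:
            x, lp = y, lp_y
        draws.append(x)
    return draws

# Toy target (an assumption for illustration): a standard normal,
# specified through its unnormalised log-density.
draws = metropolis_hastings(lambda x: -0.5 * x * x, x0=3.0)
post_burn = draws[1000:]  # discard burn-in
mean = sum(post_burn) / len(post_burn)
```

An adaptive variant would tune `step` on the fly to reach a target acceptance rate; the fixed-step version above keeps the core accept/reject logic visible.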
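The SAEM iteration can be sketched on a toy mixed-effects model where every step is available in closed form (an illustrative assumption: in realistic models the simulation step relies on an MCMC kernel such as Metropolis-Hastings). The model is $y_i = \psi_i + \varepsilon_i$ with $\psi_i \sim N(\theta, 1)$ and $\varepsilon_i \sim N(0, 1)$, so the observed likelihood is maximized at $\hat\theta = \bar{y}$.

```python
import random

def saem(y, n_iter=200, n_burn=50, seed=2):
    """SAEM for the toy model y_i = psi_i + eps_i, psi_i ~ N(theta, 1),
    eps_i ~ N(0, 1). Here p(psi_i | y_i; theta) = N((y_i + theta)/2, 1/2)
    is exact, so the simulation step needs no MCMC."""
    rng = random.Random(seed)
    n = len(y)
    theta, s = 0.0, 0.0
    for k in range(1, n_iter + 1):
        # Simulation step: draw the individual parameters psi_i
        # from their conditional distribution given y_i and theta.
        psi = [rng.gauss((yi + theta) / 2.0, 0.5 ** 0.5) for yi in y]
        # Stochastic approximation of the sufficient statistic sum(psi),
        # with step sizes gamma_k = 1, then gamma_k = 1 / (k - n_burn).
        gamma = 1.0 if k <= n_burn else 1.0 / (k - n_burn)
        s += gamma * (sum(psi) - s)
        # Maximisation step: closed-form update of theta.
        theta = s / n
    return theta

# Synthetic data with true theta = 1; since y_i ~ N(theta, 2) marginally,
# the exact maximum likelihood estimate is mean(y).
data_rng = random.Random(0)
y_obs = [data_rng.gauss(1.0, 2 ** 0.5) for _ in range(200)]
theta_hat = saem(y_obs)
```

The two-phase step-size schedule (constant, then decreasing) is the usual SAEM practice: the first phase explores, the second averages out the simulation noise.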
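Finally, the importance-sampling estimate of the observed log-likelihood can be sketched on a one-dimensional latent-variable model (a hedged toy example: the proposal here is simply the prior, whereas practical implementations fit the proposal to the conditional distribution of the latent variables).

```python
import math
import random

def log_normal_pdf(x, mu, sigma):
    """Log-density of N(mu, sigma^2) at x."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (x - mu) ** 2 / (2 * sigma ** 2)

def is_loglik(log_joint, log_proposal, sample_proposal, n=10000, seed=1):
    """Importance-sampling estimate of the marginal log-likelihood
    log p(y; theta) = log E_q[ p(y, psi; theta) / q(psi) ], psi ~ q."""
    rng = random.Random(seed)
    logw = []
    for _ in range(n):
        psi = sample_proposal(rng)
        logw.append(log_joint(psi) - log_proposal(psi))
    m = max(logw)  # log-sum-exp trick for numerical stability
    return m + math.log(sum(math.exp(w - m) for w in logw) / n)

# Toy model (an assumption for illustration): y | psi ~ N(psi, 1),
# psi ~ N(0, 1), observed y = 0. The exact marginal is y ~ N(0, 2),
# so the estimate can be checked against log N(0; 0, 2).
y = 0.0
est = is_loglik(
    log_joint=lambda psi: log_normal_pdf(y, psi, 1.0) + log_normal_pdf(psi, 0.0, 1.0),
    log_proposal=lambda psi: log_normal_pdf(psi, 0.0, 1.0),
    sample_proposal=lambda rng: rng.gauss(0.0, 1.0),
)
exact = log_normal_pdf(0.0, 0.0, 2 ** 0.5)
```

Averaging the weights before taking the logarithm makes the estimator unbiased for the likelihood itself (though slightly biased downward for the log-likelihood), which is why a large number of samples is used.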
Computational statistics remains an extremely active area today. Recently, the incentive for further improvement and innovation has come mainly from three broad directions: the challenge of high dimensionality; the quest for adaptive procedures that eliminate the cumbersome hand-tuning of algorithm settings; and the need for flexible theoretical support, arguably required both by recent developments and by many of the traditional MCMC algorithms widely used in practice.
Working in these three directions is a clear objective for Xpop.