## Section: Research Program

### Relating statistical accuracy to computational complexity

When several learning algorithms are available, offering increasing computational cost and statistical accuracy, which one should be chosen, given the amount of data and the computational power available?
This problem has emerged as a key question in the analysis of large amounts of data – the “big data” challenge.
Celeste aims to tackle the major challenge of understanding this time-accuracy trade-off.
Doing so requires new statistical analyses of machine learning procedures
– as they are actually run in practice, including the underlying optimization algorithms –
that are *precise enough* to account for the performance differences observed empirically,
leading to general conclusions that can be trusted beyond the specific settings studied.
For instance, we study the performance of ensemble methods combined with subsampling, which is a common strategy for handling big data; examples include random forests and median-of-means algorithms.
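To make the median-of-means idea concrete, here is a minimal sketch (not the team's actual implementation; the function name, block count, and toy sample are illustrative assumptions): the sample is split into disjoint blocks, each block is averaged, and the median of the block means is returned, which makes the estimate robust to a small number of outliers.

```python
import random
import statistics

def median_of_means(sample, n_blocks, rng=random.Random(0)):
    """Robust mean estimate: partition the sample into n_blocks
    disjoint blocks, average each block, and return the median
    of the block means."""
    data = list(sample)
    rng.shuffle(data)                    # random partition of the sample
    size = len(data) // n_blocks         # block size (remainder dropped)
    block_means = [
        statistics.fmean(data[i * size:(i + 1) * size])
        for i in range(n_blocks)
    ]
    return statistics.median(block_means)

# Toy heavy-tailed-style sample: mostly small values plus a few outliers.
sample = [1.0] * 96 + [1000.0] * 3
print(statistics.fmean(sample))          # empirical mean, pulled up by the outliers
print(median_of_means(sample, 9))        # median of block means ignores them
```

With 9 blocks and only 3 outliers, at most 3 block means are contaminated, so the median over the 9 block means is unaffected; the plain empirical mean, by contrast, is dragged far from the bulk of the data. This illustrates the kind of subsampling-plus-aggregation structure whose statistical cost and benefit the trade-off analysis seeks to quantify.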