Section: Overall Objectives
Scientific Context
Important problems in various scientific domains such as biology, physics, and medicine, as well as in industry, critically rely on solving difficult numerical optimization problems. Often those problems depend on noisy data or are the outcome of complex numerical simulations, such that derivatives are not available or not useful and the function is seen as a black-box.
Many of those optimization problems are in essence multi-objective: one needs to optimize several conflicting objectives simultaneously, such as minimizing the cost of an energy network while maximizing its reliability. Moreover, most of the challenging black-box problems are non-convex and non-smooth, and combine difficulties related to ill-conditioning, non-separability, and ruggedness (a term that characterizes functions that can be non-smooth but also noisy or multimodal). Additionally, the objective function can be expensive to evaluate: a single function evaluation might take several minutes to hours (it can involve, for instance, a CFD simulation).
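To make "ruggedness" concrete, the well-known Rastrigin test function is smooth on paper but highly multimodal, and a noisy black-box variant is obtained by perturbing each evaluation. The sketch below is our own minimal illustration in Python; the additive Gaussian noise model is an assumption chosen for simplicity:

```python
import numpy as np

def rastrigin(x):
    """Rastrigin function: highly multimodal, with a number of local
    optima growing exponentially in the dimension; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    return 10.0 * x.size + float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)))

def noisy_rastrigin(x, noise_level=0.1, rng=np.random.default_rng(0)):
    """Noisy black-box variant: every call returns a perturbed value,
    so derivatives (exact or finite-difference) are of little use."""
    return rastrigin(x) + noise_level * rng.normal()
```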
In this context, the use of randomness combined with proper adaptive mechanisms has proven to be a key component in the design of robust global numerical optimization algorithms.
The field of adaptive stochastic optimization algorithms has witnessed important progress over the past 15 years. On the one hand, subdomains like medium-scale unconstrained optimization may be considered as "solved" (in particular, the CMA-ES algorithm, an instance of the class of Evolution Strategy (ES) algorithms, stands out as the state-of-the-art method), and considerably better standards have been established in the way benchmarking and experimentation are performed. On the other hand, multi-objective population-based stochastic algorithms have become the method of choice for addressing multi-objective problems, when a set of the best possible compromises is sought. In all cases, the resulting algorithms have been naturally transferred to industry (the CMA-ES algorithm is now regularly used in companies such as Bosch, Total, ALSTOM, ...) or to other academic domains where difficult problems need to be solved, such as physics, biology [28], geoscience [23], or robotics [25].
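To give a flavor of this algorithm family, a stripped-down (μ/μ, λ)-ES can be sketched as follows. This is a didactic simplification under our own naming, with a crude geometric step-size decay; CMA-ES additionally adapts a full covariance matrix and controls the step-size via cumulated evolution paths:

```python
import numpy as np

def simple_es(f, x0, sigma0=1.0, lam=10, iterations=200, seed=0):
    """Didactic (mu/mu, lambda) Evolution Strategy: isotropic Gaussian
    sampling, truncation selection, and recombination of the mu best."""
    rng = np.random.default_rng(seed)
    mean, sigma = np.array(x0, dtype=float), sigma0
    mu = lam // 2
    for _ in range(iterations):
        # sample lambda candidate solutions around the current mean
        candidates = mean + sigma * rng.normal(size=(lam, mean.size))
        fitness = np.array([f(c) for c in candidates])
        order = np.argsort(fitness)
        # recombine the mu best candidates (truncation selection)
        mean = candidates[order[:mu]].mean(axis=0)
        # crude step-size decay (placeholder for cumulative adaptation)
        sigma *= 0.98
    return mean

best = simple_es(lambda x: float(np.sum(x**2)), x0=[3.0, -2.0])
```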
Very recently, ES algorithms have attracted quite some attention in Machine Learning with the OpenAI article Evolution Strategies as a Scalable Alternative to Reinforcement Learning, which shows that the training time for difficult reinforcement learning benchmarks can be reduced from 1 day (with standard RL approaches) to 1 hour using ES [27]. A few years ago, another impressive application of CMA-ES (published at the SIGGRAPH Asia 2013 conference) was presented in the UK press under the headline "Computer Sim Teaches Itself To Walk Upright".
Several of those important advances around adaptive stochastic optimization algorithms rely to a great extent on work initiated or achieved by the founding members of RandOpt, particularly related to the CMA-ES algorithm and to the Comparing Continuous Optimizers (COCO) platform (see Section on Software and Platform).
Yet, the field of adaptive stochastic algorithms for black-box optimization is relatively young compared to the "classical optimization" field that includes convex and gradient-based optimization. For instance, the state-of-the-art algorithms for unconstrained gradient-based optimization, like quasi-Newton methods (e.g. the BFGS method), date from the 1970s [20], while their stochastic derivative-free counterpart, CMA-ES, dates from the early 2000s [21]. Consequently, in some subdomains with important practical demands, not even the most fundamental and basic questions are answered:

This is the case of constrained optimization, where one needs to find a solution $x^{*}\in \mathbb{R}^{n}$ solving $\min_{x\in \mathbb{R}^{n}} f(x)$ while respecting $m$ constraints, typically formulated as $g_{i}(x^{*})\le 0$ for $i=1,\ldots,m$. Only recently has the fundamental requirement of linear convergence, as in the unconstrained case, been clearly stated [13].
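In the absence of dedicated constraint-handling mechanisms, a common baseline is to fold the constraints into the objective with a penalty term and hand the result to an unconstrained solver. The sketch below uses our own helper name and a fixed penalty weight for illustration; in practice, adaptive weights are typically needed, and the linear-convergence analysis cited above is beyond this reformulation:

```python
import numpy as np

def penalized(f, constraints, weight=1e3):
    """Quadratic penalty reformulation: infeasible points pay
    weight * sum(max(0, g_i(x))^2); feasible points are unchanged."""
    def fp(x):
        violation = sum(max(0.0, g(x)) ** 2 for g in constraints)
        return f(x) + weight * violation
    return fp

# example: minimize x^2 + y^2 subject to x + y >= 1, i.e. g(x) = 1 - x - y <= 0
f = lambda x: float(np.sum(np.asarray(x) ** 2))
g = lambda x: 1.0 - x[0] - x[1]
fp = penalized(f, [g])
```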

In multi-objective optimization, most of the research so far has focused on how to select candidate solutions from one iteration to the next. The difficult question of how to effectively generate new solutions is not yet properly answered, and we know today that simply applying operators from single-objective optimization may not be effective with the current best selection strategies. As a comparison, in the single-objective case, the question of selecting candidate solutions was already solved in the 1980s, and 15 more years were needed to solve the trickier question of an effective adaptive strategy to generate new solutions.
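The selection step mentioned above typically rests on Pareto dominance; a minimal sketch (our own helper names, for minimization) of extracting the nondominated candidates from a population reads:

```python
import numpy as np

def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    a, b = np.asarray(a), np.asarray(b)
    return bool(np.all(a <= b) and np.any(a < b))

def nondominated(points):
    """Indices of the nondominated (Pareto-optimal) points: the basic
    selection criterion used by multi-objective algorithms."""
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]

front = nondominated([(1, 5), (2, 2), (3, 3), (5, 1)])  # (3, 3) is dominated by (2, 2)
```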

With the current demand to solve larger and larger optimization problems (e.g. in the domain of deep learning), optimization algorithms that scale linearly with the problem dimension are now needed. Only recently have the first proposals been made to reduce the quadratic internal scaling of CMA-ES, without a clear view of what can be achieved in the best case in practice. These latter variants apply to medium-scale optimization with thousands of variables. The question of designing randomized algorithms capable of efficiently handling problems with one or two orders of magnitude more variables is still largely open.
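The simplest strategy with linear internal cost is the classical (1+1)-ES with the 1/5th success rule: it stores only the current point and a scalar step-size, so each iteration costs O(n) time and memory, in contrast to the O(n²) covariance update of standard CMA-ES. A sketch follows; the update constants are one conventional choice, not a prescription:

```python
import numpy as np

def one_plus_one_es(f, x0, sigma0=1.0, budget=2000, seed=0):
    """(1+1)-ES with the 1/5th success rule: accept an offspring if it
    is at least as good, and adapt sigma so that the success rate
    equilibrates around 1/5 (increase factor exp(1/3) per success,
    decrease factor exp(-1/12) per failure)."""
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    fx, sigma = f(x), sigma0
    for _ in range(budget):
        y = x + sigma * rng.normal(size=x.size)  # O(n) sampling, O(n) memory
        fy = f(y)
        if fy <= fx:                      # success: accept and enlarge step
            x, fx = y, fy
            sigma *= np.exp(1.0 / 3.0)
        else:                             # failure: shrink step
            sigma *= np.exp(-1.0 / 12.0)
    return x, fx

x, fx = one_plus_one_es(lambda v: float(np.sum(v**2)), [5.0] * 5)
```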

For expensive optimization, the standard methods are so-called Bayesian optimization algorithms based on Gaussian processes. In this domain, no standard implementations exist, and performance across implementations can vary significantly, in particular because different algorithm components are used. For instance, there is no common agreement on which initial design to use, which bandwidth to take for the Gaussian kernel, and which strategy to follow to optimize the expected improvement.
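At the core of these algorithms is an acquisition criterion such as the expected improvement, which has a closed form under the Gaussian posterior. The sketch below (scalar version, our own naming, for minimization) computes it from the GP posterior mean and standard deviation at a candidate point; the surrounding design choices (initial design, kernel bandwidth, how this criterion is itself optimized) are precisely where implementations diverge:

```python
from math import erf, exp, pi, sqrt

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement for minimization:
    EI = (f_best - mu) * Phi(z) + sigma * phi(z) with z = (f_best - mu) / sigma,
    where Phi and phi are the standard normal CDF and PDF."""
    if sigma <= 0.0:
        return 0.0  # no posterior uncertainty: no expected improvement
    z = (f_best - mu) / sigma
    cdf = 0.5 * (1.0 + erf(z / sqrt(2.0)))
    pdf = exp(-0.5 * z * z) / sqrt(2.0 * pi)
    return (f_best - mu) * cdf + sigma * pdf
```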