## Section: New Results

### Homogenization and related topics

Participants : Sébastien Brisard, Ludovic Chamoin, Virginie Ehrlacher, Claude Le Bris, Frédéric Legoll, Simon Lemaire, François Madiot, William Minvielle.

The homogenization of (deterministic) nonperiodic systems is a well-known topic. Although well explored theoretically by many authors, it has been less investigated from the standpoint of numerical approaches (except in the random setting). In collaboration with X. Blanc and P.-L. Lions, C. Le Bris has introduced a possible theory, giving rise to a numerical approach, for the simulation of multiscale nonperiodic systems. The theoretical considerations are based on earlier works by the same authors (the derivation of an algebra of functions appropriate to formalize a theory of homogenization). The numerical endeavour is completely new. The theoretical results obtained to date are being collected in a series of manuscripts that will be available shortly.

The team has pursued its efforts in the field of stochastic homogenization of elliptic equations, aiming at designing numerical approaches that are both practically relevant and keep the computational workload limited.

Standard homogenization theory shows that the homogenized tensor, which is a deterministic matrix, depends on the solution of a stochastic equation, the so-called corrector problem, which is posed on the *whole* space ${\mathbb{R}}^{d}$. This equation is therefore delicate and expensive to solve. In practice, the space ${\mathbb{R}}^{d}$ is truncated to some bounded domain, on which the corrector problem is numerically solved. In turn, this yields a converging approximation of the homogenized tensor, which happens to be a *random* matrix.
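In dimension one, this entire procedure can be made explicit, which provides a concrete (if much simplified) illustration: the corrector problem has a closed-form solution, the homogenized coefficient is the harmonic mean of the coefficient field, and the apparent coefficient computed on a truncated supercell is simply the harmonic mean over that supercell. The sketch below (our own illustrative code, not taken from the works cited here) shows that this apparent coefficient is indeed random, with fluctuations that shrink as the supercell grows.

```python
import numpy as np

def apparent_coefficient(a_cells):
    """Apparent homogenized coefficient on a finite 1D supercell.

    In 1D the (periodized) corrector problem is solvable in closed
    form, and the apparent coefficient is simply the harmonic mean
    of the coefficient over the supercell."""
    return 1.0 / np.mean(1.0 / a_cells)

rng = np.random.default_rng(0)

# Two-phase random material: a = 1 or a = 10, each with probability 1/2.
# The exact homogenized coefficient is the harmonic mean, 20/11.
a_star_exact = 1.0 / (0.5 / 1.0 + 0.5 / 10.0)

for n_cells in (10, 100, 10000):
    samples = [apparent_coefficient(rng.choice([1.0, 10.0], size=n_cells))
               for _ in range(200)]
    print(n_cells, np.mean(samples), np.std(samples))
```

The standard deviation printed in the last column decays as the supercell grows, illustrating both the randomness of the apparent tensor and its convergence to the deterministic homogenized value.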

In [28], F. Legoll and W. Minvielle have proposed a variance reduction procedure, based on the control variate technique, to obtain estimates of the apparent homogenized tensor with a smaller statistical error (at a given computational cost) than standard Monte Carlo approaches. The control variate technique is based on using a surrogate model, somewhat in the spirit of a preconditioner. In [28], the surrogate model that is used is inspired by a weakly stochastic approach previously introduced by A. Anantharaman and C. Le Bris to describe periodic models perturbed by rare defects.
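The mechanics of the control variate technique can be illustrated on the same 1D toy model. The surrogate below is not the weakly stochastic model of [28]; it is a deliberately simple stand-in (the spatial average of $1/a$, whose expectation is known exactly), and all function names are ours.

```python
import numpy as np

rng = np.random.default_rng(1)
p, a_lo, a_hi, n_cells = 0.5, 1.0, 10.0, 50

def sample_pair():
    """One supercell draw: quantity of interest and cheap surrogate."""
    a = rng.choice([a_lo, a_hi], size=n_cells, p=[p, 1 - p])
    x = 1.0 / np.mean(1.0 / a)   # apparent homogenized coefficient (1D)
    y = np.mean(1.0 / a)         # surrogate, with exactly known expectation
    return x, y

ey = p / a_lo + (1 - p) / a_hi   # exact E[Y]

# Pilot run: estimate the optimal control-variate weight cov(X,Y)/var(Y)
pilot = np.array([sample_pair() for _ in range(500)])
c = np.cov(pilot[:, 0], pilot[:, 1])[0, 1] / np.var(pilot[:, 1])

# Production run: compare the plain and the controlled estimators,
# which share the same expectation but not the same variance
draws = np.array([sample_pair() for _ in range(2000)])
plain = draws[:, 0]
controlled = plain - c * (draws[:, 1] - ey)
print("plain      :", plain.mean(), "+/-", plain.std())
print("controlled :", controlled.mean(), "+/-", controlled.std())
```

Note that in this 1D toy model the quantity of interest is an exact (nonlinear) function of the surrogate, which makes the variance reduction unrealistically favourable; in actual stochastic homogenization the surrogate is only correlated with the quantity of interest.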

In addition, C. Le Bris, F. Legoll and W. Minvielle have investigated the possibility of using another variance reduction technique, based on computing the corrector equation only for selected environments. These environments are chosen on the basis that their statistics within the finite supercell match the statistics of the material in the infinite medium. This method yields an estimator with a smaller variance than standard estimators. Preliminary numerical results are encouraging.
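A caricature of this selection idea, again in the 1D setting (our own sketch; the actual method involves richer statistics of the environment): draw supercells at random, and keep only those whose empirical volume fraction matches the nominal composition.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells, n_draws = 20, 5000

def apparent(a):
    return 1.0 / np.mean(1.0 / a)   # 1D apparent homogenized coefficient

plain, selected = [], []
for _ in range(n_draws):
    a = rng.choice([1.0, 10.0], size=n_cells)
    plain.append(apparent(a))
    # Keep only the environments whose empirical volume fraction
    # matches the nominal 50/50 composition exactly.
    if np.sum(a == 1.0) == n_cells // 2:
        selected.append(apparent(a))

print("kept", len(selected), "of", n_draws, "draws")
print("std (all draws)     :", np.std(plain))
print("std (selected draws):", np.std(selected))
```

In 1D the apparent coefficient depends only on the phase proportions, so this conditioning removes essentially all of the variance; in higher dimensions, where the geometry of the environment also matters, it removes only the part of the variance carried by the selected statistics.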

As pointed out above, the corrector problem is in practice solved on a large bounded domain, often complemented with periodic boundary conditions. Solving that problem can still be challenging, in particular because producing a conforming mesh of realistic heterogeneous microstructures can be a daunting task. In such situations, numerical methods formulated on Cartesian grids may be more attractive. These methods can still be Finite Element Methods, or methods in the spirit of the one proposed by Moulinec and Suquet in the mid-nineties. In their approach, the corrector problem (a partial differential equation) is reformulated as an equivalent integral equation, which can readily be discretized using a Galerkin approach. This leads to numerical schemes that can be implemented in a matrix-free fashion. In [18], S. Brisard and F. Legoll have reviewed the different variants of these ideas that have been proposed in the literature, and have carried out a mathematical analysis of the corresponding numerical schemes. This work extends in various directions previous works by S. Brisard.
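A minimal sketch of such a scheme, in the spirit of the basic fixed-point iteration of Moulinec and Suquet, for a scalar conductivity problem on a periodic 2D pixel grid (the original method was formulated for elasticity; the function names and discretization choices below are ours):

```python
import numpy as np

def homogenized_xx(a, n_iter=200):
    """FFT-based fixed-point scheme, in the spirit of Moulinec-Suquet,
    for the periodic corrector problem div(a (E + grad u)) = 0 with a
    scalar conductivity a(x, y) given on a 2D pixel grid.

    Returns the xx-component of the apparent homogenized tensor for a
    unit macroscopic gradient E = (1, 0)."""
    n = a.shape[0]
    a0 = 0.5 * (a.min() + a.max())             # reference medium
    xi = np.fft.fftfreq(n)
    xi1, xi2 = np.meshgrid(xi, xi, indexing="ij")
    xi_sq = xi1**2 + xi2**2
    xi_sq[0, 0] = 1.0                          # avoid 0/0 at zero frequency
    e1, e2 = np.ones_like(a), np.zeros_like(a) # gradient field, init = E
    for _ in range(n_iter):
        # Polarization tau = (a - a0) e, then update e <- E - Gamma_0 tau
        t1 = np.fft.fft2((a - a0) * e1)
        t2 = np.fft.fft2((a - a0) * e2)
        s = (xi1 * t1 + xi2 * t2) / (a0 * xi_sq)   # Green operator, Fourier side
        g1, g2 = xi1 * s, xi2 * s
        g1[0, 0] = g2[0, 0] = 0.0              # preserve the mean gradient E
        e1 = 1.0 - np.real(np.fft.ifft2(g1))
        e2 = -np.real(np.fft.ifft2(g2))
    return np.mean(a * e1)                     # <a e>_x

# Two-phase checkerboard-like microstructure with contrast 10
n = 32
a = np.ones((n, n))
a[: n // 2, : n // 2] = 10.0
a[n // 2 :, n // 2 :] = 10.0
ahom = homogenized_xx(a)
print(ahom)
```

The result stays between the harmonic (Reuss) and arithmetic (Voigt) bounds, and the scheme is matrix-free: each iteration only requires FFTs and pointwise products on the grid.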

In somewhat the same vein, Eric Cancès, Virginie Ehrlacher and Frédéric Legoll (in collaboration with Benjamin Stamm, University Paris 6) have worked on alternative methods to approximate the homogenized coefficients of a random stationary material. These methods are alternatives to those proposed e.g. by Bourgeat and Piatnitski, which consist in solving a corrector problem on a bounded domain. The method introduced is based on a new corrector problem, posed on the entire space. In some cases (including the case of randomly located spherical inclusions), it can be recast as an integral equation posed on the surface of the inclusions, which can then be efficiently solved via domain decomposition and spherical harmonics.

We have discussed above approaches to efficiently compute the homogenized coefficient, assuming complete knowledge of the microstructure of the material. We have also considered a related inverse problem, more precisely a parameter fitting problem: knowing the homogenized quantities, is it possible to recover some features of the microstructure? Obviously, since homogenization is an averaging procedure, not everything can be recovered from macroscopic quantities. A realistic situation is the case when a functional form of the distribution of the microscopic properties is assumed, but with some unknown parameters to determine. In collaboration with A. Obliger and M. Simon, F. Legoll and W. Minvielle have addressed that problem in [29], showing how to determine the unknown parameters of the microscopic distribution on the basis of macroscopic (e.g. homogenized) quantities.
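In its simplest form, such a parameter fitting problem amounts to inverting a monotone forward map from the unknown parameter to the homogenized quantity. A hypothetical 1D sketch of this idea (where the forward map is explicit; in general each evaluation would require solving a corrector problem, and the fit would be a genuine optimization):

```python
import numpy as np

a1, a2 = 1.0, 10.0          # known properties of the two phases
p_true = 0.3                # unknown volume fraction, to be recovered

def forward(p):
    """Forward map: microstructure parameter -> homogenized coefficient.
    In 1D this is explicit (the harmonic mean); in general it would
    require solving a corrector problem for each trial value of p."""
    return 1.0 / (p / a1 + (1 - p) / a2)

def fit(target, lo=0.0, hi=1.0, tol=1e-10):
    """Recover p from a measured homogenized value by bisection,
    using only the monotonicity (here: decreasing) of the forward map."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if forward(mid) > target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a_star_measured = forward(p_true)   # synthetic "macroscopic measurement"
print(fit(a_star_measured))
```

As the text points out, only the features of the microscopic distribution that actually influence the homogenized quantities can be recovered in this way.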

From a numerical perspective, the Multiscale Finite Element Method (MsFEM) is a classical strategy to address the situation when the homogenized problem is not known (e.g. in difficult nonlinear cases), or when the scale of the heterogeneities, although small, is not considered to be zero (and hence the homogenized problem cannot be considered as an accurate enough approximation).

The MsFEM was introduced more than ten years ago. However, even in simple deterministic cases, there is still room for improvement in many different directions. In collaboration with A. Lozinski (University of Besançon), F. Legoll and C. Le Bris have introduced and studied a variant of the MsFEM based on Crouzeix-Raviart-type elements. The continuity across edges (or facets) of the (multiscale) finite element basis functions is enforced only weakly, using fluxes rather than point values. That approach has been analyzed and tested on an elliptic problem posed on a domain with a huge number of perforations. The variant developed outperforms all existing variants of the MsFEM.

A follow-up to this work, in collaboration with U. Hetmaniuk (University of Washington, Seattle) and A. Lozinski (University of Besançon), consists in the study of multiscale advection-diffusion problems. Such problems are possibly advection-dominated, so that a stabilization procedure is required. How stabilization interplays with the multiscale character of the equation is an unsolved mathematical question worth considering for numerical purposes. In that spirit, C. Le Bris, F. Legoll and F. Madiot have studied several variants of the MsFEM specifically designed to address multiscale advection-diffusion problems in the convection-dominated regime. Generally speaking, the idea of the MsFEM is to perform a Galerkin approximation of the problem using specific basis functions that are precomputed (in an offline stage) and adapted to the problem considered. Several possibilities for the basis functions have been examined (for instance, they may or may not encode the convection field). The various approaches have been compared in terms of accuracy and computational cost.
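In one space dimension, the generic (diffusion-only) MsFEM construction can be sketched end to end: the multiscale basis functions solving the local problems are known in closed form, and the coarse Galerkin system is assembled from them. The sketch below is our own illustration of the principle, not of the Crouzeix-Raviart or advection variants discussed above (in 1D, continuity across element interfaces reduces to point values, so those distinctions disappear); a pleasant 1D peculiarity is that the MsFEM solution is then exact at the coarse nodes, up to round-off.

```python
import numpy as np

rng = np.random.default_rng(3)
n_coarse, m = 8, 64                   # coarse elements, fine cells per element
n_fine = n_coarse * m
h = 1.0 / n_fine
a = rng.uniform(0.1, 10.0, size=n_fine)   # highly oscillatory coefficient

# Fine-scale reference: standard P1 finite elements on the fine mesh,
# for -(a u')' = 1 on (0,1) with u(0) = u(1) = 0
c = a / h                             # per-cell conductances
K = np.zeros((n_fine + 1, n_fine + 1))
for j in range(n_fine):
    K[j, j] += c[j]; K[j + 1, j + 1] += c[j]
    K[j, j + 1] -= c[j]; K[j + 1, j] -= c[j]
F = np.full(n_fine + 1, h)            # load vector for f = 1
u_fine = np.zeros(n_fine + 1)
u_fine[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

# Multiscale basis: on each coarse element [x_l, x_r], the a-harmonic
# function rising from 0 to 1 is (int_{x_l}^x 1/a) / (int_{x_l}^{x_r} 1/a)
basis = np.zeros((n_coarse - 1, n_fine + 1))
for i in range(n_coarse):
    lo, hi = i * m, (i + 1) * m
    s = np.concatenate([[0.0], np.cumsum(h / a[lo:hi])])
    phi = s / s[-1]
    if i < n_coarse - 1:
        basis[i, lo:hi + 1] += phi             # left shoulder of node i+1
    if i > 0:
        basis[i - 1, lo + 1:hi + 1] += 1.0 - phi[1:]   # right shoulder of node i

# Coarse Galerkin system: small, but encoding the fine-scale information
KH = basis @ K @ basis.T
uH = np.linalg.solve(KH, basis @ F)
u_ms = basis.T @ uH

nodal_error = np.abs(u_ms[::m] - u_fine[::m]).max()
print("max error at coarse nodes:", nodal_error)
```

The expensive part (building the basis) involves only local, independent problems, which is what makes the approach attractive in an offline/online setting.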

Most numerical analysis studies of the MsFEM focus on obtaining *a priori* error bounds. In collaboration with L. Chamoin, who is currently on secondment in our team (from ENS Cachan, since September 2014), we have started to work on *a posteriori* error analysis for MsFEM approaches, with the aim of developing error estimation and adaptation tools. We have extended to the MsFEM case an approach that is classical in the computational mechanics community for single-scale problems, based on the so-called Constitutive Relation Error (CRE). Once a numerical solution ${u}_{h}$ has been obtained, the approach requires additional computations in order to determine a divergence-free field as close as possible to the exact flux $k\nabla u$. In the context of the MsFEM, it is important to be able to perform all the expensive computations in an offline stage, independently of the right-hand side. The standard CRE approach thus needs to be adapted to that context, in order to retain the feature that makes it suited to a multiscale, multi-query setting. The preliminary approach that we have introduced already yields promising results.
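The CRE principle itself can be illustrated in 1D (a sketch under simplifying assumptions, not the MsFEM-adapted procedure): for any approximation $u_h$ and any equilibrated flux $q$ (i.e. satisfying $q' = -f$), the Prager-Synge identity guarantees that the CRE quantity bounds the energy norm of the error from above. In 1D the equilibrated flux is explicit up to one constant, which can be optimized in closed form; all variable names below are ours.

```python
import numpy as np

n = 1000
x = np.linspace(0.0, 1.0, n + 1)
xm = 0.5 * (x[:-1] + x[1:])                   # fine-cell midpoints
h = 1.0 / n
a = 2.0 + np.sin(2 * np.pi * xm / 0.1) ** 2   # oscillatory coefficient
f = 1.0                                        # constant load

# Reference solution of -(a u')' = f, u(0) = u(1) = 0 (fine P1 FEM)
c = a / h
K = np.diag(np.r_[c, 0] + np.r_[0, c]) - np.diag(c, 1) - np.diag(c, -1)
u = np.zeros(n + 1)
u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], np.full(n - 1, f * h))

# A deliberately crude approximation u_h: interpolation on a coarse mesh
idx = np.arange(0, n + 1, n // 10)
u_h = np.interp(x, x[idx], u[idx])
du_h = np.diff(u_h) / h                        # its gradient, per fine cell

# Equilibrated flux: q' = -f forces q(x) = q0 - f x; the free constant
# q0 is chosen to minimize the CRE functional (closed form in 1D)
Fx = f * xm
q0 = np.sum((Fx + a * du_h) / a) / np.sum(1.0 / a)
q = q0 - Fx

# CRE = || q - a u_h' ||_{1/a}; midpoint quadrature plus its exact
# correction (the integrand is piecewise quadratic in x)
cre = np.sqrt(np.sum((q - a * du_h) ** 2 / a) * h
              + np.sum(h ** 3 * f ** 2 / (12.0 * a)))
err = np.sqrt(np.sum(a * (np.diff(u) / h - du_h) ** 2) * h)
print("true energy error:", err, "  CRE bound:", cre)
```

The computed CRE is a guaranteed upper bound on the energy error; the challenge discussed above is to obtain such an equilibrated flux in the multiscale setting without redoing expensive computations for each new right-hand side.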

Still another question investigated in the group is to find an alternative to standard homogenization techniques when the latter are difficult to use in practice. This is the aim of the post-doc of Simon Lemaire, which began in June 2014 and builds on previous works of the group on this subject. Consider a linear elliptic equation, say in divergence form, with a highly oscillatory matrix coefficient, and assume that this problem is to be solved for a large number of right-hand sides. If the coefficient oscillations are infinitely rapid, the solution can be accurately approximated by the solution to the homogenized problem, where the homogenized coefficient has been evaluated beforehand by solving the corrector problem. If the oscillations are moderately rapid, one can think instead of MsFEM-type approaches to approximate the solution to the reference problem. However, in both cases, complete knowledge of the oscillatory matrix coefficient is required, either to build the averaged model or to compute the multiscale basis. In many practical cases, this coefficient is only partially known, or even completely unavailable, and one only has access to the solution of the equation for some loadings. This observation has led us to consider alternative methods, in the following spirit: is it possible to approximate the reference solution by the solution to a problem with a *constant* matrix coefficient? And how can this 'best' constant matrix approximating the oscillatory problem be constructed in an efficient manner?

A preliminary step, following discussion and interaction with A. Cohen, has been to cast the problem as a convex optimization problem. We have then shown that the 'best' constant matrix defined as the solution of that problem converges to the homogenized matrix in the limit of infinitely rapidly oscillatory coefficients. Furthermore, the optimization problem being convex, it can be efficiently solved using standard algorithms. C. Le Bris, F. Legoll and S. Lemaire are currently working on making the resolution of the optimization problem as efficient as possible.
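A brute-force 1D caricature of this idea (our own sketch: a simple scan over scalar candidates for a single loading, whereas the actual work formulates a genuine convex optimization problem over constant matrices): for a rapidly oscillating coefficient, the best constant coefficient is indeed close to the homogenized one.

```python
import numpy as np

n, eps = 400, 0.02
x = np.linspace(0.0, 1.0, n + 1)
xm = 0.5 * (x[:-1] + x[1:])
h = 1.0 / n
a_osc = 2.0 + np.cos(2 * np.pi * xm / eps)    # rapidly oscillating coefficient

def solve(coef):
    """P1 FEM solution of -(coef u')' = 1 on (0,1), u(0) = u(1) = 0."""
    c = (coef if np.ndim(coef) else np.full(n, coef)) / h
    K = np.diag(np.r_[c, 0] + np.r_[0, c]) - np.diag(c, 1) - np.diag(c, -1)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], np.full(n - 1, h))
    return u

u_ref = solve(a_osc)

# Brute-force scan: which constant coefficient best reproduces u_ref?
candidates = np.linspace(1.0, 3.0, 81)
a_best = min(candidates, key=lambda ab: np.linalg.norm(solve(ab) - u_ref))

a_star = 1.0 / np.mean(1.0 / a_osc)           # 1D homogenized coefficient
print("best constant:", a_best, "  homogenized:", a_star)
```

In higher dimensions the analogous fit is over constant matrices and over a family of loadings, and the convexity of the formulation mentioned above is what makes it tractable.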

To conclude this section, we mention a project involving V. Ehrlacher, C. Le Bris and F. Legoll, in collaboration with G. Leugering and M. Stingl (Cluster of Excellence, Erlangen-Nuremberg University). This project aims at optimizing the shape of some materials (modelled as structurally graded linear elastic materials) in order to achieve the best mechanical response at the minimal cost. As is often the case in shape optimization, the solution tends to be highly oscillatory, hence the need for homogenization techniques. The materials under consideration are thought of as microstructured materials composed of steel and void, whose microstructure patterns are constructed as the macroscopic deformation of a reference periodic microstructure. The optimal material (i.e. the best macroscopic deformation) is the one achieving the best mechanical response. For a given deformation, we have first chosen to compute the mechanical response using a homogenized model. We are currently aiming at computing the mechanical response at the microscale, using the highly oscillatory model. Model reduction techniques (such as the MsFEM, Reduced Basis methods, ...) are then required to expedite the resolution of the oscillatory problem, which has to be solved at each iteration of the optimization loop. Current efforts are targeted towards choosing an appropriate model reduction strategy.