The research activity of our team is dedicated to the design, analysis and implementation of efficient numerical methods for solving inverse and shape/topological optimization problems, possibly including system uncertainties, in connection with acoustics, electromagnetism, elastodynamics, diffusion, and fluid mechanics.

Targeted practical applications include radar and sonar, bio-medical imaging techniques, non-destructive testing, structural design, composite materials, diffusion magnetic resonance imaging, and fluid-driven applications in the aerospace and energy fields.

Roughly speaking, the model problem consists in determining information on, or optimizing, the geometry (topology) and the physical properties of unknown targets from given constraints or measurements, for instance measurements of diffracted waves or induced magnetic fields. Moreover, system uncertainties can be systematically taken into account to provide a measure of confidence in the numerical predictions.

In general these problems are non-linear. The inverse problems are moreover severely ill-posed and therefore require special attention from the regularization point of view, as well as non-trivial adaptations of classical optimization methods.
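The effect of ill-posedness, and the role of regularization, can be illustrated on a schematic example (a generic Tikhonov sketch with an illustrative smoothing operator, not one of the team's solvers):

```python
import numpy as np

# Hedged sketch: Tikhonov regularization of an ill-posed linear problem A x = y.
# The operator, noise level and regularization parameter are illustrative only.
rng = np.random.default_rng(0)
n = 50
t = np.linspace(0.0, 1.0, n)
# A smoothing (convolution-like) operator: severely ill-conditioned.
A = np.exp(-100.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(2.0 * np.pi * t)
y = A @ x_true + 1e-4 * rng.standard_normal(n)  # noisy measurements

# Naive least-squares inversion amplifies the noise on small singular values.
x_naive = np.linalg.lstsq(A, y, rcond=None)[0]
# Tikhonov: minimize ||A x - y||^2 + alpha ||x||^2 (alpha chosen by hand here;
# in practice it is selected, e.g., by the Morozov discrepancy principle).
alpha = 1e-6
x_tik = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

err_naive = np.linalg.norm(x_naive - x_true) / np.linalg.norm(x_true)
err_tik = np.linalg.norm(x_tik - x_true) / np.linalg.norm(x_true)
```

Even a tiny (0.01%) noise level destroys the naive reconstruction, while the regularized one remains accurate; choosing the regularization parameter consistently with the noise level is precisely where the analysis of the inverse problem enters.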

Our scientific research interests are the following:

Theoretical understanding and analysis of the forward and inverse mathematical models, including in particular the development of simplified models for adequate asymptotic configurations.

The design of efficient numerical optimization/inversion methods that are fast and robust with respect to noise. Special attention is paid to algorithms capable of treating large-scale problems (e.g., 3-D problems) and/or suited to real-time imaging.

The proposal of new methods and the development of advanced tools to perform uncertainty quantification for optimization/inversion.

The development of prototype software for specific applications and of tutorial toolboxes.

We are particularly interested in the development of the following themes:

Qualitative and quantitative methods for inverse scattering problems

Topological optimization methods

Forward and inverse models for Diffusion MRI

Forward/Backward uncertainty quantification methods for optimization/inversion problems in the context of expensive computer codes.

The research activity of our team is dedicated to the design, analysis and implementation of efficient numerical methods for solving inverse and shape/topological optimization problems, possibly including system uncertainties, in connection with wave imaging, structural design, non-destructive testing and medical imaging modalities. We are particularly interested in the development of fast methods suited to real-time applications and/or large-scale problems. These goals require working on both the physical and the mathematical models involved, as well as a solid expertise in the related numerical algorithms. Part of the research activity is also devoted to accounting for system uncertainties when solving inverse/optimization problems. At the interface of physics, mathematics, and computer science, Uncertainty Quantification (UQ) focuses on the development of frameworks and methods to characterize uncertainties in predictive computations. Uncertainties and errors arise at different stages of the numerical simulation. First, errors are introduced by the physical simplifications made in the mathematical modeling of the system under investigation; other errors come from the numerical resolution of the mathematical model, due in particular to finite discretization and computations with finite accuracy and tolerance; finally, errors are due to a limited knowledge of the input quantities (parameters) appearing in the definition of the numerical model being solved.

This section gives a general overview of our research interests and themes. We choose to present them through the specific academic example of inverse scattering problems (from inhomogeneities), which is representative of foreseen developments on both inversion and (topological) optimization methods. The practical problem is to identify an inclusion from measurements of the diffracted waves that result from the interaction of the sought inclusion with (incident) waves sent into the probed medium. Typical applications include biomedical imaging, where micro-waves are used to probe for the presence of pathological cells, and the imaging of urban infrastructure, where ground penetrating radar (GPR) is used to locate buried facilities such as pipelines or waste deposits. These applications require in particular fast and reliable algorithms.

By “imaging” we refer to the inverse problem where the concern is only the location and the shape of the inclusion, while “identification” may also indicate retrieving information on the inclusion's physical parameters.

Both problems (imaging and identification) are non-linear and ill-posed (they lack stability with respect to measurement errors unless careful constraints are added). Moreover, the unique determination of the geometry or of the coefficients is in general not guaranteed if sufficient measurements are not available. For example, in the case of anisotropic inclusions, one can show that an appropriate set of data uniquely determines the geometry but not the material properties.

These theoretical considerations (uniqueness, stability) are not only important in understanding the mathematical properties of the inverse problem, but also guide the choice of appropriate numerical strategies (which information can be stably reconstructed) and also the design of appropriate regularization techniques. Moreover, uniqueness proofs are in general constructive proofs, i.e. they implicitly contain a numerical algorithm to solve the inverse problem, hence their importance for practical applications. The sampling methods introduced below are one example of such algorithms.

A large part of our research activity is dedicated to numerical methods for the first type of inverse problem, where only geometrical information is sought. In its general setting the inverse problem is very challenging, and no method can provide a universally satisfactory solution (with respect to the balance between cost, precision and stability). This is why, in the majority of practically employed algorithms, some simplification of the underlying mathematical model is used, according to the specific configuration of the imaging experiment. The most popular simplifications are geometric optics (the Kirchhoff approximation) for high frequencies and weak scattering (the Born approximation) for small contrasts or small obstacles. They give full satisfaction for a wide range of applications, as attested by the large success of existing imaging devices (radar, sonar, ultrasound, X-ray tomography, etc.) that rely on one of these approximations.

In most cases, the simplification used results in a linearization of the inverse problem and is therefore usually valid only if the latter is weakly non-linear. The development of simplified models and the improvement of their efficiency is still a very active research area. With that perspective, we are particularly interested in deriving and studying higher order asymptotic models associated with small geometrical parameters such as small obstacles, thin coatings, wires, and periodic media.

A larger part of our research activity is dedicated to algorithms that avoid such approximations and that are efficient where classical approaches fail, i.e., roughly speaking, when the non-linearity of the inverse problem is sufficiently strong. This type of configuration is motivated by the applications mentioned below, and occurs as soon as the geometry of the unknown medium generates non-negligible multiple scattering effects (multiply-connected and closely spaced obstacles) or when the frequency used lies in the so-called resonant region (wavelength comparable to the size of the sought medium). It is therefore much more difficult to deal with and requires new approaches. Our ideas for tackling this problem are mainly motivated and inspired by recent advances in shape and topological optimization methods and in so-called sampling methods.

Sampling methods are fast imaging solvers adapted to multi-static data (multiple receiver-transmitter pairs) at a fixed frequency. Even though they do not use any linearization of the forward model, they rely on computing solutions to a set of linear problems of small size, which can be performed in a completely parallel fashion. Our team already has a solid expertise in these methods applied to electromagnetic 3-D problems. The success of such approaches lies in their ability to provide a relatively fast algorithm for solving 3-D problems without any need for a priori knowledge of the physical parameters of the targets. These algorithms solve only the imaging problem, in the sense that only geometrical information is provided.
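The mechanism of a linear-sampling-type indicator can be sketched on a synthetic 2-D example (an assumed toy setup: the far-field matrix is generated with point scatterers in the Born regime purely for data generation; the indicator itself uses no linearization):

```python
import numpy as np

# Hedged sketch of a linear-sampling-type indicator on synthetic 2-D data.
k = 10.0                                   # wavenumber (illustrative)
m = 32                                     # receivers = transmitters
theta = 2.0 * np.pi * np.arange(m) / m
d = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit directions
scatterers = np.array([[0.3, 0.2], [-0.4, -0.1]])

# F[r, s]: far field in direction d[r] for an incident plane wave d[s],
# here a Born-type point-scatterer model used only to fabricate the data.
F = sum(np.exp(1j * k * (d @ z)[None, :] - 1j * k * (d @ z)[:, None])
        for z in scatterers)
F += 1e-3 * (np.random.default_rng(0).standard_normal((m, m))
             + 1j * np.random.default_rng(1).standard_normal((m, m)))

def indicator(z, alpha=1e-5):
    # Solve the regularized far-field equation F g = phi_z; the norm of g
    # blows up for sampling points z away from the scatterer support.
    phi = np.exp(-1j * k * (d @ z))
    g = np.linalg.solve(F.conj().T @ F + alpha * np.eye(m), F.conj().T @ phi)
    return 1.0 / np.linalg.norm(g)

I_in = indicator(np.array([0.3, 0.2]))     # at a scatterer location
I_out = indicator(np.array([1.2, 1.2]))    # away from the scatterers
```

One small regularized linear solve per sampling point, all mutually independent, is what makes such indicators cheap and embarrassingly parallel.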

Despite the large efforts already spent on the development of this type of method, from both the algorithmic and the theoretical points of view, numerous questions are still open. These attractive new algorithms also suffer from a lack of experimental validation, due to their relatively recent introduction. We would like to invest on this side as well, by developing collaborations with engineering research groups that have experimental facilities. From the practical point of view, the most serious potential limitation of sampling methods is the need for a large amount of data to achieve reasonable accuracy. Optimization methods, on the other hand, do not suffer from this constraint, but they require a good initial guess to ensure convergence and to reduce the number of iterations. It therefore seems natural to try to combine the two classes of methods in order to calibrate the balance between cost and precision.

Among the various shape optimization methods, the Level Set method seems particularly suited to such a coupling. First, because it shares a similar mechanism with sampling methods: the geometry is captured as a level set of an “indicator function” computed on a Cartesian grid. Second, because neither method requires any a priori knowledge of the topology of the sought geometry. Beyond the choice of a particular method, the main question is how the coupling can be achieved. Obvious strategies consist in using one method to pre-process (initialization) or post-process (find the level set) the other. One can also think of more elaborate strategies, where for instance a sampling method is used to optimize the choice of the incident wave at each iteration step. The latter point is closely related to the design of so-called “focusing incident waves” (which are, for instance, the basis of applications of the time-reversal principle). In the frequency regime, these incident waves can be constructed from the eigenvalue decomposition of the data operator used by sampling methods. These aspects are still not completely understood, theoretically or numerically, for electromagnetic or elastodynamic problems.
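The level-set mechanism itself can be sketched with a generic Hamilton-Jacobi toy example (a disc expanding under a constant normal speed; this is a textbook sketch, not the coupled algorithm discussed above):

```python
import numpy as np

# Hedged sketch: level-set evolution phi_t + V |grad phi| = 0 on a Cartesian
# grid, with a constant normal speed V > 0 expanding the region {phi < 0}.
n = 101
x = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 0.6   # signed distance to a disc of radius 0.6
h = x[1] - x[0]
V, dt, steps = 1.0, 0.4 * h, 25    # CFL-limited explicit time step

area_before = np.count_nonzero(phi < 0) * h * h
for _ in range(steps):
    # Godunov upwind approximation of |grad phi| for V > 0.
    dxm = (phi - np.roll(phi, 1, axis=0)) / h
    dxp = (np.roll(phi, -1, axis=0) - phi) / h
    dym = (phi - np.roll(phi, 1, axis=1)) / h
    dyp = (np.roll(phi, -1, axis=1) - phi) / h
    grad = np.sqrt(np.maximum(dxm, 0.0) ** 2 + np.minimum(dxp, 0.0) ** 2
                   + np.maximum(dym, 0.0) ** 2 + np.minimum(dyp, 0.0) ** 2)
    phi -= dt * V * grad
area_after = np.count_nonzero(phi < 0) * h * h
# After t = 0.2 the disc radius has grown from 0.6 to roughly 0.8.
```

The shape is only ever known through the zero level set of phi on the grid, which is exactly the representation a sampling-method indicator function can initialize.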

Other topological optimization methods, such as the homogenization method or the topological gradient method, can also be used, each providing particular advantages in specific configurations. The development of these methods is well suited to inverse problems and provides a substantial advantage over classical shape optimization methods based on boundary variation. Their application to inverse problems has not yet been fully investigated. The efficiency of these optimization methods can also be increased in adequate asymptotic configurations. For instance, the small amplitude homogenization method can be used as an efficient relaxation method for the inverse problem in the presence of small contrasts. The topological gradient method, on the other hand, has been shown to perform well in localizing small inclusions with only one iteration.

A broader perspective is the extension of the above-mentioned techniques to time-dependent cases. Taking into account time-domain data is important for many practical applications, such as imaging in cluttered media, the design of absorbing coatings, or crashworthiness in the case of structural design.

For the identification problem, one would also like information on the physical properties of the targets. Optimization methods are, of course, a tool of choice for these problems. However, in some applications only qualitative information is needed, and it can be obtained more cheaply using asymptotic theories combined with sampling methods. We also refer here to the use of so-called transmission eigenvalues as qualitative indicators for the non-destructive testing of dielectrics.

We are also interested in parameter identification problems arising in diffusion-type problems. Our research here is mostly motivated by applications to the imaging of biological tissues with the technique of Diffusion Magnetic Resonance Imaging (DMRI). Roughly speaking, DMRI gives a measure of the average distance travelled by water molecules in a given medium, and can give useful information on the cellular structure and on structural changes when the medium is a biological tissue. In particular, we would like to infer from DMRI measurements the changes in the cellular volume fraction occurring upon various physiological or pathological conditions, as well as the average cell size in the case of tumor imaging. The main challenges here are: 1) correctly modeling the measured signals using diffusive-type time-dependent PDEs; 2) numerically handling the complexity of the tissues; 3) using the first two to identify physically relevant parameters from the measurements. For the last point we are particularly interested in constructing reduced models of the multiple-compartment Bloch-Torrey partial differential equation using homogenization methods.

The Team devotes a large effort to the formulation, implementation and validation of numerical methods that use scientific computing to drive experiments and exploit available data (coming from models, simulations and experiments) while taking system uncertainty into account. The Team also invests in exploiting the intimate relationship between optimization and UQ to make Optimization Under Uncertainty (OUU) tractable. Part of these activities is devoted to the simulation of high-fidelity models for fluids, in three main fields: aerospace, energy and environment.

The Team is working on original UQ representations and algorithms to deal with complex and large-scale models having high-dimensional input parameters with complex influences. We organize our core research activities along different methodological UQ developments related to the challenges discussed above. Some efforts are naturally shared by different initiatives or projects, including the continuous improvement of the non-intrusive methods constituting our software libraries. These actions are not detailed in the following, in order to focus the presentation on more innovative aspects, but we mention nonetheless the continuous development and incorporation in our libraries of advanced sparse grid methods, sparsity-promoting strategies and low-rank methods.

An effort is dedicated to the efficient construction of surrogate models, which are central to both forward and backward UQ problems, aiming at large-scale simulations relevant to engineering applications with high-dimensional input parameters.

Sensitivity analyses and other forward UQ problems (e.g., the estimation of failure probabilities or rare events) depend on the input uncertainty model. Most often, for convenience or because of a lack of data, the uncertain inputs are assumed to be independent. In the Team, we are investigating approaches dedicated to the construction of uncertainty models that integrate the available information and expert knowledge in a consistent and objective fashion. To this end, several mathematical frameworks are already available, e.g., the maximum entropy principle, likelihood maximization and moment matching methods, but their application to real engineering problems remains scarce, and their systematic use raises multiple challenges, both in constructing the uncertainty model and in solving the related UQ problems (forward and backward). Because of the importance of the available data and expertise in building the model, the contributions of the Team in these areas depend on the needs and demands of end-users and industrial partners.
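A minimal concrete instance of the maximum entropy principle (a textbook sketch with an arbitrary support and moment value, not tied to any industrial dataset): among all probability mass functions on a finite support with a prescribed mean, the entropy maximizer is the exponential family member p_i ∝ exp(λ x_i), with λ fixed by the moment constraint.

```python
import numpy as np

# Hedged sketch: maximum-entropy pmf on a finite support under a mean
# constraint. The maximizer has the form p_i ∝ exp(lam * x_i); we find lam
# by bisection, since the mean is monotone increasing in lam (its derivative
# with respect to lam is the variance, which is positive).
def maxent_mean(values, target_mean, iters=200):
    values = np.asarray(values, dtype=float)
    def pmf(lam):
        w = np.exp(lam * (values - values.mean()))  # shift for stability
        return w / w.sum()
    lo, hi = -10.0, 10.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if (pmf(mid) * values).sum() < target_mean:
            lo = mid
        else:
            hi = mid
    return pmf(0.5 * (lo + hi))

values = np.arange(10.0)           # support {0, 1, ..., 9}
p = maxent_mean(values, 6.0)       # the uniform pmf would give mean 4.5
```

With more moment constraints the same construction leads to a multi-parameter exponential family, solved by a small convex optimization instead of a scalar bisection.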

To mitigate computational complexity, the Team is exploring multi-fidelity approaches in the context of expensive simulations. We combine predictions of models with different levels of discretization and physical simplification to construct, at a controlled cost, reliable surrogate models of simulation outputs, or directly of objective functions and possibly constraints, to enable the resolution of robust optimization and stochastic inverse problems. Again, one difficulty to be addressed by the Team is the design of the computer experiments that yield the best multi-fidelity model at the lowest cost (or for a prescribed computational budget), with respect to the end use of the model. The last point is particularly challenging, as it calls for accuracy at output values that are usually unknown a priori and must be estimated as the model construction proceeds.
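The additive-correction flavour of multi-fidelity modeling can be sketched as follows (both "models" are analytic stand-ins, and the smooth quadratic discrepancy is an assumption made to keep the sketch short):

```python
import numpy as np

# Hedged sketch: additive multi-fidelity surrogate. Many cheap low-fidelity
# runs capture the trend; a few expensive high-fidelity runs fit a smooth
# discrepancy model. The two functions below are illustrative stand-ins.
def f_hi(x):                       # "expensive" model
    return np.sin(2.0 * np.pi * x) + 0.2 * x
def f_lo(x):                       # "cheap" model: right trend, smooth bias
    return np.sin(2.0 * np.pi * x) + 0.3 - 0.1 * x**2

x_lo = np.linspace(0.0, 1.0, 50)   # many cheap evaluations
x_hi = np.linspace(0.0, 1.0, 5)    # few expensive evaluations

def lo_surr(x):                    # dense surrogate of the cheap model
    return np.interp(x, x_lo, f_lo(x_lo))
# Low-order polynomial model of the discrepancy f_hi - f_lo at the hi points.
coef = np.polyfit(x_hi, f_hi(x_hi) - lo_surr(x_hi), deg=2)
def mf_surr(x):
    return lo_surr(x) + np.polyval(coef, x)
def hi_only(x):                    # single-fidelity baseline: 5 points only
    return np.interp(x, x_hi, f_hi(x_hi))

x_test = np.linspace(0.0, 1.0, 201)
err_mf = np.max(np.abs(mf_surr(x_test) - f_hi(x_test)))
err_hi = np.max(np.abs(hi_only(x_test) - f_hi(x_test)))
```

For the same high-fidelity budget (five runs), the corrected surrogate is far more accurate than the single-fidelity one; deciding where to spend the lo/hi evaluations is exactly the design-of-experiments question raised above.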

Conventional radar imaging techniques (ISAR, GPR, etc.) use backscattering data to image targets. The commonly used inversion algorithms are mainly based on weak scattering approximations, such as the Born or Kirchhoff approximation, leading to very simple linear models, but at the expense of ignoring multiple scattering and polarization effects. The success of such an approach is evident in the wide use of synthetic aperture radar techniques.

However, the use of backscattering data makes 3-D imaging a very challenging problem (it is not even well understood theoretically) and as pointed out by Brett Borden in the context of airborne radar: “In recent years it has become quite apparent that the problems associated with radar target identification efforts will not vanish with the development of more sensitive radar receivers or increased signal-to-noise levels. In addition it has (slowly) been realized that greater amounts of data - or even additional “kinds” of radar data, such as added polarization or greatly extended bandwidth - will all suffer from the same basic limitations affiliated with incorrect model assumptions. Moreover, in the face of these problems it is important to ask how (and if) the complications associated with radar based automatic target recognition can be surmounted.” This comment also applies to the more complex GPR problem.

Our research themes incorporate the development, analysis and testing of several novel methods, such as sampling methods, level set methods and topological gradient methods, for ground penetrating radar applications (imaging of urban infrastructure, landmine detection, monitoring of underground waste deposits, etc.) using multi-static data.

Among emerging medical imaging techniques, we are particularly interested in those using low to moderate frequency regimes. These include Microwave Tomography, Electrical Impedance Tomography and the closely related Optical Tomography technique. They all have the advantage of being potentially safe and relatively cheap modalities, and they can be used in complementarity with well-established techniques such as X-ray computed tomography or Magnetic Resonance Imaging.

With these modalities, tissues are differentiated and, consequently, can be imaged based on differences in dielectric properties (recent studies have shown that the dielectric properties of biological tissues can be a strong indicator of the tissue's functional and pathological condition, for instance tissue blood content, ischemia, infarction, hypoxia, malignancy, and edema). The main challenge for these modalities is to build a 3-D imaging algorithm capable of treating multi-static measurements to provide real-time images with the highest reasonably expected resolution and in a sufficiently robust way.

Another important biomedical application is brain imaging. We are interested, for instance, in the use of EEG and MEG techniques as complementary tools to MRI. They are applied, for instance, to localize epileptic centers or active zones (functional imaging). Here the problem is different and consists in performing passive imaging: the epileptic centers act as electrical sources, and imaging is performed from measurements of the induced currents. Incorporating the structure of the skull is essential to improving the resolution of the imaging procedure. Doing this in a reasonably fast manner is still an active research area, and the use of asymptotic models offers a promising way to address this issue.

One challenging problem in this vast area is the identification and imaging of defects in anisotropic media. This problem is of great importance, for instance, in aeronautic construction, due to the growing use of composite materials. It also arises in applications linked to the evaluation of wood quality, such as locating knots in timber in order to optimize timber-cutting in sawmills, or evaluating wood integrity before cutting trees. The anisotropy of the propagation medium makes the analysis of diffracted waves more complex, since one cannot rely only on backscattered waves. Another difficulty comes from the fact that the micro-structure of the medium is generally not well known a priori.

Our concern is focused on the determination of qualitative information on the size of defects and their physical properties, rather than on complete imaging, which for anisotropic media is in general impossible. For instance, in the case of a homogeneous background, one can link the size of the inclusion and its index of refraction to the first eigenvalue of the so-called interior transmission problem. These eigenvalues can be determined from the measured data and a rough localization of the defect. Our goal is to extend this kind of idea to the cases where both the propagation medium and the inclusion are anisotropic. The generalization to the case of cracks or screens also has to be investigated.

In the context of nuclear waste management, many studies are being conducted on the possibility of storing waste in a deep geological clay layer. To assess the reliability of such a storage without leakage, it is necessary to have a precise knowledge of the porous medium parameters (porosity, tortuosity, permeability, etc.). The large range of space and time scales involved in this process requires a high degree of precision as well as tight bounds on the uncertainties. Many physical experiments are conducted in situ, designed to provide data for parameter identification. For example, the determination of the damaged zone (caused by excavation) around the repository area is of paramount importance, since micro-cracks yield drastic changes in the permeability. Level set methods are a tool of choice for characterizing this damaged zone.

In biological tissues, water is abundant, and magnetic resonance imaging (MRI) exploits the magnetic properties of the nucleus of the water proton. The imaging contrast (the variation in the grayscale of an image) in standard MRI can come from proton density, T1 (spin-lattice) relaxation, or T2 (spin-spin) relaxation, and the contrast in the image gives some information on the physiological properties of the biological tissue at different physical locations of the sample. The resolution of MRI is on the order of millimeters: the grayscale value shown in an imaging pixel represents the volume-averaged value taken over all the physical locations contained in that pixel.

In diffusion MRI, the image contrast comes from a measure of the average distance the water molecules have moved (diffused) during a certain amount of time. The Pulsed Gradient Spin Echo (PGSE) sequence is a commonly used sequence of applied magnetic fields to encode the diffusion of water protons. The term 'pulsed' means that the magnetic fields are short in duration, and the term 'gradient' means that the magnetic fields vary linearly in space along a particular direction. First, the water protons in the tissue are labelled with a nuclear spin precession frequency that varies as a function of the physical positions of the water molecules, via the application of a pulsed (short in duration, lasting on the order of ten milliseconds) magnetic field. Because the precession frequencies of the water molecules vary, the signal, which measures the aggregate phase of the water molecules, is reduced by phase cancellations. Some time (usually tens of milliseconds) after the first pulsed magnetic field, another pulsed magnetic field is applied to reverse the spins of the water molecules. The time between the applications of the two pulsed magnetic fields is called the 'diffusion time'. If the water molecules have not moved during the diffusion time, the phase dispersion is reversed, hence the signal loss is also reversed; the signal is then said to be refocused. However, if the molecules have moved during the diffusion time, the refocusing is incomplete and the signal detected by the MRI scanner is weaker than if the water molecules had not moved. This lack of complete refocusing is called signal attenuation and is the basis of the image contrast in DMRI. Pixels showing more signal attenuation are associated with larger water displacements during the diffusion time, which may be linked to physiological factors such as higher cell membrane permeability, larger cell sizes, or a higher extra-cellular volume fraction.

We model the nuclear magnetization of the water protons in a sample subject to diffusion-encoding magnetic fields by a multiple-compartment Bloch-Torrey partial differential equation, which is a diffusive-type time-dependent PDE. The DMRI signal is the integral of the solution of the Bloch-Torrey PDE. In a homogeneous medium, the intrinsic diffusion coefficient D appears as the slope of the semi-log plot of the signal (in appropriate units). However, because during typical scanning times of 50-100 ms water molecules have had time to travel a diffusion distance that is long compared to the average size of the cells, the slope of the semi-log plot of the signal is in fact a measure of an 'effective' diffusion coefficient. In DMRI applications, this measured quantity is called the 'apparent diffusion coefficient' (ADC) and provides the most commonly used form of image contrast for DMRI. This ADC is closely related to the effective diffusion coefficient obtainable from mathematical homogenization theory.
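The extraction of the ADC itself reduces to a linear fit on the semi-log plot; a minimal sketch on synthetic mono-exponential data (the b-values, signal amplitude and noise level are illustrative assumptions, not acquisition parameters from any particular scanner):

```python
import numpy as np

# Hedged sketch: fitting the apparent diffusion coefficient (ADC) from the
# semi-log plot of a synthetic mono-exponential signal S(b) = S0 exp(-b * ADC).
adc_true = 1.0e-3                   # mm^2/s, a typical order of magnitude
b_values = np.array([0.0, 200.0, 400.0, 600.0, 800.0, 1000.0])  # s/mm^2
rng = np.random.default_rng(1)
signal = 100.0 * np.exp(-b_values * adc_true)
signal *= 1.0 + 0.005 * rng.standard_normal(b_values.size)      # 0.5% noise

# The ADC is minus the slope of log(S) versus b.
slope, log_s0 = np.polyfit(b_values, np.log(signal), 1)
adc_est = -slope
```

In heterogeneous tissue the mono-exponential model is only an approximation, and the fitted slope is the 'effective' coefficient mentioned above rather than the intrinsic D.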

Specific actions are devoted to the problem of atmospheric reentry simulations. We focus on several aspects: i) the development of innovative algorithms improving the prediction of hypersonic flows and including system uncertainties; ii) the application of these methods to the atmospheric reentry of space vehicles for the control and the optimization of the trajectory; iii) debris reentry, which is of fundamental importance for NASA, CNES and ESA. Several works have already been initiated with funding from CNES, Thales, and ASL. An ongoing activity concerns the design of the Thermal Protection System (TPS) that shields the spacecraft from the aerothermal heating generated by friction at the surface of the vehicle. The TPS is usually composed of different classes of materials, depending on the mission and the planned trajectory. One major issue is to model the material response accurately in order to ensure a safe design. High-fidelity material modeling for ablative materials has been developed by NASA, but a lot of work is still needed concerning the assessment of physical and modeling uncertainties during the design process. Our objective is to set up a predictive numerical tool to reliably estimate the response of ablative materials under different aerothermal conditions.

An important effort is dedicated to the simulation of fluids featuring complex thermodynamic behavior, in the context of two distinct projects: the VIPER project, funded by the Aquitaine Region, and a project with CWI (Scientific Computing Group). Dense gases (DGs) are single-phase vapors operating at temperature and pressure conditions close to the saturation curve. The interest in studying the complex dynamics of compressible dense gas flows comes from the potential technological advantages of using these fluids in energy conversion cycles, such as Organic Rankine Cycles (ORCs), which use dense gases as energy converters for biomass fuels and for low-grade heat from geothermal or industrial waste heat sources. Since these fluids feature large uncertainties in their estimated thermodynamic properties (critical properties, acentric factor, etc.), a meaningful numerical prediction of the performance must necessarily take these uncertainties into account. Other sources of uncertainty include, but are not limited to, the inlet boundary conditions, which are often unknown in dense gas applications. Moreover, a robust optimization must also include the more generic uncertainty introduced by the machining tolerance in the construction of the turbine blades.

H. Haddar and F. Pourahmadian

Differential evolution indicators are introduced for 3D spatiotemporal imaging of micromechanical processes in complex materials, where progressive variations due to manufacturing and/or aging are housed in a highly scattering background of a-priori unknown or uncertain structure. In this vein, a three-tier imaging platform is established where: (1) the domain is periodically (or continuously) subject to illumination and sensing in an arbitrary configuration; (2) sequential sets of measured data are deployed to distill segment-wise scattering signatures of the domain's internal structure through carefully constructed, non-iterative solutions to the scattering equation; and (3) the resulting solution sequence is then used to rigorously construct an imaging functional carrying appropriate invariance with respect to the unknown stationary components of the background, e.g., pre-existing interstitial boundaries and bubbles. This gives rise to differential indicators that specifically recover the 3D support of micromechanical evolution within a network of unknown scatterers. The direct scattering problem is formulated in the frequency domain, where the background is comprised of a random distribution of monolithic fragments. The constituents are connected via highly heterogeneous interfaces of unknown elasticity and dissipation, which are subject to spatiotemporal evolution. The support of the internal boundaries is sequentially illuminated by a set of incident waves and the thus-induced scattered fields are captured over a generic observation surface. The performance of the proposed imaging indicator is illustrated through a set of numerical experiments for the spatiotemporal reconstruction of progressive damage zones featuring randomly distributed cracks and bubbles.

P.-H. Tournier, I. Aliferis, M. Bonazzoli, M. De Buhan, M. Darbas, V. Dolean, F. Hecht, P. Jolivet, I. El Kanfoud, C. Migliaccio, F. Nataf, C. Pichot, S. Semenov

The motivation of this work is the detection of cerebrovascular accidents by microwave tomographic imaging. This requires the solution of an inverse problem relying on a minimization algorithm (for example, gradient-based), where successive iterations consist in repeated solutions of a direct problem. The reconstruction algorithm is extremely computationally intensive and makes use of efficient parallel algorithms and high-performance computing. The feasibility of this type of imaging is conditioned on the one hand by an accurate reconstruction of the material properties of the propagation medium and on the other hand by a considerable reduction in simulation time. Fulfilling these two requirements will enable a very rapid and accurate diagnosis. From the mathematical and numerical point of view, this means solving Maxwell’s equations in the time-harmonic regime by appropriate domain decomposition methods, which are naturally adapted to parallel architectures.

H. Haddar and X. Liu

We develop a factorization method to obtain an explicit characterization of a (possibly non-convex) impedance scattering object from measurements of time-dependent causal scattered waves in the far field regime. In particular, we prove that far fields of solutions to the wave equation due to particularly modified incident waves characterize the obstacle by a range criterion involving the square root of the time derivative of the corresponding far field operator. Our analysis makes essential use of a coercivity property of the solution of the initial boundary value problem for the wave equation in the Laplace domain. This forces us to consider this particular modification of the far field operator. The latter, in fact, can be chosen arbitrarily close to the true far field operator given in terms of physical measurements. We provide validating numerical examples in 2D on synthetic data, generated using an FDTD solver with PML. An article on this topic is under preparation.

M. Bakry, H. Haddar and O. Bunau

The Local Monodisperse Approximation (LMA) is a two-parameter model commonly employed for the retrieval of size distributions from the small angle scattering (SAS) patterns obtained on dense nanoparticle samples (e.g. dry powders and concentrated solutions). This work features an original, beyond-state-of-the-art implementation of the LMA model resolution for the inverse scattering problem. Our method is based on the Expectation Maximization iterative algorithm and is free from any fine tuning of model parameters. The application of our method to SAS data acquired in laboratory conditions on dense nanoparticle samples is shown to provide very good results.
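As a hedged illustration of the Expectation Maximization iteration in this setting (the actual implementation and scattering kernel differ), the classical multiplicative EM update for nonnegative weights of a discretized size distribution reads:

```python
import numpy as np

def em_size_distribution(K, I, n_iter=2000):
    """Multiplicative EM update for nonnegative size-bin weights w so that
    K @ w fits the measured intensity I; K[q, r] is the scattering kernel
    of a particle of size r at momentum transfer q. No tuning parameter."""
    w = np.full(K.shape[1], 1.0 / K.shape[1])
    col_sum = K.sum(axis=0)
    for _ in range(n_iter):
        ratio = I / (K @ w + 1e-30)       # data over current prediction
        w *= (K.T @ ratio) / col_sum      # multiplicative EM step, keeps w >= 0
    return w

# toy example: Gaussian-decay kernel (an assumption, not the SAXS form factor)
q = np.linspace(0.1, 2.0, 40)
radii = np.linspace(0.5, 3.0, 8)
K = np.exp(-np.outer(q, radii) ** 2)
w_true = np.array([0.0, 0.3, 0.0, 0.0, 0.7, 0.0, 0.0, 0.0])
I = K @ w_true
w_rec = em_size_distribution(K, I)
```

The multiplicative form automatically preserves positivity of the weights, which is one reason EM is attractive for size-distribution retrieval.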

H. Haddar and A. Konschin

We analyze the Factorization method to reconstruct the geometry of a local defect in a periodic absorbing layer using almost only incident plane waves at a fixed frequency. A crucial part of our analysis relies on the consideration of the range of a carefully designed far field operator, which characterizes the geometry of the defect. We further provide some validating numerical results in a two-dimensional setting.

L. Audibert, H. Girardon and H. Haddar

Non-destructive testing is an essential tool to assess the safety of the facilities within nuclear plants. In particular, conductive deposits on U-tubes in steam generators constitute a major danger as they may block the cooling loop. To detect these deposits, eddy-current probes are introduced inside the U-tubes to generate currents and measure back an impedance signal. Based on earlier work on this subject, we develop a shape optimization technique with regularized gradient descent to invert these measurements and recover the deposit shape. To deal with the unknown, and possibly complex, topological nature of the latter, we propose to model it using a level set function. The methodology is first validated on synthetic axisymmetric configurations and fast convergence is ensured by careful adaptation of the gradient steps and regularization parameters. We then consider a more realistic modeling that incorporates the support plate and the presence of imperfections on the tube interior section. We employ in particular an asymptotic model to take into account these imperfections and treat them as additional unknowns in our inverse problem. A multi-objective optimization strategy, based on the use of different operating frequencies, is then developed to solve this problem. Various numerical experiments with synthetic data demonstrate the viability of our approach. The approach is also successfully validated against experimental data. An article on this topic is under preparation.
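A minimal sketch of the level-set update underlying such a shape gradient descent, assuming a prescribed normal velocity on a grid (the eddy-current shape gradient computation itself is not reproduced, and all names are illustrative):

```python
import numpy as np

def level_set_step(phi, v, dt, h):
    """One explicit step of the level-set advection phi_t + v*|grad phi| = 0.
    The zero level set of phi (negative inside the deposit) moves with
    normal velocity v; in a descent method, v is minus the regularized
    shape gradient."""
    gy, gx = np.gradient(phi, h)
    return phi - dt * v * np.sqrt(gx**2 + gy**2)

# example: a disk of radius 0.3 advected with unit outward normal velocity
h = 2.0 / 128
x, y = np.meshgrid(np.linspace(-1, 1, 129), np.linspace(-1, 1, 129))
phi = np.sqrt(x**2 + y**2) - 0.3    # signed distance, negative inside
area0 = int((phi < 0).sum())
phi = level_set_step(phi, v=1.0, dt=0.05, h=h)
area1 = int((phi < 0).sum())        # the disk has grown
```

Representing the deposit implicitly in this way lets the descent change the topology (merging or splitting components) without remeshing.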

L. Bourgeois, L. Chesnel

We are interested in the classical ill-posed Cauchy problem for the Laplace equation. One method to approximate the solution associated with compatible data consists in considering a family of regularized well-posed problems depending on a small parameter. In this context, classical regularity results *à la* Grisvard do not work and instead, we apply the Kondratiev approach. We describe the procedure in detail to keep track of the dependence on this small parameter.

M. Aussal, Y. Boukari and H. Haddar

We propose and study a data completion algorithm for recovering missing data from the knowledge of Cauchy data on parts of the same boundary. The algorithm is based on a surface representation of the solution and is presented for the Helmholtz equation. This work is an extension of the data completion algorithm proposed by the last two authors, where the case of data available on a closed boundary was studied. The proposed method is a direct inversion method robust with respect to noisy incompatible data. Classical regularization methods with discrepancy selection principles can be employed and automatically lead to convergent schemes as the noise level goes to zero. We conduct 3D numerical investigations to validate our method on various synthetic examples.

L. Audibert, L. Chesnel, H. Haddar

We use the inside-outside duality approach proposed by Kirsch-Lechleiter to identify transmission eigenvalues associated with artificial backgrounds. We prove that for well-chosen artificial backgrounds, in particular ones with zero index of refraction at the inclusion location, one obtains a necessary and sufficient condition characterizing transmission eigenvalues via the spectrum of the modified far field operator. We also complement the existing literature with a convergence result for the invisible generalized incident field associated with the transmission eigenvalues.

L. Chesnel, S.A. Nazarov, J. Taskinen

We consider the propagation of surface water waves in a straight planar channel perturbed at the bottom by several thin curved tunnels and wells. We propose a method to construct non-reflecting underwater topographies of this type at an arbitrary prescribed wave number. To proceed, we compute asymptotic expansions of the diffraction solutions with respect to the small parameter of the geometry, taking into account the existence of boundary layer phenomena. We establish error estimates to validate the expansions using advanced techniques of weighted spaces with detached asymptotics. In the process, we show the absence of trapped surface waves for perturbations small enough. This analysis furnishes asymptotic formulas for the scattering matrix and we use them to determine underwater topographies which are non-reflecting. Theoretical and numerical examples are given.

L. Chesnel, S.A. Nazarov

We investigate a time-harmonic wave problem in a waveguide. We work at low frequency so that only one mode can propagate. It is known that the scattering matrix exhibits a rapid variation for real frequencies in a vicinity of a complex resonance located close to the real axis. This is the so-called Fano resonance phenomenon. When the geometry presents certain properties of symmetry, there are two different real frequencies such that we have either zero reflection or zero transmission.

R. Bunoiu, L. Chesnel, K. Ramdani, M. Rihani

In this work, we are interested in the homogenization of time-harmonic Maxwell's equations in a composite medium with periodically distributed small inclusions of a negative material. Here a negative material is a material modelled by negative permittivity and permeability. Due to the sign-changing coefficients in the equations, it is not straightforward to obtain uniform energy estimates to apply the usual homogenization techniques. The goal of this work is to explain how to proceed in this context. The analysis of Maxwell's equations is based on a precise study of two associated scalar problems: one involving the sign-changing permittivity with Dirichlet boundary conditions, another involving the sign-changing permeability with Neumann boundary conditions. For both problems, we obtain a criterion on the physical parameters ensuring uniform invertibility of the corresponding operators as the size of the inclusions tends to zero. In the process, we explain the link existing with the so-called Neumann-Poincaré operator, complementing the existing literature on this topic. Then we use the results obtained for the scalar problems to derive uniform energy estimates for Maxwell's system. At this stage, an additional difficulty comes from the fact that Maxwell's equations are also sign-indefinite due to the term involving the frequency. To cope with it, we establish some sort of uniform compactness result.

G. Allaire, F. Feppon and C. Dapogny

The purpose of this article is to introduce a gradient-flow algorithm for solving equality and inequality constrained optimization problems, which is particularly suited for shape optimization applications. We rely on a variant of the Ordinary Differential Equation (ODE) approach proposed by Yamashita for equality constrained problems: the search direction is a combination of a null space step and a range space step, aiming to decrease the value of the minimized objective function and the violation of the constraints, respectively. Our first contribution is to propose an extension of this ODE approach to optimization problems featuring both equality and inequality constraints. In the literature, a common practice consists in reducing inequality constraints to equality constraints by the introduction of additional slack variables. Here, we rather handle their local combinatorial character by computing the projection of the gradient of the objective function onto the cone of feasible directions. This is achieved by solving a dual quadratic programming subproblem whose size equals the number of active or violated constraints. The solution to this problem allows us to identify the inequality constraints to which the optimization trajectory should remain tangent. Our second contribution is a formulation of our gradient flow in the context of (infinite-dimensional) Hilbert spaces, and of even more general optimization sets such as sets of shapes, as occurs in shape optimization within the framework of Hadamard's boundary variation method. The cornerstone of this formulation is the classical operation of extension and regularization of shape derivatives. The numerical efficiency and ease of implementation of our algorithm are demonstrated on realistic shape optimization problems. An article on this topic is under preparation.
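A finite-dimensional sketch of the null-space/range-space decomposition for the equality-constrained case may help fix ideas (the inequality handling via the dual quadratic subproblem is omitted, and the function and parameter names are assumptions):

```python
import numpy as np

def nullspace_flow_step(x, grad_j, g, Dg, dt_j=0.1, dt_c=0.5):
    """One discretized step of the gradient flow dx/dt = -aJ*xiJ - aC*xiC.
    xiJ: projection of grad J onto the null space of the constraint
         Jacobian Dg (decreases J tangentially to the constraints).
    xiC: range-space (Gauss-Newton) step decreasing the violation g."""
    M = Dg @ Dg.T
    xiJ = grad_j - Dg.T @ np.linalg.solve(M, Dg @ grad_j)  # null-space step
    xiC = Dg.T @ np.linalg.solve(M, g)                     # range-space step
    return x - dt_j * xiJ - dt_c * xiC

# example: minimize ||x||^2 subject to x0 + x1 = 1 (optimum at (0.5, 0.5))
x = np.array([2.0, -1.0])
for _ in range(200):
    grad_j = 2 * x
    g = np.array([x[0] + x[1] - 1.0])   # constraint violation
    Dg = np.array([[1.0, 1.0]])         # constraint Jacobian
    x = nullspace_flow_step(x, grad_j, g, Dg)
```

Because the two steps are orthogonal by construction, objective decrease and constraint restoration do not fight each other, which is what makes this flow attractive for shape optimization.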

G. Allaire, F. Feppon and C. Dapogny

In the formulation of shape optimization problems, multiple geometric constraint functionals involve the signed distance function to the optimized shape.

G. Allaire, P. Geoffroy-Donders and O. Pantz

This paper is motivated by the optimization of so-called lattice materials which are becoming increasingly popular in the context of additive manufacturing. Generalizing our previous work in 2-d, we propose a method for topology optimization of structures made of periodically perforated material, where the microscopic periodic cell can be macroscopically modulated and oriented. The method consists of three steps. The first step amounts to computing the homogenized properties of an adequately chosen parametrized microstructure (here, a cubic lattice with varying bar thicknesses). The second step optimizes the homogenized formulation of the problem, which is a classical problem of parametric optimization. The third, and most delicate, step projects the optimal oriented microstructure at a desired length scale. Compared to the 2-d case, where rotations are parametrized by a single angle to which a conformality constraint can be applied, the 3-d case is more involved and requires new ingredients. In particular, the full rotation matrix is regularized (instead of just one angle in 2-d) and the projection map which deforms the square periodic lattice is computed component by component. Several numerical examples are presented for compliance minimization in 3-d. An article on this topic is under preparation.

L. Bourgeois, L. Chesnel, S. Fliss

We study the propagation of elastic waves in the time-harmonic regime in a waveguide which is unbounded in one direction and bounded in the two other (transverse) directions. We assume that the waveguide is thin in one of these transverse directions, which leads us to consider a Kirchhoff-Love plate model in a locally perturbed 2D strip. For time harmonic scattering problems in unbounded domains, well-posedness does not hold in a classical setting and it is necessary to prescribe the behaviour of the solution at infinity. This is challenging for the model that we consider and constitutes our main contribution. Two types of boundary conditions are considered: either the strip is simply supported or the strip is clamped. The two boundary conditions are treated with two different methods. For the simply supported problem, the analysis is based on a result of Hilbert basis in the transverse section. For the clamped problem, this property does not hold. Instead we adopt the Kondratiev's approach, based on the use of the Fourier transform in the unbounded direction, together with techniques of weighted Sobolev spaces with detached asymptotics. After introducing radiation conditions, the corresponding scattering problems are shown to be well-posed in the Fredholm sense. We also show that the solutions are the physical (outgoing) solutions in the sense of the limiting absorption principle.

M. Bonazzoli, V. Dolean, I. G. Graham, E. A. Spence and P.-H. Tournier

In this work we rigorously analyse preconditioners for the time-harmonic Maxwell equations with absorption, where the PDE is discretised using curl-conforming finite-element methods of fixed, arbitrary order and the preconditioner is constructed using additive Schwarz domain decomposition methods. The theory we developed shows that if the absorption is large enough, and if the subdomain and coarse mesh diameters and overlap are chosen appropriately, then the classical two-level overlapping additive Schwarz preconditioner (with PEC boundary conditions on the subdomains) performs optimally, in the sense that GMRES converges in a wavenumber-independent number of iterations, for the problem with absorption. An important feature of the theory is that it allows the coarse space to be built from low-order elements even if the PDE is discretised using high-order elements. It also shows that additive methods with minimal overlap can be robust. Several numerical experiments illustrate the theory and its dependence on various parameters. These experiments motivate some extensions of the preconditioners which have better robustness for problems with less absorption, including the propagative case. Finally, we illustrate the performance of these on two substantial applications: the first (a problem with absorption arising from medical imaging) shows the empirical robustness of the preconditioner against heterogeneity, and the second (scattering by a COBRA cavity) shows good scalability of the preconditioner with up to 3,000 processors.

M. Bonazzoli, X. Claeys

This work is about the scattering of an acoustic wave by an object composed of piecewise homogeneous parts and an arbitrarily heterogeneous part. We propose and analyze a formulation that couples, adopting a Costabel-type approach, boundary integral equations for the homogeneous subdomains with domain variational formulations for the heterogeneous subdomain. This is an extension of the Costabel FEM-BEM coupling to a multi-domain configuration, with junction points allowed, i.e. points where three or more subdomains abut. Usually just the exterior unbounded subdomain is treated with the BEM; here we wish to exploit the BEM whenever it is applicable, that is, for all the homogeneous parts of the scattering object, since it yields a reduction in the number of unknowns compared to the FEM. Our formulation is based on the multi-trace formalism for acoustic scattering by piecewise homogeneous objects; here we allow the wavenumber to vary arbitrarily in a part of the domain. We prove that the bilinear form associated with the proposed formulation satisfies a Gårding coercivity inequality, which ensures stability of the variational problem if it is uniquely solvable. We identify conditions for injectivity and construct modified versions immune to spurious resonances. An article on this topic is under preparation.

M. Bonazzoli, X. Claeys, F. Nataf, P.-H. Tournier

The matrices arising from the finite element discretization of problems such as high-frequency Helmholtz, time-harmonic Maxwell or convection-diffusion equations are not self-adjoint or positive definite. For this reason, it is difficult to analyze the convergence of Schwarz domain decomposition preconditioners applied to these problems. Note also that the conjugate gradient method cannot be used, and the analysis of the spectrum of the preconditioned matrix is not sufficient for methods suited for general matrices such as GMRES. In order to apply Elman-type estimates for the convergence of GMRES we need to prove an upper bound on the norm of the preconditioned matrix, and a lower bound on the distance of its field of values from the origin. We generalize the theory for the Helmholtz equation developed for the SORAS (Symmetrized Optimized Restricted Additive Schwarz) preconditioner, and we identify a list of assumptions and estimates that are sufficient to prove the two bounds needed for the convergence analysis for a general linear system. As an illustration of this technique, we prove estimates for the heterogeneous reaction-convection-diffusion equation. An article on this topic is under preparation.

J.-R. Li, K. V. Nguyen and T. N. Tran

Diffusion Magnetic Resonance Imaging (DMRI) is a promising tool to obtain useful information on microscopic structure and has been extensively applied to biological tissues.

We obtained the following results.

The Bloch-Torrey partial differential equation can be used to describe the evolution of the transverse magnetization of the imaged sample under the influence of diffusion-encoding magnetic field gradients inside the MRI scanner. The integral of the magnetization inside a voxel gives the simulated diffusion MRI signal. This work proposes a finite element discretization on manifolds in order to efficiently simulate the diffusion MRI signal in domains that have a thin layer or a thin tube geometrical structure. The variable thickness of the three-dimensional domains is included in the weak formulation established on the manifolds. We conducted a numerical study of the proposed approach by simulating the diffusion MRI signals from the extracellular space (a thin layer medium) and from neurons (a thin tube medium), comparing the results with the reference signals obtained using a standard three-dimensional finite element discretization. We show good agreement between the simulated signals using our proposed method and the reference signals for a wide range of diffusion MRI parameters. The approximation becomes better as the diffusion time increases. The method helps to significantly reduce the required simulation time, computational memory, and difficulties associated with mesh generation, thus opening the possibilities to simulating complicated structures at low cost for a better understanding of diffusion MRI in the brain.

The nerve cells of the Aplysia are much larger than mammalian neurons. Using the Aplysia ganglia to study the relationship between the cellular structure and the diffusion MRI signal can potentially shed light on this relationship for more complex organisms. We measured the dMRI signal of chemically-fixed abdominal ganglia of the Aplysia at several diffusion times. At the measured diffusion times and at low b-values, the dMRI signal is mono-exponential and can be accurately represented by the parameter ADC (Apparent Diffusion Coefficient). We performed numerical simulations of water diffusion for the large cell neurons in the abdominal ganglia after creating geometrical configurations by segmenting high resolution T2-weighted (T2w) images to obtain the cell outline and then incorporating a manually generated nucleus. The results of the numerical simulations validate the claim that water diffusion in the large cell neurons is in the short diffusion time regime at our experimental diffusion times. Then, using the analytical short time approximation (STA) formula for the ADC, we showed that in order to explain the experimentally observed behavior, it is necessary to consider the nucleus and the cytoplasm as two separate diffusion compartments. By using a two-compartment STA model, we were able to illustrate the effect of the highly irregular shape of the cell nucleus on the ADC.

The complex transverse water proton magnetization subject to diffusion-encoding magnetic field gradient pulses in a heterogeneous medium can be modeled by the multiple compartment Bloch-Torrey partial differential equation. Under the assumption of negligible water exchange between compartments, the time-dependent apparent diffusion coefficient can be directly computed from the solution of a diffusion equation subject to a time-dependent Neumann boundary condition.
This work describes a publicly available MATLAB toolbox called SpinDoctor that can be used 1) to solve the Bloch-Torrey partial differential equation in order to simulate the diffusion magnetic resonance imaging signal; 2) to solve a diffusion partial differential equation to obtain directly the apparent diffusion coefficient; 3) to compare the simulated apparent diffusion coefficient with a short-time approximation formula.
The partial differential equations are solved by the finite element method coupled with adaptive time integration.

The numerical simulation of the diffusion MRI signal arising from complex tissue micro-structures is helpful for understanding and interpreting imaging data as well as for designing and optimizing MRI sequences. The discretization of the Bloch-Torrey equation by finite elements is a more recently developed approach for this purpose, in contrast to random walk simulations, which has a longer history. While finite element discretization is more difficult to implement than random walk simulations, the approach benefits from a long history of theoretical and numerical developments by the mathematical and engineering communities. In particular, software packages for the automated solutions of partial differential equations using finite element discretization, such as FEniCS, are undergoing active support and development. However, because diffusion MRI simulation is a relatively new application area, there is still a gap between the simulation needs of the MRI community and the available tools provided by finite element software packages. In this paper, we address two potential difficulties in using FEniCS for diffusion MRI simulation. First, we simplified software installation by the use of FEniCS containers that are completely portable across multiple platforms. Second, we provide a portable simulation framework based on Python and whose code is open source. This simulation framework can be seamlessly integrated with cloud computing resources such as Google Colaboratory notebooks working on a web browser or with Google Cloud Platform with MPI parallelization. We show examples illustrating the accuracy, the computational times, and parallel computing capabilities. The framework contributes to reproducible science and open-source software in computational diffusion MRI with the hope that it will help to speed up method developments and stimulate research collaborations.

We performed simulations for a collaborative project with Demian Wassermann of the Parietal team on distinguishing between spindle and pyramidal neurons with Multi-shell Diffusion MRI.

We continued the simulation and modeling of heart diffusion MRI within the post-doc project of Imen Mekkaoui, funded by the Inria-EPFL lab. The project is co-supervised with Jan Hesthaven, Chair of Computational Mathematics and Simulation Science (MCSS), EPFL. An article on this topic is under preparation.

João F. Reis, Olivier P. Le Maître, Pietro M. Congedo, Paul Mycek

We propose a Monte Carlo based method to compute statistics from a solution of a stochastic elliptic equation. Solutions are computed through an iterative solver. We present a parallel construction of a robust stochastic preconditioner to accelerate the iterative scheme. This preconditioner is built before the sampling, at an offline stage, based on a decomposition of the geometric domain. Once constructed, a realisation of the preconditioner is generated for each sample and applied to an iterative method to solve the corresponding deterministic linear system. This approach is not restricted to a single iterative method and can be adapted to different iterative techniques. We demonstrate the efficiency of this approach with extensive numerical results, divided into two examples. The first example is a one-dimensional equation. The reduced dimension of the first example allows the construction of global operators and consequently, an extensive analysis of the convergence and stability properties of the proposed approach. The second example is an analogous two-dimensional version. We demonstrate the performance of the proposed preconditioner by comparison with other deterministic preconditioners based on the median of the coefficient fields. An article on this topic is under preparation.
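The overall sampling loop, building a preconditioner offline and reusing it across Monte Carlo realisations of the coefficient field, can be sketched on a 1D elliptic toy problem. Here a median-field preconditioner stands in for the domain-decomposition-based stochastic preconditioner of the paper; all names and parameters are illustrative.

```python
import numpy as np

def assemble(kappa, h):
    """FD matrix for -(kappa u')' = f on (0,1), homogeneous Dirichlet BCs;
    kappa holds the cell values of the random coefficient field."""
    n = len(kappa) - 1
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = (kappa[i] + kappa[i + 1]) / h**2
        if i + 1 < n:
            A[i, i + 1] = A[i + 1, i] = -kappa[i + 1] / h**2
    return A

def pcg(A, b, Minv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient with explicit M^{-1}."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = Minv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

rng = np.random.default_rng(1)
n_cells, h = 33, 1.0 / 33
# offline stage: preconditioner built once from the median field (kappa = 1)
Minv = np.linalg.inv(assemble(np.ones(n_cells), h))
f = np.ones(n_cells - 1)
mids = []
for _ in range(100):                                   # sampling stage
    kappa = np.exp(0.3 * rng.standard_normal(n_cells)) # lognormal realisation
    u = pcg(assemble(kappa, h), f, Minv)
    mid_value = u[(n_cells - 1) // 2]
    mids.append(mid_value)
mean_mid = float(np.mean(mids))                        # statistic of interest
```

The point of the offline construction is that the (expensive) preconditioner setup is amortized over all samples, while each per-sample solve remains cheap.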

F. Sanson, O.P. Le Maitre, P.M. Congedo

Multi-physics problems in engineering can often be modeled using a System of Solvers (SoS), which is simply a set of solvers coupled together. An SoS can be computationally expensive, for example in parametric studies, uncertainty quantification or sensitivity analysis, typically requiring the construction of a global surrogate model of the SoS to perform such costly analyses. One recurrent strategy in the literature consists of building a system of surrogate models where each solver is approximated with a local surrogate model. This approach can be efficient if good training sets for each surrogate can be generated, in particular on the intermediate variables (which are the outputs of an upstream solver and the inputs of a downstream one) that are a priori unknown. In this work, we propose a novel strategy to construct efficient training sets of the intermediate variables, using clustering-based techniques formulated for a system of Gaussian processes (SoGP). In this way, improved coverage of the intermediate spaces is attained compared to randomly generated training sets. The performance of this approach is assessed on several test cases, showing that the clustering training strategy is systematically more efficient than randomly sampled training points.

N. Razaaly, P.M. Congedo

We consider the problem of estimating a very small probability of failure defined through a complex (e.g. the output of an expensive-to-run finite element model) scalar performance function. The proposed *extreme* AK-MCS method (eAK-MCS) extends the Kriging-based active learning algorithm AK-MCS: it inherits its multi-point enrichment algorithm, allowing to add several points at each iteration step, and provides an estimated failure probability based on the Gaussian nature of the Kriging surrogate.
Both the efficiency and the accuracy of the proposed method are showcased through its application to two- to eight-dimensional analytic examples, characterized by very low failure probabilities. Numerical experiments conducted with an *unfavorable* initial *Design of Experiments* suggest the ability of the proposed method to detect failure domains.

N. Razaaly, P.M. Congedo

We propose here a method for fast estimation of the quantiles associated with very small levels of probability, where the scalar performance function J is complex (e.g. the output of an expensive-to-run finite element model), under a probability measure that can be recast as a multivariate standard Gaussian law using an isoprobabilistic transformation. A surrogate-based approach (Gaussian processes) combined with adaptive experimental designs allows one to iteratively increase the accuracy of the surrogate while keeping the overall number of evaluations of J low. Since direct use of Monte Carlo simulation even on the surrogate model is too expensive, the key idea consists in using an importance sampling method based on an isotropic centered Gaussian with large standard deviation, permitting a cheap estimation of the quantiles of the surrogate. Similarly to the strategy presented by Schobi and Sudret (2016), the surrogate is adaptively refined using a parallel infill refinement of an algorithm suitable for very small failure probabilities. We finally elaborate a multi-quantile selection approach allowing one to exploit high-performance computing architectures further. We illustrate the performance of the proposed method on several two- and six-dimensional cases. Accurate results are obtained with less than 100 evaluations of J. An article on this topic is under preparation.
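The core idea, cheap estimation of an extreme quantile by importance sampling from a wide centered Gaussian, can be sketched as follows. For illustration the performance function is evaluated directly rather than through a Gaussian-process surrogate, and the function and parameter names are assumptions.

```python
import numpy as np

def is_quantile(J, dim, p, sigma=3.0, n=200_000, rng=None):
    """Estimate the p-quantile of J(X), X ~ N(0, I_dim), by sampling from
    the wide proposal N(0, sigma^2 I_dim) and reweighting (self-normalized
    importance sampling). A wide proposal places many samples in the
    rare-event region that plain Monte Carlo would almost never reach."""
    rng = rng or np.random.default_rng(0)
    x = sigma * rng.standard_normal((n, dim))
    # log importance weight: log phi(x) - log phi_sigma(x)
    logw = -0.5 * (x**2).sum(axis=1) * (1.0 - 1.0 / sigma**2) + dim * np.log(sigma)
    w = np.exp(logw)
    y = J(x)
    order = np.argsort(y)
    cdf = np.cumsum(w[order]) / w.sum()        # weighted empirical CDF
    return y[order][np.searchsorted(cdf, p)]   # smallest y with CDF >= p

# example: J is the first coordinate, so the true 1e-4 quantile is
# the standard normal inverse CDF at 1e-4, about -3.72
q_hat = is_quantile(lambda x: x[:, 0], dim=2, p=1e-4)
```

Plain Monte Carlo with the same budget would see only about twenty samples below the target level; the wide proposal concentrates a large fraction of the budget there.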

M. Rivier, P.M. Congedo

In this paper, we propose non-parametric estimations of robustness and reliability measures to solve constrained multi-objective optimisation under uncertainty. These approximations with tunable fidelity make it possible to capture the Pareto front in a parsimonious way, and can be exploited within an adaptive refinement strategy. First, we build a non-Gaussian surrogate model of the objectives and constraints, allowing for more representativeness and detecting potential correlations. Additionally, we illustrate an efficient approach for obtaining joint representations of these robustness and reliability measures, which discriminates more sharply the Pareto-optimal designs from the others. Secondly, we propose an adaptive refinement strategy, using these tunable-fidelity approximations to drive the computational effort towards the computation of the optimal area. To this end, an adapted Pareto dominance rule and a Pareto optimal probability computation are formulated. We assess the performance of the proposed strategy on several analytical test cases against classical approaches in terms of the average distance to the Pareto front. Finally, we illustrate the performance of the method on the shape optimisation under uncertainty of an Organic Rankine Cycle turbine.

C. Sabater, O. Le Maitre, P.M. Congedo, S. Goertz

When dealing with robust optimization problems, surrogate models are traditionally constructed to efficiently obtain the statistics of the random variable. However, when a large number of uncertainties are present, the required number of training samples to construct an accurate surrogate generally increases exponentially. The use of surrogates also requires the parametrization of the uncertainties. We present a novel approach for robust design that is insensitive to the number of uncertainties and able to deal with non-parametric uncertainties by leveraging a Bayesian formulation of quantile regression. The method does not require the use of any surrogate in the stochastic space. It is able to globally predict at every design point any given quantile of the random variable. In addition, it provides an estimation of the error in this prediction due to the limited sampling by making use of the posterior distribution of the model parameters. The framework includes an active infill that efficiently balances exploration with exploitation and accelerates the optimization process by increasing the accuracy of the statistic in the regions of interest. We validate the method on test functions and observe good convergence properties towards the determination of the location of the global optima. The framework is applied to the aerodynamic robust design of an airfoil with a shock control bump under 382 geometrical and operational uncertainties. The framework is able to efficiently find the optimum configuration in complex, large-scale problems. An article on this topic is under preparation.
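A simplified, frequentist stand-in for the quantile-prediction idea is linear quantile regression fitted by subgradient descent on the pinball (check) loss; the Bayesian posterior machinery of the paper is not reproduced, and all names are illustrative.

```python
import numpy as np

def fit_quantile(X, y, tau, lr=0.05, n_iter=5000):
    """Fit a linear model q(x) = X @ beta to the tau-quantile of y given x
    by subgradient descent on the pinball loss
    rho_tau(r) = tau*r if r > 0 else (tau - 1)*r, with r = y - X @ beta."""
    beta = np.zeros(X.shape[1])
    for t in range(n_iter):
        r = y - X @ beta
        # subgradient of the mean pinball loss with respect to beta
        grad = -X.T @ (tau * (r > 0) - (1 - tau) * (r <= 0)) / len(y)
        beta -= lr / np.sqrt(t + 1) * grad   # decaying step size
    return beta

rng = np.random.default_rng(0)
X = np.ones((2000, 1))                  # intercept-only design
y = 1.0 + rng.uniform(0.0, 1.0, 2000)   # uniform noise around 1
beta = fit_quantile(X, y, tau=0.9)      # 0.9-quantile should be near 1.9
```

The pinball loss penalizes under- and over-prediction asymmetrically, which is what makes its minimizer the conditional quantile rather than the conditional mean, with no parametrization of the noise distribution required.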

N. Razaaly, G. Persico, P.M. Congedo

This study presents a multifidelity surrogate-based approach for the optimization of the LS89 high-pressure axial turbine vane to significantly reduce the computational cost associated with high-fidelity CFD simulations while exploiting models of lower fidelity. A cokriging method is used to simultaneously take into account quantities of interest (QoI) coming from models of different fidelities, providing a global surrogate model. A classic Bayesian global optimization method iteratively proposes designs of interest. It relies on the maximization of the so-called Expected Improvement criterion. A geometrical parametrization technique based on B-splines is considered to describe the profile geometry. The mass-flow rate and the outlet angle are constrained. The optimization study reveals a significant reduction in computational cost w.r.t. classic optimization frameworks based on a single fidelity, such as adjoint-based and gradient-free methods, while providing similar improvements in terms of fitness functions.
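For reference, the Expected Improvement criterion mentioned above has a closed form given the kriging posterior mean and standard deviation at a candidate design (this is the standard textbook formula, not code from the study):

```python
import math

def expected_improvement(mu, sigma, y_best):
    """EI for minimization: E[max(y_best - Y, 0)] with Y ~ N(mu, sigma^2),
    i.e. (y_best - mu)*Phi(z) + sigma*phi(z), where z = (y_best - mu)/sigma
    and Phi, phi are the standard normal CDF and PDF."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)    # deterministic prediction
    z = (y_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (y_best - mu) * Phi + sigma * phi
```

Maximizing EI over the design space balances exploitation (low predicted mean) against exploration (large posterior uncertainty), which is why it drives the iterative proposal of new designs.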

A CIFRE PhD thesis started in April 2017 with Safran Tech. The student is M. Florian Feppon, who is working on "topology optimization for a coupled thermal-fluid-structure system".

A CIFRE PhD thesis started in October 2017 with Renault. The student is Mrs Lalaina Rakotondrainibe, who is working on "topology optimization of connections between mechanical parts".

A CIFRE PhD thesis started in January 2019 with Safran Tech. The student is M. Martin Bihr, who is working on "Optimisation Topologique du couple support/pièce pour la fabrication additive métallique sur lit de poudre" (topology optimization of the support/part pair for metallic powder-bed additive manufacturing).

A CIFRE PhD thesis started in November 2017 with EDF. The student is H. Girardon, who is working on "level set methods for eddy current non-destructive testing".

A CIFRE PhD thesis started in May 2017 with ArianeGroup. The student is M. Mickael Rivier, who is working on "optimization under uncertainty methods for expensive computer codes".

A CIFRE PhD thesis started in November 2018 with CEA CESTA. The student is M. Paul Novello, who is working on "deep learning for atmospheric reentry".

The SOFIA project (SOlutions pour la Fabrication Industrielle Additive métallique) started in the summer of 2016. Its purpose is to conduct research in the field of metallic additive manufacturing. The industrial partners include Michelin, FMAS, ESI, Safran and others. The academic partners are different laboratories of CNRS, including CMAP at Ecole Polytechnique. The project is funded for 6 years by BPI (Banque Publique d'Investissement).

G. Allaire participates in the TOP project at IRT SystemX, which started in February 2017. It is concerned with the development of a topology optimization platform with industrial partners (Renault, Safran, Airbus, ESI).

FUI project Saxsize. This three-year project started in October 2015 and was extended until April 2019; it involves Xenocs (coordinator), Inria (DEFI), Pyxalis, LNE, Cordouan and CEA. It is a follow-up of Nanolytix, with a focus on SAXS quantification of dense nanoparticle solutions.

Contract with ArianeGroup: activity around techniques for uncertainty quantification. Coordinator: P.M. Congedo.

Contract with CEA: activity around techniques for numerical error estimation and uncertainty quantification. Coordinator: P.M. Congedo.

Title: Virtual prototyping of EVE engines

Type: Co-funded by Region Aquitaine and Inria

Duration: 36 months

Starting: October 2018

Coordinator: P.M. Congedo

Abstract: The main objective of this thesis is the construction of a numerical platform permitting efficient virtual prototyping of the EVE expander. This will provide EXOES with a numerical tool that is much more predictive than the tools currently used at EXOES, while respecting an optimal trade-off in terms of the complexity/cost needed during an industrial design process. Two research axes will mainly be developed. First, the objective is to perform highly predictive numerical simulations to reduce the amount of experiments, thanks to a specific development of RANS tools (Reynolds-Averaged Navier-Stokes equations) for the fluids of interest to EXOES. These tools rely on complex thermodynamic models and on a turbulence model that should be modified accordingly. The second axis focuses on the integration of solvers of different fidelity into a multi-fidelity platform for performing optimization under uncertainties. The idea is to evaluate the system performances by massively using the low-fidelity models, and to correct these estimations via only a few calculations with the high-fidelity code.
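The low-fidelity/high-fidelity correction idea of the second axis can be sketched as an additive discrepancy model. This is a toy one-dimensional illustration with invented stand-in models, not the actual EXOES solvers:

```python
import numpy as np

# Invented stand-ins: hi_fi plays the expensive code, lo_fi a cheap biased model.
def hi_fi(x):
    return np.sin(3.0 * x) + 0.3 * x**2

def lo_fi(x):
    return np.sin(3.0 * x)            # misses the quadratic trend

# A few expensive runs calibrate an additive correction of the discrepancy...
x_hf = np.linspace(0.0, 1.0, 5)
coeffs = np.polyfit(x_hf, hi_fi(x_hf) - lo_fi(x_hf), 2)

# ...then the low-fidelity model is evaluated massively and corrected.
x_many = np.random.default_rng(1).uniform(0.0, 1.0, 10_000)
corrected = lo_fi(x_many) + np.polyval(coeffs, x_many)
max_err = float(np.max(np.abs(corrected - hi_fi(x_many))))
```

Here the discrepancy happens to be exactly quadratic, so the correction is essentially exact; in practice the discrepancy model (e.g. cokriging) also carries an uncertainty estimate.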

Program: H2020 MSCA-ITN

Project acronym: UTOPIAE

Project title: Handling the unknown at the edge of tomorrow

Duration: January 2017 - December 2020

Coordinator: M. Vasile (Strathclyde University)

Other partners: see http://

UTOPIAE is a European research and training network investigating cutting-edge methods bridging optimisation and uncertainty quantification, applied to aerospace systems. The network runs from 2017 to 2021 and is funded by the European Commission through the Marie Skłodowska-Curie Actions of H2020. It is made up of 15 partners across 6 European countries, including the UK, and one international partner in the USA, bringing together mathematicians, engineers and computer scientists from academia, industry, and the public and private sectors.

Mission statement: To train, by research and by example, 15 Early Stage Researchers in the field of uncertainty quantification and optimisation to become leading independent researchers and entrepreneurs who will increase the innovation capacity of the EU. To equip the researchers with the skills they will need for successful careers in academia and industry. To develop fundamental mathematical methods and algorithms to bridge the gap between uncertainty quantification and optimisation, and between probability theory and imprecise probability theory for uncertainty quantification, in order to efficiently solve high-dimensional, expensive and complex engineering problems.

P.M. Congedo is the Inria Coordinator of the CWI-Inria Inria International Lab (IIL).

**IIL CWI-Inria**

Associate Team involved in the International Lab:

Title: Computational Methods for Uncertainties in Fluids and Energy Systems

International Partner (Institution - Laboratory - Researcher):

CWI (Netherlands) - Scientific Computing Group - Daan Crommelin

Start year: 2017

See also: https://project.inria.fr/inriacwi/projects/communes/

This project aims to develop numerical methods capable of efficiently taking into account unsteady experimental data, synthetic data coming from numerical simulation, and the global amount of uncertainty associated with measurements and physical-model parameters. We aim to propose novel algorithms combining data-inferred stochastic modeling, uncertainty propagation through computer codes, and data assimilation techniques. The applications of interest are both related to the exploitation of renewable energy sources: wind farms and solar Organic Rankine Cycles (ORCs).

University of Zurich: R. Abgrall. Collaboration on high-order adaptive methods for CFD and uncertainty quantification.

Politecnico di Milano, Aerospace Department (Italy): Prof. A. Guardone. Collaboration on ALE for complex flows (compressible flows with complex equations of state).

von Karman Institute for Fluid Dynamics (Belgium). With Prof. T. Magin we work on uncertainty quantification problems for the identification of inflow conditions of hypersonic nozzle flows.

Rutgers University. Collaboration with Prof. F. Cakoni on transmission eigenvalues.

University of Delaware. Collaboration with Prof. D. Colton on inverse scattering theory.

Ecole Nationale des Ingénieurs de Tunis. Collaboration with Prof. M. Bellasoued on inverse scattering problems.

Faculté des Sciences de Sfax. Collaboration with Prof. S. Chaabane on inverse problems for singular parameters.

University of Sousse. Collaboration with Prof. M. Khenissi on transmission eigenvalues.

Colorado School of Mines. Collaboration with F. Pourahmadian on the differential LSM.

Fioralba Cakoni and David Colton, 1 week, March 2019

PostDoc, Xiaoli Liu, Sampling methods for time dependent problems, H. Haddar

Master thesis, Marwa Mansouri, Inside-outside duality with artificial backgrounds, L. Chesnel and H. Haddar.

PostDoc, Imen Mekkaoui, In-vivo cardiac diffusion magnetic resonance imaging: simulations and parameter estimation, Jing Rebecca Li and Jan Hesthaven.

Master thesis, Try Nguyen Tran, French-Vietnam Master Program in Applied Mathematics, Jing Rebecca Li

Master thesis, Nouha Jenhani, ENIT, LAMSIN, H. Haddar

Master thesis, Amal Labidi, ENIT, LAMSIN, H. Haddar

P.M. Congedo is the Chair of the CWI-Inria workshop at CWI in Amsterdam, September 19-20, 2019.

P.M. Congedo is the Chair of the UQOP 2020 Conference, organized in Paris on March 18-21, 2020.

L. Chesnel co-organizes the Journée de rentrée (2019) of the Centre de Mathématiques Appliquées of École Polytechnique.

L. Chesnel co-organizes the seminar of the Centre de Mathématiques Appliquées of École Polytechnique.

L. Chesnel co-organizes the seminar of the Inria teams Defi-M3DISIM-Poems.

M. Bonazzoli organizes the working group of Defi team.

J.R. Li is an organizer of the École d'été d'excellence (summer school of excellence) for Chinese Master's students, funded by the French Embassy in China, July 2019.

G. Allaire is a member of the editorial boards of

Book series "Mathématiques et Applications" of SMAI and Springer,

ESAIM/COCV,

Structural and Multidisciplinary Optimization,

Discrete and Continuous Dynamical Systems Series B,

Computational and Applied Mathematics,

Mathematical Models and Methods in Applied Sciences (M3AS),

Annali dell'Universita di Ferrara,

OGST (Oil and Gas Science and Technology),

Journal de l'Ecole Polytechnique - Mathématiques,

Journal of Optimization Theory and Applications.

P.M. Congedo is an Editor of Mathematics and Computers in Simulation (MATCOM, Elsevier).

H. Haddar is a member of the editorial boards of

Inverse Problems

SIAM Journal on Scientific Computing

SIAM Journal on Mathematical Analysis

We reviewed papers for top international journals in the main scientific themes of the team.

G. Allaire

SIAM Geosciences, Houston, March 11-14, 2019.

DCAMM seminar, DTU, Copenhagen, April 5, 2019.

WCSMO, Beijing, May 20-24, 2019.

Mathematical Design of New Materials, Cambridge, June 3-14, 2019.

Chalmers Colloquium, Sweden, August 26-30, 2019.

Sim-AM, Pavia, September 11-13, 2019.

Shape Optimization and Isoperimetric and Functional Inequalities, Levico Terme, September 23-27, 2019.

Computational modelling of Complex Materials across the Scales, Glasgow, October 1-4, 2019.

New trends in PDE constrained optimization, Linz, October 15-18, 2019.

M. Bonazzoli

Seminar at POEMS lab, ENSTA-ParisTech, Palaiseau, France.

ENUMATH 2019, European Numerical Mathematics and Advanced Applications Conference, Egmond aan Zee, Netherlands.

Parallel Solution Methods for Systems Arising from PDEs, Marseille, France (Plenary invited talk).

L. Chesnel

Waves conference, Vienna, August 2019.

Applied Inverse Problems conference, Grenoble, 2019.

H. Haddar

Applied Inverse Problems conference, Grenoble, July 2019

International Conference on Antenna Measurements & Applications, Bali, October 2019

New Trends in Analysis and Probability, Sousse, September 2019

Workshop in the memory of A. Lechleiter, Bremen, May 2019

Journée des rencontres DEFI-M3DISIM-POEMS, December 2019.

P.M. Congedo

Workshop "Numerical simulation of hypersonic flows", July 8, 2019.

Seminar at ONERA, Meudon, November 29, 2019.

G. Allaire is a board member of Institut Henri Poincaré (IHP). He is the chairman of the scientific council of IFPEN (French Petroleum Institute and New Energies). He is the chairman of the scientific council of AMIES (Agency for Interaction in Mathematics with Business and Society).

G. Allaire is a member of the "comité national" CNRS, section 41 (mathematics).

G. Allaire is a member of the scientific board of the Gaspard Monge program on optimization (PGMO) at the Jacques Hadamard Mathematical Foundation.

J.R. Li is a member of the SIAM Committee on Programs and Conferences, 2017-2019.

J.R. Li is an elected member of the Inria Commission d'Évaluation, 2015-2019.

M. Bonazzoli was a member of the evaluation committee for the 2020 call of the Inria Associate Teams programme.

H. Haddar was the president of the evaluation committee for mathematical laboratories at the Universities of Sfax and Sousse (Tunisia).

J.R. Li is the International Correspondent for the Centre de Mathématiques Appliquées, École Polytechnique, 2018-present.

J.R. Li is responsible for the École Polytechnique part of the French-Vietnam Master Program in Applied Mathematics, 2016-present.

M. Bonazzoli is the International partnerships Scientific Correspondent for Inria Saclay.

Master: Grégoire Allaire, Approximation Numérique et Optimisation, for students in the second year of Ecole Polytechnique curriculum: 8 lessons of 1h30.

Master: Grégoire Allaire, Transport and diffusion, for students in the third year of Ecole Polytechnique curriculum. 9 lessons of 2h jointly with F. Golse.

Master: Houssem Haddar, Waves and imaging: concepts, theory and applications, Master M2 "Mathematical Modeling": 9 lessons of 3h.

Master: Houssem Haddar, Inverse scattering problems, Master M2, ENIT, 10 lessons of 3h.

Master: Lucas Chesnel, Elementary tools of analysis for partial differential equations, for students in the first year of Ensta ParisTech curriculum, 25 equivalent TD hours.

Master: Lucas Chesnel, Numerical approximation and optimisation, for students in the second year of Ecole Polytechnique curriculum: 2 TDs of 4h + one project.

Master: Lucas Chesnel, Modal Modélisation mathématique par la démarche expérimentale, for students in the second year of Ecole Polytechnique curriculum: 5 TDs of 2h.

Master: Grégoire Allaire, Optimal design of structures, for students in the third year of Ecole Polytechnique curriculum. 9 lessons of 1h30.

Master: Grégoire Allaire, Theoretical and numerical analysis of hyperbolic systems of conservation laws, Master M2 "Mathematical Modeling", 8 lessons of 3h.

Master: Jing Rebecca Li, lecturer of the course "Mathematical and numerical foundations of modeling and simulation using partial differential equations", French-Vietnam Master in Applied Mathematics, University of Science, Ho Chi Minh City, September 2019, 2 weeks.

Master: P.M. Congedo, Numerical methods in Fluid Mechanics, ENSTA ParisTech, 12 h.

Master: P.M. Congedo, Numerical methods for Hyperbolic Problems, von Karman Institute for Fluid Dynamics, 12 h.

Doctorate: Houssem Haddar, Inverse problems, Executive Education, École Polytechnique, 9h.

PhD: K. Napal, On the use of sampling methods and spectral signatures for the resolution of inverse scattering problems (defended December 2019), L. Audibert, L. Chesnel and H. Haddar.

PhD: F. Feppon, topology optimization of coupled fluid-solid-thermal systems (defended December 2019), G. Allaire and Ch. Dapogny.

PhD: M. Kchaw, Higher-order homogenization tensors for DMRI modeling (defended July 2019), H. Haddar and M. Moakher.

PhD: B. Charfi, Identification of the singular support of generalized impedance boundary conditions (defended September 2019), S. Chaabane and H. Haddar.

PhD: F. Sanson, Estimation of the human risk related to the fall of space objects to Earth (defended in September 2019), P.M. Congedo, O. Le Maitre.

PhD: N. Razaaly, Rare Event Estimation and Robust Optimization Methods with Applications to ORC Turbine Cascade (defended in July 2019), P.M. Congedo.

PhD: G. Gori, Non-ideal compressible-fluid dynamics: developing a combined perspective on modeling, numerics and experiments (defended in January 2019), A. Guardone, P.M. Congedo.

PhD: J. Carlier, Residual distribution schemes and wave propagation methods for the simulation of compressible two-phase flows with heat and mass transfer (defended in December 2019), M. Pelanti, P.M. Congedo.

Ph.D. in progress: S. Houbar, cavitation in the coolant fluid induced by the motion of the assemblies of a reactor (CEA, to be defended in 2020), G. Allaire and G. Campioni.

Ph.D. in progress: M. Boissier, coupled optimization of shape topology and laser path in additive manufacturing (to be defended in 2020), G. Allaire and Ch. Tournier.

Ph.D. in progress: L. Rakotondrainibe, optimization of the connections between parts in mechanical systems (Renault, to be defended in 2020), G. Allaire.

Ph.D. in progress: J. Desai, topology optimization of structures with nonlinear behavior using mesh deformation methods (IRT SystemX, to be defended in 2021), G. Allaire and F. Jouve.

PhD in progress: H. Girardon, Non-destructive testing of PWR tubes using eddy current rotating coils (to be defended in 2021), H. Haddar and L. Audibert.

PhD in progress: M. Rihani, Maxwell's equations in the presence of metamaterials (to be defended in 2021), A.-S. Bonnet-BenDhia and L. Chesnel.

PhD in progress: Chengran Fang, Enabling cortical cell-specific sensitivity on clinical multi-shell diffusion MRI microstructure measurements (to be defended in 2022), Jing Rebecca Li and Demian Wassermann.

PhD in progress: Nouha Jenhani, Differential sampling methods for defect imaging in periodic layers (to be defended in 2022), Houssem Haddar and Mourad Bellasoued.

PhD in progress: Amal Labidi, Inverse problems for the wave equation with magnetic potential (to be defended in 2022), Houssem Haddar and Mourad Bellasoued.

PhD in progress: Marwa Mansouri, Inside-outside duality for artificial backgrounds (to be defended in 2022), Houssem Haddar, Lucas Chesnel and Moez Khenissi.

PhD in progress: M. Bihr, additive manufacturing and topology optimization of structures (to be defended in 2022), G. Allaire and B. Bogosel.

PhD in progress: R. Delvaux, super-linearly convergent coupling algorithms between neutronics, thermal-hydraulics and thermics (to be defended in 2022), G. Allaire and C. Patricot.

PhD in progress: A. Touiti, optimization of anisotropy for structures produced by additive manufacturing (to be defended in 2022), G. Allaire and F. Jouve.

PhD in progress: M. Rivier, optimization under uncertainty through a Bounding-Box concept (to be defended in May 2020), P.M. Congedo.

PhD in progress: Joao Reis, Advanced methods for stochastic elliptic PDEs (to be defended in October 2020), P.M. Congedo, O. Le Maitre.

PhD in progress: Anabel Del Val, Advanced Bayesian methods for aerospace applications (to be defended in October 2020), P.M. Congedo, O. Le Maitre, O. Chazot, T. Magin.

PhD in progress: P. Novello, deep learning for atmospheric reentry flows (to be defended in November 2021), P.M. Congedo, D. Lugato, G. Poette.

PhD in progress: E. Solai, Virtual Prototyping of the EVE expander (to be defended in October 2021), P.M. Congedo, H. Beaugendre.

PhD in progress: N. Leoni, Bayesian inference of model error in imprecise models (to be defended in February 2022), P.M. Congedo, O. Le Maitre, M.G. Rodio.

M. Bonazzoli and L. Chesnel gave a presentation in the context of the Fête de la science 2019 to several groups of young students (aged 10 to 17).

M. Bonazzoli was the SMAI representative at the Métiers des Maths stand at the 20th Salon Culture et Jeux Mathématiques.

P.M. Congedo is Deputy Coordinator of "Maths/Engineering" Program of the Labex Mathématiques Hadamard.

M. Bonazzoli, H. Haddar and J.R. Li supervised an internship in the DEFI team for 6 middle-school students (one afternoon).