
Section: Research Program

Methodological core

The work described in this section is aimed at investigating fundamental mathematical and numerical problems which arise in the first two research axes.

Mathematical analysis of PDEs

The mathematical analysis of the multi-scale and multi-physics models is a fundamental tool of the simulation chain. Indeed, well-posedness results provide valuable insights into the properties of the solutions of these systems, which can, for instance, guide the design of the numerical methods or help to discriminate between different modeling options.

Fluid-structure interaction. Most of the existing results concern the existence of solutions locally in time or away from contacts. One fundamental problem, related to the modeling and simulation of valve dynamics (see Sections 3.1.1 and 3.3.2), is the question of whether or not the model allows for contact (see [57], [55]). The proposed research activity is aimed at investigating the case of immersed rigid or elastic structures, and at exploring whether the considered model allows for contact and whether existence can be proved beyond contact. The question of the choice of the model is crucial: considering different types of fluid (Newtonian or non-Newtonian), structure (smooth or rough; elastic, viscoelastic, or poro-elastic), or various interface conditions influences whether or not the model allows for contact.

Fluid–structure mixture. The main motivation for studying fluid-solid mixtures (i.e., porous media consisting of a skeleton and connected pores filled with fluid) comes from the modeling of the lung parenchyma and cerebral hemorrhages. The Biot model is the most widely used in the literature for the modeling of poro-elastic effects in the arterial wall. Here, we propose to investigate the recent model proposed by the M3DISIM project-team in [47], which allows for nonlinear constitutive behaviors and viscous effects, both in the fluid and the solid. Among the questions which will be addressed, some of them in collaboration with M3DISIM, we mention the justification of the model (or its linearized version) by means of homogenization techniques and its well-posedness.

Fluid–particle interaction. Mathematical analyses of the Navier-Stokes-Vlasov system for fluid-particle interaction in aerosols can be found in [38], [39]. We propose to extend these studies to more realistic models which take into account, for instance, changes in the volume of the particles due to humidity.

Numerical methods for multi-physics problems

In this section we describe the main research directions that we propose to explore as regards the numerical approximation of multi-physics problems.

Fluid-structure interaction. The spatial discretization of fluid-structure interaction (FSI) problems generally depends on the amount of solid displacement within the fluid. Problems featuring moderate interface displacements can be successfully simulated using (moving) fitted meshes with an arbitrary Lagrangian-Eulerian (ALE) description of the fluid. This facilitates, in particular, the accurate discretization of the interface conditions. Nevertheless, for problems involving large structural deflections, with solids that might come into contact or that might break up, the ALE formalism becomes cumbersome. A preferred approach in this case is to combine an Eulerian formalism in the fluid with an unfitted mesh discretization, in which the fluid-structure interface deforms independently of a background fluid mesh. In general, traditional unfitted mesh approaches (such as the immersed boundary and the fictitious domain methods [65], [37], [54], [35]) are known to be inaccurate in space. These difficulties have been recently circumvented by a Nitsche-based cut-FEM methodology (see [32], [43]). The superior accuracy properties of cut-FEM approaches come at a price: these methods demand a much more involved computer implementation and require a specific evaluation of the interface intersections.
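
To illustrate the Nitsche idea underlying such cut-FEM formulations, the following toy sketch (hypothetical, not taken from the team's software) imposes homogeneous Dirichlet conditions weakly on a one-dimensional Poisson problem with P1 finite elements; the same consistency, symmetry, and penalty terms appear, at the interface rather than the boundary, in the cut-FEM setting:

```python
import numpy as np

def poisson_nitsche_1d(n=40, gamma=10.0):
    """P1 FEM for -u'' = 1 on (0, 1) with u(0) = u(1) = 0 imposed weakly
    via the symmetric Nitsche method (toy sketch, not a cut-FEM code)."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    A = np.zeros((n + 1, n + 1))
    b = np.zeros(n + 1)
    for e in range(n):  # element-wise stiffness and (trapezoidal) load, f = 1
        A[e:e + 2, e:e + 2] += np.array([[1.0, -1.0], [-1.0, 1.0]]) / h
        b[e:e + 2] += 0.5 * h
    for node, nbr in [(0, 1), (n, n - 1)]:  # Nitsche terms at both endpoints
        # consistency term -(d_n u) v, with d_n u = (u[node] - u[nbr]) / h
        A[node, node] -= 1.0 / h
        A[node, nbr] += 1.0 / h
        # symmetry term -(d_n v) u
        A[node, node] -= 1.0 / h
        A[nbr, node] += 1.0 / h
        # penalty term (gamma / h) u v, stabilising the formulation
        A[node, node] += gamma / h
    return x, np.linalg.solve(A, b)

x, u = poisson_nitsche_1d()
# exact solution is u(x) = x (1 - x) / 2
err = np.max(np.abs(u - 0.5 * x * (1.0 - x)))
```

The penalty parameter gamma must dominate an inverse-inequality constant for the discrete problem to be coercive; here any value above a small threshold suffices.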

As regards the time discretization, significant advances have been achieved over the last decade in the development and the analysis of time-splitting schemes that avoid strong coupling (fully implicit treatment of the interface coupling) without compromising stability and accuracy. In the vast majority of these studies, the spatial discretization is based on body-fitted fluid meshes, and the problem of accuracy remains practically open for the coupling with thick-walled structures (see, e.g., [52]). Within the unfitted mesh framework, splitting schemes which avoid strong coupling are much rarer in the literature.
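
The added-mass difficulty that motivates strongly coupled schemes can be illustrated on a toy mass-spring model (hypothetical, not one of the team's schemes): lagging the fluid's added-mass reaction by one time step yields a loosely coupled scheme that diverges whenever the fluid-to-solid mass ratio exceeds one, regardless of the time-step size:

```python
def spring_added_mass(mass_ratio, steps=2000, dt=1e-3, k=1.0, ms=1.0):
    """Toy added-mass FSI model: ms * a = -k * x + fluid force, where the
    fluid reacts with an added-mass force -ma * a.  The loosely coupled
    scheme below lags the fluid force by one time step."""
    ma = mass_ratio * ms
    x, v, a = 1.0, 0.0, 0.0
    for _ in range(steps):
        x += dt * v
        a = (-k * x - ma * a) / ms  # fluid force uses the lagged acceleration
        v += dt * a
        if abs(x) > 1e9:            # stop once the scheme has clearly diverged
            break
    return abs(x)

stable = spring_added_mass(0.5)    # light fluid: bounded oscillation
unstable = spring_added_mass(2.0)  # heavy fluid: lagged coupling diverges
```

A monolithic (strongly coupled) discretization of the same model, with (ms + ma) * a = -k * x, is stable for any mass ratio, which is precisely what splitting schemes try to recover at a lower cost.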

Computational efficiency is a major bottleneck in the numerical simulation of fluid-structure interaction problems with unfitted meshes. The proposed research activity is aimed at addressing these issues. Another fundamental problem that we propose to tackle is the case of topology changes in the fluid, due to contact or fracture of immersed solids. This challenging problem (fluid-structure-contact-fracture interaction) plays a major role in many applications (e.g., heart valve repair or replacement, break-up of drug-loaded micro-capsules), but most of the available studies are still merely illustrative. Indeed, besides the numerical issues discussed above, the stability and the accuracy properties of the numerical approximations in such a singular setting are not known.

Fluid–particle interaction and gas diffusion.

Aerosols can be described through mesoscopic equations of kinetic type, which provide a trade-off between model complexity and accuracy. The strongly coupled fluid-particle system involves the incompressible Navier-Stokes equations and the Vlasov equation. The proposed research activity is aimed at investigating the theoretical stability of time-splitting schemes for this system. We also propose to extend these studies to more complex models that take into account the radius growth of the particles due to humidity, and for which stable, accurate and mass conservative schemes have to be developed.
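
A minimal illustration of the time-splitting idea, on a toy stand-in for the Navier-Stokes-Vlasov system (the fluid velocity field is prescribed and the particles obey Stokes drag; all names are hypothetical):

```python
import numpy as np

def step_particles(x, v, dt, tau, u_fluid):
    """One Lie-splitting step for a toy aerosol model:
    transport sub-step  dx/dt = v                   (explicit update),
    drag sub-step       dv/dt = (u_f(x) - v) / tau  (solved exactly, hence
    stable even when the relaxation time tau makes the drag term stiff)."""
    x = x + dt * v
    uf = u_fluid(x)
    v = uf + (v - uf) * np.exp(-dt / tau)  # exact relaxation toward the fluid velocity
    return x, v

u_fluid = lambda x: np.sin(x)  # hypothetical smooth fluid velocity field
x = np.linspace(0.0, 1.0, 8)
v = np.zeros_like(x)
tau, dt = 1e-4, 1e-2           # tau << dt: the drag term is stiff
for _ in range(50):
    x, v = step_particles(x, v, dt, tau, u_fluid)
```

Solving the stiff sub-step exactly (or implicitly) is what allows the time step to be chosen from accuracy considerations on the transport, rather than from the drag stiffness.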

As regards gas diffusion, the mathematical models are generally highly nonlinear (see, e.g., [62], [64], [40]). Numerical difficulties arise from these strong nonlinearities, and we propose to develop numerical schemes able to deal with the stiff geometrical terms while guaranteeing mass conservation. Moreover, numerical diffusion must be limited in order to correctly capture the time scales and the cross-diffusion effects.
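
The mass-conservation requirement can be made concrete on a toy conservative finite-volume scheme for a scalar nonlinear diffusion equation u_t = (D(u) u_x)_x (a sketch under simplifying assumptions, not one of the cited models): writing the update in flux-difference form makes the total mass exact by construction.

```python
import numpy as np

def diffuse(u, dt, h, D, steps):
    """Explicit finite-volume scheme for u_t = (D(u) u_x)_x with no-flux
    boundaries; the conservative flux differencing preserves sum(u) * h
    up to round-off."""
    u = u.copy()
    for _ in range(steps):
        um = 0.5 * (u[1:] + u[:-1])       # face-centred values
        F = D(um) * (u[1:] - u[:-1]) / h  # diffusive flux at interior faces
        u[:-1] += dt / h * F              # flux out of cell i ...
        u[1:] -= dt / h * F               # ... is flux into cell i + 1
    return u

h, dt = 0.05, 1e-4  # dt respects the explicit stability limit h**2 / (2 max D)
x = np.arange(0.0, 1.0, h) + 0.5 * h
u0 = np.exp(-50.0 * (x - 0.5) ** 2)       # initial bump
u = diffuse(u0, dt, h, D=lambda s: 1.0 + s ** 2, steps=200)
mass_error = abs(u.sum() - u0.sum()) * h
```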

Statistical learning and mathematical modeling interactions

Machine learning and, in general, statistical learning methods (currently intensively developed and used, see [33]) build a relationship between the system observations and the predictions of the quantities of interest (QoI) based on the a posteriori knowledge of a large amount of data. When dealing with biomedical applications, the available observations are signals (think, for instance, of images or electrocardiograms, pressure and Doppler measurements). These data are high dimensional, and the number of individuals needed to set up precise classification/regression tools could be prohibitively large. To overcome this major problem and still exploit the advantages of statistical learning approaches, we try to add, to the a posteriori knowledge of the available data, an a priori knowledge based on the mathematical modeling of the system. A large number of numerical simulations is performed in order to explore a set of meaningful scenarios potentially missing in the dataset. This in silico database of virtual experiments is added to the real dataset: the number of individuals is increased and, moreover, this larger dataset can be used to compute semi-empirical functions to reduce the dimension of the observed signals.
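
The augmentation strategy can be sketched on a hypothetical toy problem (all quantities invented for illustration): a handful of "real" measurements is enriched with many model-generated samples, which deliberately carry a small model error, before fitting a regressor.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical setup: a scalar quantity of interest depends linearly on two
# features, and only a handful of "real" measurements is available
true_w = np.array([2.0, -1.0])
X_real = rng.normal(size=(5, 2))
y_real = X_real @ true_w + 0.05 * rng.normal(size=5)

# "in silico" samples: a mathematical model, here deliberately carrying a
# small model error (+0.1 on each coefficient), explores many virtual scenarios
X_sim = rng.normal(size=(500, 2))
y_sim = X_sim @ (true_w + 0.1) + 0.05 * rng.normal(size=500)

# least-squares regression on the augmented (real + virtual) dataset
X = np.vstack([X_real, X_sim])
y = np.concatenate([y_real, y_sim])
w_aug, *_ = np.linalg.lstsq(X, y, rcond=None)
```

The fitted coefficients land close to the truth but inherit part of the model bias, which is exactly the trade-off (model error versus sample scarcity) discussed below.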

Several investigations have to be carried out to systematically set up this framework. First, often there is not a single mathematical model describing a physiological phenomenon, but rather hierarchies of models of different complexity. Every model is characterized by a model error: how can this be accounted for? Moreover, several statistical estimators can be set up and eventually combined in order to improve the estimations (see [70]). Other issues have a practical impact and have to be investigated: what is the optimal number of in silico experiments to be added? What are the most relevant scenarios to be simulated, in relation to the statistical learning approach considered, in order to obtain reliable results? To answer these questions, discussions and collaborations with statistics and machine learning groups have to be developed.
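
As a toy illustration of combining estimators, the sketch below (all numbers hypothetical) mixes a noisy unbiased estimator with a precise but biased one through an inverse-MSE convex combination, one of the simplest aggregation rules:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = 1.0

# two hypothetical estimators of the same quantity: a noisy unbiased one
# (e.g., purely data-driven) and a precise but biased one (e.g., model-based)
est_a = truth + 0.5 * rng.normal(size=10_000)         # no bias, std 0.5
est_b = truth + 0.1 + 0.05 * rng.normal(size=10_000)  # bias 0.1, std 0.05

# convex combination weighted by the (here assumed known) inverse MSEs
mse_a = 0.5 ** 2
mse_b = 0.1 ** 2 + 0.05 ** 2
w = (1.0 / mse_a) / (1.0 / mse_a + 1.0 / mse_b)
combined = w * est_a + (1.0 - w) * est_b
```

In practice the individual MSEs are unknown and must themselves be estimated, e.g., on validation data, which is part of the methodological question raised above.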

Tensor approximation and HPC

Tensor methods have undergone significant development in recent years because of their pertinence in providing a compact representation of large, high-dimensional data. Their applications range from applied mathematics and numerical analysis to machine learning and computational physics. Several tensor decompositions and methods are currently available (see [56]). Contrary to matrices, for tensors of order three or higher there does not exist, in general, a best low-rank approximation, the problem being ill-posed (see [68]). Two main points will be addressed: (i) the tensor construction and the multi-linear algebra operations involved when solving high-dimensional problems are still sequential in most cases; the objective is to design efficient parallel methods for tensor construction and computations; (ii) when solving high-dimensional problems, the tensor is not given explicitly; instead, it is specified through a set of equations and tensor data. Our goal is to devise numerical methods able to (dynamically) adapt the rank and the discretization (possibly even the tensor format) to respect the chosen error criterion. This could, in turn, improve the efficiency and reduce the computational burden.
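
A minimal sketch of one standard decomposition, the sequentially truncated higher-order SVD in the Tucker format, assuming dense NumPy tensors: since a best low-rank approximation may not exist, this projection is only quasi-optimal, but it is cheap and robust.

```python
import numpy as np

def hosvd_truncate(T, ranks):
    """Sequentially truncated higher-order SVD (Tucker format): each mode
    of T is projected onto the leading left singular vectors of its
    unfolding.  Quasi-optimal; a best low-rank approximation may not exist."""
    core = T.copy()
    factors = []
    for mode, r in enumerate(ranks):
        M = np.moveaxis(core, mode, 0).reshape(core.shape[mode], -1)  # mode-k unfolding
        U = np.linalg.svd(M, full_matrices=False)[0][:, :r]
        factors.append(U)
        core = np.moveaxis(np.tensordot(U.T, core, axes=(1, mode)), 0, mode)
    return core, factors

def reconstruct(core, factors):
    """Apply the factor matrices mode by mode to expand the Tucker core."""
    for mode, U in enumerate(factors):
        core = np.moveaxis(np.tensordot(U, core, axes=(1, mode)), 0, mode)
    return core

# a nearly rank-one test tensor: outer product of three vectors plus noise
rng = np.random.default_rng(0)
a, b, c = rng.normal(size=10), rng.normal(size=11), rng.normal(size=12)
T = np.einsum("i,j,k->ijk", a, b, c) + 1e-8 * rng.normal(size=(10, 11, 12))
core, factors = hosvd_truncate(T, ranks=(1, 1, 1))
rel_err = np.linalg.norm(reconstruct(core, factors) - T) / np.linalg.norm(T)
```

Point (ii) above asks for more than this: when the tensor is only available through equations, the ranks in `ranks` would have to be chosen adaptively rather than fixed a priori.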

These sought improvements could make the definition of parsimonious discretizations for kinetic theory and uncertainty quantification problems (see Section 3.2.1) more efficient and suitable for an HPC paradigm. This work will be carried out in collaboration with Olga Mula (Université Paris-Dauphine) and the ALPINES and MATHERIALS project-teams.