`BACCHUS` is a joint team of Inria Bordeaux - Sud-Ouest, LaBRI
(Laboratoire Bordelais de Recherche en Informatique – CNRS UMR 5800,
University of Bordeaux and IPB) and IMB (Institut de Mathématiques
de Bordeaux – CNRS UMR 5251, University of Bordeaux).
`BACCHUS` was created on January 1st, 2009.

The purpose of the `BACCHUS` project is to efficiently analyze and
solve scientific computing problems that arise in complex
research and industrial applications and that involve scaling. By
scaling we mean that the applications considered require an enormous
computational power, of the order of tens or hundreds of teraflops,
and that they handle huge amounts of data. Solving these kinds of
problems requires a multidisciplinary approach involving both applied
mathematics and computer science.

Our major focus is fluid problems, and especially the simulation of
*physical wave propagation problems* including fluid mechanics,
inert and reactive flows, multimaterial and multiphase flows,
acoustics, real-gas effects, etc. `BACCHUS` intends to contribute to the solution of
these problems at all steps of the
development chain, from the design of new high-performance,
more robust and more precise numerical schemes to the creation and
implementation of optimized parallel algorithms and high-performance
codes.

By taking architectural and performance concerns into account from the early stages of design and implementation, the high-performance software implementing our numerical schemes will be able to run efficiently on most of today's major parallel computing platforms (UMA and NUMA machines, large networks of nodes, production grids).

Many achievements in rocket science have been made since Apollo, but predicting the heat flux to the surface of a spacecraft remains an imperfect science, and inaccuracies in these predictions can be fatal for the crew or the success of robotic missions. Predicting an accurate heat flux is a particularly complex task, given the uncertainties both in the complex multi-physics phenomena involved in hypersonic flow models and in atmospheric properties such as density and temperature. Hence, it is difficult to establish “error bars” on the heat flux prediction. We won the first call for projects from ESA concerning uncertainty quantification for aerospace applications. In this project, we are the main investigator for the set-up of efficient numerical techniques for UQ.

In June and July, we joined the NASA Center for Turbulence Research (CTR) Summer Program at Stanford University. We developed a novel method to solve stochastic partial differential equations, in particular hyperbolic equations.

We have developed an algorithm for the robust construction of curved simplicial meshes in two and three dimensions. Starting from a classical (straight) mesh, we are able to curve the boundary elements and then the volume elements, while preserving the structure of the initial mesh as much as possible. In particular, this algorithm does not destroy boundary layer structures, even for meshes designed for turbulent simulations.

We have succeeded in constructing Residual Distribution schemes that are *uniformly* accurate whatever the Peclet number for scalar advection-diffusion problems. The schemes have been extended to turbulent flow simulations.

The native scheduler of the PaStiX solver can be replaced with generic runtimes to address sparse direct factorizations on heterogeneous architectures (clusters of multicore/multi-GPU nodes). Our results on heterogeneous architectures show that we can easily improve the factorization time on a personal computer (1 GPU and several cores), and we have identified leads, both on algorithms and on schedulers, to optimize performance on larger platforms.

A large number of engineering problems involve fluid
mechanics. They may involve the coupling of one or more physical
models. An example is provided by aeroelastic problems, which have
been studied in detail by other Inria teams. Another example is given by
flows in pipelines, where the fluid (a mixture of air, water and gas) does
not have well-known physical properties, and there are even more exotic situations
that will be discussed later.
Another application is the influence of fluid flow on noise production.
Problems in aeroacoustics are indeed becoming more and more important in
everyday life. On some occasions, one needs specific numerical tools,
either to take into account *e.g.* a fluid's exotic equation of state, or
because the amount of required computational resources becomes huge, as in unsteady flows.
Specific tools are also needed when one is interested in very particular physical quantities, such as
the lift and drag of an airfoil, a situation where commercial tools can only provide a very crude answer.

There are many commercial codes. They allow users to
simulate a lot of different flow types. The quality of the results is however
far from optimal in many cases. Moreover, the numerical technology implemented in these codes
is often not the most recent. To give a few examples, consider the noise generated
by wake vortices in supersonic flows (external aerodynamics/aeroacoustics),
or the direct simulation of a 3D compressible mixing layer in a complex geometry (as in combustion chambers).
To the best of our knowledge, due to the very different temporal
and physical scales that need to be captured,
a direct simulation of these phenomena is
not within the reach of the most recent technologies, because the numerical
resources required are currently unavailable.
*We need to invent specific algorithms for this purpose.*

In order to efficiently simulate these complex physical
problems, we are working on some fundamental aspects of the numerical
analysis of nonlinear hyperbolic problems. *Our goal is to develop
more accurate and more efficient schemes that can adapt to modern computer architectures.*

More precisely, *we are working on a class of numerical schemes*, known in the literature as
Residual Distribution schemes, *specifically
tailored to unstructured and hybrid meshes*. They have the most compact
stencil that is compatible with the expected order of accuracy.
This *accuracy is at least second order, and it can go up to any order
of accuracy, even though fourth order is considered for practical applications.*
Since the stencil is compact, the implementation on parallel machines becomes
simple. These schemes are very flexible in nature, which is so far one of their most important advantages
over other techniques. This feature has allowed us to adapt the schemes to the requirements of different
physical situations (*e.g.* different formulations allow either an efficient explicit
time advancement for problems involving small time scales, or a fully implicit space-time
variant which is unconditionally stable and makes it possible to handle stiff problems
where only the large time scales are relevant). This flexibility has also enabled us
to devise a variant using the same data structure as the popular Discontinuous Galerkin
schemes, which are also part of our scientific focus.

The compactness of the second order version of the schemes enables us to efficiently use the high-performance parallel linear algebra tools developed by the team. However, the high order versions of these schemes, which are under development, require modifications to these tools that take into account the nature of the data structures used to reach higher orders of accuracy. This leads to new scientific problems at the border between numerical analysis and computer science. In parallel to these fundamental aspects, we also work on adapting more classical numerical tools to complex physical problems such as those encountered in interface flows, turbulent or multiphase flows, geophysical flows, and material science. An effort to develop a more predictive tool for multiphase compressible flows is also underway. Within this project, several advances have been made, e.g. considering a more complete system of equations including viscosity, working on the thermodynamic modeling of complex fluids, and developing stochastic methods for uncertainty quantification in compressible flows.

We expect within a few years to be able to demonstrate the potential of our developments on applications ranging from the reproduction of the complex multidimensional interactions between tidal waves and estuaries, to unsteady aerodynamics and aeroacoustics associated with both external and internal compressible flows, compressible ideal and non-ideal MHD (in relation with the ITER project), and the behavior of complex materials. This will be achieved by means of a multi-disciplinary effort involving our research on residual distribution schemes, the parallel advances in algebraic solvers and partitioners, and the strong interactions with specialists in computer science, scientific computing, physics, mechanics, and mathematical modeling.

Our research in numerical algorithms has led to the development of the
`RealfluiDS` platform which is described in
section .
New software developments are under way in the field of free surface flows and complex materials modeling.
These developments are performed in the code `SLOWS` (Shallow-water fLOWS) for free surface flows,
and in the solver `COCA` (CodeOxydationCompositesAutocicatrisants)
for the simulation of the self-healing process in composite materials.
These developments will be described in sections and .

This work is supported by the EU-Strep IDIHOM, various research contracts and in
part by the `ANEMOS` project and the ANR-Emergence `RealFluids`
grant. A large part of the team also benefits from the `ADDECCO` ERC grant.

Another topic of interest is the quantification of uncertainties in nonlinear problems. In many applications, the physical model is not known accurately. The typical example is that of turbulence models in aeronautics. These models all depend on a number of parameters which can radically change the output of the simulation. Since it is impossible to lump the large number of temporal and spatial scales of a turbulent flow into a few model parameters, these values are often calibrated to quantitatively reproduce a certain range of effects observed experimentally. A similar situation is encountered in many applications such as real gas or multiphase flows, where the form of the equation of state suffers from uncertainties, and free surface flows with sediment transport, where often both the hydrodynamic model and the sediment transport model depend on several parameters, and may have more than one formal expression.

This type of uncertainty, called *epistemic*, is associated
with a lack of knowledge and could be reduced by further experiments and investigation.
By contrast, another type of uncertainty, called *aleatory*, is related to the
intrinsically random quality of a physical measure and cannot be reduced.
The dependency of the numerical simulation on these uncertainties can be studied by propagation-of-chaos
techniques such as the polynomial chaos techniques developed during recent
years. Different implementations exist,
depending on whether the method is intrusive or not. The accuracy of these
methods is still a matter of research, as well as how they can handle as
large a number of uncertainties as possible, and their versatility with
respect to the structure of the random variables' pdfs.
Our objective is to develop non-intrusive or semi-intrusive methods, trying to define
a unified framework for obtaining a reliable and accurate numerical solution
at a moderate computational cost.
Dealing with high-dimensional representations of stochastic inputs in design
optimization is computationally prohibitive. In fact, for a robust design, statistics of the fitness
functions are also important; uncertainty quantification (UQ) then becomes the predominant
issue to handle if a large number of uncertainties is taken into account. Several
methods have been proposed in the literature to handle high-dimensional stochastic problems,
but their accuracy on realistic problems, where highly nonlinear effects may exist, is not
proven at all.
We have developed several efficient global strategies for robust optimization: the first class of methods is based on the extension of simplex stochastic collocation to the optimization space, while the second consists of hybrid strategies using ANOVA decomposition.
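To make the non-intrusive idea concrete, the following sketch propagates a single Gaussian uncertainty through a black-box model by Gauss-Hermite quadrature; the model function and the 8-point rule are hypothetical choices for illustration, not the project's solvers:

```python
import numpy as np

# Non-intrusive propagation: the deterministic solver is a black box,
# evaluated only at quadrature nodes of the random input xi ~ N(0, 1).
def model(xi):
    # Hypothetical scalar output of a deterministic solver.
    return np.exp(0.3 * xi)

# 8-point Gauss-Hermite rule, rescaled to the probabilists' convention
# (nodes x = sqrt(2) * t, weights divided by sqrt(pi)).
nodes, weights = np.polynomial.hermite.hermgauss(8)
xi = np.sqrt(2.0) * nodes
w = weights / np.sqrt(np.pi)

vals = model(xi)
mean = np.sum(w * vals)               # approximates E[model(xi)] = exp(0.3**2 / 2)
var = np.sum(w * vals**2) - mean**2   # approximates the (lognormal) variance
```

Being non-intrusive, the method only needs evaluations of `model` at the quadrature nodes; the deterministic solver is never modified, which is what makes such approaches attractive for complex CFD codes.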

This part of our activities is supported by the ERC grant
`ADDECCO`, the ANR-MN project `UFO` and the associated
team `AQUARIUS`.

Many simulations which model the evolution of a given phenomenon over time (turbulence and unsteady flows, for instance) need to re-mesh some portions of the problem graph in order to capture more accurately the properties of the phenomenon in areas of interest. This re-meshing is performed according to criteria which are closely linked to the ongoing computation and can involve large mesh modifications: while elements are created in critical areas, some may be merged in areas where the phenomenon is no longer critical.

Performing such re-meshing in parallel creates additional problems. In particular, splitting an element which is located on the frontier between several processors is not an easy task, because deciding when to split some element, and defining the direction along which to split it so as to best preserve numerical stability, require shared knowledge which is not available in distributed memory architectures. Ad hoc data structures and algorithms have to be devised so as to achieve these goals without resorting to extra communication and synchronization, which would impact the running speed of the simulation.

Most of the works on parallel mesh adaptation attempt to parallelize in some way all of the mesh operations: edge swap, edge split, point insertion, etc. This implies deep modifications in the (re)mesher and often leads to poor performance in terms of CPU time. Another line of work proposes to base parallel re-meshing on an existing sequential mesher, using load balancing to be able to modify the elements located on the frontier between several processors.

In addition, the preservation of load balance in the re-meshed simulation requires dynamic redistribution of mesh data across processing elements. Several dynamic repartitioning methods have been proposed in the literature, which rely on diffusion-like algorithms and the solving of flow problems to minimize the amount of data to be exchanged between processors. However, integrating such algorithms into a global framework for handling adaptive meshes in parallel has yet to be done.

The path that we are following is based on the decomposition of the areas
to remesh into balls that can be processed concurrently, each by a
sequential remesher. This requires devising scalable algorithms for
building such balls, scheduling them on the available processors,
reassembling the remeshed mesh and redistributing its data. This
research started within the context of the PhD of Cédric Lachat,
funded by a CORDI grant of EPI `PUMAS`, and is continued thanks
to funding from the ADT grant `El Gaucho`.

Unlike their predecessors of two decades ago, today's very large parallel architectures can no longer implement a uniform memory model. They are based on a hierarchical structure, in which cores are assembled into chips, chips are assembled into boards, boards are assembled into cabinets, and cabinets are interconnected through high speed, low latency communication networks. On these systems, communication is non-uniform: communicating with cores located on the same chip is cheaper than with cores on other boards, and much cheaper than with cores located in other cabinets. The advent of these massively parallel, non-uniform machines impacts the design of the software to be executed on them, both for applications and for service tools. This is in particular the case for the software whose task is to balance workload across the cores of these architectures.

A common method for task allocation is to use graph partitioning tools. The elementary computations to perform are represented by vertices, and their dependencies by edges linking two vertices that need to share some piece of data. Finding good solutions to the workload distribution problem amounts to computing partitions with small vertex or edge cuts that evenly balance the weights of the graph parts. Yet, computing efficient partitions for non-uniform architectures requires taking into account the topology of the target architecture. When processes are assumed to coexist simultaneously for the whole duration of the program, this generalized optimization problem is called mapping. In this problem, the communication cost function to minimize incorporates architecture-dependent, locality-improving terms, such as the dilation of each edge (that is, by how much it is “stretched” across the graph representing the target architecture), which is sometimes also expressed as some “hop metric”. A mapping is called static if it is computed prior to the execution of the program and is never modified at run-time.
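As a toy illustration of such a cost function, the sketch below evaluates the dilation-weighted communication cost of a (hypothetical) mapping of a small process graph onto a 1-D line of cores, where the hop distance between cores i and j is simply |i - j|:

```python
# Hypothetical toy instance: a small process graph and a mapping of its
# vertices onto a 1-D line of 4 cores, where the hop distance between
# cores i and j is |i - j|.
edges = {("a", "b"): 3, ("b", "c"): 1, ("a", "d"): 2}  # edge -> weight
mapping = {"a": 0, "b": 0, "c": 2, "d": 3}             # vertex -> core

def mapping_cost(edges, mapping):
    # Communication cost with dilation: each edge contributes its weight
    # multiplied by how far it is "stretched" on the target topology.
    return sum(w * abs(mapping[u] - mapping[v]) for (u, v), w in edges.items())

print(mapping_cost(edges, mapping))  # prints 8: (b,c) dilated by 2, (a,d) by 3
```

A pure partitioner would only count cut edges (here, two edges are cut); the mapping cost additionally penalizes placing communicating vertices on distant cores, which is the point of topology-aware static mapping.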

The sequential `Scotch` tool developed within the `BACCHUS` team
(see Section )
has been able to perform static mapping since its first version, in 1994,
but this feature was not widely known nor used by the community. With
the increasing need to map very large problem graphs onto very large
and strongly non-uniform parallel machines, there is an increasing
demand for parallel static mapping tools. Since, in the context of
dynamic repartitioning, parallel mapping software will have to
run on their target architectures, parallel mapping and remapping
algorithms suitable for efficient execution on such heterogeneous
architectures have to be investigated. This leads to three
interwoven challenges:

scalability: such algorithms must be able to map graphs of more than a billion vertices onto target architectures comprising millions of cores;

heterogeneity: not only must these algorithms take into account the topology of the target architecture onto which they map graphs, but they must also themselves run efficiently on these very architectures;

asynchronicity: most parallel partitioning algorithms use collective communication primitives, that is, some form of heavy synchronization. With the advent of machines having several millions of cores, and in spite of the continuous improvement of communication subsystems, the demand for more asynchronicity in parallel algorithms is likely to increase.

This research takes place within the context of the PhD of Sébastien Fourestier.

We are working on problems that can be written as systems of conservation laws posed in a domain. The first-order part of the system is assumed to be hyperbolic, while the second-order subsystem is assumed to be elliptic. Last, the system is supposed to satisfy an entropy inequality. The coefficients or models that define the flux and the boundary conditions can be deterministic or random.

The schemes we are working on have a variational formulation: find a discrete solution in a finite element space such that the residual of the equations vanishes against all test functions. The variational operator is in general nonlinear.

This leads to highly nonlinear systems to solve, for which we typically use nonlinear Krylov space techniques. The cost is reduced thanks to
a parallel implementation; the domain is partitioned via `Scotch`. Mesh balancing, after mesh refinement, is handled via `PaMPA`. These schemes are implemented in `RealfluiDS` and, partially, in `Aerosol`. An example of such a simulation is given by Figure .

In the case of non-deterministic problems, we have a semi-intrusive strategy in which the randomness is expressed through the coefficients or data of the model.

Starting from a discrete approximation of the deterministic scheme, we can incorporate randomness in the scheme. An example, applied to the shallow water equations with dry shores when the amplitude of the incoming tsunami wave is not known, is given in the figure.

A classical application is the simulation of internal and external flows, with perfect or real gas equations of state, in complex geometries. This often requires the use of meshes with heterogeneous structures. We are working with unstructured meshes, either with simplicial elements or with mixtures of hexahedra, tetrahedra, pyramids and prisms. Mesh refinement can be enabled in order to better resolve either the discontinuous flow structures or the boundary layers.

Another domain of application is the simulation of multiphase flows. Here, the system needs to be supplemented by at least one PDE describing the phase volume changes, and by the equations of state of the phases. The system is in most cases written in non-conservative form, so that additional difficulties need to be handled.

Multiphase flows occur in many applications: in the petroleum industry, the nuclear industry (accident management), engines, pipes, etc.

Current concerns about greenhouse gases are leading to changes in the design of aircraft, with an increased use of composite materials. This in turn offers new possibilities for the design of ice protection systems, thus renewing interest in de-icing simulation tools. To save fuel burn, aircraft manufacturers are investigating ice protection systems such as electro-thermal or electro-mechanical de-icing systems to replace anti-icing systems. By reducing the adhesive shear strength between ice and surface, de-icing systems remove ice formed on the protected surfaces following a periodic cycle. This cycle is defined such that inter-cycle ice shapes remain acceptable from a performance point of view. One of the drawbacks of de-icing devices is the ice pieces shed into the flow. The knowledge of ice shedding trajectories would allow assessing the risk of impact on, or ingestion in, aircraft components located downstream. When the pieces leave the aircraft surface, they become projectiles that can hit and cause severe damage to the aircraft surface or other components, such as the horizontal and vertical tails or the engines. Aircraft certification authorities, such as the FAA, have specific requirements for large ice fragment ingestion during engine certification. Control surfaces or wing flaps are also sensitive to ice shedding because they can be blocked by ice fragments. Aircraft manufacturers rely mainly on flight tests to evaluate the potential negative effects of ice shedding, because of the lack of appropriate numerical tools. The random shape and size taken by shed ice particles, together with their rotation as they move, make it difficult for classical CFD tools to predict trajectories. The numerical simulation of a full unsteady viscous flow, with a set of moving bodies immersed within it, presents several difficulties for grid-based methods. The drawbacks come from the meshing procedure for complex geometries and the re-gridding procedure needed to trace the body motion.
A new approach that takes into account the effect of ice accretion on the flow field is used to solve the ice trajectory problem. The approach is based on mesh adaptation, a penalization method and level sets.
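The penalization idea can be sketched in one dimension: a mask marks the (here static) body on a fixed grid, and a stiff relaxation term drives the solution to the body value inside it, avoiding any body-fitted re-meshing. All parameters below are hypothetical:

```python
import numpy as np

# Minimal 1-D sketch of Brinkman-type penalization (hypothetical setting):
# the body is represented by a mask chi on a fixed grid, and a stiff term
# -(chi / eta) * u drives the solution to the body value (here 0) inside
# it, so no body-fitted mesh is needed.
n, L, nu, eta = 200, 1.0, 0.01, 1e-6
x = np.linspace(0.0, L, n)
h = x[1] - x[0]
chi = ((x > 0.4) & (x < 0.6)).astype(float)   # level-set-like mask of the body

u = np.sin(np.pi * x)                          # initial condition
dt = 0.2 * h**2 / nu                           # explicit diffusion stability
for _ in range(2000):
    lap = np.zeros_like(u)
    lap[1:-1] = (u[2:] - 2 * u[1:-1] + u[:-2]) / h**2
    u = u + dt * nu * lap                      # explicit diffusion step
    u = u / (1.0 + dt * chi / eta)             # implicit penalization step

# Inside the penalized region the solution is driven (almost) to zero.
```

Treating the penalization term implicitly, as above, avoids the severe time-step restriction that the stiff factor 1/eta would otherwise impose on an explicit scheme.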

Sandy sediments of tidal beaches are poor in reactive substances because they are regularly flushed by the significant flow caused by tidal forcing. This transport process may significantly affect the flux of reactive solutes to the ocean. A two-dimensional model, coupling the Richards equation describing the flow in permeable sediments with the conservation equation of silicic acid, was developed to simulate the evolution of the silicic acid concentration in a variably saturated porous medium submitted to tidal forcing.

ORCs are Rankine cycles that use properly chosen low-boiling, molecularly heavy organic compounds to drive the turbine in place of steam. This makes them suitable for the exploitation of low-grade heat sources such as biomass combustion, geothermal reservoirs and heat recovery from industrial processes. ORC turbines mainly use a single (less frequently, two) stage to expand the fluid. To date, no experimental data are available for flows of heavy fluids in the dense gas region. Experiments are difficult because of the high temperature and pressure conditions, and because of fluid decomposition or inflammability in the presence of air. This has motivated the use of numerical simulation as a preferential tool for dense gas flow analysis, but only a limited number of papers have been devoted to the computation of dense gas flows. With no experimental validation yet available for any of these configurations, care must be taken in the analysis of the computed flow fields because of their sensitivity to the thermodynamic model and to the numerical ingredients of the discretization scheme. Since no comparison with experimental data is possible, particular attention is devoted to code validation and model assessment. We created the platform ORComp for computing some global performance metrics, and we applied some UQ and numerical methods to take into account solar variability in the design of ORC cycles.

Cavitation consists in a local pressure drop below the vapor pressure at the liquid temperature, thus creating a phase change and the formation of vapor bubbles. Their collapse in high-pressure regions can dramatically lead to failure, erosion and other undesirable effects. For this reason, there is a strong effort devoted to developing predictive numerical tools for cavitating flows in industrial applications. Unfortunately, an accurate description of the interactions between the vapor and liquid phases requires accurate physical models and a way to take into account the dynamics of the interface. Moreover, multiscale effects, turbulence and thermodynamics should also be considered. Cavitation models typically depend on two types of parameters: first, on some physical parameters, such as for example the number of bubbles, which is not usually well measured; secondly, on some empirical parameters, useful for fitting and calibration procedures with respect to the experimental data. Therefore, model parameters represent an important source of uncertainty. Moreover, it is not an easy task to properly define boundary and initial conditions, because of the difficulties encountered in accurately controlling experiments in cavitating flows. As a result, the conditions imposed in the set-up of a numerical simulation are affected by significant randomness. We performed a systematic study of the probabilistic properties of the input parameters in order to capture nonlinearities in uncertainty propagation. Moreover, the DEM method has been modified to take into account cavitation phenomena with real-gas effects.

Simulation of atmospheric entries of spacecraft is a challenging problem involving many complex physical phenomena, including rarefied gas effects, aerothermochemistry, radiation, and the response of thermal protection materials to extreme conditions. The post-flight analysis of a space mission requires accurate determination of the freestream conditions along the trajectory, that is, the temperature and pressure conditions and the Mach number in front of the shock. The latter can be rebuilt from the pressure and heat flux measured on the spacecraft by means of a Flush Air Data System (FADS). This instrumentation comprises a set of sensors flush mounted in the thermal protection system to measure the static pressure (pressure taps) and heat flux (calorimeters). In this context, Computational Fluid Dynamics (CFD) supplied with UQ tools permits taking chemical effects into account and including both measurement errors and epistemic uncertainties on the chemical model parameters in the bulk and at the wall (surface catalysis). Rebuilding the freestream conditions from the FADS data therefore amounts to solving a stochastic inverse problem. In this context, we proposed a new methodology for solving the inverse problem based on a Bayesian setting, that is, probability densities of possible values of the freestream conditions are rebuilt from stagnation-point pressure and heat flux measurements. A Bayesian setting offers a rigorous foundation for inferring input parameters from noisy data and uncertain forward models, a natural mechanism for incorporating prior information, and a quantitative assessment of uncertainty on the inferred results.
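A minimal sketch of such a Bayesian rebuilding, with an invented one-parameter forward model in place of the CFD code and a random-walk Metropolis sampler, illustrates the mechanics:

```python
import numpy as np

# Hypothetical one-parameter illustration of Bayesian rebuilding: infer a
# "freestream condition" theta from a datum d of a forward model g(theta)
# with a random-walk Metropolis sampler. The forward model, prior and
# noise level are invented for this sketch.
rng = np.random.default_rng(0)

def g(theta):
    return theta**2              # stand-in for the CFD forward model

theta_true, sigma = 2.0, 0.1
d = g(theta_true)                # noiseless datum keeps the check simple

def log_post(theta):
    if not (0.0 < theta < 10.0):         # uniform prior on (0, 10)
        return -np.inf
    return -0.5 * ((d - g(theta)) / sigma) ** 2   # Gaussian likelihood

samples, theta = [], 1.0
lp = log_post(theta)
for _ in range(20000):
    prop = theta + 0.2 * rng.standard_normal()    # random-walk proposal
    lp_prop = log_post(prop)
    if np.log(rng.random()) < lp_prop - lp:       # Metropolis acceptance
        theta, lp = prop, lp_prop
    samples.append(theta)

posterior = np.array(samples[5000:])   # discard burn-in
```

The output is not a single rebuilt value but a posterior density for `theta`, concentrated around the true value with a spread set by the measurement noise and the sensitivity of the forward model, which is precisely the quantitative uncertainty assessment mentioned above.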

The prediction of shallow water equations in realistic applications depends on the level of complexity used for the physical modelling (such as, for example, the friction coefficient) and on a set of empirical coefficients that are usually chosen in order to fit the experimental data. Moreover, input environmental conditions, topography and modelling involve a certain degree of uncertainty. The capability to take these uncertainties into account in the numerical simulation is of great importance in order to correctly predict extreme flood events. Stochastic modeling of long-wave propagation demands a robust shallow-water model in order to characterize the physical processes. We coupled a residual distribution scheme for the shallow water equations with stochastic methods in order to take uncertainties into account in the numerical simulation. Preliminary results showed that the influence of uncertainties is stronger after the phase interaction, indicating the need for a stochastic simulation in order to obtain a correct prediction of the numerical solution.

The `Aerosol` software is jointly developed by teams `BACCHUS` and
`Cagire`. It is a high order finite element library written in C++. The
code was designed to perform efficient
computations with continuous and discontinuous finite element
methods on hybrid, possibly curvilinear, meshes. The distribution of
the unknowns is made with the software `PaMPA`, developed within
teams `BACCHUS` and `PUMAS`. Maxime Mogé was hired on a
young engineer position (IJD) obtained in the ADT `OuBa HOP`
to participate in the parallelization of the library, and arrived
on November 1st, 2011. In January 2012, Dragan Amenga-Mbengoué was
recruited on the ANR `Realfluids` grant.

At the end of 2011, `Aerosol` had the following features:

Development environment: use of `CMake` for compilation,
`CTest` for automatic testing and memory checking, and
`lcov` and `gcov` for code coverage reports.

In/Out:
link with the XML library for handling parameter files. Reader for
`GMSH`, and writer to the VTK-ASCII legacy format.

Quadrature formulas: up to 11th order for Lines, Quadrangles, Hexahedra, Pyramids, Prisms, up to 14th order for Tetrahedra, up to 21st order for Triangles.

Finite elements: up to fourth degree for Lagrange finite elements on lines, triangles and quadrangles.

Geometry: elementary geometrical functions for first order lines, triangles, quadrangles.

Time iteration: explicit Runge-Kutta up to fourth order, explicit Strong Stability Preserving schemes up to third order.

Linear Solvers: link with the external linear solver UMFPack.

Memory handling: discontinuous and continuous
discretizations based on `PaMPA` for triangular and quadrangular
meshes.

Numerical schemes: continuous Galerkin method for the Laplace problem (up to fifth order) with non consistent time iteration or with direct matrix inversion. Scalar stabilized residual distribution schemes with explicit Euler time iteration have been implemented for steady problems.

This year, the following features were added:

Development environment: development of a `CDash`
server for collecting the unit tests and memory checking. Beginning
of the development of an interface for functional tests.

General structure: Parts of the code were abstracted in order to allow for parallel development: Linear solvers (template type abstraction for generic linear solver external library), Generic integrator classes (integrating on elements, on faces with handling neighbor elements, or for working on Lagrange points of a given element), models (template abstraction for generic hyperbolic systems), equations of state (template-based abstraction for a generic equation of state).

In/Out:
parallel `GMSH` reader; cell- and point-centered visualization
based on VTK legacy formats; XML ParaView files on unstructured
meshes (vtu), and parallel XML-based files (pvtu).

Quadrature formula: Gauss-Lobatto type quadrature formula.
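For illustration, the lowest non-trivial Gauss-Lobatto rule can be written as follows; this is a generic sketch, not the Aerosol quadrature interface:

```cpp
#include <array>
#include <functional>

// Three-point Gauss-Lobatto quadrature on [-1, 1]: the nodes include both
// endpoints, and the rule is exact for polynomials up to degree 2n-3 = 3.
double gauss_lobatto_3(const std::function<double(double)>& f) {
    const std::array<double, 3> x = {-1.0, 0.0, 1.0};              // nodes
    const std::array<double, 3> w = {1.0/3.0, 4.0/3.0, 1.0/3.0};   // weights
    double sum = 0.0;
    for (int i = 0; i < 3; ++i) sum += w[i] * f(x[i]);
    return sum;
}
```

Including the endpoints is what makes Gauss-Lobatto rules attractive for element-boundary terms, at the price of a slightly lower degree of exactness than Gauss-Legendre rules with the same number of points.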

Finite elements: hierarchical orthogonal finite element bases on lines and triangles (with the Dubiner transform). Finite element bases that are interpolation bases on Gauss-Legendre points for lines, quadrangles, and hexahedra. Lagrange and hierarchical orthogonal finite element bases for hexahedra, prisms and tetrahedra.

Geometry: elementary geometrical functions for first-order three-dimensional shapes: hexahedra, prisms, and tetrahedra.

Time iteration: CFL time stepping; optimized CFL time schemes: SSP(2,3) and SSP(3,4).

Linear Solvers: Internal solver for diagonal matrices. Link with the external solvers PETSc and MUMPS.

Memory handling: parallel degrees-of-freedom handling for continuous and discontinuous approximations.

Numerical schemes: Discontinuous Galerkin methods for hyperbolic systems. SUPG and Residual Distribution schemes.

Models: perfect gas Euler system, real gas Euler system, scalar advection, wave equation in first-order formulation, and a generic interface for defining space-time models from space models.

Numerical fluxes: centered fluxes, exact Godunov flux for linear hyperbolic systems, and Lax-Friedrichs flux.
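As an illustration of the last of these, a minimal local Lax-Friedrichs (Rusanov) flux for a scalar conservation law can be sketched as follows; the signature is hypothetical, and the actual Aerosol flux classes are more general:

```cpp
#include <algorithm>
#include <cmath>

// Local Lax-Friedrichs flux for u_t + f(u)_x = 0 at an interface with left
// and right states uL, uR: the average of the physical fluxes plus a
// dissipation term scaled by the largest local wave speed |f'(u)|.
double lax_friedrichs(double uL, double uR,
                      double (*f)(double), double (*df)(double)) {
    double alpha = std::max(std::abs(df(uL)), std::abs(df(uR)));
    return 0.5 * (f(uL) + f(uR)) - 0.5 * alpha * (uR - uL);
}
```

For linear advection f(u) = u, the dissipation term makes the flux reduce to pure upwinding, i.e. the flux equals f(uL) when the wave speed is positive.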

Parallel computing: mesh redistribution, computation of
overlaps with `PaMPA`, and collective asynchronous communications
(`PaMPA`-based). Tests on the Avakas cluster from MCIA and at the
Mésocentre de Marseille. The library was also compiled on PlaFRIM.

C++/Fortran interface: tests for binding Fortran with C++.

`COCA` (CodeOxydationCompositesAutocicatrisants) is a
`Fortran 90` code for the simulation of the oxidation process
in self-healing composite materials, developed in collaboration with
the Laboratoire des Composites ThermoStructuraux in Bordeaux (UMR-5801
LCTS). This process involves the chemical oxidation of some of the
matrix components of the composite, and the production of a liquid
oxide that flows and fills material cracks, acting as a diffusion
barrier against oxygen and thus protecting the ceramic fibers of the
material. `COCA` simulates this process using a finite element
discretization of the model equations. In its current version only
transverse cracks are available. `COCA` makes use of `PaStiX` to solve
the algebraic systems arising from the discretization.

`RealfluiDS` is a software dedicated to the simulation of inert or
reactive flows. It is also able to simulate multiphase, multimaterial,
MHD and turbulent flows (using the Spalart-Allmaras model). There exist
2D and 3D versions. The 2D version is used to test new ideas that
are later implemented in the 3D one. This software implements the most
recent residual distribution schemes. The code has been parallelized
with and without overlap of the domains. An uncertainty quantification
library has been added to the software. A partitioning tool, which
uses `Scotch`, exists in the package. In the years to come, all the know-how
of `RealfluiDS` will be transferred to `Aerosol`.

`MMG3D` is a fully automatic tetrahedral remesher. Starting from a
tetrahedral mesh, it produces quasi-uniform meshes with respect to a
metric tensor field. This tensor prescribes a length and a direction
for the edges, so that the resulting meshes are anisotropic.
The software is based on local mesh modifications, and an anisotropic
version of the Delaunay kernel is implemented to insert vertices in the
mesh. Moreover, `MMG3D` allows one to deal with rigid body motion and
moving meshes. When a displacement is prescribed on a part of the
boundary, a final mesh is generated such that the surface points are
moved according to this displacement. `MMG3D` is used in particular by the
GAMMA team for their mesh adaptation developments, but also at EPFL
(mathematics department), Dassault Aviation, Lemma (a French SME), etc.
`MMG3D` can be used in `FreeFem++` (http://

A new version of `MMG3D` is under development. The main novelty of this
version is the modification of the surface triangulation. A. Froehly,
an engineer in the FUI Rodin project, is working on this new version.

The `ORComp` platform is a simulation tool for designing an
ORC cycle. Starting from the solar radiation, this platform computes
the cycle providing the best performance with optimal choices of the
fluid and the operating conditions. It includes `RobUQ`, a
simulation block for ORC cycles; the `RealfluiDS` code for
the simulation of the turbine and of the heat exchanger; and the software
`FluidProp` (developed at Delft University of Technology) for
computing the fluid thermodynamic properties.

`PaMPA` (“Parallel Mesh Partitioning and Adaptation”) is a
middleware library dedicated to the management of distributed
meshes. Its purpose is to relieve solver writers from the tedious and
error-prone task of repeatedly writing service routines for mesh
handling, data communication and exchange, remeshing, and data
redistribution. It is based on a distributed data structure that
represents meshes as a set of *entities* (elements, faces,
edges, nodes, etc.), linked by *relations* (that is,
computation dependencies).
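The entity/relation data model can be illustrated by a minimal sketch. This is NOT the real `PaMPA` API, only a schematic view of the underlying structure, with hypothetical names:

```cpp
#include <cstdint>
#include <vector>

// A relation between two entity kinds (e.g. element -> node) stored as a
// CSR-like compressed adjacency: the neighbors of entity i of the source
// kind are idx[ptr[i]] .. idx[ptr[i+1]-1], numbered in the target kind.
struct Relation {
    std::vector<std::int64_t> ptr;
    std::vector<std::int64_t> idx;
};

// A mesh is a set of entity kinds (elements, faces, edges, nodes, ...)
// linked by such relations.
struct Mesh {
    std::vector<std::size_t> entityCount;  // one count per entity kind
    std::vector<Relation>    relations;    // e.g. element->face, face->node
};

// Iterate over the neighbors of entity i through a relation.
std::vector<std::int64_t> neighbors(const Relation& r, std::int64_t i) {
    return {r.idx.begin() + r.ptr[i], r.idx.begin() + r.ptr[i + 1]};
}
```

Storing relations in compressed form keeps the memory footprint proportional to the number of dependencies, which is what allows the same structure to describe element-to-face, face-to-node, or any other computation dependency uniformly.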

`PaMPA` interfaces with `Scotch` for mesh redistribution, and with
`MMG3D` for parallel remeshing of tetrahedral elements. Other sequential
remeshers can be plugged in order to handle other types of elements.

Version `0.2` allows users to declare a distributed mesh, to
declare values attached to the entities of the mesh
(e.g. temperatures attached to elements, pressures to faces, etc.),
to exchange values between overlapping entities located at the
boundaries of subdomains assigned to different processors, to iterate
over the relations of entities (e.g. iterate over the faces of
elements), to remesh the pieces of the mesh that need it, and to
redistribute the remeshed mesh evenly across the processors of the
parallel architecture.

`PaMPA` is already used as the data structure manager for two solvers
being developed at Inria: `Plato` and `Aerosol`.

`Plato` (“*A platform for Tokamak simulation*”) (http:// is a suite of
data and software dedicated to the geometry and physics of Tokamaks,
and its main objective is to provide the teams of the Inria large-scale
initiative `FUSION` working with plasma fluid models with a common
development tool. The construction of this platform will integrate the
following developments.

A (small) database of axisymmetric solutions of the plasma
equilibrium equations for realistic geometrical and magnetic
configurations (ToreSupra, JET and ITER). The construction of meshes
is always an important and time-consuming task; `Plato` will provide
meshes and solutions corresponding to equilibrium solutions, to be used
as initial data for more complex computations.

A set of tools for the handling, manipulation and transformation of meshes and solutions using different discretisations (P1, Q1, P3, etc.).

Numerical templates allowing the use of 3D discretization schemes using finite element schemes in the poloidal plane and spectral Fourier or structured finite volume representations in the toroidal one.

Several applications (Ideal MHD and drift approximation) used in the
framework of the Inria large scale initiative `FUSION`.

The `RobUQ` platform has been conceived to solve problems in
uncertainty quantification and robust design. It includes the
optimization code `ALGEN` and the uncertainty quantification
code `NISP`. It also includes some methods for the computation
of high-order statistics, efficient strategies for robust
optimization, and the Simplex2 method. Some methods are developed in
partnership with Stanford University (in the framework of the
associated team AQUARIUS). Other methods are developed in the
context of the ANR project `UFO`.

Parallel graph partitioning, parallel static mapping, parallel sparse matrix block ordering, graph repartitioning, mesh partitioning.

`Scotch` (http://

The initial purpose of `Scotch` was to compute high-quality static
mappings of valuated graphs representing parallel computations onto
target architectures of arbitrary topologies. This allows the mapper
to take into account the topology and heterogeneity of the target
architecture in terms of processor speed and link bandwidth. This
feature, which was meant for the NUMA machines of the 1980s, was not
widely used in the past because machines in the 1990s became UMA
again thanks to hardware advances. Now that architectures are becoming
NUMA again, these features are regaining popularity.

Version `5.0` of `Scotch`, released in August 2007,
was the first version to comprise parallel routines. This extension,
called `PT-Scotch` (for “*Parallel Threaded* `Scotch`”), is based
on a distributed memory model and makes use of the MPI and,
optionally, POSIX thread APIs. Version `5.1`, released in
September 2008, extended the parallel features of `PT-Scotch`, which can
now compute graph partitions in parallel by means of a parallel
recursive bipartitioning framework. Release `5.1.10` made
`Scotch` the first full 64-bit implementation of a general-purpose
graph partitioner.

Version `6.0`, released in December 2012 for the 20th anniversary
of `Scotch`, offers many new features: static
mapping with fixed vertices, static remapping, and static remapping
with fixed vertices. Several critical algorithms of the formerly
sequential `Scotch` library can now run in a multi-threaded way. All
of these features will be available in the parallel
`PT-Scotch` library in the upcoming release `6.1`.

`Scotch` has been integrated into numerous third-party software
packages, which indirectly contribute to its diffusion, e.g. OpenFOAM
(a fluid mechanics solver, OpenCFD Ltd.), the Code_Aster Libre
solver (thermal and mechanical analysis software developed by the French
state-owned electricity producer EDF), the Zoltan module of
the Trilinos software (Sandia Labs), and the parallel linear
system solvers `MUMPS` (ENSEEIHT/IRIT, LIP and LaBRI),
`SuperLUDist` (U.C. Berkeley), `PaStiX` (LaBRI) and `HIPS` (LaBRI).
`Scotch` is natively available in several Linux and Unix
distributions, as well as on some vendor platforms (SGI, etc.).

`SLOWS` (“*Shallow-water fLOWS*”) is a `C` platform
for the simulation of free-surface shallow-water flows with
friction. Arbitrary bathymetries are allowed, defined either by some
complex piecewise analytical expression, or by

We have understood how to approximate the advection-diffusion problem in the context of residual distribution schemes. A third-order version for scalar problems has been written. It is uniformly accurate, from purely viscous to purely convective problems. This scheme has been generalised to the laminar Navier-Stokes equations. An extension to the turbulent case (with the Spalart-Allmaras model) has also been written and tested. We have studied the (iterative) convergence issues using Jacobian-free techniques or the LU-SGS algorithm. Tests in two and three dimensions have been carried out. This work has been submitted in and has been the topic of .

A. Froehly has submitted her PhD thesis on the extension of residual distribution schemes using isogeometric analysis. In particular, we have focused on mesh adaptation, including at the boundary. A paper is being written to summarize this work.

One of the main open problems for high-order schemes is the design of meshes that fit the boundary of the computational domain with enough accuracy. If this curve/surface is not locally straight/planar, the elements must be curved near the boundary, and their curvature needs to be propagated to the interior of the domain to obtain valid elements. When the mesh is very stretched this can be quite challenging since, in addition, we want the mesh to keep a structure, in particular for boundary layers. Using tools explored in isogeometric analysis, we have been able to construct a prototype computing curved meshes (in 2D and 3D) while keeping the structure of the mesh.

In collaboration with CEA (P.-H. Maire), we have developed and tested a new finite-volume-like algorithm able to simulate hypoelastic-plastic problems on unstructured meshes. This has been published in .

In Computational Fluid Dynamics, interest in embedded boundary methods for the Navier-Stokes equations is increasing because they simplify the meshing issue, the simulation of multi-physics flows, and the coupling of fluid-solid interactions in situations of large motions or deformations. Nevertheless, an accurate treatment of wall boundary conditions remains an issue for these methods. In this work we develop an immersed boundary method for unstructured meshes based on a penalization technique, and we use mesh adaptation to improve the accuracy of the method close to the boundary. The idea is to combine the strength of mesh adaptation, namely an accurate flow description especially when dealing with wall boundary conditions, with the simplicity of embedded grid techniques, which ease the meshing issue and the wall boundary treatment when combined with a penalization term enforcing the boundary conditions. The bodies are described using a level-set method and are embedded in an unstructured grid. Once a first numerical solution is computed, mesh adaptation based on two criteria, the level set and the quality of the solution, is performed.

Using a reinterpretation of the explicit RD scheme we designed two years ago, we have been able to construct a third-order accurate RD scheme in one space dimension. The extension to multidimensional problems is pending.

We have studied the extension of the second-order unsteady RD scheme to the ALE formulation. New versions of the explicit unsteady RD schemes have been studied.

F. Vilar has completed his thesis on the approximation of the Euler equations written in pure Lagrangian coordinates. He has focused on third-order accuracy in time and space, using a Discontinuous Galerkin formulation. The solution is approximated locally by quadratic polynomials, and the boundaries of the elements are approximated by Bézier curves. He has managed to achieve an approximation consistent with the Geometric Conservation Law. Many test cases have been computed, showing both a dramatic improvement in accuracy and the robustness of the method with respect to its second-order counterpart.

Arnaud Krust has finished his PhD thesis on boundary layer enrichment. We developed a numerical framework well suited for advection-diffusion problems when the advection part is dominant. In that case, given Dirichlet-type boundary conditions, it is well known that a boundary layer develops. In order to resolve this layer correctly, standard methods consist in increasing the mesh resolution and possibly the formal accuracy of the numerical method. In this work, we follow another path: we do not seek to increase the formal accuracy of the scheme but, by a careful choice of finite element, to lower the required mesh resolution in the layer. Indeed, the finite element representation we choose is locally the sum of a standard one plus an enrichment. This work proposes such a method and, with several numerical examples, we show the potential of this approach. In particular, we show that the method is not very sensitive to the choice of the enrichment functions. The best choices of enrichment are shown to be obtained by an iterative mechanism which bears some common features with mesh refinement.

We developed two research lines: the first focused on the computation of high-order statistics; the second is related to the formulation of a global framework in the coupled physical/stochastic space. First, we proposed a formulation for computing the decomposition of high-order statistics. The idea is to compute the most influential parameters at high orders, making it possible to improve the sensitivity analysis. A second objective is to illustrate the correlation between the high-order functional decomposition and PC-based techniques, thus displaying how to compute each term from a numerical point of view. Second, building on the Harten multiresolution framework in the stochastic space, we proposed a method allowing adaptive refinement/derefinement in both the physical and the stochastic space for time-dependent problems. As a consequence, a higher accuracy is obtained at a lower computational cost with respect to classical non-intrusive approaches, where adaptivity is performed in the stochastic space only. The performance of this algorithm is tested on the scalar Burgers equation and the Euler system of equations, comparing with classical Monte Carlo and Polynomial Chaos techniques.

Applications of some of these techniques to tsunami simulations have been conducted.

The Simplex-Simplex approach, proposed in 2011, has been further developed. In particular, the algorithm has been improved, yielding an evolved version of the Simplex2 approach, and the formulation has been extended to treat mixed aleatory/epistemic uncertainty. The resulting SSC/NM (Simplex Stochastic Collocation/Nelder-Mead) method, called Simplex2, is based on (i) a coupled stopping criterion and (ii) the use of a high-degree polynomial interpolation of the optimization space. Numerical results show that this method is very efficient for mono-objective optimization and minimizes the global number of deterministic evaluations required to determine a robust design. This method is applied to some analytical test cases and to a realistic problem of robust optimization of a multi-component airfoil. In this work, we present an extension of this method for treating epistemic uncertainty in the context of an interval analysis approach. It consists in a multi-scale strategy based on a simplex space representation, in order to minimize the global cost of mixed epistemic-aleatory uncertainty quantification. This reduction is obtained (i) by a coupled stopping criterion, (ii) by an adaptive polynomial interpolation that can be used as a response surface to accelerate optimization convergence, and (iii) by a simultaneous min/max optimization sharing the same interpolating polynomials at each iteration [.....].

We developed a numerical solver based on a DEM formulation modified to include viscous effects and a more complex equation of state for the vapor region. The method used is the DEM for the resolution of a reduced five-equation model under the hypothesis of pressure and velocity equilibrium, without mass and heat transfer. This method results in a well-posed hyperbolic system, allowing an explicit treatment of non-conservative terms without conservation error. The DEM method directly obtains a well-posed discrete equation system from the single-phase conservation laws, producing a numerical scheme which accurately computes fluxes for an arbitrary number of phases. We considered two thermodynamic models, i.e. the SG EOS and the Peng-Robinson (PR) EOS. While SG allows preserving the hyperbolicity of the system also in the spinodal zone, real-gas effects are taken into account by using the more complex PR equation. The higher robustness of the PR equation when coupled with CFD solvers, with respect to more complex and potentially more accurate multi-parameter equations of state, has been discussed recently. In this work, no mass transfer effect is taken into account, so the PR equation can only be used to describe the vapor behavior, while the SG model alone is used to describe the liquid.

Another topic covered by Bacchus is the numerical approximation of non-conservative systems. A very interesting example is the Kapila model, for which shock relations can be derived from physical principles. Most, if not all, known discretisations are at best stable, but do not converge under mesh refinement. We have proposed a way to achieve convergence by using some modifications of a Roe-like solver.

Our studies regarding parallel remeshing use a dedicated software
framework called `PaMPA` (for “*Parallel Mesh
Partitioning and Adaptation*”; see
Section for more details). This software, whose development started
three years ago, allows one to describe distributed meshes in an
abstract way.

The work carried out this year concerns the definition of suitable
algorithms for performing remeshing in parallel, using a sequential
remesher. To do so, areas suitable for remeshing (that is, cells for
which a quality measurement routine indicates that remeshing is
necessary) are grouped into boules of a size small enough to be
handled by a sequential remesher, and big enough so that this remesher
can do useful work on each of them. The core of the work is
therefore to identify and build relevant boules, to send them to as many
processors as possible, to remesh them sequentially, and to merge the
remeshed boules into what remains of the original mesh. Then, areas
that have not already been processed (e.g. areas at the interface of
two or more boules) can be considered in turn, until all relevant
cells have been considered. The structure and operations of
`PaMPA` have been presented in .
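The grouping step can be illustrated schematically as follows. The algorithm actually retained by `PaMPA` relies on graph partitioning rather than this naive breadth-first growth, and all names here are hypothetical:

```cpp
#include <cstddef>
#include <queue>
#include <vector>

// Group cells flagged for remeshing into connected "boules" of bounded
// size by breadth-first traversal of the element graph.  Returns, for
// each cell, the index of its boule, or -1 if the cell is not flagged.
std::vector<int> build_boules(const std::vector<std::vector<int>>& adj,
                              const std::vector<bool>& flagged,
                              std::size_t maxSize) {
    std::vector<int> boule(adj.size(), -1);
    int nboules = 0;
    for (std::size_t s = 0; s < adj.size(); ++s) {
        if (!flagged[s] || boule[s] != -1) continue;
        std::queue<int> q;                 // BFS from an unassigned flagged cell
        q.push(static_cast<int>(s));
        boule[s] = nboules;
        std::size_t size = 1;
        while (!q.empty() && size < maxSize) {
            int c = q.front(); q.pop();
            for (int n : adj[c])
                if (flagged[n] && boule[n] == -1 && size < maxSize) {
                    boule[n] = nboules;
                    q.push(n);
                    ++size;
                }
        }
        ++nboules;
    }
    return boule;
}
```

Cells left at the interface between two boules are exactly the "not already processed" areas mentioned above, to be handled in a subsequent pass.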

Several algorithms have been experimented with in order to build the
boules. The one which proved the most efficient is based on a
partitioning of an induced subgraph of the element graph, using
the `PT-Scotch` tool, which is already used for mesh redistribution.
`PaMPA` has been interfaced with `MMG3D` in order to create a
demonstrator for remeshing tetrahedral meshes in parallel. A set of
tetrahedral cube-shaped test meshes has been created, with a metric
that forces remeshing in the interior of the cubes. `PaMPA` was able
to remesh a 12-million-tetrahedron mesh into 18 million tetrahedra
on 80 processors, yielding a quality equivalent to that of the
sequential remesher used alone. Scalability experiments on much larger
test cases are in progress; their quality, however, will no longer be
comparable to a sequential test case. This version of `PaMPA` will
soon be released and made available to the community.

Last year, a set of new algorithms for sequential remapping and
mapping with fixed vertices was devised. These algorithms had
been integrated into the `Charm++` parallel environment, in the
context of a collaboration with the Joint Laboratory for Petascale
Computing (JLPC) between Inria and UIUC.

These algorithms have been integrated in version `6.0` of
`Scotch`, which was released at the beginning of December. This
release also comprises new threaded formulations of the critical and
most time-consuming algorithms used in graph partitioning, namely
graph coarsening and our diffusion-based method.

All the remapping algorithms that were designed last year were
meant to be easily parallelizable. The work of this year has been to
derive and implement their parallel formulations. This is now done,
which completes this five-year-long effort. These algorithms, which
offer a quality similar to that of the sequential algorithms, will
be released in version `6.1` of `Scotch`.

In the context of the ANR project `PETALh`, our task is to find ways of
reordering sparse matrices so as to improve the robustness of incomplete
LU factorization techniques. The path we are following is to favor the
diagonal dominance of the matrices corresponding to the subdomains of
the Schur complement. Our studies aim at injecting information
regarding off-diagonal numerical values into nested-dissection-like
reordering methods, so as to favor the preservation of large
off-diagonal values in either the subdomains or the separators of
Schur complement techniques.

This year, we have set up a software testbed for experimenting with such
methods. It comprises a modified version of the `Scotch` sparse matrix
ordering library for computing orderings, and of the `HIPS` iterative
sparse linear system solver for evaluating them. The test cases used
are provided by the industrial partners of the `PETALh` project.

Our first experiments show that injecting information regarding
off-diagonal terms can indeed improve convergence. However, many
parameters have to be evaluated in a thorough experimentation plan.
Since `Scotch` uses integer terms only, some scaling has to be
performed, which requires determining how to scale the coefficients
(type of scaling and range), whether to filter out small values, etc.
This work is in progress.

This work aims at finding subdomain decompositions that balance the sizes of off-diagonal contribution blocks.

In terms of graph partitioning, we have expressed this problem as a multi-constraint partitioning problem. In addition to bearing a weight that expresses the workload associated with its degrees of freedom, every graph vertex bears a second weight that holds the number of unknowns to which it is linked outside of its subdomain. Hence, in the nested dissection process, every time a separator is computed, this second weight is updated for each frontier vertex of the separated parts, before they are also recursively separated.
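The second vertex weight can be illustrated by a small sketch computing, for a given partition, the number of neighbors each vertex has outside its own part. In `Scotch` this quantity is maintained incrementally as separators are computed during nested dissection; the names here are hypothetical:

```cpp
#include <cstddef>
#include <vector>

// For each vertex, count its neighbors assigned to a different part:
// this is the "number of unknowns to which it is linked outside of its
// subdomain" used as the second weight in the multi-constraint setting.
std::vector<int> external_degree(const std::vector<std::vector<int>>& adj,
                                 const std::vector<int>& part) {
    std::vector<int> w(adj.size(), 0);
    for (std::size_t v = 0; v < adj.size(); ++v)
        for (int n : adj[v])
            if (part[n] != part[v]) ++w[v];  // neighbor lies in another part
    return w;
}
```

Balancing this weight alongside the workload weight is what makes the off-diagonal contribution blocks of the subdomains comparable in size.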

This year, we have set up a software testbed for experimenting with this
approach. The `Scotch` sparse matrix ordering library has been
modified so that graph vertices can bear multiple vertex weights. This
required a slight change in the interfaces, but also modifications of
the internal handling of graphs in many modules (nested dissection,
graph coarsening, etc.).

The simulation code `CORBIS` (rarefied gases in two space dimensions on
structured meshes) has been entirely modified: modular form, use of
the `git` version control system,
modification to use unstructured meshes, and hybrid MPI/OpenMP
parallelization. Very good performance in terms of scalability and
efficiency has been obtained, on up to 700 cores.

In collaboration with CEA-CESTA, we have worked on the following subjects.

A new method to generate locally refined velocity grids has been proposed. Very significant performance improvements have been obtained (CPU time accelerated by a factor of about 30 for 3D computations). This work has been published in the proceedings of the 28th Symposium on Rarefied Gas Dynamics, and is the subject of a paper submitted for publication.

The second-order Discontinuous Galerkin method has been studied for a one-dimensional problem of rarefied gases: we have shown that this method is clearly more accurate and faster than our finite volume method (which was used up to fourth order). This study will be continued in 2013 (numerical analysis and application to 2D problems).

We have presented one of the first numerical simulations of the Crookes radiometer. This phenomenon, due to the thermal creep flow, has been simulated with a Cartesian grid approach, with a cut-cell technique that allows for an accurate treatment of solid boundaries. This work has been published in the proceedings of the 28th Symposium on Rarefied Gas Dynamics.

We have proposed a new method to discretize kinetic equations. It is based on a discretization of the velocity variable which is local in time and space. This induces an important gain in terms of memory storage and CPU time, at least for 1D problems (this work has been presented in a paper submitted for publication). Two-dimensional extensions are under development.

We have shown that the recent “Unified Gas Kinetic Scheme” method, proposed by K. Xu to simulate multiscale rarefied gas flows, can be extended to other fields, such as radiative transfer. This approach, based on a simple finite volume technique, is very general and can easily be applied to complex geometries with unstructured meshes. This work has been presented in a paper submitted for publication.

The research department of Volkswagen AG uses the `OpenFOAM`
fluid dynamics code, among other software. The parallel version of
this code relies on `Scotch` to distribute mesh data across
processors. When running their simulations, the VW engineers
have noticed load imbalance among the processors, and would like
to have this problem solved in order to achieve better machine
utilization.

The purpose of this contract is to investigate the potential causes of
the observed imbalance, and to find remedies for it. The proposed
solutions should be integrated into the trunks of `Scotch` and/or
`OpenFOAM`. This contract started in April and ended in
December.

Title: PETALH: Preconditioning scientific applications on pETascALe Heterogeneous machines

Type: ANR

Grant: Cosinus 2010

Duration: September 2011 - May 2013

Coordinator: GRIGORI Laura (Inria Saclay-Île de France)

Other partners: Inria Saclay-Île de France (leader of the project), Paris 6, IFP (Rueil-Malmaison), CEA Saclay.

See also: http://

Abstract: In this collaborative effort, we propose to develop parallel preconditioning techniques for the emerging hierarchical models of clusters of multi-core processors, as used for example in future petascale machines. The preconditioning techniques are based on recent progress in combining the well-known incomplete LU (ILU) factorization with tangential filtering.

The track we are following in order to contribute to this goal is to investigate improved graph ordering techniques that privilege the diagonal dominance of the matrices corresponding to the subdomains of the Schur complement. This amounts to integrating numerical values into the adjacency graph of the matrices, so that the importance of off-diagonal terms is taken into account when computing graph separators. The core of this work is planned to take place at the beginning of next year.

This project is a continuation of the `PETAL` project, which was
funded by the ANR Cosinus 2008 call.

Title: Robust structural Optimization for Design in Industry (Rodin)

Type: FUI

Duration: July 2012 - July 2015

Coordinator: ALBERTELLI Marc (Renault)

Abstract: From the research point of view, the RODIN project will focus on: (1) extending level set methods to nonlinear mechanical or multiphysics models and to complex geometrical constraints, (2) developing algorithms for moving meshes with a possible change of topology, (3) adapting in a level-set framework second-order optimization algorithms having the ability of handling a large number of design variables and constraints.

The project will last three years and will be supported by a consortium of 7 partners: (1) two significant end-users, Renault and EADS, who will provide use cases reflecting industrial complexity; (2) three academic partners, CMAP, the J.-L. Lions laboratory and Inria Bordeaux, who will bring expertise in applied mathematics, structural optimization and mesh deformation; (3) a software editor, ESI Group, who will provide a mechanical software package and pave the way for industrialization; (4) an SME, Eurodecision, specialized in large-scale optimization.

Jointly with the team Bacchus and with ONERA, we participated in
the *Colargol* project, which aimed at comparing implementations and
performance of the high-order finite element methods implemented in our
library `Aerosol` and in the high-order discontinuous Galerkin library
Aghora developed at ONERA. To make fair comparisons
with this library, we had to extend our library to three dimensions and
to finish the first parallel version of the code. Our first conclusion
is the necessity of storing all geometrical terms of the finite element
methods (Jacobians, Jacobian matrices, etc.) in order to obtain good
performance. We are still running the comparison tests on the
*Mésocentre de Calcul Intensif Aquitain*.

Title: Industrialisation of High-Order Methods

Type: COOPERATION (TRANSPORTS)

Instrument: Specific Targeted Research Project (STREP)

Duration: October 2010 - September 2013

Coordinator: Deutsches Zentrum fur Luft und Raumfahrt (Germany)

Other partners: DLR (Germany), Dassault Aviation (France), EADS-Cassidian (Germany), Cenaero (Belgium), Numeca (Belgium), ARA (UK), FOI (Sweden), Inria (France), NLR (the Netherlands), ONERA (France), TsAGI (Russia), ENSAM (France), Imperial College (UK), the Universities of Bergamo (Italy), Warsaw (Poland), Poznań (Poland) and Linköping (Sweden), Université Catholique de Louvain (Belgium).

See also: http://

Abstract: The proposed IDIHOM project is motivated by the increasing demand of the European aerospace industries to advance their CFD-aided design procedure and analysis by using accurate and fast numerical methods, so-called high-order methods. They will be assessed and improved in a top-down approach by utilising industrially relevant complex test cases, so-called application challenges, in the general area of turbulent steady and unsteady aerodynamic flows, covering external and internal aerodynamics as well as aeroelastic and aeroacoustic applications. Thus, the major aim is to support the European aeronautics industry with proven-track methods delivering an increased predictive accuracy for complex flows and (at the same accuracy) an alleviation of computational costs, which will secure their global leadership. An enhancement of the complete "high-order methods suite" is envisaged, including the most relevant methods, Discontinuous Galerkin and Continuous Residual-Based methods, in combination with underlying technologies such as high-order grid generation and adaptation, visualisation, and parallelisation. The IDIHOM project is a key enabler for meeting the ACARE goals, as higher-order methods offer the potential of more accurate predictions and, at the same time, faster simulations. Inria is involved in the design of Continuous Residual-Based methods for the simulation of steady turbulent flows.

Title: ADaptive schemes for DEterministic and stoChastiC Flow PrOblems (ADDECCO)

Type: IDEAS (AdG # 226316)

Instrument: ERC Advanced Grant

Duration: December 2008 - November 2013

Coordinator: Inria (France)

Other partners: none

See also: http://

Abstract: The numerical simulation of complex compressible flow problems is still a challenge nowadays, even for the simplest physical models such as the Euler and Navier-Stokes equations for perfect gases. Researchers in scientific computing need to understand how to obtain efficient, stable, very accurate schemes on complex 3D geometries that are easy to code and to maintain, with good scalability on massively parallel machines. Many people work on these topics, but our opinion is that new challenges have to be tackled in order to combine the outcomes of several branches of scientific computing and obtain simpler algorithms of better quality without sacrificing their efficiency properties. In this proposal, we tackle several hard points that must be overcome for the success of this program. We first consider the problem of how to design methods that can easily handle mesh refinement, in particular near the boundary, the location where the most interesting engineering quantities have to be evaluated. CAD tools are used to describe the geometry; a mesh is then generated, which is itself used by a numerical scheme. Hence, any mesh refinement process is not directly connected to the CAD. This situation prevents the spread of mesh adaptation techniques in industry, and we propose a method to overcome it, even for steep problems. Second, we consider the problem of handling the extremely complex patterns that occur in a flow because of boundary layers: it is not always sufficient to only increase the number of degrees of freedom or the formal accuracy of the scheme. We propose to overcome this with a class of very high-order numerical schemes that can use solution-dependent basis functions. Our third item is about handling unsteady uncertainties in the model, for example in the geometry or the boundary conditions. This needs to be done efficiently, since the amount of computation increases a priori linearly with the number of uncertain parameters.
We propose a non-intrusive method that is able to deal with general probability density functions (pdfs), and also able to handle pdfs that may evolve during the simulation, for example via a stochastic optimisation algorithm. This will be combined with the first two items of this proposal. Since many random variables may be needed, the curse of dimensionality will be dealt with thanks to multiresolution methods combined with sparse grid methods. The aim of this proposal is to design, develop and evaluate solutions to each of these challenges. Currently, and to our knowledge, none of these problems has been dealt with for compressible flows with steep patterns, as in many modern industrial aerodynamics problems. We propose a work program that will lead to significant breakthroughs for flow simulations, with a clear impact on numerical schemes and industrial applications. Our solutions, though developed and evaluated on flow problems, have a wider potential and could be considered for any physical problem that is essentially hyperbolic.

The AQUARIUS associated team is a research project dealing with uncertainty quantification and the numerical simulation of high Reynolds number flows, a challenging study demanding accurate and efficient numerical methods. It involves the Inria team BACCHUS and the groups of Prof. Charbel Farhat from the Department of Aeronautics and Astronautics and Prof. G. Iaccarino from the Department of Mechanical Engineering at Stanford University. The first topic concerns the simulation of flows when only partial information about the physics or the simulation conditions (initial conditions, boundary conditions) is available. In particular, we are interested in developing methods to be used in complex flows where the uncertainties, represented as random variables, can have arbitrary probability density functions. The second topic focuses on the accurate and efficient simulation of high Reynolds number flows. Two different approaches are developed (one relying on the XFEM technology, and one on the Discontinuous Enrichment Method (DEM), with the coupling based on Lagrange multipliers). The purpose of the proposed project is twofold: i) to conduct a critical comparison of the approaches of the two groups (Stanford and Inria) on each topic, in order to create a synergy which will improve the status of our individual research efforts in these areas; ii) to apply the improved methods to realistic problems in high Reynolds number flow.

Politecnico di Milano, Aerospace Department (Prof. A. Guardone)

We collaborate on ALE methods for compressible flows and ORC fluids.

von Karman Institute: T. Magin

We work together on Uncertainty Quantification problems for the identification of the inflow conditions of hypersonic nozzle flows.

In the context of the JLPC (Joint Laboratory for Petascale Computing),
people involved in the development of graph partitioning algorithms in
`Scotch` collaborate with several US partners (UIUC, Argonne) so as to
improve partitioning run time and quality for large scale simulations.
Sébastien Fourestier attended the Inria-UIUC meeting
last September and delivered two talks, one on `Scotch` and
the other on `PaMPA`.

In the context of the `HOSCAR` project, jointly funded by Inria
and CNPq and coordinated by Stéphane Lanteri on the French side,
François Pellegrini and Pierre Ramet participated in a
joint workshop in Petrópolis last September.
A collaboration is envisioned regarding parallel graph partitioning
algorithms for data placement in the context of big data applications.

People involved in the development of graph partitioning algorithms in
`Scotch` have a loose collaboration with Sherry Li and her team at
Berkeley, regarding sparse matrix reordering techniques.

Jan KLOSA (from Apr 2012 until Oct 2012)

Subject: Arbitrary Lagrangian Euler (ALE) for very high order schemes in compressible fluid dynamics

Institution: Technische Universität Braunschweig (Germany)

Paul Constantine (Post-doc, January 2012)

Subject: Uncertainty quantification

Institution: Aquarius team, Stanford University (USA)

Luca Arpaia (from Apr 2012 until Oct 2012)

Subject: Arbitrary Lagrangian Euler (ALE) for very high order schemes in compressible fluid dynamics

Institution: Politecnico di Milano (Italy)

Andrea Filipni (from October 2012 until April 2013)

Subject:

Institution: Politecnico di Milano (Italy)

Pietro Marco Congedo and Gianluca Geraci visited the NASA Center for Turbulence Research, Stanford University, for one month (June-July 2012).

Rémi Abgrall is co-chief editor of the “International Journal for Numerical Methods in Fluids”. He is associate editor of
the “Journal of Computational Physics”, “Mathematics of Computation”,
“Journal of Scientific Computing”, “Computers and Fluids” and
“Advances in Applied Mathematics and Mechanics”. He is a member of the editorial
board of the “Mathématiques et Applications” book series of the French SMAI (published by
Springer Verlag). He is in charge of the GAMNI group of SMAI. He is treasurer of ECCOMAS.
He is the organiser of HONOM 2013 (http://

Héloïse Beaugendre is a member of the organizing committee of the second ECCOMAS Young Investigators Conference (http://

Cécile Dobrzynski is one of the organizers of the seminar
“*Modélisation et Calcul*” of the Institut de
Mathématiques de Bordeaux.
She is a member of the board of the GAMNI group of SMAI, of which she is the secretary.
She is a member of the scientific committee for the organization of mini-symposia, in collaboration between SMAI-GAMNI and AUM, for CANUM 2012.
She is co-chairwoman of the organizing committee of the second ECCOMAS Young Investigators Conference (http://

Pietro Marco Congedo is a member of the organizing committee of HONOM 2013. In 2012, he gave 5 invited seminars (von Karman Institute; Complex Modeling, Convergence, and Uncertainty Quantification Workshop, Uppsala, Sweden; Workshop BIS2012, Paris; SIAM Conference on Uncertainty Quantification, Raleigh, USA; 1st meeting GAMNI-MAIRCI: Précision et Incertitudes, Paris).

Licence : Héloïse Beaugendre, Responsable des projets TER de première année, 10h, L3, ENSEIRB-MATMÉCA, France

Licence : Héloïse Beaugendre, Encadrement TER, 16h, L3, ENSEIRB-MATMÉCA, France

Licence : Cécile Dobrzynski, Langages en Fortran 90, 43h, L3, ENSEIRB-MATMÉCA, France

Licence : Cécile Dobrzynski, Analyse numérique, 24h, M1, ENSEIRB-MATMÉCA, France

Licence : Cécile Dobrzynski, Outils informatiques pour le calcul scientifique, 65h, formation Structures Composites, ENSCBP, France

Licence : François Pellegrini, Architecture des ordinateurs, 25h, L2, Université Bordeaux 1, France

Licence : Pietro Marco Congedo, Analyse numérique II, 24h, M1, ENSEIRB-MATMÉCA, France

Licence : Mario Ricchiuto, Fundamentals of Numerical Analysis, 24h, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Mise à niveau en algorithmique et Programmation, 30h, M1, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Approximation numérique et problèmes industriels, 52h, M1, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Outils informatiques pour l'insertion professionnelle, 9h, M2, Université Bordeaux 1, France

Master : Héloïse Beaugendre, Calcul Haute Performance, 40h, M1, ENSEIRB-MATMÉCA, France

Master : Héloïse Beaugendre, Calcul Haute Performance, 40h, M2, ENSEIRB-MATMÉCA and Université Bordeaux 1, France

Master : Cécile Dobrzynski, Projet fin d'études, 6h, M2, ENSEIRB-MATMÉCA, France

Master : Cécile Dobrzynski, TER, 18h, M1, ENSEIRB-MATMÉCA, France

Master : Pietro Marco Congedo, Simulation Numérique des écoulements fluides, 20h, M3, ENSEIRB-MATMÉCA, France

Master : Mario Ricchiuto, Simulation Numérique des écoulements fluides, 16h, M3, ENSEIRB-MATMÉCA, France

Master : Pietro Marco Congedo, TER, 16h, M1, ENSEIRB-MATMÉCA, France

Master : Mario Ricchiuto, Post-graduate course on introduction to CFD, 18h, M2 IAS (Master Spécialisé Ingénierie Aéronautique et Spatiale, http://

Doctorat : Mario Ricchiuto, Post-Graduate plenary lecture on the use of residual methods in CFD, 3h, D1, CEMRACS summer school, France

PhD : Algiane Froehly, Méthodes numériques pour la prise en compte exacte des géométries dans les codes de CFD, Université Bordeaux I, 7 Dec. 2012, R. Abgrall and C. Dobrzynski

PhD : Arnaud Krust, Méthodes d'enrichissement pour les équations de Navier-Stokes, Université Bordeaux I, 31 October 2012, R. Abgrall.

PhD : François Vilar, “Méthodes d'ordre très élevé pour la résolution des équations de l'hydrodynamique Lagrangienne multidimensionnelles”. Université de Bordeaux I, November 16th 2012, R. Abgrall and P.H. Maire.

PhD in progress: Dante de Santis, High-order residual distribution methods for turbulent steady flows, since September 2010, R. Abgrall and M. Ricchiuto

PhD in progress: Gianluca Geraci, Multi-resolution inspired methods for uncertainty quantification, 2010, Rémi Abgrall and Pietro Marco Congedo.

PhD in progress : Sébastien Fourestier, Redistribution dynamique parallèle efficace de la charge pour les problèmes numériques de très grandes tailles, 2008, F. Pellegrini

PhD in progress : Damien Genêt, Conception d'une plate forme parallèle pour la résolution des EDP de la mécanique des fluides, 2009, M. Ricchiuto, F. Pellegrini

PhD in progress : Cédric Lachat, Partitionnement et adaptation parallèles de maillages pour des simulations dans les tokamaks, 2009, F. Pellegrini and C. Dobrzynski

HdR : Stéphane Brull, Université Bordeaux I, R. Abgrall, November 19th, 2012.

HdR : Patrice Kadionik, Contribution à la conception des systèmes numériques embarqués. Application à l'adéquation algorithme-architecture pour la compression vidéo et à l'informatique ubiquitaire, September 5th, François Pellegrini : referee

HdR : Olivier Saut, Contributions en optique non-linéaire et en modélisation de la croissance tumorale en vue des applications cliniques, September 23rd, 2012, Université Bordeaux I, R. Abgrall, jury

HdR : Ashwin Chinnayya, Contribution à l'étude numérique des écoulements diphasiques et compressibles, Université de Rouen, December 6th, 2012, Rémi Abgrall : referee

PhD : Guilherme Cunha (ISAE), Optimisation d'une méthodologie de simulation numérique pour l’Aéroacoustique sur un couplage faible des méthodes d’aérodynamique instationnaire et de propagation acoustique, R. Abgrall, referee. October 18th, 2012

PhD : Steven Diot, Méthodes d'ordre élevé pour la mécanique des fluides compressible, Université de Toulouse, August 30th, 2012. R. Abgrall, referee

PhD: Koen Hillewaert, Discontinuous Galerkin schemes for turbulent 3D applications, Université Catholique de Louvain, October 4th, 2012. R. Abgrall, referee.

PhD : Matthieu Lefebvre, Algorithmes sur GPU pour la simulation numérique en mécanique des fluides, François Pellegrini : referee

PhD : François-Henry Rouet, Memory and performance issues in parallel multifrontal factorizations and triangular solutions with sparse right-hand sides, François Pellegrini : jury

PhD : Kurt Sermeus, Multi-dimensional upwind discretization and application to compressible flows, Université Libre de Bruxelles, R. Abgrall, December 12th, 2012, referee

PhD : KunKun Tan, Combining discrete equations method and upwind downwind-controlled splitting for non-reacting and reacting two-fluid computations, Université de Grenoble, December 14th 2012, R. Abgrall, referee

PhD : Dario Isola, An interpolation-free two-dimensional conservative ALE scheme over adaptive unstructured grids for rotorcraft aerodynamics, Politecnico di Milano, March 1st, 2012. R. Abgrall, referee.

MdC : Participation of Pietro Marco Congedo in the selection committee for position number 876 (section 60), Université Pierre et Marie Curie.

Pietro Marco Congedo, Maria-Giovanna Rodio and Julie Tryoen
participated in the “*Fête de la Science*”, concerning
flows and renewable energies, Bordeaux, October.

François Pellegrini has many activities related to software
law and economic development, which are becoming part of his research
activity. As they do not fit within the scope of the
`BACCHUS` EPI, they are presented here:

Talk entitled “*The case for creation and innovation
vs. ACTA*”, S&D Hearing “*ACTA: Whose rights does it
protect?*”, European Parliament, Brussels, April.

Invited by the students of the ENSAA engineering school to deliver talks at the JOSENSAA open-source conference, Agadir, May.

Presentation of `Scotch` during the `I-Match`
academics-industry meeting organized by Inria Bordeaux Sud-Ouest,
Talence, June.

Participation in the round table “*Open innovation*” at
Solutions Linux, Paris, June.

Invitation to deliver a talk at the seminar on
“*Accessibility and Diversity on Internet*” organized by
the *Organisation Internationale de la Francophonie* in the
context of the Internet Governance Forum, Baku, November.

Talk “*Le droit du numérique : une histoire à
préserver*” delivered at the *Colloque pour un Musée de
l'informatique et de la société numérique en France*,
CNAM, Paris, November (published as ).

Co-organization and co-chair of the colloquium
“*Innovation ouverte et innovation libre*” at Conseil
Régional d'Aquitaine, as co-president of Aquinetic, Bordeaux,
November.

Conference on the digital revolution at the *Festival du
Film d'Histoire de Pessac*, Pessac, November.

Participation in the colloquium “*Le droit au Libre*” on
libre software licenses, organized by the association of
barristers of Toulouse, November.

Three hour training on author's right and software law delivered to about thirty academics and engineers. Training day organized by CNRS, Talence, November.

Three hour training on software licenses, libre software licenses and interoperability delivered to about forty industry people, mostly from local SMEs. Training day organized by Cap'Tronic, Talence, December.