## Software

### AeroSol

Participants: Damien Genêt [corresponding member for Bacchus], Maxime Mogé, Dragan Amenga-Mbengoué, François Pellegrini, Vincent Perrier [corresponding member], Mario Ricchiuto, François Rue.

The `AeroSol` software is jointly developed by the `BACCHUS` and `Cagire` teams. It is a high-order finite element library written in C++. The code is designed to perform efficient computations with continuous and discontinuous finite element methods on hybrid, possibly curvilinear, meshes. The distribution of the unknowns is handled by the `PaMPA` software, developed within the `BACCHUS` and `PUMAS` teams. Maxime Mogé was hired on a young engineer position (IJD), obtained through the ADT `OuBa HOP`, to participate in the parallelization of the library; he arrived on November 1st, 2011. In January 2012, Dragan Amenga-Mbengoué was recruited on the ANR `Realfluids` project.

At the end of 2011, Aerosol had the following features:

Development environment: use of `CMake` for compilation, `CTest` for automatic testing and memory checking, `lcov` and `gcov` for code coverage reports.

In/Out: link with the XML library for handling parameter files. Reader for `GMSH`, and writer to the VTK-ASCII legacy format.

Quadrature formulas: up to 11th order for lines, quadrangles, hexahedra, pyramids, and prisms; up to 14th order for tetrahedra; up to 21st order for triangles.

Finite elements: up to fourth degree for Lagrange finite elements on lines, triangles and quadrangles.

Geometry: elementary geometrical functions for first order lines, triangles, quadrangles.

Time iteration: explicit Runge-Kutta up to fourth order, explicit Strong Stability Preserving schemes up to third order.

Linear Solvers: link with the external linear solver UMFPack.

Memory handling: discontinuous and continuous discretizations based on `PaMPA` for triangular and quadrangular meshes.

Numerical schemes: continuous Galerkin method for the Laplace problem (up to fifth order) with non-consistent time iteration or with direct matrix inversion. Scalar stabilized residual distribution schemes with explicit Euler time iteration have been implemented for steady problems.

This year, the following features were added:

Development environment: development of a `CDash` server for collecting the unit test and memory checking results. Beginning of the development of an interface for functional tests.

General structure: parts of the code were abstracted in order to allow for parallel development: linear solvers (template type abstraction for a generic linear solver external library), generic integrator classes (integrating on elements, on faces with handling of neighboring elements, or working on the Lagrange points of a given element), models (template abstraction for generic hyperbolic systems), and equations of state (template-based abstraction for a generic equation of state).

In/Out: parallel `GMSH` reader; cell- and point-centered visualization based on the VTK legacy format, XML ParaView files on unstructured meshes (vtu), and parallel XML-based files (pvtu).

Finite elements: hierarchical orthogonal finite element bases on lines and triangles (with the Dubiner transform). Finite element bases that are interpolation bases on the Gauss-Legendre points for lines, quadrangles, and hexahedra. Lagrange and hierarchical orthogonal finite element bases for hexahedra, prisms, and tetrahedra.

Geometry: elementary geometrical functions for first order three-dimensional shapes: hexahedra, prisms, and tetrahedra.

Time iteration: CFL time stepping, optimized CFL time schemes: SSP(2,3) and SSP(3,4).

Linear Solvers: Internal solver for diagonal matrices. Link with the external solvers PETSc and MUMPS.

Memory handling: parallel degrees of freedom handling for continuous and discontinuous approximations.

Numerical schemes: Discontinuous Galerkin methods for hyperbolic systems. SUPG and Residual Distribution schemes.

Models: perfect gas Euler system, real gas Euler system, scalar advection, wave equation in first order formulation, and a generic interface for defining space-time models from space models.

Numerical fluxes: centered fluxes, exact Godunov flux for linear hyperbolic systems, and Lax-Friedrichs flux.

Parallel computing: mesh redistribution, computation of overlaps with `PaMPA`, and collective asynchronous communications (`PaMPA`-based). Tests on the Avakas cluster from MCIA and at the Mésocentre de Marseille. The library was also compiled on PlaFRIM.