CONCHA is an INRIA Project-Team joint with the University of Pau and Pays de l'Adour and CNRS (LMA, UMR 5142). CONCHA was created as an 'équipe INRIA' in April 2007 and has been an EPI since February 2008.

The main objective of this project is the development of innovative algorithms and efficient software tools for the simulation of complex flow problems. Our contributions concern modern discretization methods (high-order and adaptivity) and goal-oriented simulation tools (prediction of physical quantities, numerical sensitivities, and inverse problems). Concrete applications originate at the moment from three fields: combustion (see Section ), turbulent flows (see Section ), and viscoelastic flows (see Section ).

Our short-term goal is to develop flow solvers based on modern numerical methods such as high-order discretization in space and time and self-adaptive algorithms. Adaptivity based on a posteriori error estimators has become a new paradigm in scientific computing, first because of the necessity to give rigorous error bounds, and second because of the possible speed-up of simulation tools. A systematic approach to these questions requires an appropriate variational framework and the development of a posteriori error estimates and adaptive algorithms, as well as sufficiently general software tools able to realize these algorithms. To this end we develop a single common library written in C++ and study, by means of concrete applications, the possible benefits and difficulties related to these algorithms in the context of fluid mechanics. Prototypical applications are chosen in order to represent important challenges in our fields of application.

The main ingredients of our numerical approach are adaptive finite element discretizations combined with multilevel solvers and hierarchical modeling. We develop different kinds of finite element methods, such as discontinuous (DGFEM) and stabilized finite element methods (SFEM), based on either continuous or non-conforming finite element spaces (NCFEM).

The availability of such tools is also a prerequisite for testing advanced physical models, concerning for example turbulence, compressibility effects, and realistic models of viscoelastic flows. In the case of polymer liquids, the numerical approximation of these flows is a challenging problem, due both to the intrinsic physical properties (nonlinear viscoelastic behavior, high viscosity, low thermal conductivity) and to the internal coupling between the viscoelasticity of the liquid and the flow, which is quantified by the dimensionless Weissenberg number We. The commercial codes are only able to deal with We up to 10, which is insufficient for many practical purposes.

Our long-term goals are described in the following. Having appropriate software tools at our disposal, we may tackle questions going beyond forward numerical simulations: parameter identification, design optimization, and questions related to interaction between numerical simulation and physical experiments.

Nowadays it appears that many questions in the field of complex flow problems can neither be solved by experiments nor by simulations alone. In order to improve the experiment, the software has to be able to provide information beyond the results of simple simulation. Here, information on sensitivities with respect to selected measurements and parameters is required. The parameters could in practice be as different in nature as a diffusion coefficient and a velocity boundary condition. It is our long-term objective to develop the necessary computational framework and to contribute to the rational interaction between simulation and experiment.

The development of CFD software may benefit considerably from in-house experiments. In this respect, we note that a test facility for confined inert flows has been developed in another research program at UPPA. Its flow geometry and metrology are well suited for comparison with numerical simulations. The interdisciplinary collaboration between fluid mechanics, numerical analysis, and computer science, as well as the interaction between software development and experiments, is crucial for this project. The project team consists of mathematicians and physicists, and we are developing collaborations with computer scientists.

Since the purpose of this project is to develop, analyze, and test new algorithms on relevant configurations, collaboration with industrial partners is crucial. Technology transfer in the form of integration of new methods into existing industrial codes is intended and could be the goal of future projects.

**Development of the modular C++-library Concha**

This library is the fundamental tool for our work on numerical simulations. It is designed in such a way that the implementation and testing of new algorithms in fields as different as hyperbolic equations and viscoelastic flows can be done with a minimum amount of work. It has been shown that our preliminary code for the Giesekus model of polymer flows has advantages over the commercial code PolyFlow® with respect to computing time and robustness. For more details see Section .

**Habilitation thesis of Robert Luce**

Robert Luce has defended his habilitation thesis on the subject of a new class of finite elements on general quadrilateral and hexahedral meshes, called 'pseudo-conforming'.

**Quasi-optimality of adaptive finite elements**

We have extended convergence results with complexity estimates to mixed and nonconforming finite element discretizations, as well as to finite elements on quadrilateral meshes and the Stokes equations. Our approach is based on a new adaptive marking strategy, and the proofs required the development of new techniques, such as a reduction property of the error estimator under local refinement and a complexity estimate for meshes with hanging nodes, see Section for more details. These theoretical results are expected to provide insight necessary for the improvement of practical algorithms.

First, we describe some typical difficulties in our fields of application which require the improvement of established and the development of new methods. Then we give the research directions underlying our project for the development of new software tools. They are summarized under the three headlines 'High-order methods', 'Adaptivity', and 'Parameter identification and numerical sensitivities'.

Accurate predictions of physical quantities are of great interest in fluid mechanics, for example in order to analyze instabilities in reacting flows, predict forces acting on a body, estimate the flow through an orifice, or predict thermal conductivity coefficients. Due to the complex and highly nonlinear equations to be solved, it is difficult to know in advance how fine the spatial or temporal resolution should be and how detailed a given physical model has to be represented. We propose to develop a systematic approach to these questions based on auto-adaptive methods.

We note that most of the physical problems under consideration have a three-dimensional character and involve the coupling of models and extremely varying scales. This makes the development of fast numerical methods and efficient implementation a question of feasibility.

The modeling of reactive flows , and turbulent reactive flows , involves a number of difficulties.

**Physical coupling**

The coupling between the variables describing the flow field and those describing the chemistry is in general stiff. Our efforts will therefore be concentrated on coupled implicit solvers based on Newton-type algorithms. A good speed-up of the algorithms requires a clever combination of iteration and splitting techniques based on the structure of the concrete problem under consideration.
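By way of illustration, a minimal damped Newton iteration of the kind referred to above can be sketched as follows; the residual F, the Jacobian J, and the toy system are illustrative placeholders, not part of our solvers, and the simple step-halving stands in for the problem-specific combination of iteration and splitting techniques.

```python
import numpy as np

def damped_newton(F, J, x0, tol=1e-10, max_iter=50):
    """Minimal damped Newton iteration for a coupled residual F(x) = 0.

    F : residual of the fully coupled (flow + chemistry) system
    J : its Jacobian; both are illustrative placeholders.
    The step length is halved until the residual norm decreases,
    a crude line search standing in for problem-specific damping.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = F(x)
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(J(x), -r)  # Newton correction
        lam = 1.0
        while lam > 1e-8 and np.linalg.norm(F(x + lam * dx)) >= np.linalg.norm(r):
            lam *= 0.5  # damping for stiff coupling
        x = x + lam * dx
    return x

# Toy stiff 2x2 system coupling a "flow" and a "chemistry" unknown.
F = lambda x: np.array([x[0] ** 2 + x[1] - 2.0, 1e3 * (x[1] - x[0])])
J = lambda x: np.array([[2 * x[0], 1.0], [-1e3, 1e3]])
root = damped_newton(F, J, np.array([2.0, 0.0]))  # converges to (1, 1)
```

The stiffness enters through the factor 1e3 in the second equation; a fully coupled Jacobian solve handles it, whereas a naive splitting would have to iterate many times.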

**Reaction mechanisms**

The modeling of chemistry in reactive flow is still a challenging question. On the one hand, even if complex models are used, estimated physical constants are frequently involved, which requires an algorithm for their calibration. On the other hand, models with detailed chemistry are often prohibitive, and there exists a zoo of simplified equations, starting with flame-sheet-type models. The question of model reduction is of great interest for reacting flows, and different approaches have been developed , .

Although first attempts exist for the generalization of a posteriori error estimators to model adaptation , , and , it remains a challenging question to develop numerical approaches using a hierarchy of models in an automatic way, especially combined with mesh adaptation.

**All-Mach regimes**

The development of solvers able to deal with different Mach regimes simultaneously is a challenging subject. A robust and efficient methodology that works for all regimes requires a combination of the best techniques from the fields of compressible and incompressible solvers.

**Turbulence and chemistry**

The flows under consideration are in general turbulent. This is a major difficulty from the computational point of view, since the resolution of the finest scales still requires a prohibitive number of unknowns in the flow field alone, at least in the case of realistic geometries. Therefore, turbulence models such as RANS and LES are used in practice. We note that special difficulties arise from the coupling of the flow with chemistry.

In addition to the classical challenges of DNS in very simple situations, the extension to more involved geometries and boundary conditions raises new questions in this field.

**Size of discrete problems** The resolution required to attain the smallest scales of a turbulent flow field implies a tremendous number of mesh points and time steps, which has long restricted DNS to extremely simple geometries.

**Solution of discrete problems** The minimum amount of work to be done in each time step is the solution of the discrete pressure equation. Due to the mesh resolution, a fast solver with linear complexity is required for this elliptic second-order equation. Our approach is based on nonconforming finite elements, which lead to a favorable structure of the pressure equation.

**General hexahedral meshes** Standard staggered finite difference schemes, which on Cartesian meshes are closely related to our approach, may lead to a loss of accuracy on distorted hexahedral meshes. However, such meshes are unavoidable for more involved geometries.

**Anisotropic meshes** In order to resolve boundary layers, the use of anisotropic meshes is mandatory. This raises the question of the stability of the discretization on such meshes. The solution of the pressure equation requires additional care compared to the isotropic case.

**Higher accuracy** Higher-order schemes have been shown to be successful in certain situations. We intend to generalize our second-order scheme to higher order, including general hexahedra and curved boundaries.

Recently, it has been observed that certain turbulence models have similarities to finite element stabilization techniques for the Navier-Stokes equations. Such variational multi-scale methods lead to adaptive turbulence modeling, which is however far from being a standard tool for turbulent flow simulations nowadays. We intend to contribute to this development, and the development of our DNS code is an important ingredient.

Despite numerous efforts, the numerical simulation of polymer flows is still a very challenging research area. There exist only relatively few commercial codes for the simulation of these flows (PolyFlow®, Flow3D, or Rem3D). Two reasons seem to be responsible for this:

First, the intrinsic properties of polymeric liquids: nonlinear viscoelastic rheological behavior, dominant viscosity (about a million times that of water), and small thermal conductivity (about a hundred times smaller than that of steel).

Second, it is still an open problem how to compute the internal coupling in realistic situations. The internal coupling between the viscoelasticity of the liquid and the flow is quantified by the dimensionless Weissenberg number We, defined as the product of the relaxation time and the shear rate. Note that the relaxation time increases with the elastic character of the polymer, whereas the shear rate reflects the intensity of the flow.
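In symbols, writing λ for the relaxation time and γ̇ for the characteristic shear rate (notation introduced here for illustration), this definition reads:

```latex
\mathrm{We} = \lambda \, \dot{\gamma}
```

A strongly elastic polymer (large λ) or an intense flow (large γ̇) thus both drive the Weissenberg number up.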

A major issue to be addressed is the breakdown in convergence of the algorithms at critical values of the Weissenberg number. The commercial codes are only able to deal with Weissenberg numbers up to 10, which means that the behavior of the fluid is not very elastic. This limit is too low to describe the polymer flow in a processing machine and is often explained by difficulties related to the numerical schemes. In particular, it has been widely believed that the high Weissenberg number problem is due to the loss of positivity of the conformation tensor C at the discrete level. Note that although the positive-definiteness of C always holds at the continuous level, it is very difficult to carry this over to the discrete counterpart, since the conformation tensor is not a direct unknown of the approximated problem.

Another approach, which has attracted much attention, is the introduction of the logarithm of the conformation tensor by Fattal & Kupferman . The main idea is that the matrix exponential always yields a symmetric positive-definite matrix, even if a non-monotone scheme is used for its approximation. However, in order to do so, the constitutive law first has to be expressed in terms of the logarithm of the conformation tensor. Although preliminary computational studies indicate a gain in stability, it is not yet clear what impact this nonlinear transformation, which can be viewed as a scale compression, has on the numerical approximation.
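The structural point of the log-conformation formulation can be illustrated in a few lines: whatever symmetric field ψ a scheme produces for the matrix logarithm, the reconstructed conformation tensor C = exp(ψ) is positive definite by construction. The sketch below (a self-contained illustration, not the constitutive-law reformulation itself) uses an eigendecomposition:

```python
import numpy as np

def conformation_from_log(psi):
    """Reconstruct the conformation tensor C = exp(psi) from its
    matrix logarithm psi via an eigendecomposition.

    For any symmetric psi, the eigenvalues of exp(psi) are the
    exponentials of those of psi, hence strictly positive:
    C is symmetric positive definite by construction.
    """
    psi = 0.5 * (psi + psi.T)      # enforce symmetry
    w, V = np.linalg.eigh(psi)     # psi = V diag(w) V^T
    return (V * np.exp(w)) @ V.T   # C = V diag(exp(w)) V^T

# Even a log-conformation with large negative entries, such as one
# produced by a non-monotone scheme, yields a positive-definite C.
psi = np.array([[-5.0, 3.0], [3.0, 1.0]])
C = conformation_from_log(psi)
eigs = np.linalg.eigvalsh(C)       # all strictly positive
```

This is exactly why the transformation removes the discrete positivity issue; the open question discussed above is the effect of the implied scale compression on accuracy.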

Besides this fundamental difficulty related to the Weissenberg number, several other aspects have to be taken into account when simulating polymer flows: the large number of unknowns (pressure, velocity, and stress at least), the nonlinear character, the treatment of the convection terms (especially in the constitutive law), the strong thermo-mechanical coupling, and the three-dimensionality of the flows. We note that most studies on numerical tools for viscoelastic flows deal with isothermal two-dimensional flow (planar or axisymmetric). It remains a major issue to find stable and robust numerical methods capable of dealing with 3D anisothermal flows at Weissenberg numbers greater than 10, in the framework of realistic models.

The discontinuous Galerkin finite element method (DGFEM) offers interesting perspectives, since it gives a framework for the combination of techniques developed in the incompressible finite-element (well-founded treatment of incompressibility constraints, pressure approximation, and stabilization for high-Reynolds-number flows) and the compressible finite-volume community (entropy solutions, Riemann solvers and flux limiters).

In addition, the order limit of finite volume discretizations is broken by the variational formulation underlying DGFEM, which makes it possible to develop discretization schemes with local mesh refinement and local variation of the polynomial degree (hp-methods). At the same time, the well-established finite-element knowledge for saddle-point problems can be put to work.

Different approaches based on discontinuous Galerkin methods have been used in recent years for the solution of challenging flow problems. Since the project team members have experience with these and other stabilized finite element methods, a combination of the different techniques is expected to be beneficial in order to gain efficiency.

It is generally accepted that an important advantage of DGFEM, besides its flexibility, is the fact that it is locally conservative. At the same time, its drawback is its relatively high numerical cost. For example, compared to continuous P1 finite elements on a triangular mesh, the number of unknowns is increased by a factor of 6 (and by a factor of 2 with respect to the Crouzeix-Raviart space); considering the system matrix leads to an even more disadvantageous count. Concerning higher-order spaces, standard DGFEM has a negligible overhead for polynomial orders starting from p = 5, which is probably beyond what is most employed in practice. The question of how to increase the efficiency of DGFEM is an important topic of recent research. Our approach in this field is based on comparison with stabilized FE methods.
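The factors of 6 and 2 quoted above follow from a back-of-the-envelope count: on a large triangulation, Euler's formula gives roughly twice as many triangles and three times as many edges as vertices. The sketch below is merely this counting exercise, not part of the library:

```python
def dof_counts(n_vertices):
    """Approximate unknowns per scalar field on a large triangulation.

    Asymptotically (Euler's formula, boundary effects neglected):
      #triangles ~ 2 * #vertices,  #edges ~ 3 * #vertices.
    """
    n_triangles = 2 * n_vertices
    n_edges = 3 * n_vertices
    return {
        "continuous P1": n_vertices,   # one dof per vertex
        "Crouzeix-Raviart": n_edges,   # one dof per edge midpoint
        "DG P1": 3 * n_triangles,      # three dofs per triangle
    }

counts = dof_counts(10_000)
# DG P1 carries 6x the unknowns of continuous P1
# and 2x those of the Crouzeix-Raviart space.
```

The count for the system matrix is even less favorable, since discontinuous dofs couple across every face.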

From a theoretical point of view, there are many similarities between DGFEM and SDFEM (streamline-diffusion FEM) based on piecewise linear elements for the transport equation; see for example the classical textbook . Recently, attempts have been made to shed more light on the relations between these methods , . A better understanding of these relations will contribute to the development of more efficient schemes with the desired properties. As outlined before, the goal is to cut down the computational overhead of standard DGFEM while retaining its robustness and conservation properties.

Formulation of discretization schemes based on discontinuous finite element spaces is nowadays standard. However, some important questions remain to be solved:

How to combine possibly higher-order spaces with special numerical integration in order to obtain fast computation of residuals and matrices? How to stabilize such higher-order DGFEM?

Treatment of quadrilateral and hexahedral meshes: Hexahedral meshes are economical for simple geometries. However, arbitrary hexahedra (the image of the unit cube under a trilinear transformation) lead to challenging questions of discretization. For example, straightforward generalization of some standard methods such as mixed finite elements may lead to bad convergence behavior .

Time-discretization: In order to be fully conservative, the time discretization has to be implicit. For the transport equation, for example, it seems reasonable not to distinguish between time and space variables, and it is therefore natural to discretize both with discontinuous finite elements. The choice of DG time-discretization is natural in view of its good stability and conservation properties. However, the higher-order members of this family lead to coupled systems which have to be solved in each time step.

Solution of the discrete systems: The computing time largely depends on the way the discrete nonlinear and linear systems are solved. Concerning the nonlinear systems, we observe that those arising in our applications require special solvers, using homotopy methods, time-stepping, and specially tuned Newton algorithms. In each step of the algorithm, the solution of the linear systems is a major bottleneck for adaptive high-order methods. In order to gain efficiency, the hierarchical structure of the discretization should be exploited, which requires a close connection between numerical schemes and linear solvers.

The possible benefits of local mesh refinement for fluid dynamical problems are nowadays uncontested; the obvious arguments are the presence of singularities, shocks, and combustion fronts. The use of variable polynomial approximation is more controversial in CFD, since the literature does not deliver a clear answer concerning its efficiency. At least in view of some model problems, the potential gain obtained by the flexibility to locally adapt the order of approximation is evident. It remains to investigate whether this assessment remains true for the applications to be considered in the project.

The design and analysis of auto-adaptive methods as described above is a recent research topic, and only very limited theoretical results are known. Concerning the convergence of adaptive methods for mesh refinement, significant progress has been made only recently in the context of the Poisson problem , , based on two-sided a posteriori error estimators . The situation is completely open for p-adaptivity, model-adaptivity, and the DWR method.
A more practice-oriented approach to error estimation is the DWR method, developed in ; see also the overview paper , the application to laminar reacting flows in , and the application to the Euler equations in . The idea of the DWR method is to consider a given, user-defined physical quantity as a functional acting on the solution space. This allows the derivation of a posteriori error estimates which directly control the error in the approximation of the functional value. This approach has been applied to local mesh refinement for a wide range of model problems . Recently, it has been extended to the control of modeling errors . The estimator of the DWR method requires the solution of an auxiliary linear partial differential equation. So far, relatively little research has been done on the use of possibly incomplete information from, e.g., a coarse discretization of this equation.
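Schematically, writing J for the user-defined output functional, u and u_h for the exact and discrete solutions, and z and z_h for the solution of the auxiliary dual problem and its approximation (notation introduced here for illustration), the error representation at the heart of the DWR method reads:

```latex
J(u) - J(u_h) \;\approx\; \rho(u_h)(z - z_h)
```

where ρ(u_h)(·) denotes the residual of the discrete solution; the local contributions to the right-hand side are what drives the mesh refinement.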

Numerical simulations generally involve parameters of different nature. Some parameters reflect physical properties of the materials under consideration, or describe the way they interact. The values of these parameters are often determined by experiments and are sometimes known accurately only under certain conditions. In addition, the development of a computational model involves further quantities, which could for example be related to boundary and initial conditions.

The generalization of the DWR method to parameter identification problems has been developed in , and for time-dependent equations. The case of finite-dimensional parameters, which is theoretically less challenging than the infinite-dimensional case and has therefore been less treated in the literature, is of particular interest in view of the presented applications (for example the estimation of a set of diffusion velocities).

The goal of numerical simulations is in general the computation of given output values I, which are obtained from the approximated physical fields by additional computations, often termed *post-processing*. The DWR method places these output values in the center of interest and aims at providing reliable and efficient computations of these quantities.

In the context of the calibration of parameter values with experiments, it seems natural to go one step beyond the sole computation of I. Indeed, the computation of numerical sensitivities or condition numbers ∂I/∂q_i, where q_i denotes a single parameter, can be expected to be of practical and theoretical interest, either in order to improve the design of experiments or in order to help analyze the outcome of an experiment.
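As a trivial illustration of what such a sensitivity measures (not the DWR-based approach itself, which obtains this information from dual solutions), a central-difference approximation of ∂I/∂q_i can be sketched as follows, with a toy output functional:

```python
def sensitivity(I, q, i, h=1e-6):
    """Central-difference approximation of dI/dq_i, where I maps a
    parameter list q to a scalar output (a post-processed quantity).

    Purely illustrative: in practice each evaluation of I requires a
    full forward simulation, which is why adjoint/dual techniques
    are preferable when many parameters are involved.
    """
    qp, qm = list(q), list(q)
    qp[i] += h
    qm[i] -= h
    return (I(qp) - I(qm)) / (2 * h)

# Toy output: I(q) = q0^2 * q1, so dI/dq0 at (3, 2) is 2*3*2 = 12.
I = lambda q: q[0] ** 2 * q[1]
s = sensitivity(I, [3.0, 2.0], 0)  # approximately 12
```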

It turns out that techniques similar to those employed for parameter identification can be used in order to obtain information on parameter sensitivities and a corresponding a posteriori error analysis .

We consider three generic configurations, which are typical for numerical simulations related to propulsion devices.

**Supersonic jet**

Supersonic under-expanded free jets are typically present in the case of the accidental boring of a combustion chamber. The capability of accurately simulating their main properties is of great interest in the framework of the engine certification procedure. This generic configuration is particularly useful for testing in due course the complete range of numerical tools developed in the project, since it can be dealt with by solving the Euler equations, the Navier-Stokes equations, as well as Reynolds-averaged Navier-Stokes (RANS) or large-eddy simulation (LES) based equations.

**Subsonic jet**

Subsonic jets in cross-flow are encountered in many cooling systems for combustion chamber walls of jet or helicopter engines. They are also found in micro-combustors, for which the mixing between fuel and oxidizer is a crucial issue, since it has to be extremely rapid due to the short residence time of the flow in the combustor. The experimental bench MAVERIC has been developed by Pascal Bruel at LMAP. It is used to study inert flows related to the cooling of combustion chamber walls and is well suited for the comparison between numerical simulations and experiments of turbulent flows in future work.

**Combustion**

A combustion zone fed by two channel flows of propane and air, stabilized by a sudden expansion, is representative of lean premixed pre-vaporized (LPP) combustors. This configuration has been chosen for the development of numerical methods and software able to cope with the simulation of low Mach number reacting flows with large density variations, which present some similarity to those found in a real combustion chamber.

Polymeric fluids are, from a rheological point of view, viscoelastic non-Newtonian fluids, see Figure . Their non-Newtonian behavior can be observed in a variety of physical phenomena which are unseen with Newtonian liquids and which cannot be predicted by the Navier-Stokes equations. The best-known examples include the rod-climbing Weissenberg effect, die swell, and extrusion instabilities (cf. fig. 1). The rheological behavior of polymers is so complex that many different constitutive equations have been proposed in the literature in order to describe these phenomena, see for instance . The choice of an appropriate constitutive law is still a central problem. We consider realistic constitutive equations such as the Giesekus model. In comparison to the classical models used in CFD, such as UCM or Oldroyd-B fluids, the Giesekus model is characterized by a quadratic stress term.

Our aim is to develop new algorithms for the discretization of polymer models which are efficient and robust for We > 10. For this purpose, we will develop a mathematical approach based on recent ideas on discretizations preserving the positivity of the conformation tensor. This property is believed to be crucial in order to avoid the numerical instabilities associated with large Weissenberg numbers. In order to develop monotone numerical schemes, we shall use recent discretization techniques such as stabilized finite element and discontinuous Galerkin methods. We intend to validate the codes to be developed on academic benchmark problems in comparison with the commercial code PolyFlow®.

Turbulent flows are ubiquitous in industrial applications.

Direct numerical simulation (DNS), which aims at complete resolution of the flow field up to the Kolmogorov scale, has historically been limited to very simple geometries. However, the increase of computational power and many improvements of specialized numerical methods open the door to a wider range of applications.

Our objectives are first to develop a tool for the simulation of incompressible turbulent flows in simple geometries, such as the flow around simple objects and the flow through holes, and later to extend it to compressible flows. Due to the required spatial and temporal resolution, the software has to be extremely efficient with respect to computation speed and memory usage. To this end we use non-conforming finite elements with multi-grid solvers on block-structured, possibly anisotropic, meshes.

The objectives of our library Concha are to offer flexible and extensible software which is able to integrate the methods under consideration, such as adaptive mesh refinement, anisotropic meshes, hierarchical meshes, and stabilized, conforming, nonconforming, and discontinuous finite element methods. At the same time, it has to be able to deal with the physics of complex flow problems.

The software architecture is designed in such a way that a group of core developers can contribute in an efficient manner, and that independent development of different physical applications is possible. Further, in order to accelerate the integration of new members and in order to provide a basis for our educational purposes (see Section ), the software proposes different entrance levels. The basic structure consists of a common block, and several special libraries which correspond to the different fields of applications described in Section and Section : Hyperbolic solvers, Low-Mach number flow solvers, DNS, and viscoelastic flow. A more detailed description of each special library may be found below.

A graphical user interface facilitates the use of the C++ library. It has been developed by Guillaume Baty in collaboration with Pierre Puiseux (assistant professor at LMAP). All members of the team have been involved in the testing of the interface. The objective is to provide an easy way of installation and to facilitate the usage. To this end we use the Python language with Qt in order to take advantage of higher-level libraries, a more complete framework, and the ease offered by the designer, which allows us to reduce development time.

Although the user community is small at this stage of the project, its users have very heterogeneous backgrounds and levels of involvement in the development. It therefore seems crucial to be able to respond to their different needs. Our aim is to facilitate the development of the library and, at the same time, to make it possible for our colleagues involved in physical modeling to access the functionality of the software with a reasonable investment of time. Two graphical user interfaces have been developed: one for the installation of the library and another one for the building and execution of projects. They are based on a common database and scripts written in Python. The scripts can also be launched in a shell. In Figure the user interface of the install tool is shown. The option panel allows the user to choose the components for conditional compilation and the compilation type (debug or release).

In order to coordinate the cooperative development of the library, Concha is hosted on the INRIA GForge. The tools offered by this development platform are adapted to our needs by our ingénieur associé Guillaume Baty. He has also developed tools for the automatic testing of components of the library using the cmake and ctest tools from Kitware.

Based on the library Concha, we have developed a solver for hyperbolic PDEs based on DGFEM. So far, different standard solvers for the Euler equations, such as Lax-Friedrichs, HLL, and Steger-Warming, have been implemented for test problems. The structure of the program permits rapid generalization to more complex models.

We have programmed different finite-element methods for the solution of the stationary and time-dependent Navier-Stokes equations for incompressible flows: conforming, non-conforming, stabilized, and discontinuous finite element methods (see Section ) on triangular, quadrilateral, tetrahedral, and hexahedral meshes. The aim is to have a flexible code which can easily switch between the different discretizations, in order to provide a toolbox for rapid testing of new ideas. At the same time, these codes serve as a basis for more advanced applications such as polymer flows, see Section , and reacting flow problems as described in Section .

For the direct numerical simulation of incompressible turbulent flows, we have started to develop a special solver based on structured meshes with a fast multigrid algorithm incorporating projection-like schemes. The main idea is to use non-conforming finite elements for the velocities with piecewise constant pressures, leading to a special structure of the discrete Schur complement, when an explicit treatment of the convection and diffusion term is used. This development is done in view of the application to turbulent flows from Section .

Based on the library Concha, we have implemented a three-field formulation with unknowns (u, p, σ) for the two-dimensional Navier-Stokes equations, based on nonconforming finite elements. The extension to the Giesekus model for polymers has been achieved, see Section . In the case of Newtonian flows, the extra-stress tensor can be eliminated in order to reduce storage and computing time. This procedure serves as a pre-conditioner in the general case. The aim is to provide software tools for the problems in Section .

We intend to compare computations based on Concha with other codes on the prototypical test problems described above. This allows us to evaluate the potential of our numerical schemes concerning accuracy, computing time, and other practical aspects such as integration with mesh generators and post-processing. At the same time, this, unfortunately very time-consuming, benchmarking activity allows us to validate our own library. The following commercial and research tools might be considered: *Aéro3d (INRIA-Smash), AVBP (CERFACS), ELSA (ONERA), Fluent (ANSYS), OpenFoam (OpenCfd), and Polyflow^{®} (ANSYS)*. So far, we have compared our code for the Giesekus model of polymer flows with the commercial software Polyflow^{®}.

Adaptive finite element methods are becoming a standard tool in numerical simulations, and their application in CFD is one of the main topics of CONCHA, see Section . Such methods are based on a posteriori error estimates of the discretization error, which avoid the dependence on the continuous solution known from a priori error estimates. The estimator is used in an adaptive loop by means of a local mesh refinement algorithm. The mathematical theory of these algorithms has for a long time been limited to the proof of upper and lower bounds, but has made important progress in recent years. For illustration, a typical sequence of adaptively refined meshes on an L-shaped domain is shown in Figure .

The theoretical analysis of mesh-adaptive methods, even in the most standard case of the Poisson problem, is in its infancy. The first important results in this direction concern simply the convergence of the sequence of solutions generated by the algorithm (the standard a priori error analysis does not apply, since the global mesh-size does not necessarily go to zero). In order to do so, an unavoidable data-oscillation term has to be treated in addition to the error estimator . These results do not say anything about the convergence speed, that is, the number of unknowns required to achieve a given accuracy. Such complexity estimates are the subject of active research; the first fundamental result in this direction is .

Our first contribution to this field has been the introduction of a new adaptive algorithm which makes use of an adaptive marking strategy: it refines according to the data oscillations only if they are larger than the estimator by a certain factor. This algorithm allows us to prove geometric convergence and quasi-optimal complexity, avoiding the additional iteration used before .
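The idea of this marking strategy can be sketched as follows. This is an illustrative sketch only, not the Concha implementation; the function names, the bulk parameter theta, and the switching factor kappa are our own notation.

```cpp
#include <algorithm>
#include <cassert>
#include <numeric>
#include <vector>

// Bulk (Doerfler) marking: return the indices of the fewest elements
// whose squared indicators sum up to at least theta times the total.
std::vector<int> bulkMark(const std::vector<double>& ind2, double theta) {
    std::vector<int> idx(ind2.size());
    std::iota(idx.begin(), idx.end(), 0);
    std::sort(idx.begin(), idx.end(),
              [&](int a, int b) { return ind2[a] > ind2[b]; });
    const double total = std::accumulate(ind2.begin(), ind2.end(), 0.0);
    std::vector<int> marked;
    double acc = 0.0;
    for (int i : idx) {
        if (acc >= theta * total) break;
        marked.push_back(i);
        acc += ind2[i];
    }
    return marked;
}

// Adaptive marking strategy (sketch): refine according to the data
// oscillations only when they exceed the error estimator by the
// factor kappa; otherwise mark by the estimator as usual.
std::vector<int> adaptiveMark(const std::vector<double>& eta2,
                              const std::vector<double>& osc2,
                              double theta, double kappa) {
    const double sumEta = std::accumulate(eta2.begin(), eta2.end(), 0.0);
    const double sumOsc = std::accumulate(osc2.begin(), osc2.end(), 0.0);
    if (sumOsc > kappa * sumEta)    // oscillation-dominated case
        return bulkMark(osc2, theta);
    return bulkMark(eta2, theta);   // standard estimator-based marking
}
```

In the usual case the oscillations are small and the algorithm reduces to standard estimator marking; the switch only triggers on oscillation-dominated meshes, which is what makes the complexity proof work without an extra inner iteration.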

We have extended our results to the case of mixed FE , as well as nonconforming FE . In these cases, a major additional difficulty arises from the fact that the orthogonality relation known from continuous FEM does not hold, either due to the saddle-point formulation or due to the non-nested discrete spaces. In addition, we have considered the case of incomplete solution of the discrete systems. To this end, we have developed a simple adaptive stopping criterion based on comparison of the iteration error with the discretization error estimator .
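The adaptive stopping criterion can be sketched as follows. The names and the structure are illustrative assumptions, not the Concha API: the algebraic solver is iterated only until its error estimate is dominated by a fraction gamma of the discretization error estimator, since further algebraic accuracy cannot reduce the overall error.

```cpp
#include <functional>

// Adaptive stopping criterion (sketch): run an abstract solver step,
// which returns an estimate of the current algebraic (iteration) error,
// until that estimate drops below gamma * etaDisc, where etaDisc is the
// discretization error estimator. Returns the number of iterations used.
int iterateUntilBalanced(const std::function<double()>& step,
                         double etaDisc, double gamma, int maxIter) {
    for (int it = 1; it <= maxIter; ++it) {
        const double algErr = step();
        if (algErr <= gamma * etaDisc) return it;  // balanced: stop early
    }
    return maxIter;  // safeguard against stagnating iterations
}
```

With a linearly converging solver this saves the iterations that a fixed small algebraic tolerance would waste on coarse meshes, where the discretization error dominates anyway.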

A further generalization has been to AFEM on quadrilateral meshes with local refinement allowing for hanging nodes . Three major difficulties had to be overcome. First, the normal derivative of a bilinear function is not constant on an edge, which makes the standard lower bound estimate, used for example in , , unavailable. We have replaced this crucial ingredient by an estimate on the decrease of the estimator under mesh refinement. A further technical point is the fact that the Laplacian of an iso-parametric Q^{1} finite element function is not zero in the interior of the elements. Finally, the complexity estimate for the adaptive solution algorithm relies on a complexity estimate for the local refinement algorithm (notice that additional triangles/quadrilaterals have to be refined in order to fulfill certain criteria). Such an estimate seemed so far only available for the so-called 'newest vertex algorithm', which uses iterated bisection. We have obtained a similar estimate for local refinement of quadrilateral meshes with hanging nodes. The refinement algorithm is constrained to fulfill the regularity assumption that the difference in refinement level of the quadrilaterals surrounding a given node is not larger than one.
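The closure step enforcing this regularity rule can be illustrated on a deliberately simplified one-dimensional strip of cells (the actual algorithm works on two-dimensional quadrilateral meshes; the function name and data layout are our own):

```cpp
#include <cstdlib>
#include <vector>

// Closure of a marked refinement (sketch on a 1D strip of cells):
// first raise the level of every marked cell, then repeatedly refine
// any cell whose neighbor is more than one level finer, until the
// 1-irregularity rule holds (neighboring levels differ by at most one).
// levels[i] is the refinement level of cell i; cells i and i+1 are
// neighbors.
void refineWithClosure(std::vector<int>& levels,
                       const std::vector<int>& marked) {
    for (int m : marked) ++levels[m];
    bool changed = true;
    while (changed) {                 // sweep until the rule holds
        changed = false;
        for (std::size_t i = 0; i + 1 < levels.size(); ++i) {
            if (std::abs(levels[i] - levels[i + 1]) > 1) {
                if (levels[i] < levels[i + 1]) ++levels[i];
                else ++levels[i + 1];
                changed = true;
            }
        }
    }
}
```

The complexity question mentioned above is precisely whether the cells refined by such closure sweeps can be bounded, up to a constant, by the number of cells actually marked over all adaptive steps.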

Recently, we have been able to extend the above-mentioned results on quasi-optimality to the Stokes equations (under review). These results have been presented in , .

Our theoretical studies, which are motivated by the aim to develop better adaptive algorithms, have been accompanied by a software implementation within the Concha library, see Section . It hopefully opens the door to further theoretical and experimental studies. We are currently concerned with generalizations to constant-free estimators, hyperbolic equations, and goal-oriented error estimation .

An accurate discretization method for incompressible elasticity or for the Stokes problem with varying, piecewise constant viscosity has been developed in . This work is based on the NXFEM approach, initially developed in and for elliptic interface problems and compressible elasticity, which gives a rigorous formulation of the very popular XFEM method known from crack simulations in elasticity. In collaboration with Peter Hansbo, Chalmers Technical University (Sweden), and Erik Burman, University of Sussex (UK), we have been able to establish the inf-sup condition, necessary in the incompressible case, using stabilized P^{1}-P^{0} finite elements. A typical computation with our method is shown in Figure .

This research topic, which we wish to expand in different directions, such as robustness with respect to constants, adaptivity, and fast implementation, is related to the objectives of Concha in several respects. At a mature state, we expect that the proposed technology will be able to handle many problems with strongly heterogeneous coefficients and data; it should also lead to a variational formulation of the so-called immersed-boundary method, allowing for rigorous error analysis and optimization algorithms.

One current research direction is the development of shock-capturing methods for compressible flow problems based on NXFEM. The potential of this approach lies in the fact that local mesh refinement could (at least partially) be avoided, which is especially interesting for moving shocks. The free jet problem is an ideal test problem for this purpose.

We have developed a new discontinuous Galerkin scheme for the Stokes equations and the corresponding three-field equations. In this work, which is part of the PhD thesis of Julie Joie, we introduce a modification of the stabilization term in the standard DG-IP method. This allows for a cheaper implementation and has a more robust behavior with respect to the stabilization parameter; we have shown convergence towards the solution of non-conforming finite element methods for linear, quadratic, and cubic polynomial degrees. This scheme has been extended to the three-field formulation of the Stokes problem, which is a further step towards the polymer project of Section . Since it is well known that non-conforming finite element approximations do not satisfy the discrete Korn inequality, an appropriate further stabilization term is introduced. We have analyzed different techniques to do so. Our results have been presented in , . The methods have been implemented in the activities of Section and are available for testing.

Our activities with respect to numerical simulations in this field have been two-fold in order to respond to the difficulties related to simulation of industrial polymer flows outlined in Section .

First, we have been concerned with the theoretical understanding of the properties of the Giesekus model. As outlined above, energy estimates are crucial for the development of robust numerical schemes, see also the recent work on similar questions in the EPI MICMAC , .

Second, we have implemented a mixed non-conforming/DG method for the Giesekus model in the lowest-order case; the result of a computation of a 4:1 contraction, comparing Newtonian flow with the Giesekus model, is shown in Figure . In the same figure, a comparison of the computed profile in the channel with the one obtained by Polyflow^{®}, both on a relatively coarse mesh, is shown. A precise study shows that the results are in good agreement for moderate Weissenberg numbers We; the computation time is smaller by a factor of two for the preliminary version of our code based on triangular meshes. For We > 20, we were not able to obtain a converged solution with the commercial code, whereas our program yields stationary solutions up to .

In view of the extension to three space dimensions, we are currently porting our approach to quadrilateral meshes. Further improvements are expected from the use of adaptivity, as well as from the implementation of adequate iterative solvers.

The long-term goal is to successively build up robust and efficient software tools in order to tackle design problems, such as the design of mixing devices. Our results have been presented in , , .

The construction of finite element methods on quadrilateral and, particularly, hexahedral meshes can be a complicated task; especially the development of mixed and non-conforming methods is an active field of research. The difficulties arise not only from the fact that adequate degrees of freedom have to be found, but also from the non-constancy of the element Jacobians; an arbitrary hexahedron, which we define as the image of the unit cube under a tri-linear transformation, does in general not have planar faces, which implies, for example, that the normal vector is not constant on a side.
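The non-constancy of the normal can be checked directly on the bilinear parametrization of a single face (a small illustrative computation, not part of the library):

```cpp
#include <array>

using Vec3 = std::array<double, 3>;

// Cross product of two vectors in R^3.
Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]};
}

// A face of a tri-linearly mapped hexahedron is the bilinear surface
// through its four corner vertices v00, v10, v01, v11:
//   x(s,t) = (1-s)(1-t) v00 + s(1-t) v10 + (1-s)t v01 + st v11.
// The (unnormalized) normal is d_s x  x  d_t x; it depends on (s,t)
// unless the four vertices are coplanar.
Vec3 faceNormal(const Vec3& v00, const Vec3& v10,
                const Vec3& v01, const Vec3& v11, double s, double t) {
    Vec3 ds, dt;
    for (int i = 0; i < 3; ++i) {
        ds[i] = (1 - t) * (v10[i] - v00[i]) + t * (v11[i] - v01[i]);
        dt[i] = (1 - s) * (v01[i] - v00[i]) + s * (v11[i] - v10[i]);
    }
    return cross(ds, dt);
}
```

For the non-coplanar corners (0,0,0), (1,0,0), (0,1,0), (1,1,1) the normal is (0,0,1) at one corner of the face but (-1,-1,1) at the opposite one, which is exactly the effect that complicates flux degrees of freedom on general hexahedra.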

We have built a new class of finite element functions (named pseudo-conforming) on quadrilateral and hexahedral meshes. The degrees of freedom are the same as those of classical iso-parametric finite elements but the basis functions are defined as polynomials on each element of the mesh. On general quadrilaterals and hexahedra, our method leads to a non-conforming method; in the particular case of parallelotopes, the new finite elements coincide with the classical ones , . This approach is a first step towards higher-order methods on arbitrary hexahedral meshes, see Section .

A special feature of these meshes is the possibility of relatively simple hierarchical local refinement, under the condition that hanging nodes are introduced. We have analyzed such an adaptive method on quadrilateral meshes , see Section .

A coupled 2.5D reservoir-1.5D wellbore model with heat transfer is implemented and analyzed in . The flow equations are Darcy-Forchheimer in the porous medium and the compressible Navier-Stokes equations in the fluid; the result of a typical computation is shown in Figure . The thermomechanical coupling of a petroleum reservoir with a vertical wellbore, both written in 2D axisymmetric form, has also been considered. The motivation is to interpret recorded temperatures in the wellbore as well as a flowrate history at the surface, and thus to better characterize the reservoir. The reservoir is assumed to be a monophasic multi-layered porous medium, described by the Darcy-Forchheimer equation together with a non-standard energy balance which includes viscous dissipation and compressibility effects. Concerning the wellbore, which is a compressible fluid medium governed by the Navier-Stokes equations, a 1.5D model is derived as a conforming approximation of the 2D axisymmetric one, in order to take into account the privileged flow direction and also to reduce the computational cost. The coupling is then achieved by imposing transmission conditions at the perforations and yields, at each time step, a mixed formulation whose operator is mathematically non-standard. A global solving of the coupled problem is implemented. The spatial discretization employs lowest-order Raviart-Thomas elements for the heat and mass fluxes, piecewise constant elements for the pressure and the temperature, and Q_{1} continuous elements for the fluid's velocity; finally, the Lagrange multipliers on the interface are taken piecewise constant. The density is updated by means of a thermodynamic module and the convective terms are treated by appropriate upwind schemes. The well-posedness of the time-discretized coupled problem is proven, at both the continuous and the discrete level. Numerical tests including real cases are carried out, for the separate reservoir and wellbore codes and for the coupled one. The numerical modeling of multi-component multi-phase flows in petroleum reservoirs with heat transfer has been studied in . This work is supported by TOTAL.

Optimal is a research project evaluated by the cluster Aerospace Valley concerning the cooling of the stator of a turbomachinery. This project has three industrial partners (Liebherr, Epsilon, and SIBI) and three academic partners (Universities of Pau, Poitiers, and Toulouse).

The flow problem to be studied in this project involves a compressible viscous flow with heat transfer, see Section . Our contribution will be based on the tools to be developed in Section . Special attention will be paid to the stability of our method in the low Mach number regime required by the considered flow configuration. Comparison with experimental data will be carried out with respect to the Nusselt number.

The experimental part of the project is conducted in collaboration with Mathieu Mory, professor at UPPA, and the post-doctoral position of Stéphane Soubacq, who started to work in 10/2009, is financed by the project. The modeling and numerical simulation is done in collaboration with Abdellah Saboni, professor at UPPA. The project includes a PhD thesis which is going to start in spring 2010.

The objective of this project is the development of a robust simulation tool for polymer flows. The special focus is on the understanding of the high-Weissenberg number problem, and to consequently derive numerical schemes with improved robustness, see Section .

The PhD fellowship of Julie Joie is financed by this project. The objective of this work is to initiate the development of robust solvers for polymer liquids, and it contributes to Section .

The objective of this project is the development of an adaptive simulation tool for compressible fluid flow. It is intended to be used for the evaluation of the potential of these recent methods, which are not a standard part of commercial software.

The post-doc position of this project has been given to Mingxia Li. The PhD fellowship has been deferred to September 2009, since an adequate candidate only became available then.

This project is related to the adaptive methods described in Section .

The LMA has proposed a new Master program starting in 2007, called MMS (Mathématiques, Modélisation et Simulation), with a focus on analysis, modeling, and numerical computation for PDEs; Robert Luce and R. Becker are jointly responsible for this Master program. The core of this education is formed by lectures in four fields: PDE theory, mechanics, numerical analysis, and simulation tools.

This master program includes lectures on physical applications; one of the three proposed application fields is CFD. Lectures are provided by the members of the project; in particular, the following lectures have been given:

Analyse numérique fondamentale, D. Capatina, Robert Luce and Eric Dubach,

Simulation numérique 1, Robert Luce and Eric Dubach,

Analyse numérique des EDP, D. Capatina,

Simulation numérique 2, Robert Luce and Eric Dubach,

Méthodes numériques pour les EDP, M. Amara and R. Becker,

Mécanique des fluides, D. Capatina and Robert Luce,

Mécanique des milieux continus, D. Capatina and Gérard Gagneux,

Simulation numérique 3, D. Capatina and Pierre Puiseux

Mécanique des Fluides et Turbulence, Eric Schall (M2 Physique)

The second semester of the second year is devoted to internships, either in industry (which provides a practical means of collaboration with our industrial partners such as CERFACS, ONERA, TOTAL, and Turbomeca) or in research laboratories. In spring 2009, the members of Concha have supervised two internships:

Elodie Estecahandy, Convergence of adaptive nonconforming finite elements, supervised by David Trujillo,

D. Trujillo and R. Becker have organized an introduction to the usage of the Concha library in September 2009. This three-day training course was attended by members of several laboratories of the University of Pau.

The members of Concha have participated in the following international and national conferences and workshops:

Mamern 09 (J. Joie, N. Seloula),

Numerical approximations of hyperbolic systems with source terms and applications (V. Perrier)

During 2009, the members of Concha served as referees for the following international journals:

*Comm. Numer. Methods Engrg.* (R. Becker),

*Comput. Meth. Appl. Mech. Engrg.* (R. Becker),

*Computers and Fluids* (R. Becker and V. Perrier),

*J. Comput. Phys.* (V. Perrier),

*J. Sci. Comput.* (D. Capatina and D. Trujillo),

*Math. Comput. Simulation* (D. Capatina and D. Trujillo),

*Numer. Math.* (D. Capatina),

*SIAM J. Control Optim.* (R. Becker and R. Luce),

*SIAM J. Numer. Anal.* (R. Becker and R. Luce).