Numerical models are very useful for environmental applications. Several difficulties must be handled simultaneously, in a multidisciplinary context. For example, in geophysics, media are highly heterogeneous and only a few data are available. Stochastic models are thus necessary. Some computational domains are complex 3D geometries, requiring adapted space discretization. Equations modeling flow and transport are transient, also requiring adapted time discretization. Moreover, these equations can be coupled together, or with other equations, in a global nonlinear system. These large-scale models are very time and memory consuming. High performance computing is thus required to run these types of scientific simulations. Supercomputers and clusters are quite powerful, provided that the numerical models are written with a parallel paradigm.

The team SAGE undertakes research on environmental applications and high performance computing and deals with two subjects:

numerical algorithms, involving parallel and grid computing,

numerical models applied to hydrogeology and physics.

These two subjects are highly interconnected: the first topic aims at designing numerical algorithms, which lead to high efficiency on parallel and grid architectures; these algorithms are applied to geophysical models.

Moreover, the team SAGE, in collaboration with other partners, develops a software platform for groundwater numerical simulations in heterogeneous subsurface.

The focus of this topic is the design of efficient and robust parallel numerical algorithms for computational engineering. The objective is to deal with large-scale numerical simulations, which require high performance computing. Algorithms and solvers are applied to problems arising from hydrogeology and geophysics.

A problem at the kernel of most scientific applications consists in solving large linear systems of equations Ax = b, where the matrix A has a sparse structure (many coefficients are zero). The target is Giga-systems with billions (10^9) of unknowns.

Direct methods, based on the factorization A = LU, induce fill-in in the matrices L and U. Reordering techniques can be used to reduce this fill-in, and hence the memory requirements and floating-point operations.

More precisely, direct methods involve two steps: first *factoring* the matrix A into the product A = P_1 L U P_2, where P_1 and P_2 are permutation matrices, L is lower triangular, and U is upper triangular; then solving P_1 L U P_2 x = b by processing one factor at a time. The most time-consuming and complicated step is the first one, which is further broken down into the following steps:

Choose P_1 and diagonal matrices D_1 and D_2 so that P_1 D_1 A D_2 has a “large diagonal.” This helps to ensure the accuracy of the final solution.

Choose P_2 so that the L and U factors of P_1 A P_2 are as sparse as possible.

Perform *symbolic analysis*, i.e. identify the locations of the nonzero entries of L and U.

Factorize P_1 A P_2 into L and U.
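The factor-and-solve sequence above can be sketched in a few lines. The following pure-Python example is a dense educational toy that uses only partial pivoting (a single row permutation, playing the role of P_1; no scaling matrices D_1, D_2, no sparsity ordering P_2, no symbolic analysis), not the sparse, fill-in-aware solvers discussed here:

```python
def lu_factor(A):
    """Factor A (list of lists) as P*A = L*U with partial pivoting.
    LU stores L strictly below the diagonal (unit diagonal implicit) and U above."""
    n = len(A)
    LU = [row[:] for row in A]
    perm = list(range(n))
    for k in range(n):
        # Partial pivoting: bring the largest |entry| of column k to the diagonal.
        p = max(range(k, n), key=lambda i: abs(LU[i][k]))
        LU[k], LU[p] = LU[p], LU[k]
        perm[k], perm[p] = perm[p], perm[k]
        for i in range(k + 1, n):
            LU[i][k] /= LU[k][k]          # multiplier, stored in place of L
            for j in range(k + 1, n):
                LU[i][j] -= LU[i][k] * LU[k][j]
    return LU, perm

def lu_solve(LU, perm, b):
    """Solve A x = b given the factorization, processing one factor at a time."""
    n = len(LU)
    # Forward substitution with L (unit diagonal), applying the permutation to b.
    y = [b[perm[i]] for i in range(n)]
    for i in range(n):
        for j in range(i):
            y[i] -= LU[i][j] * y[j]
    # Back substitution with U.
    x = y[:]
    for i in reversed(range(n)):
        for j in range(i + 1, n):
            x[i] -= LU[i][j] * x[j]
        x[i] /= LU[i][i]
    return x

A = [[2.0, 1.0, 1.0], [4.0, 3.0, 3.0], [8.0, 7.0, 9.0]]
b = [4.0, 10.0, 24.0]
LU, perm = lu_factor(A)
x = lu_solve(LU, perm, b)   # x satisfies A x = b
```

For sparse matrices, the same elimination creates fill-in, which is why the reordering and symbolic-analysis steps above matter in practice.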

The team works on parallel sparse direct solvers and compares existing direct and iterative solvers.

The two main classes of iterative solvers are Krylov methods and multigrid methods.

A Krylov subspace is for example K_m(A, r_0) = span{r_0, A r_0, ..., A^{m-1} r_0}. If the matrix is symmetric positive definite, the Krylov method of choice is the Conjugate Gradient; for symmetric indefinite matrices, there are mainly three methods: SYMMLQ, MINRES and LSQR. For unsymmetric matrices, it is not possible to have both the minimization property and short recurrences. The GMRES method minimizes the residual but must be restarted to limit memory requirements. The BICGSTAB and QMR methods have short recurrences but do not guarantee a decreasing residual. All iterative methods require preconditioning to speed up convergence: the system M^{-1}Ax = M^{-1}b is solved, where M is a matrix close to A such that linear systems Mz = c are easy to solve. A family of preconditioners uses incomplete factorizations A = LU + R, where R is implicitly defined by the level of fill-in allowed in L and U. Other types of preconditioners include algebraic multigrid approaches, approximate inverses and domain decomposition.
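A minimal illustration of preconditioned Krylov iterations: a pure-Python preconditioned Conjugate Gradient with a Jacobi (diagonal) preconditioner M = diag(A). This is a didactic sketch for a small SPD model problem, not the team's parallel solvers:

```python
def pcg(A, b, tol=1e-10, max_iter=100):
    """Preconditioned Conjugate Gradient for SPD A, with M = diag(A) (Jacobi)."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                                 # residual r = b - A x (x = 0)
    z = [r[i] / A[i][i] for i in range(n)]   # apply M^{-1}: a solve with M is easy
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / dot(p, Ap)
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if dot(r, r) ** 0.5 < tol:
            break
        z = [r[i] / A[i][i] for i in range(n)]
        rz_new = dot(r, z)
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x

# 1D Laplacian (tridiagonal, SPD), a typical sparse model problem.
n = 5
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0] * n
x = pcg(A, b)
```

Replacing the diagonal solve by an incomplete-factorization, multigrid or domain-decomposition solve gives the other preconditioner families mentioned above.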

Multigrid methods can be used as standalone solvers or as preconditioners. They can be either geometric or algebraic.

The team studies preconditioners for Krylov methods and uses multigrid methods. The team also works on the development of parallel software for iterative solvers (PCG, GMRES, subdomain methods) and least-squares solvers (QR factorization).

Domain decomposition methods are hybrid or semi-iterative methods combining iterative and direct techniques. They can be based on the alternating Schwarz method when subdomains overlap, or on the Schur complement method without overlapping. Schwarz methods can be used as preconditioners for Krylov methods, or directly with an acceleration based on Aitken extrapolation. Schur methods lead to a reduced system, which is solved by a preconditioned Krylov method.
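The overlapping alternating Schwarz idea can be shown on a 1D Poisson problem, -u'' = 1 on (0, 1), split into two overlapping subdomains: each subdomain solve is a small tridiagonal system, and boundary values are exchanged between subdomains until convergence. A sequential didactic sketch (a real implementation solves the subdomains in parallel):

```python
def thomas(sub, diag, sup, rhs):
    """Solve a tridiagonal system by the Thomas algorithm."""
    m = len(rhs)
    dg, rh = diag[:], rhs[:]
    for i in range(1, m):
        w = sub[i] / dg[i - 1]
        dg[i] -= w * sup[i - 1]
        rh[i] -= w * rh[i - 1]
    x = [0.0] * m
    x[-1] = rh[-1] / dg[-1]
    for i in range(m - 2, -1, -1):
        x[i] = (rh[i] - sup[i] * x[i + 1]) / dg[i]
    return x

# -u'' = 1 on (0, 1), u(0) = u(1) = 0; exact solution u(x) = x(1 - x)/2.
n, h = 19, 1.0 / 20
u = [0.0] * (n + 2)            # iterate; indices 0 and n+1 are boundary points
dom1 = list(range(1, 13))      # subdomain 1: interior points 1..12
dom2 = list(range(8, n + 1))   # subdomain 2: interior points 8..19 (overlap 8..12)

for sweep in range(30):        # alternating Schwarz sweeps
    for dom in (dom1, dom2):
        m = len(dom)
        rhs = [h * h] * m                # h^2 * f with f = 1
        rhs[0]  += u[dom[0] - 1]         # Dirichlet data from the current iterate
        rhs[-1] += u[dom[-1] + 1]
        loc = thomas([-1.0] * m, [2.0] * m, [-1.0] * m, rhs)
        for k, i in enumerate(dom):
            u[i] = loc[k]

err = max(abs(u[i] - (i * h) * (1 - i * h) / 2) for i in range(n + 2))
```

The convergence rate depends on the overlap width, which is precisely what Aitken acceleration or a Krylov wrapper improves.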

The team studies these various aspects of domain decomposition methods.

For linear least-squares problems, direct methods are based on the normal equations A^T A x = A^T b, using either a Cholesky factorization of A^T A or a QR factorization of A, whereas the most common Krylov iterative method is LSQR. If the discrete problem is ill-posed, regularization such as Tikhonov regularization or a Truncated Singular Value Decomposition (TSVD) is required. For large matrices, the so-called complete factorization is also useful. The first step is a pivoted QR factorization, followed by a second factorization A = U (T 0; 0 E) V^T, where U and V are orthogonal matrices and E is negligible with respect to the chosen threshold. Such a decomposition is a robust rank-revealing factorization and it provides for free the Moore-Penrose generalized inverse. Recently, efficient QR factorization software libraries became available, but they do not consider column or row permutations based on numerical considerations, since the corresponding orderings often lead to an intractable level of fill-in.
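A minimal illustration of the QR route for least squares: factor A = QR by modified Gram-Schmidt, then solve Rx = Q^T b by back substitution. This is a dense toy for a full-rank matrix, with none of the pivoting or rank-revealing machinery discussed above:

```python
def mgs_qr(A):
    """Modified Gram-Schmidt QR of an m x n matrix with full column rank."""
    m, n = len(A), len(A[0])
    Q = [[A[i][j] for j in range(n)] for i in range(m)]  # columns orthonormalized in place
    R = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(j):
            R[k][j] = sum(Q[i][k] * Q[i][j] for i in range(m))
            for i in range(m):
                Q[i][j] -= R[k][j] * Q[i][k]
        R[j][j] = sum(Q[i][j] ** 2 for i in range(m)) ** 0.5
        for i in range(m):
            Q[i][j] /= R[j][j]
    return Q, R

def lstsq(A, b):
    """Least-squares solution of min ||Ax - b||_2 via A = QR, then R x = Q^T b."""
    Q, R = mgs_qr(A)
    n = len(R)
    qtb = [sum(Q[i][j] * b[i] for i in range(len(b))) for j in range(n)]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (qtb[i] - sum(R[i][j] * x[j] for j in range(i + 1, n))) / R[i][i]
    return x

# Fit a line y = c0 + c1 * t through four points (overdetermined system).
ts = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]          # these points lie exactly on y = 1 + 2t
A = [[1.0, t] for t in ts]
c = lstsq(A, ys)
```

The QR route avoids forming A^T A, whose condition number is the square of that of A; this is why it is preferred over the normal equations for ill-conditioned problems.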

The team studies iterative Krylov methods for regularized problems, as well as rank-revealing QR factorizations.

Nonlinear methods to solve F(x) = 0 include fixed-point methods, nonlinear stationary methods, the secant method and the Newton method. The team studies Newton-Krylov methods, where the linearized problem is solved by a Krylov method, as well as Broyden methods and Proper Orthogonal Decomposition methods.
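The structure of a Newton iteration for F(x) = 0 can be sketched on a tiny 2x2 system (an invented example): each step solves the linearized system J(x) dx = -F(x). Here the inner solve is direct, by Cramer's rule; a Newton-Krylov method would replace exactly this inner solve by GMRES or another Krylov iteration:

```python
def newton(F, J, x, tol=1e-12, max_iter=50):
    """Newton's method for F(x) = 0: solve J(x) dx = -F(x) at each step.
    The 2x2 inner solve is direct here; Newton-Krylov would use a Krylov method."""
    for _ in range(max_iter):
        fx = F(x)
        if max(abs(v) for v in fx) < tol:
            break
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        dx = [(-fx[0] * d + fx[1] * b) / det,   # Cramer's rule on J dx = -F
              (-a * fx[1] + c * fx[0]) / det]
        x = [x[0] + dx[0], x[1] + dx[1]]
    return x

# Toy system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 4.0, v[0] * v[1] - 1.0]
J = lambda v: [[2 * v[0], 2 * v[1]], [v[1], v[0]]]
root = newton(F, J, [2.0, 0.5])
```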

Another subject of interest is time decomposition methods. The idea is to divide the time interval into subintervals, to apply a time integration scheme within each subinterval, and to apply a nonlinear correction at the ends of the subintervals. This can be applied to explosive or oscillatory problems.
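The predict-and-correct structure of such time-parallel schemes can be sketched on the scalar test equation y' = -y (a generic parareal-style toy, not the team's specific algorithm): a cheap coarse propagator predicts the values at subinterval boundaries, a fine propagator refines each subinterval (in parallel in a real implementation), and a correction updates the boundary values iteratively:

```python
import math

lam = -1.0                        # y' = lam * y, y(0) = 1
T, P = 2.0, 8                     # time horizon, number of subintervals
dT = T / P

def coarse(y, dt):                # one explicit Euler step (coarse propagator G)
    return y * (1 + lam * dt)

def fine(y, dt, steps=100):       # many Euler steps (fine propagator F)
    h = dt / steps
    for _ in range(steps):
        y = y * (1 + lam * h)
    return y

# Initial coarse prediction of the values at subinterval boundaries.
U = [1.0]
for n in range(P):
    U.append(coarse(U[n], dT))

# Correction sweeps: U_{n+1} <- G(U_n^new) + F(U_n^old) - G(U_n^old).
for k in range(P):                # converges in at most P iterations
    Fu = [fine(U[n], dT) for n in range(P)]   # parallel across subintervals
    Gu = [coarse(U[n], dT) for n in range(P)]
    V = [1.0]
    for n in range(P):
        V.append(coarse(V[n], dT) + Fu[n] - Gu[n])
    U = V

# Reference: the sequential fine solution at the subinterval boundaries.
ref = 1.0
for n in range(P):
    ref = fine(ref, dT)
```

The speed-up comes from the fine solves being independent within one sweep; the iteration reproduces the sequential fine solution at the boundaries.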

Let us consider the problem of computing some extremal eigenvalues of a large sparse and symmetric matrix A. The Davidson method is a subspace method that builds a sequence of subspaces onto which the initial problem is projected. At every step, approximations of the sought eigenpairs are computed: let V_m be an orthonormal basis of the subspace at step m and let (λ, z) be an eigenpair of the matrix H_m = V_m^T A V_m; then the Ritz pair (λ, x = V_m z) is an approximation of an eigenpair of A. The specificity of the method comes from how the subspace is augmented for the next step. In contrast to the Lanczos method, which is the reference method, the subspaces are not Krylov subspaces, since the new vector t = x + y which is added to the subspace is obtained by an acceleration procedure: the correction y is obtained by an exact Newton step (Jacobi-Davidson method) or an inexact Newton step (Davidson method). These methods bring a substantial improvement over the Lanczos method when computing the eigenvalues of smallest amplitude. For that reason, the team considered Davidson methods to compute the smallest singular values of a matrix B, by applying them to the matrix B^T B.
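The Rayleigh-Ritz projection step common to these subspace methods can be shown on a tiny example: with an orthonormal basis V_m, form H_m = V_m^T A V_m, take an eigenpair (λ, z) of H_m, and build the Ritz pair (λ, V_m z). A pure-Python sketch with an invented 3x3 symmetric A and a 2-dimensional subspace, where the projected 2x2 eigenproblem is solved in closed form:

```python
import math

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]

# Orthonormal basis V_m of a 2-dimensional subspace (here simply span{e1, e2}).
V = [[1.0, 0.0],
     [0.0, 1.0],
     [0.0, 0.0]]

# Projected matrix H_m = V^T A V (2 x 2, symmetric).
H = [[sum(V[i][p] * A[i][j] * V[j][q] for i in range(3) for j in range(3))
      for q in range(2)] for p in range(2)]

# Smallest eigenvalue of the symmetric 2x2 matrix H, in closed form.
a, b, d = H[0][0], H[0][1], H[1][1]
lam = (a + d) / 2 - math.sqrt(((a - d) / 2) ** 2 + b * b)
# Corresponding eigenvector of H (non-degenerate case), normalized.
z = [b, lam - a]
nz = math.sqrt(z[0] ** 2 + z[1] ** 2)
z = [z[0] / nz, z[1] / nz]

# Ritz pair (lam, x = V z): an approximate eigenpair of A.
x = [sum(V[i][p] * z[p] for p in range(2)) for i in range(3)]
residual = [sum(A[i][j] * x[j] for j in range(3)) - lam * x[i] for i in range(3)]
```

The Galerkin condition guarantees V^T residual = 0; Davidson-type methods differ from Lanczos only in how the next basis vector is chosen from this residual.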

In several applications, the eigenvalues of a nonsymmetric matrix are often needed to decide whether they belong to a given part of the complex plane (e.g. the half-plane of complex numbers with negative real part, or the unit disc). However, since the matrix is not exactly known (at best, to the precision of the floating-point representation), the result of the computation is not always guaranteed, especially for ill-conditioned eigenvalues. Actually, the problem is not to compute the eigenvalues precisely, but to characterize whether they lie in a given region of the complex plane. For that purpose, the notion of ε-spectrum, or equivalently the notion of pseudospectrum, was introduced independently by Godunov and Trefethen. Several teams proposed software to compute pseudospectra, including the SAGE team with the software PPAT, described below.
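The function f(z) = σ_min(A - zI) underlying the pseudospectrum can be evaluated directly for small matrices. Below is a didactic sketch that samples f on a grid for an invented 2x2 nonnormal matrix, using the closed-form singular values of a 2x2 complex matrix; PPAT itself instead follows the level curves adaptively and in parallel:

```python
import math

def sigma_min_2x2(M):
    """Smallest singular value of a 2x2 complex matrix, in closed form:
    singular values are the square roots of the eigenvalues of M^H M."""
    t = sum(abs(M[i][j]) ** 2 for i in range(2) for j in range(2))  # trace(M^H M)
    d = abs(M[0][0] * M[1][1] - M[0][1] * M[1][0]) ** 2             # det(M^H M)
    disc = max(t * t - 4 * d, 0.0)
    return math.sqrt((t - math.sqrt(disc)) / 2)

A = [[1.0, 10.0],
     [0.0, 1.2]]    # nonnormal: eigenvalues 1 and 1.2, but a large pseudospectrum

def f(z):
    M = [[A[0][0] - z, A[0][1]], [A[1][0], A[1][1] - z]]
    return sigma_min_2x2(M)

# Sample f on a small grid around the eigenvalues; the points with f(z) < eps
# belong to the eps-pseudospectrum of A.
eps = 0.05
inside = [complex(x / 10, y / 10)
          for x in range(-20, 40) for y in range(-20, 21)
          if f(complex(x / 10, y / 10)) < eps]
```

Even for this tiny matrix the ε-pseudospectrum extends far beyond the two eigenvalues, which is exactly the ill-conditioning phenomenon discussed above.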

In our applications, we use stochastic modelling in order to take into account geophysical variability. From a numerical point of view, it amounts to running multiparametric simulations. The objective is to use the power of heterogeneous parallel and distributed architectures.

The team has chosen a particular domain of application, which is geophysics. In this domain, many problems require solving large-scale systems of equations arising from the discretization of coupled models. Emphasis is put on hydrogeology, but the team also investigates geodesy, submarine acoustics, geological rock formation and heat transfer in soil. One of the objectives is to use high performance computing in order to tackle 3D large-scale computational domains with complex physical models.

This is joint work with Geosciences Rennes, the University of Le Havre and the CDCSP at the University of Lyon. It is also done in the context of the GdR Momas and an Andra grant.

Many environmental studies rely on modelling geo-chemical and hydrodynamic processes. Some issues concern aquifer contamination, underground waste disposal, underground storage of nuclear wastes, land-filling of waste, clean-up of former waste deposits. Simulation of contaminant transport in groundwater is a highly complex problem, governed by coupled linear or nonlinear PDAEs. Moreover, due to the lack of experimental data, stochastic models are used for dealing with heterogeneity. The main objective of the team is to design and to implement efficient and robust numerical models, including Uncertainty Quantification methods.

Recent research showed that rock solid masses are in general fractured and that fluids can percolate through networks of inter-connected fractures. Rock media are thus interesting for water resources as well as for the underground storage of nuclear wastes. Fractured media are by nature very heterogeneous and multi-scale, so that homogenisation approaches are not relevant. The team develops a numerical model for fluid flow and contaminant transport in three-dimensional fracture networks.

The output is a parallel scientific platform running on clusters, grids and machines available in computer centers.

PPAT (Parallel PATh following software) is a parallel code, developed by D. Mezher, W. Najem (University of Saint-Joseph, Beirut, Lebanon) and B. Philippe. This tool can follow the contours of a functional from the complex plane to the positive real numbers. The present version is adapted for determining the level curves of the function f(z) = σ_min(A - zI), which gives the pseudospectrum of the matrix A.

The algorithm is reliable: it does not assume that the curve has a derivative everywhere. The process is proved to terminate even when taking into account roundoff errors. The structure of the code spawns many independent tasks which provide a good efficiency in the parallel runs.

The software can be downloaded under the GPL licence from:
http://

Doing linear algebra with sparse and dense matrices is somewhat difficult in scientific computing. Specific libraries do exist to deal with this area (*e.g.* BLAS and LAPACK for dense matrices, SPARSKIT for sparse ones), but their use is often cumbersome and tedious, mainly because of the large number of arguments which must be passed. Moreover, classical libraries do not provide dynamic allocation. Lastly, the two types of storage (sparse and dense) are so different that the user must know in advance which storage is used, in order to declare the corresponding numerical arrays correctly.

MUESLI is designed to help in dealing with such structures, and it provides the convenience of coding in Fortran with a matrix-oriented syntax; its aim is therefore to speed up the development process and to enhance portability. It is a Fortran 95 library split in two modules: (i) FML (Fortran Muesli Library) contains all necessary material to work numerically with a dynamic array (dynamic in size, type and structure), called `mfArray`; (ii) FGL (Fortran Graphics Library) contains graphical routines (some of them interactive) which use the `mfArray` objects.

MUESLI includes some parts of the following numerical libraries: Arpack, Slatec, SuiteSparse, Triangle, BLAS and LAPACK.

Linux is the platform which has been used for developing and testing MUESLI. Whereas the FML part (numerical computations) should work on any platform (*e.g.* Win32, Mac OS X, Unix), the FGL part is intended to be used only with X11 (*i.e.* under all UNIXes).

The latest version of MUESLI is 2.3.0 (8 October 2010). More information can be found at:
http://

When dealing with nonlinear free-surface flows, mixed Eulerian-Lagrangian methods have numerous advantages, because one can follow marker particles distributed on the free surface and then compute the surface position accurately, without the need for interpolation over a grid. Besides, if the liquid velocity is large enough, the Navier-Stokes equations can be reduced to a Laplace equation, which is numerically solved by a Boundary Element Method (BEM); this latter method is very fast and efficient because computations occur only on the fluid boundary. This method is applied to the spreading of a liquid drop impacting a solid wall and to droplet formation at a nozzle; applications include, among others, ink-jet printing processes.

The code used (CANARD) has been developed with Jean-Luc Achard (LEGI, Grenoble) for fifteen years and is used today mainly through collaborations with Carmen Georgescu at UPB (University Politehnica of Bucharest, Romania), and with Alain Glière (CEA-LETI, Grenoble).

Website:
http://

The software platform H2OLab is developed in collaboration with J.-R. de Dreuzy, from Geosciences, University of Rennes 1, with A. Beaudoin, from the University of Le Havre, and with D. Tromeur-Dervout, from the University of Lyon.

The platform H2OLab (previously Hydrolab) aims at modeling flow and transport of solute in highly heterogeneous porous or fractured media. Numerical models currently include steady-state flow in saturated media and transport by advection-diffusion. Physical models can be either a porous medium or a network of fractures. For flow equations, H2OLab uses a mixed finite element method or a finite volume method and it includes a particle tracker for transport equations. The platform is organized in software components and relies as far as possible on existing free libraries, such as sparse linear solvers. Because the target is large computational domains, the platform makes use of high performance computing and several modules have a parallel version. The target is currently parallel architectures with distributed memory. The code is written in C++ and uses the MPI library for parallel computing. Most modules are fully generic so that they can be used by any application within the platform.

The platform is currently implemented on Windows, Linux and AIX Power 6 (IDRIS) systems. The objective is to develop a free software available on the Web; it is managed using the Inria GForge. The platform is composed of software and databases. Currently, four software packages are registered at the APP: PARADIS, MP_FRAC, GW_NUM, GW_UTIL.

A benchmark book is currently under development, with the aim of gathering many test cases, showing the platform's capabilities and testing/comparing results with those of the scientific community. This benchmark book is developed with UFZ (Leipzig, Germany) and UPC (Barcelona, Spain).

The platform has been improved at several levels. For simulations in porous media, the package PARADIS has new modules:

A fully parallel generation of a random field (permeability, porosity, ...) on 2D or 3D grids has been developed. It offers the choice between different statistical distributions, correlated or not. The new method is based on a spectral method and makes use of the FFTW2 library.

The output results have been organized in a different way: results of the statistical computations are now created dynamically as they are computed. This new feature remains consistent with the database structure.

Outputs of relevant quantities can use the VTK format, so that visualization can now be done with the Paraview software.

For the simulation in fractured media, three new projects have been added to the package MP_FRAC:

MORTAR, to deal with non-conforming meshes at the intersections between fractures.

D3_FLOW_SOLVER, to test different solvers on different input matrices.

SOLVER_SCHUR, to solve linear problems using a Schur domain decomposition method (based on Cholmod).

Non-regression tests have been added for the different launchers, in order to check, at each addition of new functionality, that all existing functions in the code still give the expected results:

tests are performed in 32-bit and 64-bit modes;

tests are both sequential and parallel;

tests are linked with the benchmark book.

To improve the use of the platform, we developed a set of rules governing the input and output data. There are many input parameters for the simulations, and there exist some constraints between them. Indeed, some parameters are unused in some cases (for example, 3D-related parameters when doing 2D simulations), and some associations are forbidden. We used XML to define validation rules. These rules have to be defined by the developers when they add new parameters. They are used to generate an HTML form containing only the relevant parts (i.e., if we are doing a 2D simulation, 3D parameters do not appear). While the form is being filled in, the rules are dynamically checked. The user gets warnings if the entries are not valid, with hints to make them valid. After submitting the form, the user gets a valid parameter file. It is also possible to load an existing file in order to check its validity or to modify it. This work was done with François Hamonic, an L3 MIAGE student at the University of Rennes 1, during a 3-month internship.
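The rule-checking idea can be illustrated with a minimal sketch. The rule format, parameter names and the `validate` helper below are all hypothetical (the platform's actual XML schema is not reproduced here); the sketch only shows how XML-encoded constraints can be checked against a parameter set:

```python
import xml.etree.ElementTree as ET

# Hypothetical rule file: each <rule> forbids a parameter when a condition holds.
RULES_XML = """
<rules>
  <rule when="dimension" equals="2" forbids="mesh_size_z"/>
  <rule when="dimension" equals="2" forbids="domain_length_z"/>
</rules>
"""

def validate(params):
    """Return warnings for parameters that the rules forbid in this context."""
    warnings = []
    for rule in ET.fromstring(RULES_XML).findall("rule"):
        cond = rule.get("when")
        value = rule.get("equals")
        forbidden = rule.get("forbids")
        if str(params.get(cond)) == value and forbidden in params:
            warnings.append("parameter '%s' is not allowed when %s = %s"
                            % (forbidden, cond, value))
    return warnings

# A 2D simulation should not carry 3D-only parameters.
params = {"dimension": 2, "mesh_size_x": 0.1, "mesh_size_z": 0.1}
issues = validate(params)
```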

GPREMS is a numerical library for solving general linear systems on distributed memory computers. There is no assumption on the structure of the input matrix. The solver automatically performs the required permutation to distribute the data to the working processors. The solution of the linear system is iteratively found by performing several cycles of GMRES(m). This process is preconditioned at each step by one iteration of the multiplicative Schwarz method.

GPREMS was registered at APP and will be distributed with a free licence.

This work is done in the context of the Cinemas2 and Libraero contracts. It is pursued in collaboration with the INRIA team Grand Large.

Following our work on the parallel hybrid solver based on GMRES preconditioned by the multiplicative Schwarz preconditioner, we have written a report describing the main techniques that are used. In this report, we present in detail the hybrid approach based on the direct/iterative solution of the linear system being solved. The two levels of parallelism defined in the solver arise naturally from the algebraic domain decomposition that is used. We have reported several results that demonstrate the robustness of this solver compared to other similar solvers. The resulting software library, named GPREMS, is now hosted on the INRIA GForge.

This work is done in the context of the Cinemas2 and Libraero contracts. It is also done in collaboration with the joint INRIA/NCSA laboratory on petascale computing.

In this work, we consider the parallel restarted GMRES preconditioned either by the additive or the multiplicative Schwarz preconditioner. The main observation is that, during the iterative process, the residual norm stagnates if the size of the Krylov basis is not large enough. This behavior also appears when a large number of subdomains is used in the domain decomposition preconditioner. Our aim in this work is therefore to accelerate the convergence of the GMRES method by using spectral information gathered during the iterative process; this approach, known as deflated GMRES, has been implemented in GPREMS and PETSc. Numerical experiments have shown valuable results on real test cases provided in the context of the LIBRAERO project. These results have been discussed during the third and fourth workshops of the joint INRIA/NCSA laboratory for petascale computing.

This work is done in collaboration with N. Nassif, from the American University of Beirut, Lebanon.

We have developed a Ratio-based Parallel Time Integration (RaPTI) algorithm for solving initial value problems, in a time-parallel way. RaPTI algorithm uses a time-slicing and rescaling technique, with some resulting similarity properties, for generating a coarse grid and providing ratio-based predictions of the starting values at the onset of every time-slice. The correction procedure is performed on a fine grid and in parallel, yielding some gaps on the coarse grid. Then, the predictions are updated and the process is iterated, until all the gaps are within a given tolerance. RaPTI algorithm is applied to three problems: a membrane problem, a reaction-diffusion problem and a satellite trajectory in a J2-perturbed motion. In some rare cases of invariance, it yields a perfect parallelism. In the more general cases of similarity, it yields good speed-ups , .

This work is done in the context of the Micas project and the Hemera project.

In hydrogeology, the description of the underground properties is very poor, mainly due to their complex heterogeneity and to the lack of measurements. As a consequence, we rely on stochastic models of geometrical and physical properties. We have identified three levels of distributed and parallel computing. At the simulation level, we choose to define distributed-memory algorithms and to rely on the MPI library for communications between processors. The kernel of flow simulations consists in solving a sparse linear system. The intermediate level is the non-intrusive Uncertainty Quantification method, currently Monte-Carlo. We have designed a facility for running the set of random simulations by choosing either a parallel approach with MPI or a distributed approach with a grid middleware. At the multiparametric level, we choose a distributed approach, as is done in most projects on computational grids. We have carried out numerical experiments with the first two levels, using MPI. This application is one of the scientific challenges of the Hemera project.

This work is done in collaboration with M. Moakher, from ENIT, Tunisia.

The geoid is the level surface of the Earth's gravity field at sea level. We aim at finding an equivalent mass system which can generate a given geoid. The mathematical formulation of the problem is a nonlinear least-squares problem in the Hilbert space of harmonic functions. We pursue numerical simulations. A paper is in preparation.

This subject takes place in the ARPHYMAT (Archaeology, Physics, Mathematics) interdisciplinary project, linked to the archaeological/human sciences program "Man and fire: towards a comprehension of the evolution of thermal energy control and its technical, cultural and paleo-environmental consequences". Both physical and numerical approaches are used to understand the functioning mode and the thermal history of the studied structures. The main topic of this project concerns the simulation of forced evaporation of water in a saturated soil.

2D and 3D-axisymmetric configurations of this physical problem have been solved, using the Apparent Capacity Method. Emphasis is put on performance: the Jacobian matrix is stored in a sparse structure, and the linear systems of the Newton iterations (inside the BDF method) are solved by the UMFPACK part of the SuiteSparse package. All these modifications are done inside MUESLI, giving an easy-to-use programming interface for the user. We have proposed a new global approach to solve the system of coupled equations.

In addition, we have investigated heat conduction in a real 3D saturated porous medium. In terms of numerics, the discretization is based on the hybrid mixed finite element method in space and a semi-implicit scheme in time. To solve this problem, we have modified `TRACES` (Transport of RadioActive Elements in Subsurface, 2004, P. Ackerer and H. Hoteit, IMFS, Université de Strasbourg), a computer program for the simulation of flow and reactive transport in saturated porous media. The model has been applied to prehistoric fires.

Besides, we have introduced a robust numerical strategy to estimate the temperature-dependent thermal capacity, the thermal conductivity and the porosity of a saturated porous medium, based on the knowledge of heating curves at selected points in the medium. In order to solve the inverse problem, the least-squares criterion (in which the sensitivity coefficients appear) has been used.

Recently, the proposed method for the forward problem has been applied to evaporation in heterogeneous porous media: 1D and 2D simulations have been obtained, where we have assumed that the soil was made of blocks of different permeability.

This work is done in the framework of a project funded by the Region Bretagne. A PhD thesis (Merline Djouwe) began in February 2009, co-advised with Patrick Richard, from the Institute of Physics of Rennes (IPR).

The study concerns the rheology of granular media flowing out of a silo. The two objectives are (i) to understand the rheological properties of such granular flows, and especially the effect of the micro-mechanical characteristics, and (ii) to determine the most efficient ways to decompact and unblock these systems. We expect that the results will help the understanding of the jamming transition, which is of fundamental interest for granular matter.

A first code, based on molecular dynamics (particle interactions), has been adapted to the silo geometry; it can process up to 20 000 particles, but this is not sufficient to obtain an accurate description of the granular flow. We are therefore implementing another numerical code (finite differences on staggered grids) based on a continuous physical model. This second approach avoids the limitation on the number of particles, and we expect it to be both efficient and accurate.

This work is done in the context of the LIRIMA laboratory. A PhD thesis (Fateh Saci) began in January 2010, co-advised with Fatma-Zohra Nouri, professor in the Mathematics Department at the Badji Mokhtar University of Annaba (Algeria).

This work concerns the numerical simulation of fluid flows (both linear Stokes and nonlinear Navier-Stokes) in geometries with small deformations, based on analytical variable transformations, in order to solve the equations on a simpler geometrical domain. The selected approach combines a collocation method with an asymptotic development method based on a small parameter.

This work is done in the context of the MOMAS GNR and the contract with Andra.

Reactive transport models are complex nonlinear Partial Differential Algebraic Equations (PDAE), coupling the transport engine with the reaction operator. We consider here chemical reactions at equilibrium. We have pursued our work on a global approach, based on a method of lines and a DAE solver. We also started to analyze SNIA and SIA approaches (Sequential Non-Iterative and Sequential Iterative Approaches), to find conditions on the time step.

This work is done in collaboration with A. Beaudoin, from the University of Le Havre (who moved to the University of Poitiers in September 2010), J.-R. de Dreuzy, from the department of Geosciences at the University of Rennes 1 (who is on leave for three years at UPC, Barcelona, Spain), and G. Pichot (who was at the University of Le Havre until September 2010). It is done in the context of the Micas project.

We have pursued our work on simulating flow and solute transport in 2D domains, where the permeability field is highly heterogeneous and is modeled as a random field. We have worked on a new method to generate the permeability field, based on spectral simulation. This method makes it possible to improve the criteria required for the simulation of the permeability field. For a log-normal exponentially correlated field, in the two-dimensional case, we have defined several conditions on the domain size and the mesh size to ensure a satisfying permeability field generation. We are currently working on establishing those criteria for 3D simulations. A paper is in preparation.

Stochastic computations are performed using a Monte-Carlo method. The numerical analysis of the Monte-Carlo method has been done in the case of an isotropic and constant dispersion tensor, under stronger assumptions, proving the convergence of the method and yielding an upper bound for the errors committed when estimating the spreading and the macro-dispersion of the solute. In the case of the spreading, we have obtained an error bound in terms of N, the number of realizations of the permeability field, M, the number of particles for each realization of the permeability field, Δt, the time step for the computation of the particle trajectories, and h, the mesh size of the spatial discretization for the computation of the velocity field. A paper is in preparation.
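The N-dependence of the Monte-Carlo error can be sketched generically (a toy scalar example, not the flow-and-transport simulation): estimating the mean of a lognormal quantity exp(G), G ~ N(0,1), whose exact value is e^{1/2}. Each sample here stands in for one expensive random simulation:

```python
import math
import random

def mc_estimate(n_samples, seed=0):
    """Monte-Carlo estimate of E[exp(G)], G ~ N(0,1); exact value is e^{1/2}."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        total += math.exp(rng.gauss(0.0, 1.0))
    return total / n_samples

exact = math.exp(0.5)
est = mc_estimate(20000)
# The statistical error decreases like O(1/sqrt(N)), which is why many
# independent realizations, run in parallel, are needed.
```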

Our ongoing research also concerns the numerical study of convergence when we vary these numerical parameters. The main goal is to provide optimal parameters for studying the macro-dispersion. Large-scale simulations are used as a reference for tuning the parameters. We run 100 Monte-Carlo simulations, each of which takes 3.2 hours with 128 processors on the IBM Power 6 at IDRIS. We observe numerically that the convergence of the Monte-Carlo iterations is quite fast; we study how the ergodic properties of the random permeability field can explain this behavior. A paper is in preparation.

We have also extended the transport model to include a hydrodynamic dispersion effect. We study the different numerical approaches for dealing with discontinuities in the dispersion coefficient.

This work is done in collaboration with J.-R. de Dreuzy, from the department of Geosciences at the University of Rennes 1 (who is on leave for three years at UPC, Barcelona, Spain), and G. Pichot (who was at the University of Le Havre until September 2010). It is done in the context of the Micas project.

We simulate flow within fractures lying in an impervious rock matrix. Discrete Fracture Networks are complex 3D structures made of 2D domains intersecting each other. A first challenge comes with the meshing of such networks, where the mesh must be of good quality and must not contain too many cells. In a previous work, we designed a method to generate a conforming mesh of good quality. In fractured media, flow is highly channelled in a small number of fractures. These fractures need to be finely meshed, while others can be coarsened. In order to reduce the number of cells, each fracture is meshed independently, resulting in a non-conforming mesh at the intersections. We addressed this difficulty within the numerical method by developing a Mortar method. For networks where intersections neither cross nor overlap, this Mortar method is based on pairwise relations. The next step was to generalize the previous method to any stochastic network. This was a challenging task, since fractures can be highly intricate. We developed a new approach based on a combination of pairwise Mortar relations with additional relations for the overlapping parts. A paper has been submitted. The Mortar method has been implemented in the H2OLab platform, in the module called MORTAR.

Once the network is meshed and a mixed finite element method is applied, the flow computation consists in solving a large sparse linear system. To test different solvers, we have developed an interface called D3_FLOW_SOLVER (see section ). A second challenge is to take advantage of the matrix structure to use domain decomposition methods and efficient parallel solvers. Indeed, the matrix is naturally partitioned into blocks such that a Schur complement can be defined. The reduced system can then be solved by a Preconditioned Conjugate Gradient method. An efficient choice is the so-called Neumann-Neumann preconditioner. However, floating subdomains can arise and lead to singular blocks. We have developed two strategies to overcome this difficulty. First, the subdomain decomposition consists in defining connected groups of fractures, so that the kernel of each block is at most of dimension 1. This is achieved by using a graph partitioning algorithm implemented in the Scotch library. Second, we defined an algebraic update of the blocks associated with floating subdomains. Moreover, this approach allows us to reuse the factorizations of the blocks in both the preconditioning matrix and the Schur matrix . We have developed a Matlab prototype implementing this method. It has been validated on a set of various fracture networks. We are currently developing a C++ library called SOLVER_SCHUR, integrated in the H2OLab platform (see section ). Parallel computations are done on an AIX platform (IBM Power 6) at IDRIS.
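The Schur complement reduction described above can be sketched on a dense toy problem (random SPD matrix, no Neumann-Neumann preconditioner, interior/interface sizes invented): eliminate the interior unknowns, solve the reduced interface system by conjugate gradient with a matrix-free Schur application, then back-substitute.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)

# Toy SPD system partitioned into interior (I) and interface (G) blocks,
# mimicking the structure obtained after subdomain decomposition.
n_i, n_g = 40, 10
M = rng.standard_normal((n_i + n_g, n_i + n_g))
A = M @ M.T + (n_i + n_g) * np.eye(n_i + n_g)   # SPD by construction
A_ii, A_ig = A[:n_i, :n_i], A[:n_i, n_i:]
A_gi, A_gg = A[n_i:, :n_i], A[n_i:, n_i:]
b_i, b_g = rng.standard_normal(n_i), rng.standard_normal(n_g)

# Factor the interior block once; reuse it in every Schur application
# (a sparse factorization in practice, explicit inverse only for the toy).
A_ii_inv = np.linalg.inv(A_ii)

def schur_apply(x_g):
    """Matrix-free application of S = A_gg - A_gi A_ii^{-1} A_ig."""
    return A_gg @ x_g - A_gi @ (A_ii_inv @ (A_ig @ x_g))

S = LinearOperator((n_g, n_g), matvec=schur_apply, dtype=float)
g = b_g - A_gi @ (A_ii_inv @ b_i)        # reduced right-hand side
x_g, info = cg(S, g)                     # conjugate gradient on the interface
x_i = A_ii_inv @ (b_i - A_ig @ x_g)      # back-substitute the interior
res = np.linalg.norm(A @ np.concatenate([x_i, x_g])
                     - np.concatenate([b_i, b_g]))
print(f"CG info = {info}, residual norm = {res:.2e}")
```

Reusing the factorization of A_ii inside both the Schur application and the right-hand side reduction mirrors the sharing of block factorizations mentioned above.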

This work is done in collaboration with A. Debussche, from ENS-Cachan-Rennes and Ipso INRIA team. It is done in the context of the Micas project ( ). It is also done in collaboration with Z. Mghazli, from the university of Kenitra, Morocco, in the context of the Co-Advise and Hydromed projects ( , ).

In the applications described above, we use stochastic models and rely on uncertainty quantification methods. We have pursued the work on elliptic partial differential equations with random coefficients. We focused on the case of a lognormal homogeneous permeability field. This work applies in the particular case of an exponential covariance, which is one of the most frequently used models. We then have to deal with a permeability field whose trajectories do not have the usual regularity and are neither uniformly bounded from above nor from below.

We are in particular interested in numerical methods based on the approximation of the random permeability field using a truncated Karhunen-Loève expansion. In the research report , after proving that the solution belongs to L^p(Ω, H^1_0(D)) for any finite p, we provide both strong and weak error estimates for the error on the solution resulting from the truncation. Moreover, we give bounds for the spectral collocation error and the finite element error. This work has been presented in several workshops and seminars: , , , , , and has been pursued with an improved finite element error estimate. This work is now extended to the numerical analysis of the multilevel Monte Carlo method, in collaboration with Robert Scheichl, Aretha Teckentrup and Ivan Graham, from the University of Bath, UK. A paper is in preparation.
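A minimal numerical sketch of such a truncated Karhunen-Loève expansion for an exponential covariance is given below (grid size, variance, correlation length and truncation order are illustrative choices, not those of the report): the discrete covariance operator is diagonalized, the expansion is truncated, and one lognormal permeability field is sampled.

```python
import numpy as np

rng = np.random.default_rng(2)

# Log-permeability with exponential covariance C(x, y) = s^2 exp(-|x-y|/lam)
# on a 1D grid.
n, s, lam = 200, 1.0, 0.1
x = np.linspace(0.0, 1.0, n)
C = s**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / lam)

# Discrete Karhunen-Loeve modes: eigenpairs of the covariance operator,
# with the quadrature weight h absorbed into the symmetric eigenproblem.
h = x[1] - x[0]
vals, vecs = np.linalg.eigh(h * C)
order = np.argsort(vals)[::-1]            # sort modes by decreasing variance
vals, vecs = vals[order], vecs[:, order] / np.sqrt(h)

# Truncate after m modes and sample one lognormal permeability field.
m = 30
ratio = vals[:m].sum() / vals.sum()       # fraction of variance retained
xi = rng.standard_normal(m)
log_k = vecs[:, :m] @ (np.sqrt(vals[:m]) * xi)
k = np.exp(log_k)                         # lognormal: positive by construction
print(f"variance captured by {m} modes: {ratio:.1%}")
```

The slow eigenvalue decay of the exponential covariance, visible in how many modes the truncation needs, is precisely what makes the error analysis above delicate.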

We have defined a numerical method based on the truncated Karhunen-Loève expansion of the inverse of the random permeability field. This technique allows us to approximate the mean of the solution by a projection. It is very efficient for 1D problems. A paper has been submitted.

This work is done in collaboration with J.-R. de Dreuzy, from Geosciences Rennes, and A. ben Abda, from LAMSIN, Tunisia. It is done in the context of the Hydromed and Co-Advise projects ( , ).

We study two types of inverse problems in hydrogeology. The direct transient model is governed by classical flow equations and relates transmissivity to hydraulic head. We assume a constant, known porosity.

The first type of problem is a so-called data completion problem, with missing data on some part of the boundary and overdetermined data on the other part. We have investigated methods based on an energy norm . We have also defined a method based on a fictitious domain decomposition technique , , .

The second type of problem concerns the identification of the transmissivity in saturated aquifers. We have studied a methodology based on pilot points.

Contract with ANDRA

Time: three years from October 2010.

Title: Numerical methods for reactive transport.

It is quite challenging to develop a numerical model for deep storage of nuclear waste. The time interval is very large (several thousand years), the models are coupled, and simulations must be accurate enough to be used for risk assessment. In most cases, chemistry must be included in models of deep geological storage. In the team Sage, we have developed a method coupling transport and chemistry through a Newton-type algorithm. A first objective is to reduce computational time, in order to run experiments on 3D domains with a large number of grid points. A second objective is to compare the behaviour of several methods, the so-called SNIA, SIA and DSA methods, with our method, called DAE. The study will include properties such as numerical stability, nonlinear convergence, algorithmic complexity, and CPU and memory costs. In the current version, it is not possible to include precipitation-dissolution reactions where the mineral can disappear and reappear during the simulation. A third objective is to extend the model by considering a complementarity problem formulation.
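The contrast between splitting schemes and a fully coupled Newton step can be illustrated on a deliberately tiny model (one cell, linear transport toward an inflow concentration, a Monod-type reaction; all names and parameters are invented). This is a schematic SNIA-versus-coupled comparison, not the team's DAE formulation or the ANDRA test cases.

```python
# One-cell toy: transport relaxes concentration c toward an inflow value,
# chemistry removes solute through a nonlinear (Monod-type) reaction.
c_in, tau = 1.0, 1.0            # inflow concentration, transport time scale
vmax, km = 0.8, 0.5             # reaction parameters (invented)

def react(c):
    return vmax * c / (km + c)

def dreact(c):
    return vmax * km / (km + c)**2

def step_snia(c, dt):
    """Sequential non-iterative: implicit transport, then explicit chemistry."""
    c_t = (c + dt * c_in / tau) / (1.0 + dt / tau)   # transport half-step
    return c_t - dt * react(c_t)                     # chemistry half-step

def step_coupled(c, dt, tol=1e-12):
    """One implicit Euler step of the coupled system, solved by Newton."""
    cn = c
    for _ in range(50):
        f = cn - c - dt * ((c_in - cn) / tau - react(cn))
        fp = 1.0 + dt / tau + dt * dreact(cn)
        cn_new = cn - f / fp
        if abs(cn_new - cn) < tol:
            return cn_new
        cn = cn_new
    return cn

dt, nsteps = 0.1, 200
c1 = c2 = 0.0
for _ in range(nsteps):
    c1, c2 = step_snia(c1, dt), step_coupled(c2, dt)
print(f"SNIA: {c1:.4f}  coupled Newton: {c2:.4f}")
```

The coupled step reaches the exact steady state of the model, while the splitting carries an O(dt) bias: exactly the kind of stability and accuracy trade-off the planned comparison will quantify on realistic chemistry.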

Contract with Région Rhône Alpes.

Time: three years from May 2007, extended until October 2010.

Title: Conception Interactive par simulation Numérique des Ecoulements couplées à des Méthodes d'optimisation par Algorithmes Spécifiques.

Coordinator: Ecole Centrale de Lyon.

Partners: INSA Lyon, University of Lyon, Plastic Omnium, Valeo, Renault Trucks.

This work is done in the context of the Région Rhône-Alpes initiative called Rhône-Alpes Automotive Cluster and the competitiveness cluster Lyon Urban Truck and Bus (LUTB). The global objective is to design a new CFD methodology to drastically reduce computational time in an optimization process. The partners FLUOREM and LMFA have developed the software Turb-Opty, based on parametrization. The main task of the Sage team is to study sparse linear solvers applied to the CFD systems arising in Turb-Opty applications. We use our software GPREMS to solve linear systems provided by the industrial partners.

Webpage:
http://

The working group MOMAS includes many partners from CNRS, INRIA, universities, CEA, ANDRA, EDF and BRGM. It covers many subjects related to mathematical modeling and numerical simulations for nuclear waste disposal problems. We coordinate the project entitled “numerical models and simulations for transport by advection diffusion of chemical species with kinetic and equilibrium reactions.”

Contract with ANR, program CIS

Time: four years from January 2008.

Title: Modelling and Intensive Computation for Aquifer Simulations.

Coordinator: Sage.

Partners: Geosciences Rennes, University of Le Havre, University of Lyon 1.

Web page:
http://

The project is designed to address major challenges in hydrogeology and to develop free generic software. Numerical modelling is a key tool for the management and remediation of groundwater resources. The objectives of MICAS are to obtain outstanding results in seven well-identified topics: 1. Macro-dispersion in 3D heterogeneous porous media. 2. Steady flow in 3D Discrete Fracture Networks (DFN). 3. Well test interpretation in 2D and 3D heterogeneous porous media and in DFN. 4. Flow in 2D and 3D fractured porous media. 5. Large scale multilevel sparse linear solvers. 6. Stochastic models and algorithms for dealing with lack of observations and heterogeneity. 7. Deployment of multi-parametric simulations on a computational grid. A last topic is devoted to the software integrating all the modules developed in the project. Our commitment is to develop the H2OLAB platform.

Contract with ANR, program RNTL

Time: three years from October 2007.

Title: Large Information Base for the Research in AEROdynamics.

Coordinator: FLUOREM, Lyon.

Partners: LMFA, Ecole Centrale de Lyon; CDCSP, University of Lyon; Sage team.

This work is done in the context of the CINEMAS2 project, described above. The main objective for the team Sage is to design efficient algorithms adapted to industrial configurations using the Turb-Opty software developed by Fluorem and LMFA. The challenge is to solve many linear systems of large size. In 2010, we have worked on multilevel parallelism and deflation methods. See section , .

Webpage:
http://

To run large simulations, we defined a project, based on H2OLab and GPREMS, which was accepted by GENCI. We obtained accounts on the IBM Power 6 cluster located at IDRIS: 3,584 Power6 cores, 17.5 TB of memory, 67.3 Tflops total peak performance, InfiniBand x4 DDR network. In 2010, we obtained 64,000 hours, which were entirely used (see sections and ).

Title: Hemera

Time: from September 2010.

Coordinator: C. Perez, GRAAL team.

Partners: 22 INRIA teams.

Webpage:
http://

Hemera is an INRIA Large Wingspan project, started in 2010. It aims at demonstrating ambitious up-scaling techniques by carrying out several dimensioning experiments on the Grid'5000 infrastructure, at animating the scientific community around Grid'5000, and at enlarging the Grid'5000 community by helping newcomers make use of it.

The team Sage is the leader of the Scientific Challenge Hydro: Multi-parametric intensive stochastic simulations for hydrogeology. The objective is to run multiparametric large scale simulations. We will use software and middleware of the H2OLab platform and the Grid'5000 infrastructure.

Type of project: COADVISE Project supported by the European Commission [Seventh Framework Programme - Marie Curie Actions 'People' International Research Staff Exchange Scheme (IRSES)].

Time: It started in February 2009 for a duration of 36 months.

The project aims at supporting and strengthening the existing collaborations between Europe and Mediterranean Partner Countries. The structuring action of the programme consists in co-advising PhD students from the two sides of the Mediterranean Sea. The project is coordinated by the INRIA Centre de Recherche Sophia Antipolis. There are 5 partners in Tunisia, 2 in Morocco, 1 in Algeria, 1 in Italy, 1 in Spain and 1 in France.

In 2010, two PhD students visited the Sage team during 3 months each: Sinda Khalfallah, Tunisia; Mestapha Oumouni, Morocco. Also, Bernard Philippe and Jocelyne Erhel stayed two weeks each in Tunis, Tunisia.

Title: Inverse problems in hydrogeology

Time: 2009 - 2011

Coordination: LAMSIN, Tunis, Tunisia.

Partners: Rabat (Morocco), Kenitra (Morocco), Annaba (Algeria), Tunis (Tunisia), Naples (Italy), Barcelona (Spain), Paris and Rennes.

Webpage:
http://

The project deals with the numerical simulation of groundwater flow and the transport of pollutants. A workshop was organized in Tunis, in December 2010.

Title: Modélisation Mathématique et Applications

Time: four years from 2010

Partner: University of Yaounde, Cameroon.

The project deals with linear algebra.

Emmanuel Kamgnia (professor at university of Yaounde, Cameroon), visited the team, one month, November. Bernard Nguenang (Ph-D student at university of Yaounde, Cameroon), stayed two months, November and December. Bernard Philippe went to Yaounde, two weeks, April.

Title: Calcul Scientifique pour des Problèmes en Environnement

Time: four years from 2010

Partner: University of Annaba, Algeria.

The project deals with the numerical simulation of fluid flows, see section . Fateh Saci (Ph-D student at university of Annaba, Algeria) stayed two months, mid-October until mid-December. Fatma-Zohra Nouri (professor at university of Annaba, Algeria) visited the team, one week, December. Bernard Philippe went to Annaba, one week, June. Edouard Canot went to Annaba, one week, December.

Webpage:
http://

The team Sage participated in the workshops organized in June at Bordeaux (France) and in November at Urbana (USA). Désiré Nuentsa Wakam visited UIUC during one month in November.

The team works on deflation methods and their integration into the software PETSc.

B. Philippe is one of the four chief editors of the electronic journal ARIMA (Revue Africaine de la Recherche en Informatique et Mathématiques Appliquées).

B. Philippe is managing editor of the electronic journal ETNA (Electronic Transactions on Numerical Analysis).

B. Philippe was the coordinator of the track “Scientific computing and parallelism” in the program committee of the CARI international conference (Yamoussoukro, Ivory Coast, Oct. 18-21, 2010).

B. Philippe was a member of the program committee of the conference “High Performance Scientific Computing” (West-Lafayette, Oct. 11-12, 2010).

J. Erhel is member of the editorial board of ETNA.

J. Erhel is member of the editorial board of Interstices.

J. Erhel is co-editor of the proceedings of PARCFD'2008 .

J. Erhel was a referee of the CARI international conference (Yamoussoukro, Ivory Coast, Oct. 18-21, 2010).

J. Erhel organized a mini-symposium at the international conference MFD (Rennes, France, June 2010).

É. Canot is member of the CUMI (Commission des Utilisateurs de Moyens Informatiques), of INRIA-Rennes, from September 2007.

É. Canot is member of the CHS (Commission Hygiène et Sécurité), of INRIA-Rennes, from September 2007.

J. Erhel is member and secretary of the Comité de Gestion Local of AGOS at INRIA-Rennes.

J. Erhel is member of Comité Technique Paritaire and Comité de Concertation of INRIA.

J. Erhel is member of Conseil d'Administration of INRIA.

J. Erhel: participation with contribution in the workshop STIC ANR, Paris, January.

J. Charrier: presentation at the workshop on applications of mathematics, ENS Cachan Bretagne, Bruz, February.

S. Khalfallah: participation with contribution in the PICOF conference, Cartagena, Spain, April.

B. Poirriez: participation with contribution in the Copper Mountain conference on iterative methods, Colorado, USA, April.

B. Philippe: participation with contribution in the workshop on Algorithmic Differentiation, Optimization, and Beyond, Nice, April.

M. Muhieddine and E. Canot: participation with contribution in the ICTEA conference, Marrakech, Morocco, May.

J. Charrier: participation with contribution in the CANUM 2010 conference, Bordeaux, May.

J. Charrier: participation with contribution in the workshop on uncertainty quantification, Edinburgh, May.

J. Erhel: participation with contribution in the meeting GNR Momas / GDR Calcul, Paris, May.

D. Nuentsa Wakam: participation with contribution in the third workshop of Joint laboratory INRIA/UIUC, Bordeaux, June.

J. Erhel: participation with contribution in the ICPA10 conference, Marseille, June.

J. Erhel: participation with contribution in the MFD 2010 conference, Rennes, June.

J. Erhel: participation with contribution in the SHPCO2 conference, Saint-Rémy-les-Chevreuses, June.

J. Erhel: participation with contribution in the meeting “Journées INSA MMS”, Rennes, June.

S. Khalfallah: participation with contribution in the CMWR conference, Barcelona, June.

J. Charrier: participation with contribution in the conference NASPDE 2010, Freiburg, Germany, September.

B. Philippe: participation in the conference on High Performance Scientific Computing: Architectures, Algorithms, and Applications (in honor of A. Sameh), West Lafayette, USA, October.

D. Nuentsa Wakam and B. Philippe: participation with contribution in the CARI conference, Yamoussoukro, Ivory Coast, October.

D. Nuentsa Wakam: participation with contribution in the fourth workshop of Joint laboratory INRIA/UIUC, Urbana, November.

J. Charrier: presentation at the Bath numerical analysis seminar, Great Britain, November.

J. Charrier: presentation at the numerical analysis day of IRMAR, Rennes, November.

É. Canot: participation with contribution in the workshop Hydromed, Tunis, December.

B. Philippe visited ENIT, Tunis, Tunisia, two weeks, January-February.

J. Erhel visited ENIT, Tunis, Tunisia, two weeks, March.

B. Philippe visited the University of Yaounde I, Cameroon, two weeks, April.

B. Philippe visited the University of Annaba, Algeria, one week, June.

B. Philippe visited Purdue University, West-Lafayette (IN), USA, one week, October.

J. Charrier visited the university of Bath, Great Britain, two weeks, November.

D. Nuentsa Wakam visited the University of Illinois at Urbana-Champaign (joint laboratory INRIA/UIUC), USA, four weeks, November.

É. Canot visited the University of Annaba, Algeria, one week, December.

É. Canot visited ENIT, Tunis, Tunisia, one week, December.

The team has invited the following persons:

Efstratios Gallopoulos, Patras, Greece, one week in February.

Ahmed Sameh, Purdue University, USA, one week in February.

Zoubida Mghazli, University of Kenitra, Morocco, one week in April.

Nabil Nassif, American University of Beirut, Lebanon, one week in July.

Fateh Saci, Annaba, Algeria, two months in October-December.

Emmanuel Kamgnia, Yaoundé, Cameroon, one month in November.

Louis-Bernard Nguenang, Yaoundé, Cameroon, two months in November-December.

Rachida Bouhlila, Tunis, Tunisia, one week in November.

Fatma-Zohra Nouri, Annaba, Algeria, one week in December.

The team participated also in scientific discussions with visitors at CAREN:

D. Tartakovski (San Diego, USA), one week, April

M. Dentz (UPC, Barcelona, Spain), one week, April

J. Carrera (UPC, Barcelona, Spain), one week, December

A. Abdelmoula is teaching assistant (permanent position) in computer science at the University of Tunis, Tunisia.

N. Makhoul-Karam is teaching assistant (temporary position) in mathematics at the American University of Beirut.

J. Charrier is teaching assistant (monitrice) in mathematics at ENS-Cachan-Rennes.

D. Nuentsa Wakam is teaching assistant (moniteur) in computer science at faculty of law, University of Rennes 1.

B. Poirriez was teaching assistant (moniteur) in computer science at IFSIC, University of Rennes 1, until August. He is now teaching assistant (ATER) in computer science at INSA, University of Rennes 1, from September.

M. Muhieddine was teaching assistant (ATER) in mathematics, University of Rennes 2, until August.

É. Canot and J. Erhel taught about
Applied Mathematics (MAP) for DIIC, IFSIC, Rennes 1
(second year). Lecture notes on
http://

É. Canot taught one module in the Master 2, Scientific Computing, University of Annaba, Algeria.

B. Philippe taught in Tunis (Tunisia) a Master course: “Eigenvalue solvers” (two weeks in January and February).

J. Erhel taught in Tunis (Tunisia) a Master course: “Linear solvers for geosciences” (two days in March).

B. Philippe taught in Yaounde (Cameroon) a Master course: “Reliable eigenvalue computation” (two weeks in April).

B. Philippe taught in Annaba (Algeria) a Master course: “Large linear systems solvers”, one day in June.

F. Hamonic completed a three-month internship in the team Sage (L3, Ifsic, University of Rennes), under the supervision of B. Poirriez.

J. Charrier: lectures in high schools for the operation “À la découverte de la recherche”, Cesson-Sévigné and Montfort-sur-Meu, April.

J. Charrier: animation of mathematical games for the operation “Festival des sciences”, Orgères, October.

J. Erhel: contribution to Cahiers de l'ANR no 3, le calcul intensif : technologie clé pour le futur. ANR Micas project, page 153 (with A. Beaudoin, J.-R. de Dreuzy and D. Tromeur-Dervout).

J. Erhel: contribution to the article " L'eau souterraine mise en équations ", information letter of émergences, no 11, INRIA Rennes.

J. Erhel: participation in the local organization of the INRIA workshop on sustainable development, Rennes, March.

J. Erhel: participation in the INRIA workshop on scientific dissemination, Paris, March.

G. Pichot: presentation at the press briefing with INRIA and CEMAGREF, Paris, September.

B. Philippe: contribution to the Encyclopedia of Parallel Computing .

B. Philippe: contribution to a paper in "La Recherche, les cahiers de l'INRIA" about research in Africa .

J. Charrier, D. Nuentsa Wakam and B. Poirriez: participation in training for teaching assistants.

D. Nuentsa Wakam: participation in the summer school: “Sustainable High Performance Computing” (CEA-EDF-INRIA), 28 June - 9 July, Cadarache, France.

É. Canot: participation in the summer school: “Flow and Transport in Porous and Fractured Media”, 16-28 August, Cargèse, France.

J. Charrier: participation in the summer school: Simulation of hybrid dynamical systems and applications to molecular dynamics (CEA-EDF-INRIA), September, Paris, France.