Team Sage


Section: New Results

Numerical algorithms and high performance computing

Parallel GMRES with a multiplicative Schwarz preconditioner

Participants : Jocelyne Erhel, Désiré Nuentsa Wakam, Bernard Philippe.

This work is done in the context of the Cinemas2 and Libraero contracts (see 7.1.2 and 8.1.3). It is pursued in collaboration with the INRIA team Grand Large.

Following our work on the parallel hybrid solver based on GMRES preconditioned by multiplicative Schwarz, we have written a report [43] describing the main techniques used. In this report, we present in detail the hybrid direct/iterative approach to the solution of the linear system. The two levels of parallelism defined in the solver arise naturally from the algebraic domain decomposition that is used. We have reported several results demonstrating the robustness of this solver compared to other similar solvers [31], [32]. The resulting software library, named GPREMS, is now hosted on the INRIA GForge (see 5.5).
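For illustration only (this is not the GPREMS implementation), a minimal sketch of a multiplicative Schwarz preconditioner inside restarted GMRES, assuming a 1D Poisson model problem, two overlapping algebraic subdomains, and SciPy's sparse solvers:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def multiplicative_schwarz(A, subdomains):
    """One multiplicative Schwarz sweep as a preconditioner M^{-1}:
    subdomain solves applied sequentially, each one acting on the
    residual left by the previous ones."""
    factors = [(idx, spla.splu(A[idx, :][:, idx].tocsc())) for idx in subdomains]
    def apply(r):
        z = np.zeros_like(r)
        for idx, lu in factors:
            res = r - A @ z                  # residual after previous subdomains
            z[idx] += lu.solve(res[idx])
        return z
    n = A.shape[0]
    return spla.LinearOperator((n, n), matvec=apply)

n = 200
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
# two overlapping subdomains from a 1D algebraic decomposition (illustrative)
subdomains = [np.arange(0, 120), np.arange(80, 200)]
M = multiplicative_schwarz(A, subdomains)
x, info = spla.gmres(A, b, M=M, restart=30)
```

In this sequential sketch the subdomain solves proceed one after another; the parallel solver extracts its two levels of parallelism from the decomposition itself, as detailed in [43].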

Parallel deflated GMRES with domain decomposition preconditioners

Participants : Jocelyne Erhel, Désiré Nuentsa Wakam.

This work is done in the context of the Cinemas2 and Libraero contracts (see 7.1.2 and 8.1.3). It is also done in collaboration with the joint INRIA/NCSA laboratory on petascale computing.

In this work, we consider restarted GMRES preconditioned by either the additive or the multiplicative Schwarz preconditioner. The main observation is that during the iterative process, the residual norm stagnates if the size of the Krylov basis is not large enough. This behavior also appears when a large number of subdomains is used in the domain decomposition preconditioner. Our aim in this work is therefore to accelerate the convergence of the GMRES method by using spectral information gathered during the iterative process; this approach, known as deflated GMRES, has been implemented in GPREMS and PETSc. Numerical experiments have shown promising results on real test cases provided in the context of the LIBRAERO project. These results were discussed during the third and fourth workshops of the joint INRIA/NCSA laboratory for petascale computing [26], [27].
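The deflation idea can be sketched on a symmetric model problem (the symmetry is an assumption made only for the simple correction step below; the solver itself targets general systems): a few approximate eigenvectors for the smallest eigenvalues are projected out, and GMRES works on the remaining, better-conditioned part.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def deflated_gmres(A, b, Z, restart=30):
    """Sketch of deflation: run restarted GMRES on the deflated system
    P A x~ = P b, with P = I - A Z (Z^T A Z)^{-1} Z^T, then add the
    coarse-space contribution back.  Assumes A symmetric positive
    definite for the simple correction step."""
    E = Z.T @ (A @ Z)                          # coarse Galerkin matrix Z^T A Z
    Qv = lambda v: Z @ np.linalg.solve(E, Z.T @ v)
    Pv = lambda v: v - A @ Qv(v)               # deflation projector P = I - A Q
    n = A.shape[0]
    PA = spla.LinearOperator((n, n), matvec=lambda v: Pv(A @ v))
    xt, info = spla.gmres(PA, Pv(b), restart=restart)
    x = Qv(b) + xt - Qv(A @ xt)                # x = Q b + P^T xt  (P^T = I - Q A)
    return x, info

n = 100
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
# deflation basis: eigenvectors for the 5 smallest eigenvalues (computed
# exactly here; the actual solver gathers them during previous GMRES cycles)
Z = np.linalg.eigh(A.toarray())[1][:, :5]
x, info = deflated_gmres(A, b, Z)
```

Deflating the smallest eigenvalues shrinks the effective condition number seen by each restart cycle, which is precisely what counters the stagnation observed with short Krylov bases.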

Parallel-in-time integration

Participants : Jocelyne Erhel, Noha Makhoul.

This work is done in collaboration with N. Nassif, from the American University of Beirut, Lebanon.

We have developed a Ratio-based Parallel Time Integration (RaPTI) algorithm for solving initial value problems in a time-parallel way. The RaPTI algorithm uses a time-slicing and rescaling technique, with the resulting similarity properties, to generate a coarse grid and provide ratio-based predictions of the starting values at the onset of every time slice. The correction procedure is performed on a fine grid and in parallel, yielding gaps on the coarse grid. The predictions are then updated and the process is iterated until all gaps are within a given tolerance. The RaPTI algorithm has been applied to three problems: a membrane problem, a reaction-diffusion problem, and a satellite trajectory in J2-perturbed motion. In some rare cases of invariance, it yields perfect parallelism; in the more general cases of similarity, it yields good speed-ups [36], [10].
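RaPTI's ratio-based rescaled predictions are specific to the method, but its overall predict-correct-iterate structure is shared with parareal-type schemes. A minimal parareal-style sketch on the model problem y' = -y (the propagators and parameters here are illustrative assumptions, not the RaPTI algorithm itself):

```python
import numpy as np

def fine(y, t0, t1, nsteps=100):
    """Fine propagator: many small explicit Euler steps (stand-in for the
    accurate per-slice solver, which runs in parallel across slices)."""
    h = (t1 - t0) / nsteps
    for _ in range(nsteps):
        y = y + h * (-y)                      # model problem y' = -y
    return y

def coarse(y, t0, t1):
    """Coarse propagator: a single Euler step, used for predictions."""
    return y + (t1 - t0) * (-y)

def parareal(y0, T, nslices, niter):
    t = np.linspace(0.0, T, nslices + 1)
    # initial sequential coarse sweep: predicted starting values per slice
    U = [y0]
    for k in range(nslices):
        U.append(coarse(U[-1], t[k], t[k + 1]))
    for _ in range(niter):
        # fine corrections: independent per slice, hence parallelizable
        F = [fine(U[k], t[k], t[k + 1]) for k in range(nslices)]
        Unew = [y0]
        for k in range(nslices):
            # update each prediction by the gap between fine and coarse values
            Unew.append(coarse(Unew[-1], t[k], t[k + 1])
                        + F[k] - coarse(U[k], t[k], t[k + 1]))
        U = Unew
    return U

U = parareal(1.0, 2.0, nslices=10, niter=5)
```

The iteration stops in practice when all coarse-grid gaps fall below a tolerance; here a fixed iteration count keeps the sketch short.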

High Performance Computing for hydrogeology

Participant : Jocelyne Erhel.

This work is done in the context of the Micas project (see 8.1.2) and the Hemera project (see 8.1.5).

In hydrogeology, the description of subsurface properties is very poor, mainly because of their complex heterogeneity and the lack of measurements. As a consequence, we rely on stochastic models of geometrical and physical properties [24], [23]. We have identified three levels of distributed and parallel computing. At the simulation level, we define distributed-memory algorithms and rely on the MPI library for communication between processors. The kernel of the flow simulations consists in solving a sparse linear system [22]. The intermediate level is the non-intrusive Uncertainty Quantification method, currently Monte-Carlo. We have designed a facility for running the set of random simulations, choosing either a parallel approach with MPI or a distributed approach with a grid middleware. At the multiparametric level, we choose a distributed approach, as is done in most projects on computational grids. We have carried out numerical experiments with the first two levels, using MPI [29]. This application is one of the scientific challenges of the Hemera project.
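The first two levels can be sketched as follows; the 1D flow kernel, lognormal field, and thread pool below are illustrative stand-ins (the actual facility solves 3D flow and dispatches simulations with MPI or a grid middleware):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla
from concurrent.futures import ThreadPoolExecutor

def flow_simulation(seed, n=100):
    """Simulation level (stand-in): draw a lognormal conductivity field,
    assemble the 1D flow (diffusion) matrix, and solve the sparse linear
    system -d/dx(k dh/dx) = f with h = 0 on the boundary."""
    rng = np.random.default_rng(seed)
    k = np.exp(rng.normal(0.0, 1.0, n + 1))       # lognormal conductivities
    main = k[:-1] + k[1:]                         # finite-volume style assembly
    A = sp.diags([-k[1:-1], main, -k[1:-1]], [-1, 0, 1], format="csc")
    f = np.full(n, 1.0 / n)
    h = spla.spsolve(A, f)
    return h[n // 2]                              # quantity of interest: mid-domain head

# Uncertainty Quantification level: independent random simulations run
# concurrently (threads here; MPI or a grid middleware in the real facility)
with ThreadPoolExecutor(max_workers=4) as pool:
    qoi = list(pool.map(flow_simulation, range(64)))
mean, std = np.mean(qoi), np.std(qoi)
```

Because the Monte-Carlo samples are independent, the dispatch mechanism (MPI, grid middleware, or the thread pool above) can be swapped without touching the simulation kernel, which is the point of separating the two levels.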

