Section: Scientific Foundations
Keywords: sparse matrices, LU factorization, QR factorization, Krylov subspace, iterative method, preconditioning, Newton method, eigenvalues, singular values, Lanczos, Arnoldi, Davidson, pseudospectrum.
Numerical algorithms
Participants: Guy Antoine Atenekeng Kahou, Jocelyne Erhel, Frédéric Guyomarc'h, Laura Grigori, Noha Makhoul, Bernard Philippe, Damien Tromeur-Dervout, Petko Yanev, Mohammed Ziani.
The focus of this topic is the design of efficient and robust numerical algorithms in linear algebra. The main objective is to solve large systems of equations Ax = b, where the matrix A has a sparse structure (many coefficients are zero). High-performance computing (3.2) is required in order to tackle large-scale problems. Algorithms and solvers are applied to problems arising from hydrogeology and geophysics (4.1).
Direct linear solvers
Direct methods, based on the factorization A = LU, induce fill-in in the matrices L and U. Reordering techniques can be used to reduce this fill-in, hence memory requirements and floating-point operations [67].
More precisely, direct methods involve two steps: first factoring the matrix A into the product A = P_{1}LUP_{2}, where P_{1} and P_{2} are permutation matrices, L is lower triangular, and U is upper triangular; then solving P_{1}LUP_{2}x = b by processing one factor at a time. The most time-consuming and complicated step is the first one, which is further broken down into the following steps:

1. Choose P_{1} and diagonal matrices D_{1} and D_{2} so that P_{1}D_{1}AD_{2} has a "large diagonal." This helps to ensure accuracy of the final solution.

2. Choose P_{2} so that the L and U factors of P_{1}AP_{2} are as sparse as possible.

3. Perform a symbolic analysis, i.e. identify the locations of the nonzero entries of L and U.

4. Factorize P_{1}AP_{2} into L and U.
The team works on parallel sparse direct solvers [4].
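As a concrete illustration of these steps (a minimal sketch using SciPy's wrapper around the SuperLU library, not the team's own parallel solver), the factorization object below exposes the row and column permutations and the sparse factors L and U discussed above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse test matrix and right-hand side.
A = sp.csc_matrix(np.array([[4.0, 1.0, 0.0],
                            [1.0, 3.0, 1.0],
                            [0.0, 1.0, 2.0]]))
b = np.array([1.0, 2.0, 3.0])

# Factorization step: the column ordering (here COLAMD) is chosen to
# limit fill-in; the row permutation comes from partial pivoting.
lu = spla.splu(A, permc_spec="COLAMD")
print("row permutation:", lu.perm_r)
print("column permutation:", lu.perm_c)
print("nonzeros in L and U:", lu.L.nnz, lu.U.nnz)

# Solve step: one triangular solve per factor.
x = lu.solve(b)
print("residual norm:", np.linalg.norm(A @ x - b))
```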
Iterative linear solvers
The most efficient iterative methods build a Krylov subspace, for example K_{m}(A, r_{0}) = span{r_{0}, Ar_{0}, ..., A^{m-1}r_{0}}, where r_{0} = b - Ax_{0} is the initial residual. If the matrix is symmetric positive definite, the method of choice is the Conjugate Gradient; for symmetric indefinite matrices, there are mainly three methods, SYMMLQ, MINRES and LSQR. For unsymmetric matrices, it is not possible to have both a minimization property and short recurrences. The GMRES method minimizes the residual but must be restarted to limit memory requirements. The BICGSTAB and QMR methods have short recurrences but do not guarantee a decreasing residual [78], [76]. All iterative methods require preconditioning to speed up convergence: the system M^{-1}Ax = M^{-1}b is solved, where M is a matrix close to A such that linear systems Mz = c are easy to solve. A family of preconditioners uses incomplete factorizations A = LU + R, where R is implicitly defined by the level of fill-in allowed in L and U. Other types of preconditioners include algebraic multigrid approaches and approximate inverses [64].
The team studies preconditioners for Krylov methods [1], [7].
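The following minimal sketch, assuming SciPy (the matrix and tolerances are illustrative only), shows the idea: an incomplete LU factorization provides the preconditioner M, applied inside restarted GMRES through a linear operator.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 100
# Unsymmetric sparse test matrix (1D convection-diffusion stencil).
A = sp.diags([-1.0, 2.0, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Incomplete factorization A ~ LU, with fill controlled by a drop tolerance.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)  # applies z = M^{-1}c

x, info = spla.gmres(A, b, M=M, restart=30)
print("converged:", info == 0, "residual:", np.linalg.norm(A @ x - b))
```

Lowering drop_tol makes M closer to A (fewer iterations, more memory), which is exactly the trade-off controlled by the level of fill-in mentioned above.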
Linear least-squares problems
For linear least-squares problems min_{x} ||Ax - b||_{2}, direct methods are based on the normal equations A^{T}Ax = A^{T}b, using either a Cholesky factorization of A^{T}A or a QR factorization of A, whereas the most common Krylov iterative method is LSQR. If the discrete problem is ill-posed, regularization such as Tikhonov regularization or a Truncated Singular Value Decomposition (TSVD) is required [72], [63]. For large matrices, the so-called complete factorization is also useful. The first step is a pivoted QR factorization, followed by a second factorization A = U [T, 0; 0, E] V^{T}, where U and V are orthogonal matrices, T is triangular, and E is negligible with respect to the chosen threshold. Such a decomposition is a robust rank-revealing factorization and it provides the Moore-Penrose generalized inverse at no extra cost. Recently, efficient QR factorization software libraries have become available, but they do not consider column or row permutations based on numerical considerations, since the corresponding orderings often end up with an intractable level of fill-in.
The team studies iterative Krylov methods for regularized problems, as well as rank-revealing QR factorizations.
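A minimal sketch of both regularization approaches, assuming NumPy/SciPy (matrix, damping parameter, and threshold are illustrative only): Tikhonov regularization through the damping parameter of LSQR, and a TSVD obtained by discarding the smallest singular values.

```python
import numpy as np
import scipy.sparse.linalg as spla

rng = np.random.default_rng(0)
m, n = 50, 20
# Ill-conditioned test matrix: rapidly decaying column scaling.
A = rng.standard_normal((m, n)) @ np.diag(10.0 ** -np.arange(n))
b = rng.standard_normal(m)

# Tikhonov regularization: minimize ||Ax - b||^2 + damp^2 ||x||^2 via LSQR.
x_tik = spla.lsqr(A, b, damp=1e-3)[0]

# TSVD: discard singular values below a threshold.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-6 * s[0]))               # numerical rank
x_tsvd = Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

print("difference between the two solutions:", np.linalg.norm(x_tik - x_tsvd))
```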
Nonlinear problems and time integration
Nonlinear methods to solve F(x) = 0 include fixed-point methods, nonlinear stationary methods, the secant method, and the Newton method [77], [66], [75]. The team studies Newton-Krylov methods, where the linearized problem is solved by a Krylov method [2], as well as Broyden methods and Proper Orthogonal Decomposition methods.
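As a rough illustration of the Newton-Krylov idea (a generic sketch, not the team's method of [2]), the Jacobian never needs to be formed: the inner GMRES solve only requires Jacobian-vector products, approximated here by finite differences.

```python
import numpy as np
import scipy.sparse.linalg as spla

def F(x):
    # Example nonlinear system; any smooth F would do.
    return np.array([x[0] ** 2 + x[1] - 3.0,
                     x[0] - x[1] ** 2 + 1.0])

x = np.array([1.0, 1.0])
for _ in range(20):
    fx = F(x)
    if np.linalg.norm(fx) < 1e-10:
        break
    eps = 1e-7
    # Matrix-free Jacobian: J v is approximated by finite differences.
    J = spla.LinearOperator((2, 2),
                            matvec=lambda v: (F(x + eps * v) - fx) / eps)
    s, info = spla.gmres(J, -fx)   # inner Krylov solve of J s = -F(x)
    x = x + s
print("solution:", x, "residual norm:", np.linalg.norm(F(x)))
```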
Another subject of interest is the parallelization of ODE solvers in time. The idea is to divide the time interval into subintervals, to apply a time-stepping scheme in each subinterval, and to apply a nonlinear correction at both ends of the subintervals.
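A toy sketch in the spirit of such time-parallel (parareal-type) corrections is given below, assuming NumPy; it is sequential and illustrative only, with a cheap coarse step and an accurate fine step per subinterval, the independent fine solves being the part that could run in parallel.

```python
import numpy as np

def f(x):
    return -x            # simple test ODE x' = -x

def step(x, dt, nsteps):
    # Explicit Euler over one subinterval, using nsteps substeps.
    for _ in range(nsteps):
        x = x + dt / nsteps * f(x)
    return x

T, N = 2.0, 8            # time horizon, number of subintervals
dt = T / N
U = np.zeros(N + 1)
U[0] = 1.0
# Initial coarse sweep: one cheap step per subinterval.
for i in range(N):
    U[i + 1] = step(U[i], dt, 1)
# Correction sweeps: the accurate solves are independent per subinterval
# (hence parallelizable); the coarse recurrence then corrects the values
# at the subinterval ends.
for _ in range(3):
    fine = [step(U[i], dt, 100) for i in range(N)]
    Unew = np.zeros_like(U)
    Unew[0] = U[0]
    for i in range(N):
        Unew[i + 1] = step(Unew[i], dt, 1) + fine[i] - step(U[i], dt, 1)
    U = Unew
print("endpoint:", U[-1], "exact:", np.exp(-T))
```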
Eigenvalue problems
Let us consider the problem of computing some extremal eigenvalues of a large sparse symmetric matrix A. The Davidson method is a subspace method that builds a sequence of subspaces onto which the initial problem is projected. At every step, approximations of the sought eigenpairs are computed: let V_{m} be an orthonormal basis of the subspace at step m and let (λ, z) be an eigenpair of the matrix H_{m} = V_{m}^{T}AV_{m}; then the Ritz pair (λ, x = V_{m}z) is an approximation of an eigenpair of A. The specificity of the method comes from how the subspace is augmented for the next step. In contrast to the Lanczos method, which is the reference method, the subspaces are not Krylov subspaces, since the new vector t = x + y which is added to the subspace is obtained by an acceleration procedure: the correction y is obtained by an exact Newton step (Jacobi-Davidson method) or an inexact Newton step (Davidson method). The behavior of the Davidson method is studied in [3], while the Jacobi-Davidson method is described in [79]. These methods bring a substantial improvement over the Lanczos method when computing the eigenvalues of smallest magnitude. For that reason, the team considered Davidson methods for computing the smallest singular values of a matrix B, applying them to the matrix B^{T}B [3].
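The following minimal sketch of a Davidson iteration for the smallest eigenvalue of a symmetric matrix, assuming NumPy (test matrix and tolerances are illustrative, not the team's implementation [3]), uses the classical diagonal approximation for the correction y and shows the projection and expansion steps described above.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = np.diag(np.arange(1.0, n + 1)) + 1e-2 * rng.standard_normal((n, n))
A = (A + A.T) / 2                     # symmetric test matrix
d = np.diag(A)

V = rng.standard_normal((n, 1))
V /= np.linalg.norm(V)
for _ in range(50):
    H = V.T @ A @ V                   # projected matrix H_m = V_m^T A V_m
    w, Z = np.linalg.eigh(H)
    theta, x = w[0], V @ Z[:, 0]      # Ritz pair (theta, x)
    r = A @ x - theta * x             # eigenresidual
    if np.linalg.norm(r) < 1e-8:
        break
    y = r / (d - theta)               # diagonal (inexact Newton) correction
    y -= V @ (V.T @ y)                # orthogonalize against the subspace
    y /= np.linalg.norm(y)
    V = np.hstack([V, y[:, None]])
print("smallest eigenvalue approximation:", theta)
```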
Robust algorithms for characterizing spectra
In several applications, one needs to decide whether the eigenvalues of a nonsymmetric matrix belong to a given part of the complex plane (e.g. the half-plane of complex numbers with negative real part, or the unit disc). However, since the matrix is not known exactly (at best, to the precision of the floating-point representation), the result of the computation is not always guaranteed, especially for ill-conditioned eigenvalues. Actually, the problem is not to compute the eigenvalues precisely, but to characterize whether they lie in a given region of the complex plane. For that purpose, the notion of ε-spectrum, or equivalently pseudospectrum, was introduced simultaneously by Godunov [71] and Trefethen [80]. Several teams have proposed software to compute pseudospectra, including the SAGE team with the software PPAT [6], described in Section 5.2.
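A minimal sketch, assuming NumPy, evaluates the quantity underlying the pseudospectrum, the smallest singular value of zI - A, on a grid of the complex plane; a dedicated tool such as PPAT is of course far more efficient than this brute-force grid sweep.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 30
A = rng.standard_normal((n, n)) / np.sqrt(n)   # nonsymmetric test matrix

xs = np.linspace(-2.0, 2.0, 60)
ys = np.linspace(-2.0, 2.0, 60)
sig = np.empty((len(ys), len(xs)))
I = np.eye(n)
for i, y in enumerate(ys):
    for j, xr in enumerate(xs):
        z = xr + 1j * y
        # z is in the eps-pseudospectrum iff sigma_min(zI - A) <= eps.
        sig[i, j] = np.linalg.svd(z * I - A, compute_uv=False)[-1]

eps = 0.1
print("grid points inside the 0.1-pseudospectrum:", int((sig <= eps).sum()))
```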