## Section: Scientific Foundations

### Algorithms and Arithmetics

Today, scientific computing requires more than floating-point or multi-precision arithmetic. On the one hand, when validated results or certified enclosures of a solution are needed, *interval arithmetic* is the arithmetic of choice. It makes it possible to handle uncertain data, such as physical measurements, as well as to determine a global optimum of some criterion or to solve a set of constraints. On the other hand, there is an increasing demand for exact solutions to problems in areas such as cryptography, combinatorics, and computational geometry. There, symbolic computation is used together with *exact arithmetic*.
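The contrast between floating-point and exact arithmetic can be made concrete with a minimal sketch (the specific values below are illustrative, not drawn from the report):

```python
from fractions import Fraction

# Binary floating-point cannot represent 0.1, 0.2, or 0.3 exactly,
# so rounding error makes this comparison fail:
print(0.1 + 0.2 == 0.3)  # False

# Exact rational arithmetic carries no rounding error at all:
a = Fraction(1, 10) + Fraction(2, 10)
print(a == Fraction(3, 10))  # True
```

The price of exactness is that operand sizes (here, numerators and denominators) may grow during the computation, which is precisely why the cost analyses discussed below must account for the size of the data and not only the number of operations.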

General-purpose computing environments such as Matlab or Maple now offer all these types of arithmetic, and it is even possible to switch from one to another in the middle of a computation. Such capabilities are of course quite useful, and users can already take advantage of them to enhance the quality of the answers to small problems.

However, most general-purpose environments remain poorly suited to large computations, and interfacing with other existing software remains an issue. Our goal is thus to provide high-performance, easy-to-reuse software components for interval, mixed interval/multi-precision, finite field, and integer arithmetics. We further aim to study the impact of these arithmetics on algorithms for exact *linear algebra* and for constrained as well as unconstrained *global optimization*.

#### Numerical Algorithms using Arbitrary Precision Interval Arithmetic

When validated results are needed, interval arithmetic can be used: it computes with sets instead of numbers, which makes new classes of problems tractable. In particular, we target the global optimization of continuous functions. One way to obviate the frequent overestimation of results is to increase the precision of the computations.
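A toy interval type makes the overestimation phenomenon visible. The sketch below uses exact rational endpoints for simplicity (a real implementation such as an arbitrary-precision interval library would instead round endpoints outward in floating point); the classic dependency problem appears when the same variable occurs twice in an expression:

```python
from fractions import Fraction

class Interval:
    """A toy interval [lo, hi] with exact rational endpoints."""
    def __init__(self, lo, hi):
        self.lo, self.hi = Fraction(lo), Fraction(hi)

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction treats the operands as independent sets.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

x = Interval(0, 2)
# Dependency problem: x*x - x is evaluated as if the two occurrences of x
# were independent, so the enclosure is much wider than the true range
# [-1/4, 2] of t^2 - t on [0, 2].
print(x * x - x)  # [-2, 4]
```

The enclosure is guaranteed to contain the true range, but it can be pessimistic; rewriting expressions, subdividing the domain, and raising the working precision are the standard remedies.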

Our work is twofold. On the one hand, efficient software for arbitrary precision interval arithmetic is developed, along with a library of algorithms based on this arithmetic. On the other hand, new algorithms that really benefit from this arithmetic are designed, tested, and compared.

#### Computational Algorithms for Exact Linear Algebra

The techniques for solving linear algebra problems exactly have been evolving rapidly in the last few years, substantially reducing the complexity of several algorithms. Our main focus is on matrices whose entries are integers or univariate polynomials over a field. For such matrices, our main interest is how to relate the size of the data (integer bit lengths or polynomial degrees) to the cost of solving the problem exactly. A first goal is to design asymptotically faster algorithms for the most basic tasks (determinant, matrix inversion, matrix canonical forms, ...), to reduce problems to matrix multiplication in a systematic way, and to relate bit complexity to algebraic complexity. Another direction is to make these algorithms fast in practice as well, especially since applications yield very large matrices that are either sparse or structured. The techniques used to achieve our goals are quite diverse: they range from probabilistic preconditioning via random perturbations to blocking, to the baby-step/giant-step strategy, to symbolic versions of the Krylov-Lanczos approach, and to approximate arithmetic.
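The interplay between bit size and cost can be illustrated by the standard multi-modular approach to the integer determinant: compute the determinant modulo several word-sized primes (each a cheap elimination over a finite field) and reconstruct the integer result by the Chinese remainder theorem. The sketch below is a simplified illustration of this well-known technique, not the team's algorithms; in practice one chooses enough primes so that their product exceeds twice a bound (e.g. Hadamard's) on the determinant's magnitude:

```python
def det_mod_p(A, p):
    """Determinant of an integer matrix modulo a prime p,
    by Gaussian elimination over the finite field Z/pZ."""
    n = len(A)
    M = [[a % p for a in row] for row in A]
    det = 1
    for k in range(n):
        piv = next((i for i in range(k, n) if M[i][k]), None)
        if piv is None:
            return 0
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            det = -det % p
        det = det * M[k][k] % p
        inv = pow(M[k][k], -1, p)  # modular inverse (Python 3.8+)
        for i in range(k + 1, n):
            f = M[i][k] * inv % p
            for j in range(k, n):
                M[i][j] = (M[i][j] - f * M[k][j]) % p
    return det

def crt(residues, primes):
    """Combine residues into one value modulo the product of the primes."""
    x, m = 0, 1
    for r, p in zip(residues, primes):
        x += m * ((r - x) * pow(m, -1, p) % p)
        m *= p
    return x, m

A = [[3, 1, 4], [1, 5, 9], [2, 6, 5]]
primes = [101, 103, 107]  # illustrative; real code uses word-sized primes
x, m = crt([det_mod_p(A, p) for p in primes], primes)
# Lift to the symmetric range to recover a possibly negative determinant.
det = x if x <= m // 2 else x - m
print(det)  # -90
```

The bit complexity is then governed by the number of primes needed, i.e. by the bit length of the determinant, which is how data size enters the cost in the sense discussed above.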

Within the LinBox international project (see § 5.6 and § 8.3) we work on a software library that corresponds to the algorithmic research mentioned above. Our goal is to provide a generic library into which external components can be plugged in a plug-and-play fashion. The library is devoted to sparse or structured exact linear algebra and its applications; it also offers very efficient implementations of dense linear algebra over finite fields. The library is being developed and improved with special emphasis on the sensitivity of computational cost to the underlying arithmetic implementations. The target matrix entry domains are finite fields and their algebraic extensions, the integers, and polynomials.
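To fix ideas on what "dense linear algebra over finite fields" means computationally, here is a minimal rank computation over a prime field GF(p) by row reduction; this is only a pedagogical sketch (LinBox itself is a C++ library with far more efficient kernels), and the matrix below is an illustrative example:

```python
def rank_mod_p(A, p):
    """Rank of an integer matrix over the prime field GF(p),
    by Gauss-Jordan row reduction with all arithmetic done mod p."""
    M = [[a % p for a in row] for row in A]
    rows, cols = len(M), len(M[0])
    rank = 0
    for col in range(cols):
        piv = next((r for r in range(rank, rows) if M[r][col]), None)
        if piv is None:
            continue  # no pivot in this column
        M[rank], M[piv] = M[piv], M[rank]
        inv = pow(M[rank][col], -1, p)
        M[rank] = [x * inv % p for x in M[rank]]  # normalize pivot row
        for r in range(rows):
            if r != rank and M[r][col]:
                f = M[r][col]
                M[r] = [(a - f * b) % p for a, b in zip(M[r], M[rank])]
        rank += 1
    return rank

# Third row is the sum of the first two, so the rank is 2 over the
# rationals; it stays 2 modulo 7 (a "bad" prime could lower the rank).
A = [[1, 2, 3], [4, 5, 6], [5, 7, 9]]
print(rank_mod_p(A, 7))  # 2
```

The remark in the comment is the reason such modular computations are typically randomized or certified: the rank modulo an unlucky prime can be smaller than the rank over the rationals.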