## Section: Scientific Foundations

### Arithmetic

We consider here the following kinds of arithmetic: integers, rational numbers,
integers modulo a fixed modulus *n*, finite fields, floating-point
numbers, and *p*-adic numbers. These numbers can be divided into two
classes: *exact numbers* (integers, rationals, modular integers,
and finite-field elements), and *inexact numbers* (floating-point and
*p*-adic numbers).

Algorithms on integers (respectively, floating-point numbers) are very similar to those on polynomials (respectively, Taylor or Laurent series). The main objective in this domain is to find new algorithms that make operations on these numbers more efficient; such algorithms may rely on an alternative number representation.
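As a concrete illustration of this analogy (a hedged sketch, not taken from the source), integer polynomials can be multiplied by reducing the problem to a single big-integer multiplication, a trick known as Kronecker substitution:

```python
def poly_mul_kronecker(f, g):
    """Multiply two polynomials (lists of non-negative integer
    coefficients, lowest degree first) via one big-integer product.

    Kronecker substitution: evaluate both polynomials at x = 2**b,
    with b large enough that the coefficients of the product cannot
    overlap, multiply the resulting integers, and read the product's
    coefficients back from the bits of the result.
    """
    # bound on any coefficient of the product polynomial
    bound = max(f, default=0) * max(g, default=0) * min(len(f), len(g))
    b = bound.bit_length() + 1
    F = sum(c << (b * i) for i, c in enumerate(f))  # f(2**b)
    G = sum(c << (b * i) for i, c in enumerate(g))  # g(2**b)
    H = F * G                                       # (f*g)(2**b)
    mask = (1 << b) - 1
    out = []
    for _ in range(len(f) + len(g) - 1):
        out.append(H & mask)                        # extract one coefficient
        H >>= b
    return out
```

For instance, `poly_mul_kronecker([1, 2, 3], [4, 5])` computes (1 + 2x + 3x²)(4 + 5x) = 4 + 13x + 22x² + 15x³. Negative coefficients would require a signed encoding, omitted here for simplicity.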

In the case of integers, we are interested in multiple-precision arithmetic. Different algorithms are used depending on the sizes of the operands, ranging from the simplest "schoolbook" methods to the most advanced, asymptotically fast algorithms. The latter are often based on Fourier transforms.
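Between the schoolbook and FFT-based regimes sits Karatsuba's subquadratic method, sketched below (the base-case threshold is illustrative; real libraries tune it empirically):

```python
def karatsuba(x, y):
    """Multiply two non-negative integers with Karatsuba's method:
    three recursive multiplications on half-size operands instead of
    the four needed by the schoolbook approach."""
    if x < 2**64 or y < 2**64:             # base case: small operands
        return x * y
    n = max(x.bit_length(), y.bit_length()) // 2
    xh, xl = x >> n, x & ((1 << n) - 1)    # split x = xh*2**n + xl
    yh, yl = y >> n, y & ((1 << n) - 1)    # split y = yh*2**n + yl
    hi = karatsuba(xh, yh)
    lo = karatsuba(xl, yl)
    mid = karatsuba(xh + xl, yh + yl) - hi - lo
    return (hi << (2 * n)) + (mid << n) + lo
```

The identity used is (xh·yh)·2²ⁿ + ((xh+xl)(yh+yl) − xh·yh − xl·yl)·2ⁿ + xl·yl, which yields an O(n^1.585) running time.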

The case of modular arithmetic and finite fields is the first where the representation of the elements has to be chosen carefully. Depending on the type of operations one wants to perform, one must choose between a classical representation, the Montgomery representation, a look-up table, a polynomial representation, a normal-basis representation, and so on. Appropriate algorithms must then be chosen accordingly.
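As a sketch of one such choice (a toy model, with illustrative parameter values), Montgomery representation replaces each residue a by a·R mod n for R a power of two, so that modular multiplication needs no division by n, only shifts and masks:

```python
def mont_mul(a_bar, b_bar, n, bits, n_prime):
    """REDC step: given a_bar = a*R mod n and b_bar = b*R mod n with
    R = 2**bits, return a*b*R mod n without ever dividing by n."""
    r_mask = (1 << bits) - 1
    t = a_bar * b_bar
    m = ((t & r_mask) * n_prime) & r_mask  # m ≡ t * (-n)^-1 (mod R)
    u = (t + m * n) >> bits                # exact shift: t + m*n ≡ 0 (mod R)
    return u - n if u >= n else u          # u < 2n, one subtraction suffices

# setup for an odd modulus n < R (values here are illustrative)
n, bits = 101, 16
R = 1 << bits
n_prime = pow(-n, -1, R)                   # n * n_prime ≡ -1 (mod R)

a, b = 55, 77
a_bar, b_bar = (a * R) % n, (b * R) % n    # enter Montgomery form
c_bar = mont_mul(a_bar, b_bar, n, bits, n_prime)
c = mont_mul(c_bar, 1, n, bits, n_prime)   # leave Montgomery form
assert c == (a * b) % n
```

The conversions in and out of Montgomery form cost something, so the representation pays off when many multiplications are chained, as in a modular exponentiation.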

With *p*-adic numbers, we encounter the first examples of inexact representations. In that setting, one has to keep track of the precision throughout a computation. The mechanisms for handling this issue can vary: since precision losses are not too difficult to control, one can work with a fixed global precision, or one can have each element carry its own precision. Additionally, there are several choices for representing elements, in particular when dealing with algebraic extensions of the *p*-adics (ramified or unramified).
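As a sketch of the fixed-precision approach (a toy model, not a full implementation): a *p*-adic integer known to precision N can be represented by its residue modulo p^N, and Newton iteration doubles the known precision at each step, as in this square-root computation by Hensel lifting:

```python
def hensel_sqrt(a, p, prec):
    """Square root of a in the p-adic integers, to precision prec.
    Assumes p odd and a a nonzero square modulo p, so that the
    iteration stays invertible. The element is represented by its
    residue modulo p**prec (fixed-precision model)."""
    # initial root modulo p by brute force (fine for small p)
    x = next(r for r in range(1, p) if (r * r - a) % p == 0)
    k = 1
    while k < prec:
        k = min(2 * k, prec)           # Newton doubles the precision
        m = p ** k
        inv2x = pow(2 * x, -1, m)      # 2x is a unit since p is odd
        x = (x + (a - x * x) * inv2x) % m
    return x                           # x*x ≡ a (mod p**prec)
```

For example, `hensel_sqrt(2, 7, 10)` returns a residue x with x² ≡ 2 (mod 7¹⁰): a square root of 2 that exists 7-adically even though it does not exist in the rationals.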

Last but not least, we are interested in the arithmetic of real numbers
represented in floating point. Again, we
have a notion of approximation. It is therefore necessary to decide
on a *format* that defines a set of representable numbers. Then, when
the result of an arithmetic operation on two representable numbers is not
representable, one must define a way to *round* it to a meaningful
representable number. The purpose of the IEEE 754 standard is to give a
uniform answer to these questions, in order to guarantee the reliability
and portability of floating-point computations. The revised
standard, IEEE 754-2008, is no longer
restricted to the four basic field operations and the square root:
it recommends correct rounding for some mathematical functions,
and also recommends how to extend the default available formats.
This raises efficiency questions,
in particular how to guarantee that the result of an operation has been
correctly rounded in arbitrary precision.
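These notions are easy to observe with Python's `decimal` module (which follows the General Decimal Arithmetic specification, closely related to the decimal formats of IEEE 754-2008): the exact quotient 1/3 is not representable in five significant digits, so the rounding mode of the context decides which representable neighbour is returned. A small illustrative sketch:

```python
from decimal import Decimal, getcontext, ROUND_HALF_EVEN, ROUND_UP

ctx = getcontext()
ctx.prec = 5                      # the "format": 5 significant digits

ctx.rounding = ROUND_HALF_EVEN    # the IEEE default: round to nearest, ties to even
assert Decimal(1) / Decimal(3) == Decimal("0.33333")

ctx.rounding = ROUND_UP           # round away from zero
assert Decimal(1) / Decimal(3) == Decimal("0.33334")
```

The same division thus yields two different representable results depending only on the rounding rule in force, which is exactly why a standard fixing formats and rounding behaviour is needed for portability.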

Within the context of integer arithmetic, we are also interested in turning the problem on its head, notably by studying the operation converse to integer multiplication, that is, integer factoring. The Number Field Sieve, being the most competitive algorithm for this task, naturally provides a context where several parts of our work find a continuation, along all three of the axes above.
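The Number Field Sieve itself is far too involved for a short sketch, but the flavour of the factoring problem can be conveyed with a much simpler method, Pollard's rho (an illustrative toy, not the algorithm discussed above):

```python
from math import gcd

def pollard_rho(n, c=1):
    """Find a nontrivial factor of an odd composite n using Pollard's
    rho method: iterate x -> x*x + c modulo n and detect a cycle
    modulo an unknown prime factor via Floyd's tortoise-and-hare,
    revealing that factor through a gcd."""
    if n % 2 == 0:
        return 2
    x = y = 2
    d = 1
    while d == 1:
        x = (x * x + c) % n            # tortoise: one step
        y = (y * y + c) % n            # hare: two steps
        y = (y * y + c) % n
        d = gcd(abs(x - y), n)
    # d == n means the walk failed for this c; retry with another constant
    return d if d != n else pollard_rho(n, c + 1)
```

Pollard's rho finds a factor near the square root of the smallest prime divisor in expected time, which already dramatically beats trial division; the Number Field Sieve improves on this to sub-exponential complexity in the size of n.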