## Section: Scientific Foundations

### Formal proof and validation

**Specification of arithmetic operators.**
Highly publicized failures such as the Pentium FDIV bug
show that arithmetic correctness is
sometimes difficult to achieve on a computer. Few tools handle rigorous
proofs on floating-point data. However, thanks to the IEEE-754
standard, the arithmetic operations are completely specified, which
makes it possible to build proofs of algorithms and properties. But it
is difficult to present a proof including the long list of special
cases generated by these calculations. The formalization of the
standard that we started in 2000
makes it possible to use a proof assistant such
as Coq to guarantee that each particular case
is considered and handled correctly. Thanks to funding from CNRS and
NASA, the same specification is now also available in PVS.
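To illustrate how completely IEEE-754 pins down behavior, including the special cases a proof must enumerate, here are a few spot checks in Python (whose `float` is an IEEE-754 binary64); this is our own illustration, not part of the formalization itself:

```python
import math

# IEEE-754 fully specifies operations on special values:
assert math.inf + 1.0 == math.inf          # infinity absorbs finite values
assert math.isnan(math.inf - math.inf)     # inf - inf is NaN, not an error
assert float("nan") != float("nan")        # NaN compares unequal even to itself
assert 0.0 == -0.0                         # signed zeros compare equal...
assert math.copysign(1.0, -0.0) == -1.0    # ...but carry distinct signs

# Rounding is specified too: 1 + 2^-53 is exactly halfway between
# 1 and 1 + 2^-52, and round-to-nearest-even resolves the tie to 1.
assert 1.0 + 2.0**-53 == 1.0
```

Each of these identities is a theorem of the formalized standard; a proof assistant guarantees that no such case is silently forgotten.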

However, the IEEE-754 standard deals with neither elementary functions nor multiple precision, and it does not provide language bindings either. For instance, a C implementation may use extended precision (when available) for computations on double-precision types. Moreover, even when an x86 processor is configured to round to 53 bits (the significand size of the double-precision format), the exponent range remains larger than that of the double-precision format, so results can still be rounded twice before reaching a double. As a consequence, the proofs may need to be written for various specifications, depending on the targets considered.
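The double-rounding hazard can be reproduced exactly with rational arithmetic. The sketch below (our own illustration, with a helper `round_nearest_even` that we define ourselves) rounds 1 + 2^-53 + 2^-77 first to a 64-bit significand, as an x87 extended register would, and then to 53 bits; the result differs from a single direct rounding to 53 bits:

```python
from fractions import Fraction

def round_nearest_even(x: Fraction, p: int) -> Fraction:
    """Round positive x in [1, 2] to p significand bits, ties to even."""
    scaled = x * 2**(p - 1)                       # representable values become integers
    n = scaled.numerator // scaled.denominator    # floor
    frac = scaled - n
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1
    return Fraction(n, 2**(p - 1))

x = 1 + Fraction(1, 2**53) + Fraction(1, 2**77)

direct = round_nearest_even(x, 53)                          # one rounding to binary64
double = round_nearest_even(round_nearest_even(x, 64), 53)  # via 64-bit extended

assert direct == 1 + Fraction(1, 2**52)  # x lies above the halfway point: round up
assert double == 1                       # 64-bit step lands on the tie; ties-to-even rounds down
assert direct != double
```

A proof written against the pure binary64 specification would therefore not be valid for code compiled to the x87 unit, which is exactly why several specifications may have to be covered.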

**Proof assistants.**
Systems such as Coq and PVS make it possible to define new objects and
to derive formal consequences of these definitions. Thanks to
higher-order logic, properties can be established in a very general
form. The proof is built interactively by guiding the assistant with
high-level tactics. At the end of each proof, Coq builds an internal
object, called a proof term, which contains all the details of
derivations and guarantees that the theorem is valid. PVS is usually
considered less reliable because it does not build any proof term.
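To make the proof-term idea concrete, here is a minimal sketch in Lean (chosen purely for exposition; the same mechanism underlies Coq): a theorem proved with tactics elaborates to an internal term that the kernel re-checks.

```lean
-- A small lemma proved by supplying an existing library proof.
theorem add_comm' (a b : Nat) : a + b = b + a := by
  exact Nat.add_comm a b

-- The system stores a proof term for the theorem; printing it
-- reveals the low-level derivation the kernel has checked.
#print add_comm'
```

It is this stored term, independent of the tactics that produced it, that guarantees the theorem is valid.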

**Certification of numerical codes.**
Certifying a numerical code by hand is error-prone. The use of a proof
assistant ensures that the code correctly follows its specification.
Parts of the floating-point units of AMD processors were formally proved,
and so was an implementation of the exponential function.
Another example is the certification of arithmetic on Taylor models,
an extension of interval arithmetic.
This certification work,
however, is usually long and tedious, even for experts. Moreover,
it is not well suited to incremental development, as a small change to
the algorithm may invalidate the whole formal proof. A promising
approach is the use of automatic tools that generate the formal proofs of
numerical codes with little help from the user.
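To give the flavor of the object being certified, here is a minimal interval-addition sketch in Python (our own illustration; a certified library would prove the enclosure property in a proof assistant rather than test it). Outward rounding is obtained with `math.nextafter`, so the floating-point result still encloses the exact sum; a Taylor model would additionally carry a polynomial part beside such a remainder interval:

```python
import math

def interval_add(a, b):
    """Add intervals a + b with outward rounding, so the result is a
    guaranteed enclosure of the exact sum of any points they contain."""
    lo = math.nextafter(a[0] + b[0], -math.inf)  # push lower bound down one ulp
    hi = math.nextafter(a[1] + b[1], math.inf)   # push upper bound up one ulp
    return (lo, hi)

x = (0.1, 0.2)   # encloses some real quantity
y = (0.3, 0.4)
lo, hi = interval_add(x, y)
assert lo <= 0.1 + 0.3 and 0.2 + 0.4 <= hi   # enclosure holds despite rounding
```

The certification task is precisely to prove, for every operator, that this enclosure property can never fail.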

Instead of writing code in some programming language and then trying to prove it, we can design our own language, well suited to proofs (e.g., close to a mathematical point of view, and allowing metadata related to the underlying arithmetic, such as error bounds and ranges), and write tools to generate code from it. Targets can be a programming language without extensions, a programming language with a given library (e.g., MPFR if one needs a well-specified multiple-precision arithmetic), or a language internal to some compiler: the proof may give the compiler knowledge that helps it perform particular optimizations. Of course, the same proof can hold for several targets.
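A toy version of this idea, with entirely hypothetical names and for illustration only: a small expression tree whose nodes carry range metadata, from which a plain C target is printed and from which the metadata a compiler could exploit is derived.

```python
from dataclasses import dataclass

# A toy AST for a proof-oriented source language: each variable carries
# metadata (here, a guaranteed range) alongside the computation itself.
@dataclass
class Var:
    name: str
    lo: float
    hi: float

@dataclass
class Add:
    left: object
    right: object

def to_c(e) -> str:
    """Print one possible target: plain C without extensions."""
    if isinstance(e, Var):
        return e.name
    if isinstance(e, Add):
        return f"({to_c(e.left)} + {to_c(e.right)})"
    raise TypeError(e)

def range_of(e):
    """Derive the range metadata that could be handed to a compiler."""
    if isinstance(e, Var):
        return (e.lo, e.hi)
    if isinstance(e, Add):
        (a, b), (c, d) = range_of(e.left), range_of(e.right)
        return (a + c, b + d)
    raise TypeError(e)

expr = Add(Var("x", 0.0, 1.0), Var("y", -1.0, 1.0))
assert to_c(expr) == "(x + y)"
assert range_of(expr) == (-1.0, 2.0)
```

A real tool would emit several back ends (plain C, C with MPFR calls, compiler IR) from the same annotated tree, with one proof covering them all.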

We also worked on formally proving our correctly rounded elementary function library. We have always aimed at a precise proof of our implementations, one that also covers the details of the numerical techniques used. Such concern for proof is mostly absent from IBM's and Sun's libraries; in fact, many misroundings were found in their implementations, mostly due to coding mistakes that could have been avoided with a formal proof in mind. In CRlibm, we have progressively replaced hand-written paper proofs with Gappa-verified proof scripts, partially generated automatically by other scripts, which is a better safeguard against human error.
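The kind of bound that a Gappa script discharges formally can at least be spot-checked numerically. The sketch below is an illustration of ours, not an actual CRlibm proof obligation: using exact rational arithmetic, it checks that one rounded binary64 multiplication commits a relative error of at most 2^-53 on sampled inputs. Gappa proves such statements for all inputs at once, which is what a test like this cannot do.

```python
import random
from fractions import Fraction

random.seed(0)
for _ in range(1000):
    x = random.uniform(1.0, 2.0)          # a binary64 input, no over/underflow in x*x
    computed = x * x                      # one correctly rounded multiplication
    exact = Fraction(x) * Fraction(x)     # exact product via rationals
    rel_err = abs(Fraction(computed) - exact) / exact
    assert rel_err <= Fraction(1, 2**53)  # half-ulp relative bound of round-to-nearest
```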