
Section: Research Program

Floating-point arithmetic

Properties of floating-point arithmetic

Thanks to the IEEE 754-2008 standard for floating-point arithmetic, we now have an accurate definition of floating-point formats and operations. The behavior of a sequence of operations becomes at least partially predictable. We can therefore build algorithms and proofs that rely on these specifications. Some of these algorithms are new; others have been known for years, but only for radix-2 systems. Moreover, their proofs are not exempt from flaws: some algorithms fail, for instance, when subnormal numbers appear. We wish to give rigorous proofs, including the exact domain of validity of the corresponding algorithms, and, where possible, to extend these algorithms and proofs to the new formats specified by the recent floating-point standard (decimal formats, large-precision formats).
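As a minimal illustration (not taken from the report) of the predictability that IEEE 754 provides, the sketch below uses Python doubles, which are IEEE 754 binary64 numbers on conforming platforms. It shows correct rounding, an exact-subtraction identity (Sterbenz's lemma), and the subnormal range mentioned above; the specific constants are chosen for the example.

```python
import sys

# Correct rounding: the result of a + b is the representable binary64
# value nearest to the exact sum, identically on every conforming system.
assert 0.1 + 0.2 == 0.30000000000000004

# Sterbenz's lemma: if b/2 <= a <= 2b, then a - b is computed exactly,
# with no rounding error at all.
a, b = 1.75, 1.0
assert a - b == 0.75  # exact

# Subnormal numbers: halving the smallest positive *normal* double
# yields a subnormal; gradual underflow keeps such values nonzero,
# which is precisely the regime where some published algorithms break.
tiny = sys.float_info.min  # smallest positive normal binary64 value
sub = tiny / 2             # a subnormal number
assert sub > 0.0 and sub + sub == tiny
```

Proofs that ignore this subnormal regime are exactly the kind of flaw the paragraph above refers to: an algorithm may be correct for all normal inputs yet fail once intermediate values underflow gradually.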

Error-free transformations and compensated algorithms

To achieve a prescribed accuracy for the result of a given computation, it is often necessary to increase the precision of the intermediate operations beyond the highest precision available in hardware. On superscalar processors, an efficient solution is to compute, at runtime, the error due to critical floating-point operations, in order to compensate for it later. Such compensated algorithms have been studied for the summation of n > 2 floating-point numbers and for polynomial evaluation. They are based on error-free transformations (EFT): small, efficient algorithms, based on the specifications of the IEEE 754-2008 standard, that compute the sum or product of two floating-point numbers exactly. The result of an EFT is represented exactly as two floating-point numbers, one holding the rounded result and the other holding the error term. We will keep investigating EFTs, and study compensated algorithms that improve the accuracy of other computing kernels (such as matrix-vector and matrix-matrix products) in the context of vector floating-point units and multicore architectures.
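To make the EFT idea concrete, here is a sketch (an illustration, not the team's implementation) of Knuth's classical TwoSum transformation and of a compensated summation built on it, in the style of the Sum2 algorithm of Ogita, Rump, and Oishi. The function names and the test values are the author's own choices for the example.

```python
def two_sum(a, b):
    """Knuth's TwoSum EFT: returns (s, e) where s = fl(a + b) is the
    rounded sum and e is the rounding error, so that a + b = s + e
    holds exactly. Branch-free, 6 floating-point operations."""
    s = a + b
    bp = s - a                      # the part of s that came from b
    e = (a - (s - bp)) + (b - bp)   # exact error of the addition
    return s, e

def compensated_sum(xs):
    """Compensated summation (Sum2-style): accumulate the rounding
    error of every addition via TwoSum and add it back at the end."""
    s = 0.0
    err = 0.0
    for x in xs:
        s, e = two_sum(s, x)
        err += e
    return s + err

# Each term 2**-53 is half an ulp of 1.0, so the naive sum drops every
# one of them; the compensated sum recovers their exact total.
xs = [1.0] + [2.0 ** -53] * 10
assert sum(xs) == 1.0                          # naive: tiny terms lost
assert compensated_sum(xs) == 1.0 + 10 * 2.0 ** -53   # exactly recovered
```

The same two-step pattern (an EFT producing a rounded result plus an error term, then a compensation phase) is what the compensated dot-product and matrix-kernel variants mentioned above generalize.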