Team Arénaire


Section: New Results

Keywords : floating-point arithmetic, multiplication, Newton-Raphson iteration, rounding of algebraic functions.

Tools for Efficient Floating-Point Arithmetic

Participants : N. Brisebarre, J.-M. Muller, C. Lauter.

Multiplication by constants

In 2005, we presented at the ARITH symposium an algorithm for correctly rounded multiplication by ``exact'' constants (that is, by constants that are not necessarily representable in finite binary precision). This extremely simple algorithm works for all possible floating-point inputs, provided that the constant satisfies conditions that we can check using methods we have introduced. We have extended the algorithm to the four rounding modes of the IEEE 754 standard, and to the frequent case where an internal format wider than the target precision is available. The paper presenting these new results is under minor revision for the journal IEEE Transactions on Computers.
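As an illustration, here is a minimal C sketch of the two-operation scheme underlying this approach: the constant C is approximated by an unevaluated double-double pair Ch + Cl, and Cx is computed with one multiplication and one fused multiply-add. The choice C = 2*pi, the splitting, and all names are ours; whether the result is correctly rounded for every input x is precisely the property that the checking method decides for a given constant.

    /* Sketch (ours, not the paper's exact code): multiply x by the
       real constant 2*pi, held as a double-double pair Ch + Cl. */
    #include <stdio.h>
    #include <math.h>

    static const double Ch = 6.283185307179586476925286766559; /* RN(2*pi)      */
    static const double Cl = 2.4492935982947064e-16;           /* RN(2*pi - Ch) */

    static double mul_by_const(double x)
    {
        double u1 = Cl * x;       /* lower part of the product        */
        return fma(Ch, x, u1);    /* Ch*x + u1 with a single rounding */
    }

    int main(void)
    {
        printf("%.17g\n", mul_by_const(1.0)); /* ~ 6.2831853071795862 */
        return 0;
    }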

Newton-Raphson iteration

P. Kornerup (University of Southern Denmark) and J.-M. Muller have improved and published [24] last year's results on the Newton-Raphson iteration. The goal is to find the best possible seed values when computing a^{1/p} with the Newton-Raphson iteration on a given interval. A natural choice of seed value is the one that best approximates the expected result. It turns out that in most cases the best seed value can be quite far from this natural choice.

More generally, when one evaluates a monotone function f(a) on an interval by building the sequence x_n defined by the Newton-Raphson iteration, the natural choice is to take x_0 equal to the arithmetic mean of the values of f at the endpoints: this minimizes the maximum possible distance between x_0 and f(a). Yet if we perform n iterations, what matters is to minimize the maximum possible distance between x_n and f(a). In several examples, the best starting point varies rather significantly with the number of iterations.
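This phenomenon is easy to reproduce numerically. The C sketch below (an illustration of ours, not the setting of [24]) evaluates f(a) = 1/a on [1, 2] with the iteration x_{k+1} = x_k (2 - a x_k) and scans candidate seeds: already for n = 1 the minimax seed is near 1/sqrt(2) ~ 0.707, noticeably below the midpoint 0.75 of the range of f, and it keeps drifting as n grows.

    /* Sketch: for each iteration count n, scan seeds x0 in [0.5, 1]
       (the range of 1/a on [1, 2]) and report the seed minimizing
       the maximum error, next to the error of the midpoint seed.
       Interval and grid sizes are illustrative choices. */
    #include <stdio.h>
    #include <math.h>

    static double newton_recip(double a, double x0, int n)
    {
        double x = x0;
        for (int k = 0; k < n; k++)
            x = x * (2.0 - a * x);   /* Newton step for 1/a */
        return x;
    }

    /* Maximum |x_n - 1/a| over a grid of points a in [1, 2]. */
    static double max_err(double x0, int n)
    {
        double worst = 0.0;
        for (int i = 0; i <= 1000; i++) {
            double a = 1.0 + i / 1000.0;
            double e = fabs(newton_recip(a, x0, n) - 1.0 / a);
            if (e > worst) worst = e;
        }
        return worst;
    }

    int main(void)
    {
        for (int n = 1; n <= 3; n++) {
            double best_x0 = 0.5, best = INFINITY;
            for (int j = 0; j <= 500; j++) {
                double x0 = 0.5 + j / 1000.0;
                double e = max_err(x0, n);
                if (e < best) { best = e; best_x0 = x0; }
            }
            printf("n = %d: best seed %.3f (max err %.2e),"
                   " midpoint seed: %.2e\n",
                   n, best_x0, best, max_err(0.75, n));
        }
        return 0;
    }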

Multiple precision using floating-point arithmetic

Some floating-point applications require an intermediate precision slightly higher than the native precision. It is possible to emulate higher-precision formats to some extent by representing numbers as unevaluated sums of native-precision numbers; we call the resulting formats double-double and triple-double precision. We have shown that combining them speeds up the evaluation of correctly rounded elementary functions in double precision by a factor of 10 [17].
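The basic building block of these formats is an error-free transformation: the rounding error of a floating-point addition is itself a floating-point number and can be recovered exactly. Below is a minimal C sketch using the classical Fast2Sum (Dekker) and TwoSum (Knuth) algorithms; the type and function names are ours.

    /* A double-double value is an unevaluated sum hi + lo with
       |lo| <= ulp(hi)/2. */
    #include <stdio.h>

    typedef struct { double hi, lo; } dd;

    /* Exact sum a + b = hi + lo, assuming |a| >= |b| (Fast2Sum). */
    static dd fast_two_sum(double a, double b)
    {
        dd r;
        r.hi = a + b;
        r.lo = b - (r.hi - a);   /* rounding error of the addition */
        return r;
    }

    /* Exact sum with no assumption on the magnitudes (TwoSum). */
    static dd two_sum(double a, double b)
    {
        dd r;
        r.hi = a + b;
        double t = r.hi - a;
        r.lo = (a - (r.hi - t)) + (b - t);
        return r;
    }

    int main(void)
    {
        /* 2^53 + 1 is not representable in double precision; the
           double-double sum keeps the lost bit in the low word. */
        dd s = two_sum(9007199254740992.0, 1.0);
        printf("hi = %.17g, lo = %.17g\n", s.hi, s.lo);
        return 0;
    }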

In [42] we extend this to other applications, such as Taylor models. Using only native precision in Taylor models and accounting for the round-off errors in the interval remainder may yield results that are too inaccurate for some applications. We present techniques for computing exactly the error of some types of floating-point operations and for bounding precisely the round-off error induced by a double-double or triple-double base format. Here, we focus on adjusting the trade-off between the required accuracy and the cost of emulating higher precision.
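As a concrete instance of such an exact error computation (an illustration of ours; [42] covers more operation types), a fused multiply-add yields the rounding error of a floating-point product in a single instruction:

    /* With an FMA, the error of a multiplication is obtained
       exactly: a*b = p + e, with p = RN(a*b) and e = fma(a, b, -p).
       The operands below are arbitrary. */
    #include <stdio.h>
    #include <math.h>

    int main(void)
    {
        double a = 1.0 + 0x1p-30;
        double b = 1.0 - 0x1p-29;
        double p = a * b;            /* rounded product        */
        double e = fma(a, b, -p);    /* exact residual a*b - p */
        printf("p = %.17g\ne = %.17g\n", p, e);
        return 0;
    }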

