Section: New Results
Floating-Point and Numerical Programs
 Computer Arithmetic and Formal Proofs: Verifying Floating-point Algorithms with the Coq System

S. Boldo and G. Melquiond published a book that provides a comprehensive view of how to formally specify and verify tricky floating-point algorithms with the Coq proof assistant. It describes the Flocq formalization of floating-point arithmetic and some methods to automate proofs. It then presents the specification and verification of various algorithms, from error-free transformations to a numerical scheme for a partial differential equation. The examples cover not only mathematical algorithms but also C programs, as well as issues related to compilation [32].
 Automating the Verification of Floating-Point Programs.

The level of proof success and proof automation depends heavily on how floating-point operations are interpreted in the logic supported by the back-end provers. C. Fumex, C. Marché, and Y. Moy addressed this challenge by combining multiple techniques to prove different parts of the desired properties separately. They use abstract interpretation to compute numerical bounds on expressions, and multiple automated provers, relying on different strategies for representing floating-point computations. One of these strategies is based on the native support for floating-point arithmetic recently added to the SMT-LIB standard. The approach is implemented in the Why3 environment and its front-end SPARK 2014, and is validated experimentally on several examples originating from industrial use of SPARK 2014 [37], [21].
 Round-off Error Analysis of Explicit One-Step Numerical Integration Methods.

S. Boldo, A. Chapoutot, and F. Faissole provided bounds on the round-off errors of explicit one-step numerical integration methods, such as Runge-Kutta methods. They developed a fine-grained analysis that takes advantage of the linear stability of the scheme, a mathematical property that guarantees the scheme is well-behaved [14].
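To give some intuition for why linear stability matters, here is a small, hypothetical Python experiment (an illustration only, not the analysis of [14]): explicit Euler applied to y' = λy with |1 + hλ| < 1, once in simulated binary32 and once in binary64. The contraction factor |1 + hλ| damps the rounding error committed at each step instead of letting it accumulate.

```python
import math
import struct

def f32(x):
    # Round a binary64 value to binary32, to simulate single precision.
    return struct.unpack('f', struct.pack('f', x))[0]

def euler(lam, h, n, y0=1.0, rnd=lambda x: x):
    # Explicit Euler for y' = lam * y; rnd rounds each intermediate result.
    # (Products are computed in binary64 and then rounded: a simplification.)
    y = rnd(y0)
    for _ in range(n):
        y = rnd(y + rnd(h * lam * y))
    return y

lam, h, n = -1.0, 0.01, 1000       # |1 + h*lam| = 0.99 < 1: linearly stable
lo = euler(lam, h, n, rnd=f32)     # simulated single precision
hi = euler(lam, h, n)              # binary64 reference
# The two trajectories stay close: stability damps the per-step round-off.
assert abs(lo - hi) < 1e-6
```

In the unstable regime (|1 + hλ| > 1), the same per-step errors would instead be amplified at every iteration, which is why the paper's bounds exploit this stability property.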
 Robustness of 2Sum and Fast2Sum.

S. Boldo, S. Graillat, and J.-M. Muller worked on the 2Sum and Fast2Sum algorithms, which are important building blocks in numerical computing: they are used, implicitly or explicitly, in many compensated algorithms and for manipulating floating-point expansions. They showed that these algorithms are much more robust than usually believed: the returned result makes sense even when the rounding function is not round-to-nearest, and they are almost immune to overflow [10].
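For concreteness, the textbook round-to-nearest versions of these two error-free transformations can be sketched in Python (binary64 arithmetic; this is the standard formulation, not the generalized setting studied in [10]):

```python
from fractions import Fraction

def two_sum(a, b):
    """Knuth's 2Sum: returns (s, t) with s = fl(a + b) and s + t = a + b exactly."""
    s = a + b
    a2 = s - b            # part of s attributable to a
    b2 = s - a2           # part of s attributable to b
    return s, (a - a2) + (b - b2)

def fast_two_sum(a, b):
    """Dekker's Fast2Sum: same contract as 2Sum, but requires |a| >= |b|."""
    s = a + b
    return s, b - (s - a)

# The rounding error of 0.1 + 0.2 is recovered exactly
# (Fraction converts a float to its exact rational value):
s, t = two_sum(0.1, 0.2)
assert Fraction(s) + Fraction(t) == Fraction(0.1) + Fraction(0.2)
```

Roughly speaking, the robustness result is that the value returned by these algorithms remains meaningful when the exactness identity above is weakened, e.g. under faithful roundings other than round-to-nearest.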
 Formal Verification of a Floating-Point Expansion Renormalization Algorithm.

Many numerical problems require a higher computing precision than the one offered by standard floating-point formats. A common way of extending the precision is to use floating-point expansions. S. Boldo, M. Joldes, J.-M. Muller, and V. Popescu formally proved one of the algorithms used as a basic building block when computing with floating-point expansions: the renormalization algorithm that “compresses” an expansion [15].
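The general idea of renormalization can be conveyed by a simplified, hypothetical accumulation pass built on Knuth's 2Sum (not the verified algorithm of [15]): it rewrites an expansion into an equivalent one with the same exact sum, dropping zero components along the way.

```python
from fractions import Fraction

def two_sum(a, b):
    # Knuth's error-free transformation: s + t == a + b exactly.
    s = a + b
    a2 = s - b
    b2 = s - a2
    return s, (a - a2) + (b - b2)

def renorm_pass(e):
    """One accumulation pass over the expansion e.

    Returns an expansion with the same exact sum: the last component is
    the naively accumulated floating-point sum, the earlier ones are the
    rounding errors committed along the way (zeros are dropped).
    """
    s, low = 0.0, []
    for x in e:
        s, err = two_sum(s, x)
        if err != 0.0:
            low.append(err)
    return low + [s]

e = [1.0, 2**-52, 2**-53, -1.0]
assert sum(map(Fraction, renorm_pass(e))) == sum(map(Fraction, e))
```

Unlike this naive pass, the renormalization algorithm proved in [15] additionally guarantees structural properties of the output, such as nonoverlapping components, which is what makes it usable as a building block for expansion arithmetic.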