
Section: New Results

Certified computing and computer algebra

Polynomial system solving

Polynomial system solving is a core topic of computer algebra. While the worst-case complexity of this problem is known to be hopelessly large, the practical complexity for large families of systems is much more reasonable, and progress has been made on precise complexity estimates in this area.
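As a toy illustration of the practical side of this problem (using SymPy, not software of the team), a lexicographic Gröbner basis "triangularizes" a small system so that its solutions can be read off by back-substitution:

```python
# Illustrative sketch only: a lexicographic Groebner basis of a small
# system (a circle intersected with a line), computed with SymPy.
from sympy import symbols, groebner

x, y = symbols("x y")
system = [x**2 + y**2 - 1, x - y]

# With lex order x > y, the last basis element involves y alone,
# so the solutions follow by back-substitution.
G = groebner(system, x, y, order="lex")

# Sanity check: each input polynomial lies in the ideal of the basis.
in_ideal = all(G.contains(f) for f in system)
```

The worst-case cost of such a computation is doubly exponential in general, but on many structured families, as the text notes, the practical cost is far lower.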

A fundamental problem in computer science is to find all the common zeros of $m$ quadratic polynomials in $n$ unknowns over $\mathbb{F}_2$. The cryptanalysis of several modern ciphers reduces to this problem. Up to now, the best complexity bound was reached by an exhaustive search in $4\log_2 n \cdot 2^n$ operations. In [1], M. Bardet (U. Rouen), J.-C. Faugère (PolSys team), B. Salvy, and P.-J. Spaenlehauer (CARAMEL team) gave an algorithm that reduces the problem to a combination of exhaustive search and sparse linear algebra. This algorithm has several variants depending on the method used for the linear algebra step. Under precise algebraic assumptions, they showed that the deterministic variant of their algorithm has complexity bounded by $O(2^{0.841n})$ when $m=n$, while a probabilistic variant of Las Vegas type has expected complexity $O(2^{0.792n})$. Experiments on random systems showed that the algebraic assumptions are satisfied with probability very close to 1. They also gave a rough estimate of the threshold between their method and exhaustive search, which is as low as $n \approx 200$ and thus very relevant for cryptographic applications.
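The exhaustive-search baseline that this algorithm improves on can be sketched as follows; the encoding of quadratic polynomials over $\mathbb{F}_2$ below (lists of quadratic terms, linear terms, and a constant) is a hypothetical toy representation, with the $2^n$ scan made explicit:

```python
# Toy baseline: find all common zeros of quadratic polynomials over F_2
# by scanning all 2^n assignments.  The algorithm of [1] trades part of
# this search for sparse linear algebra to beat the 2^n bound.
from itertools import product

def eval_poly(quad, lin, const, x):
    """Evaluate sum x_i*x_j + sum x_k + const over F_2 (XOR/AND)."""
    v = const
    for i, j in quad:
        v ^= x[i] & x[j]
    for k in lin:
        v ^= x[k]
    return v

def common_zeros(polys, n):
    return [x for x in product((0, 1), repeat=n)
            if all(eval_poly(q, l, c, x) == 0 for q, l, c in polys)]

# Example system in F_2^3:  x0*x1 + x2 = 0  and  x0 + x1 + 1 = 0.
polys = [([(0, 1)], [2], 0), ([], [0, 1], 1)]
sols = common_zeros(polys, 3)
```

For $n$ around a few dozen this scan is already the bottleneck, which is why the $O(2^{0.792n})$ Las Vegas variant matters for cryptographic sizes.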

Linear differential equations

In [2], B. Salvy proved with A. Bostan (SpecFun team) and K. Raschel (U. Tours) that the sequence $(e_n^{\mathfrak{S}})_{n \ge 0}$ of numbers of excursions in the quarter plane corresponding to a nonsingular step set $\mathfrak{S} \subseteq \{0, \pm 1\}^2$ with infinite group does not satisfy any nontrivial linear recurrence with polynomial coefficients. Accordingly, in those cases, the trivariate generating function of the numbers of walks with given length and prescribed ending point is not D-finite. Moreover, they determined the asymptotics of $e_n^{\mathfrak{S}}$. This completes the classification of these walks.
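The numbers $e_n^{\mathfrak{S}}$ themselves are easy to compute by dynamic programming. The sketch below uses the simple step set $\{N, S, E, W\}$, which has a *finite* group and a D-finite generating function, so it only illustrates the counting, not the non-D-finite cases of the theorem:

```python
# Count excursions e_n: quarter-plane walks of length n that start and
# end at the origin, using a fixed small-step set S.
def excursions(steps, length):
    # counts[(i, j)] = number of walks of the current length ending at (i, j)
    counts = {(0, 0): 1}
    for _ in range(length):
        nxt = {}
        for (i, j), c in counts.items():
            for di, dj in steps:
                x, y = i + di, j + dj
                if x >= 0 and y >= 0:        # stay inside the quarter plane
                    nxt[(x, y)] = nxt.get((x, y), 0) + c
        counts = nxt
    return counts.get((0, 0), 0)

S = [(1, 0), (-1, 0), (0, 1), (0, -1)]       # E, W, N, S
vals = [excursions(S, n) for n in range(5)]  # 1, 0, 2, 0, 10, ...
```

The theorem says that for the nonsingular step sets with infinite group, no linear recurrence with polynomial coefficients generates such a sequence, however it is computed.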

With F. Johansson and M. Kauers (RISC, Linz, Austria), M. Mezzarobba presented in [23] a new algorithm for computing hyperexponential solutions of ordinary linear differential equations with polynomial coefficients. The algorithm relies on interpreting formal series solutions at the singular points as analytic functions and evaluating them numerically at some common ordinary point. The numerical data is used to determine a small number of combinations of the formal series that may give rise to hyperexponential solutions.
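One ingredient of this approach, evaluating a formal power series solution numerically from the recurrence satisfied by its coefficients, can be illustrated at an ordinary point (a loose sketch only; the actual algorithm works with series at the singular points). For $y'' = y$ with $y(0) = 1$, $y'(0) = 0$, the equation induces the recurrence $c_{k+2} = c_k / ((k+1)(k+2))$, whose series is that of $\cosh$:

```python
# Loose illustration: generate the coefficients of a series solution of
# y'' = y from the induced recurrence c_{k+2} = c_k / ((k+1)(k+2)),
# then evaluate the truncated series numerically at a point.
import math

def series_solution_value(x, terms=40):
    coeffs = [1.0, 0.0]                 # initial conditions y(0), y'(0)
    for k in range(terms - 2):
        coeffs.append(coeffs[k] / ((k + 1) * (k + 2)))
    return sum(c * x**k for k, c in enumerate(coeffs))

val = series_solution_value(0.5)        # should match cosh(0.5)
```

In the algorithm of [23], such numerical values, taken at a common ordinary point for the series attached to each singular point, are compared to single out the few combinations that can be hyperexponential.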

Exact linear algebra

Transforming a matrix over a field to echelon form, or decomposing the matrix as a product of simpler matrices that reveal the rank profile, is a fundamental building block of computational exact linear algebra. For such tasks, the best previously available algorithms were either rank sensitive (i.e., of complexity expressed in terms of the exponent of matrix multiplication and the rank of the input matrix) or in place (i.e., using essentially no more memory than what is needed for matrix multiplication). In [6], C.-P. Jeannerod, C. Pernet, and A. Storjohann (U. Waterloo, Canada) proposed algorithms that are both rank sensitive and in place. These algorithms required the introduction of a matrix factorization of the form $A = CUP$, with $C$ a column echelon form revealing the row rank profile of the input matrix $A$, $U$ a unit upper triangular matrix, and $P$ a permutation matrix.
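A sketch of the underlying elimination, over exact rationals rather than the finite fields and in-place setting targeted by the paper: column operations reduce $A$ to a column echelon form $C$, i.e., $AT = C$ for an invertible $T$ accumulating the operations; in the CUP decomposition, $T^{-1}$ is further split as $UP$. The echelon form reveals the row rank profile:

```python
# Naive column-echelon reduction over exact rationals: returns C, T with
# A*T = C, the rank, and the row rank profile (indices of the first r
# linearly independent rows).  Not rank sensitive or in place -- only a
# sketch of the factorization's shape.
from fractions import Fraction

def column_echelon(A):
    m, n = len(A), len(A[0])
    B = [[Fraction(v) for v in row] for row in A]
    T = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    r, profile = 0, []
    for i in range(m):
        piv = next((j for j in range(r, n) if B[i][j] != 0), None)
        if piv is None:
            continue                       # row i is dependent on earlier rows
        profile.append(i)
        for M in (B, T):                   # swap columns r and piv
            for row in M:
                row[r], row[piv] = row[piv], row[r]
        for j in range(r + 1, n):          # zero out row i right of the pivot
            f = B[i][j] / B[i][r]
            for M in (B, T):
                for row in M:
                    row[j] -= f * row[r]
        r += 1
    return B, T, r, profile

A = [[0, 0, 1], [1, 2, 3], [2, 4, 6], [0, 1, 0]]
C, T, rank, profile = column_echelon(A)
```

Here row 2 equals twice row 1, so the row rank profile is $\{0, 1, 3\}$.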

Certified multiple-precision evaluation of the Airy Ai function

The series expansion at the origin of the Airy function $\mathrm{Ai}(x)$ is alternating and hence problematic to evaluate for $x > 0$ due to cancellation. S. Chevillard (APICS team) and M. Mezzarobba showed in [20] how an arbitrary and certified accuracy can be obtained in that case. Based on a method recently proposed by Gawronski, Müller, and Reinhard, they exhibited two functions $F$ and $G$, both with nonnegative Taylor expansions at the origin, such that $\mathrm{Ai}(x) = G(x)/F(x)$. The sums are then well-conditioned, but the Taylor coefficients of $G$ turn out to obey an ill-conditioned three-term recurrence, which they handled using the classical Miller algorithm. Finally, they bounded all errors and proposed an implementation which, by allowing an arbitrary and certified accuracy, can be used for example to provide correct rounding in arbitrary precision.
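The cancellation phenomenon, and the quotient-of-well-conditioned-series remedy, can be demonstrated on the simpler example $e^{-x} = 1/e^x$ (an analogy only; the functions $F$ and $G$ of the paper are of course specific to $\mathrm{Ai}$):

```python
# Summing the alternating series of exp(-x) in double precision loses
# essentially all accuracy for large x, because intermediate terms are
# ~10^25 times larger than the result.  Rewriting the target as a
# quotient of series with nonnegative coefficients restores accuracy.
import math

def naive_exp_neg(x, terms=200):
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= -x / (k + 1)        # next term (-x)^(k+1) / (k+1)!
    return s

def stable_exp_neg(x, terms=200):
    s, t = 0.0, 1.0
    for k in range(terms):
        s += t
        t *= x / (k + 1)         # all terms nonnegative: no cancellation
    return 1.0 / s

x = 30.0
naive, stable, exact = naive_exp_neg(x), stable_exp_neg(x), math.exp(-x)
```

The naive sum is wrong by many orders of magnitude, while the quotient form is accurate to nearly full double precision; the paper's contribution is to make this kind of rewriting *certified* at arbitrary precision for $\mathrm{Ai}$.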

Parallel product of interval matrices

The problem considered here is the multiplication of two matrices with interval coefficients. Parallel implementations by N. Revol and Ph. Théveny [10] compute results that satisfy the inclusion property, which is the fundamental property of interval arithmetic, and offer good performance: the product of two interval matrices is at most 15 times slower than the product of two floating-point matrices.
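A minimal sketch of an interval matrix product, using exact rational endpoints so that the inclusion property holds trivially; a floating-point implementation such as the one discussed must instead round every lower endpoint downward and every upper endpoint upward to preserve inclusion:

```python
# Interval matrix product with intervals as (lo, hi) pairs of exact
# rationals.  With exact endpoints there is no rounding, so the result
# interval provably contains every product of point matrices A0*B0 with
# A0 in A and B0 in B (the inclusion property).
from fractions import Fraction

def imul(a, b):
    p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
    return (min(p), max(p))

def iadd(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    C = []
    for i in range(n):
        row = []
        for j in range(p):
            acc = (Fraction(0), Fraction(0))
            for k in range(m):
                acc = iadd(acc, imul(A[i][k], B[k][j]))
            row.append(acc)
        C.append(row)
    return C

F = Fraction
A = [[(F(1), F(2)), (F(-1), F(1))]]          # 1x2 interval matrix
B = [[(F(0), F(1))], [(F(2), F(3))]]         # 2x1 interval matrix
C = interval_matmul(A, B)                    # [[(-3, 5)]]
```

The performance challenge addressed in [10] is doing this with directed floating-point rounding while staying within a small factor of an optimized floating-point matrix product.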

Numerical reproducibility

Numerical reproducibility is the problem of getting the same result when a scientific computation is run several times, either on the same machine or on different machines. In [43], the focus is on interval computations using floating-point arithmetic: N. Revol identifies implementation issues that may invalidate the inclusion property and presents several ways to preserve it. This work has also been presented at several conferences [30], [29], [31].
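A one-line illustration of why reproducibility is nontrivial: floating-point addition is not associative, so a different summation order (for instance, a different parallel reduction) can change the result:

```python
# Floating-point addition is not associative: the two groupings below
# give different results in IEEE 754 double precision.
a, b, c = 1e16, -1e16, 1.0
left = (a + b) + c       # cancellation first: 0.0 + 1.0 == 1.0
right = a + (b + c)      # 1.0 is absorbed into -1e16, then cancels: 0.0
```

For interval computations, such order-dependent rounding is doubly delicate, since a change of result may also silently break the inclusion property that [43] shows how to preserve.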