Team Arénaire

Section: New Results

Linear Algebra and Lattice Basis Reduction

Participants : Guillaume Hanrot, Claude-Pierre Jeannerod, Nicolas Louvet, Ivan Morel, Andrew Novocin, Xavier Pujol, Gilles Villard.

Elimination-free Algorithms for Cauchy- and Vandermonde-like matrices

In [47] Claude-Pierre Jeannerod, Christophe Mouilleron, and Gilles Villard have studied asymptotically fast algorithms for generating the matrix inverses of families of structured matrices, such as those of Cauchy and Vandermonde type. To control the growth of the intermediate generators, such algorithms typically rely on a compression step based on fast Gaussian elimination. In practice, this step complicates the analysis and requires some care at the implementation level. The main contribution of this study is to show that this step is in fact unnecessary for Cauchy- and Vandermonde-like matrices. This allowed them to propose, for such matrices, new asymptotically fast inversion algorithms that are simpler to analyze and implement.
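To illustrate the kind of structure these generator-based algorithms exploit (this is not the algorithm of [47], just a sketch of the underlying idea), a Cauchy matrix satisfies a rank-1 displacement equation, so it can be represented by short generators rather than all $ n^2$ entries:

```python
import numpy as np

# A Cauchy matrix C[i, j] = 1 / (x[i] - y[j]) has displacement rank 1:
# D_x @ C - C @ D_y is the all-ones matrix (rank 1), so C is fully
# described by short "generators" instead of its n^2 entries.
n = 5
x = np.arange(1, n + 1, dtype=float)       # distinct nodes
y = -np.arange(1, n + 1, dtype=float)      # distinct nodes, disjoint from x
C = 1.0 / (x[:, None] - y[None, :])

Dx, Dy = np.diag(x), np.diag(y)
disp = Dx @ C - C @ Dy                     # displacement operator applied to C
print(np.linalg.matrix_rank(disp))         # -> 1
```

Each entry of `disp` is $ (x_i-y_j)/(x_i-y_j) = 1$, which is why the displacement rank is 1 independently of $ n$.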

LLL-Reduction and Numerical Analysis

In 1982, Arjen Lenstra, Hendrik Lenstra Jr. and László Lovász introduced an efficiently computable notion of reduced basis of a Euclidean lattice that is now commonly referred to as LLL-reduction. The precise definition involves the R-factor of the QR factorisation of the basis matrix. A natural means of speeding up the LLL reduction algorithm is to use a (floating-point) approximation to the R-factor. Xiao-Wen Chang (McGill University), Damien Stehlé and Gilles Villard [44] have investigated the accuracy of the R-factor of the QR factorisation of an LLL-reduced basis. This has already proved very useful to devise LLL-type algorithms relying on floating-point approximations, as this is a key ingredient for the results of the following two subsections.
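As a minimal sketch of how LLL-reducedness can be stated on the R-factor (the parameter names `delta` and `eta` are the usual reduction parameters, not specific to [44]):

```python
import numpy as np

# LLL-reducedness read off the R-factor of a QR factorisation of the
# basis matrix B (columns = basis vectors, assumed full rank):
#   size-reduction:   |R[i, j]| <= eta * R[i, i]          for i < j
#   Lovász condition: delta * R[i, i]^2 <= R[i, i+1]^2 + R[i+1, i+1]^2
def is_lll_reduced(B, delta=0.99, eta=0.51):
    _, R = np.linalg.qr(B.astype(float))
    R = R * np.sign(np.diag(R))[:, None]       # normalise to positive diagonal
    n = R.shape[1]
    size_red = all(abs(R[i, j]) <= eta * R[i, i]
                   for j in range(n) for i in range(j))
    lovasz = all(delta * R[i, i] ** 2 <= R[i, i + 1] ** 2 + R[i + 1, i + 1] ** 2
                 for i in range(n - 1))
    return size_red and lovasz

print(is_lll_reduced(np.array([[1, 0], [0, 1]])))        # -> True
print(is_lll_reduced(np.array([[1, 1000], [0, 1]])))     # -> False
```

Since this test works entirely on a floating-point R-factor, its reliability depends on exactly the accuracy question studied in [44].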

In parallel, Ivan Morel, Damien Stehlé and Gilles Villard wrote a survey on the state-of-the-art results about the use of numerical analysis within the LLL algorithm [21].

Floating-Point LLL-Reduction

Ivan Morel, Damien Stehlé and Gilles Villard [32] introduced a new LLL-type algorithm, H-LLL, that relies on Householder transformations to approximate the underlying Gram-Schmidt orthogonalizations. The latter computations are performed with floating-point arithmetic. They proved that a precision essentially equal to the dimension suffices to ensure that the output basis is reduced. H-LLL resembles the L2 algorithm of Nguyen and Stehlé that relies on a floating-point Cholesky algorithm. However, replacing Cholesky's algorithm by Householder's is not benign, as their numerical behaviors differ significantly. Broadly speaking, the new correctness proof is more involved, whereas the new complexity analysis is more direct. Thanks to the new orthogonalization strategy, H-LLL is the first LLL-type algorithm that admits a natural vectorial description, which leads to a complexity upper bound that is proportional to the progress performed on the basis (for fixed dimensions).
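A dense floating-point Householder triangularisation, the orthogonalisation primitive that H-LLL substitutes for a Cholesky factorisation of the Gram matrix, can be sketched as follows (this is the textbook transformation, not the H-LLL algorithm itself):

```python
import numpy as np

def householder_r(B):
    # Compute the R-factor of B by successive Householder reflections,
    # each one zeroing the below-diagonal part of a column.
    R = B.astype(float).copy()
    m, n = R.shape
    for k in range(n):
        x = R[k:, k].copy()
        v = x.copy()
        v[0] += np.copysign(np.linalg.norm(x), x[0])   # stable sign choice
        nv = np.linalg.norm(v)
        if nv > 0:
            v /= nv
            R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])  # apply I - 2 v v^T
    return np.triu(R[:n, :n])

B = np.array([[2.0, 1.0], [1.0, 2.0]])
R = householder_r(B)
# Reflections preserve |det|: the product of |diagonal| entries equals |det B| = 3.
print(round(abs(R[0, 0] * R[1, 1]), 6))                # -> 3.0
```

Applying orthogonal reflections to the basis matrix itself, rather than factoring the Gram matrix, is precisely what gives Householder's method the different numerical behavior mentioned above.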

Improving the LLL-Reducedness of an LLL-Reduced Basis

The LLL algorithm allows one to reduce a basis to an LLL-reduced basis in polynomial time. The quality of the obtained reduction is directly related to a parameter $ \delta$ (the 3/4 factor in the original algorithm): the higher $ \delta$ , the better the reduction. It was suggested by LaMacchia that one could gradually reduce the basis by first using a small value of $ \delta$ and then increasing the value of $ \delta$ to reach the maximum quality. Ivan Morel, Damien Stehlé and Gilles Villard designed an algorithm for the second phase, i.e., for further reducing a basis that is already reduced. They take advantage of the knowledge of the lattice obtained from the first reduction to perform the second one. The results corresponding to this work are currently being written up.
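The role of $ \delta$ can be seen in a textbook LLL implementation over exact rationals (a pedagogical sketch, unrelated to the algorithm being written up above): $ \delta$ appears only in the Lovász swap test, so raising it forces more swaps and a better-reduced output.

```python
from fractions import Fraction

def lll(basis, delta):
    # Textbook LLL over exact rationals; delta in (1/4, 1] is the Lovász
    # parameter controlling the reduction quality (original choice: 3/4).
    b = [[Fraction(x) for x in v] for v in basis]
    n = len(b)
    norm = lambda v: sum(x * x for x in v)

    def gso():
        # Gram-Schmidt orthogonalisation of the current basis b.
        mu = [[Fraction(0)] * n for _ in range(n)]
        bstar = []
        for i in range(n):
            v = b[i][:]
            for j in range(i):
                mu[i][j] = sum(x * y for x, y in zip(b[i], bstar[j])) / norm(bstar[j])
                v = [x - mu[i][j] * y for x, y in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu

    k = 1
    while k < n:
        _, mu = gso()
        for j in range(k - 1, -1, -1):            # size-reduce b[k]
            q = round(mu[k][j])
            b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gso()
        if norm(bstar[k]) >= (delta - mu[k][k - 1] ** 2) * norm(bstar[k - 1]):
            k += 1                                # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]       # condition fails: swap back
            k = max(k - 1, 1)
    return b

reduced = lll([[201, 37], [1648, 297]], Fraction(3, 4))
print([[int(x) for x in v] for v in reduced])     # -> [[1, 32], [40, 1]]
```

Exact rational arithmetic sidesteps the floating-point issues discussed in the previous subsections, at a large cost in speed; that trade-off is exactly what the floating-point LLL variants address.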

Tweaking LLL for Particular Inputs

Mark van Hoeij and Andrew Novocin [51] introduced a new lattice algorithm specifically designed for some classical applications of lattice reduction. The applications are for lattice bases with a generalized knapsack-type structure, where the target vectors are known to be short. For such applications, the algorithm improves on traditional lattice reduction by replacing some dependence on the bit-length of the input vectors with a dependence on the bound for the output vectors. If the bit-length of the target vectors is unrelated to the bit-length of the input, then the algorithm is only linear in the bit-length of the input entries, which is an improvement over the quadratic complexity of floating-point LLL algorithms. To illustrate the usefulness of this algorithm, it is shown that a direct application to factoring univariate polynomials over the integers leads to the first complexity bound improvement since 1984. A second application is algebraic number reconstruction, where a new complexity bound is obtained as well.
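A toy illustration of the knapsack-type shape (a subset-sum instance, not the actual lattices of [51]): an identity block carries the selection coefficients, one column carries the large entries, and the target vector is short regardless of how large those entries are.

```python
import numpy as np

# Knapsack-type basis: an identity block plus one column of large entries
# a_i (here a toy subset-sum instance with target t). A 0/1 combination
# hitting the target yields a short vector whose length is independent of
# the bit-length of the a_i -- the regime where replacing the dependence
# on input bit-length by the output bound pays off.
a = [86, 113, 78, 55]                      # "large" input entries (toy sizes)
t = 191                                    # 113 + 78
n = len(a)
B = np.zeros((n + 1, n + 1), dtype=int)
B[:n, :n] = np.eye(n, dtype=int)
B[:n, n] = a
B[n, n] = -t
# Selecting rows 1 and 2 plus the last row cancels the large coordinate,
# since 113 + 78 - t = 0, leaving the short target vector.
e = np.array([0, 1, 1, 0, 1])
print((e @ B).tolist())                    # -> [0, 1, 1, 0, 0]
```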

Solving the Shortest Lattice Vector Problem

Xavier Pujol and Damien Stehlé [50] improved a Monte Carlo algorithm recently proposed by Daniele Micciancio and Panagiotis Voulgaris (to be published at SODA 2010), which finds a shortest non-zero vector of a given lattice in time $ 2^{3.199n}$ , where $ n$ is the lattice dimension. Pujol and Stehlé modified the algorithm so that they could use the birthday paradox in the last stage of the algorithm. They achieve a time complexity bound of $ 2^{2.465n}$ .
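The birthday-paradox effect they exploit can be checked empirically (a generic illustration of the probabilistic phenomenon, not of the sieve algorithm itself): among uniform samples from a set of size $ N$ , a collision appears after about $ \sqrt{N}$ draws rather than $ N$.

```python
import random

# Birthday paradox: drawing uniformly from a set of size N, a collision
# occurs with constant probability after roughly sqrt(N) draws. Collision
# search in the last stage of a sieve exploits the same square-root saving.
random.seed(1)
N = 10_000

def draws_until_collision():
    seen, m = set(), 0
    while True:
        x = random.randrange(N)
        m += 1
        if x in seen:
            return m
        seen.add(x)

avg = sum(draws_until_collision() for _ in range(200)) / 200
# The average is close to sqrt(pi * N / 2) ~ 125, far below N = 10000.
print(N ** 0.5 / 2 < avg < 3 * N ** 0.5)   # -> True
```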
