Team CACAO


Section: Scientific Foundations

Linear Algebra and Lattices

By “linear algebra and lattices” we denote two classes of problems of interest: computing vectors in the kernel of a large sparse matrix defined over a finite field, and studying algorithms for handling lattices given by a vector basis.

Huge linear systems are frequently encountered in the last steps of “index-calculus” algorithms for factoring or discrete logarithm computations. These systems correspond to a particular presentation of the underlying group by generators and relations; they are thus always defined over a base ring which is $\mathbb{Z}$ modulo the exponent of the group, typically $\mathbb{Z}/2\mathbb{Z}$ in the case of factorization, and $\mathbb{Z}/(q^n-1)\mathbb{Z}$ when solving a discrete logarithm problem over $\mathbb{F}_{q^n}^*$. These systems are often extremely sparse, so that specialized algorithms (Lanczos, Wiedemann) relying only on the evaluation of matrix-vector products have essentially quadratic complexity, instead of the cubic complexity of classical Gaussian elimination.
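To make the cost claim concrete, the following is a minimal Python sketch (not the team's code) of the sparse matrix-vector product over $\mathbb{Z}/N\mathbb{Z}$ on which Lanczos- and Wiedemann-type algorithms rely: with each row stored as its list of nonzero entries, one product costs time proportional to the number of nonzero entries, and the roughly n products performed by these algorithms give the announced quadratic total cost. The modulus and matrix below are hypothetical toy values.

    def sparse_matvec(rows, v, N):
        # The matrix A is stored row by row as lists of (column index, value)
        # pairs; the product A*v mod N only touches the nonzero entries.
        return [sum(c * v[j] for j, c in row) % N for row in rows]

    # Toy example: a 3x3 sparse matrix over Z/101Z (101 is a hypothetical small
    # modulus; actual computations work modulo 2 or modulo q^n - 1).
    N = 101
    rows = [
        [(0, 1), (2, 5)],    # row 0: A[0][0] = 1, A[0][2] = 5
        [(1, 3)],            # row 1: A[1][1] = 3
        [(0, 7), (2, 2)],    # row 2: A[2][0] = 7, A[2][2] = 2
    ]

    # The sequence b, A b, A^2 b, ... used by Wiedemann-type algorithms:
    # each new term costs one sparse matrix-vector product.
    b = [1, 2, 3]
    sequence = [b]
    for _ in range(5):
        sequence.append(sparse_matvec(rows, sequence[-1], N))
    print(sequence)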

The matrices handled in record computations are so large that they do not fit in the main memory of a single machine, even with a representation adapted to their sparse nature. Some parallelism is therefore required, which raises difficulties quite different from those encountered in the classical linear algebra problems of numerical analysis. In particular, dealing with matrices defined over finite fields precludes a direct adaptation of numerical methods based on the notion of convergence and fixed-point theorems.

The second main topic is algorithmic lattice theory. Lattices are key tools in numerous problems in computer algebra, algorithmic number theory and cryptology. The typical questions one wants to solve are finding the shortest nonzero vector in a lattice and finding the lattice vector closest to a given vector. A more general concern is to find a better lattice basis than the one provided by the user; by “better” we mean that it consists of short, almost orthogonal vectors. This is a difficult problem in general, since finding the shortest nonzero vector is already NP-hard under probabilistic reductions. In 1982, Lenstra, Lenstra, and Lovász [26] defined the notion of an LLL-reduced basis and described an algorithm to compute such a basis in polynomial time. Although not always sufficient, LLL reduction is often enough for the application at hand. Stronger notions of reduction exist, such as Hermite-Korkine-Zolotarev (HKZ) reduction [23], which requires exponential or super-exponential time but solves the shortest vector problem exactly. Schnorr [28] introduced a complete hierarchy of reductions ranging from LLL to HKZ both in quality and in complexity, the so-called k-BKZ reductions.
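To illustrate the kind of reduction involved, here is a minimal Python sketch of the textbook LLL algorithm with exact rational arithmetic and the usual parameter delta = 3/4. It is meant only to convey the structure of the algorithm (size reduction plus the Lovász swap condition); it assumes the input rows are linearly independent, and it recomputes the Gram-Schmidt data from scratch after every basis change, unlike the floating-point variants used in practice.

    from fractions import Fraction

    def dot(u, v):
        # Exact inner product over the rationals.
        return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

    def gram_schmidt(B):
        # Exact Gram-Schmidt orthogonalization; returns B* and the mu coefficients.
        n = len(B)
        Bstar, mu = [], [[Fraction(0)] * n for _ in range(n)]
        for i in range(n):
            v = [Fraction(x) for x in B[i]]
            for j in range(i):
                mu[i][j] = dot(B[i], Bstar[j]) / dot(Bstar[j], Bstar[j])
                v = [vi - mu[i][j] * bj for vi, bj in zip(v, Bstar[j])]
            Bstar.append(v)
        return Bstar, mu

    def lll(B, delta=Fraction(3, 4)):
        # Textbook LLL reduction of a basis given as a list of integer rows.
        B = [list(row) for row in B]
        n = len(B)
        Bstar, mu = gram_schmidt(B)
        k = 1
        while k < n:
            for j in range(k - 1, -1, -1):          # size-reduce b_k against b_j
                q = round(mu[k][j])
                if q:
                    B[k] = [a - q * b for a, b in zip(B[k], B[j])]
                    Bstar, mu = gram_schmidt(B)
            lhs = dot(Bstar[k], Bstar[k])
            rhs = (delta - mu[k][k - 1] ** 2) * dot(Bstar[k - 1], Bstar[k - 1])
            if lhs >= rhs:                           # Lovász condition holds
                k += 1
            else:                                    # swap and step back
                B[k - 1], B[k] = B[k], B[k - 1]
                Bstar, mu = gram_schmidt(B)
                k = max(k - 1, 1)
        return B

    # Example: reduce a small 3-dimensional basis.
    print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))

On such a small example the output is a basis of short, nearly orthogonal vectors; production implementations replace the exact rationals by carefully controlled floating-point Gram-Schmidt computations to reach the dimensions arising in applications.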

