## Section: New Results

### Algebraic codes

Participants: Daniel Augot, Morgan Barbier.

#### List decoding of Reed–Solomon codes

This is a new activity of the TANC project-team, whose aim is to accelerate decoding algorithms for Reed–Solomon codes (based on the Guruswami–Sudan algorithm) and for Algebraic Geometry codes. With Alexander Zeh, Daniel has found a relation between the so-called key equations, which are the standard tool for decoding algebraic codes, and the new interpolation-based algorithms [13]. Now that the connection is established, the next step is to use the efficient algorithms developed for key equations in the context of the Guruswami–Sudan algorithm.

#### List decoding of Algebraic Geometry codes

This is also a new activity for the TANC project-team, started with the arrival of Guillaume Quintin, a new PhD student supervised by Daniel Augot and Grégoire Lecerf (from the University of Versailles Saint-Quentin). These AG codes are a generalization of Reed–Solomon codes. Quintin wrote a first implementation of the factorization step in Magma, in order to understand the algorithms and the required building blocks. He is now starting to rewrite the algorithm within the Mathemagix framework.

#### List decoding of binary codes

Another new topic, which begins with the arrival of Morgan Barbier, is the study of list decoding algorithms for codes defined over small alphabets. This was a challenging open problem until the publication of Wu [64], which achieves a large decoding radius for BCH codes, which are subfield subcodes of Reed–Solomon codes. This opens a new field of applications for these algorithms, and we have in mind to apply Wu's algorithm to steganography, using the ideas of Fontaine and Galand [37]. They used Reed–Solomon codes, and it seems very natural to use the same ideas with BCH codes. Implementing Wu's algorithm and applying it to steganography is the plan of Barbier's thesis.

If the number of errors in a received word is less than the code's
error correction capacity, the decoding algorithm is guaranteed to
return a single codeword. This property gave rise to the term
*unique decoding*, which has been (and still mostly is) the standard
decoding method. However, in the last decade much attention has
been given to so-called *list decoding* methods which can
correct far more errors, at the expense of losing the uniqueness
of the decoded word.

While the concept of list decoding dates back to the fifties, the first interesting algorithm only appeared in 1995, when Madhu Sudan introduced a list decoding algorithm for Reed–Solomon codes able to correct up to a fraction $1 - \sqrt{2R}$ of errors, where $R = k/n$ is the code rate. Building upon this work, Sudan and his student Venkatesan Guruswami then designed an improvement of Sudan's algorithm correcting a fraction $1 - \sqrt{R}$ of errors. Since then, a few other algorithms have been proposed, but Guruswami–Sudan is still considered the reference algorithm for list decoding.
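As an illustration, the following Python sketch compares these decoding radii for a sample low-rate Reed–Solomon code; the floor/ceiling conventions chosen here are one common formulation, and exact statements vary slightly between references:

```python
import math

# Decoding radii for an [n, k] Reed-Solomon code (illustrative sketch).

def unique_radius(n, k):
    # Classical unique decoding: up to floor((d - 1) / 2) errors,
    # with minimum distance d = n - k + 1.
    return (n - k) // 2

def sudan_radius(n, k):
    # Sudan's algorithm corrects roughly n(1 - sqrt(2R)) errors, R = k/n.
    return int(n - math.sqrt(2 * n * k))

def guruswami_sudan_radius(n, k):
    # Guruswami-Sudan corrects up to n - 1 - floor(sqrt((k - 1)n)) errors,
    # i.e. roughly a fraction 1 - sqrt(R).
    return n - 1 - int(math.sqrt((k - 1) * n))

n, k = 255, 32  # a low-rate code, where list decoding pays off most
print(unique_radius(n, k))           # 111
print(sudan_radius(n, k))            # 127
print(guruswami_sudan_radius(n, k))  # 166
```

Note that Sudan's algorithm only beats unique decoding at low rates; the Guruswami–Sudan improvement beats it at every rate.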

As previously mentioned, list decoding trades the uniqueness of the
corrected codeword for a larger correction capability. Needless to say,
if more errors are allowed, the list of returned codeword candidates
will be larger. An important bound in list decoding is that of Johnson:
roughly, if the number of allowed errors $n_e$ is less than the Johnson
bound $J_q(n, d)$, the size of the candidate list grows only
polynomially with $n_e$.

For a linear code defined over $\mathbb{F}_q$, of length $n$, dimension $k$ and minimum distance $d$, the Johnson bound is given by
$$J_q(n, d) = n\,\frac{q-1}{q}\left(1 - \sqrt{1 - \frac{q}{q-1}\,\frac{d}{n}}\right).$$

Traditionally, we distinguish binary codes, defined over $\mathbb{F}_2$, from the general case. For binary codes, the Johnson bound takes the simpler form
$$J_2(n, d) = \frac{1}{2}\left(n - \sqrt{n(n - 2d)}\right).$$

In the general case, provided $q$ is large, we can approximate $J_q(n, d)$
by
$$J_q(n, d) \approx n - \sqrt{n(n - d)}.$$

The Johnson bound in the binary case is more interesting, since, for a given length and minimum distance, we are able to correct more errors than in the general case.
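The formulas above can be checked numerically; the following sketch evaluates the three expressions for a sample length and distance, and confirms that for the same parameters the binary bound allows more errors than the generic approximation:

```python
import math

def johnson_q(n, d, q):
    # q-ary Johnson bound: n (q-1)/q (1 - sqrt(1 - q/(q-1) * d/n)).
    theta = (q - 1) / q
    return n * theta * (1 - math.sqrt(1 - d / (theta * n)))

def johnson_binary(n, d):
    # Binary case: J_2(n, d) = (n - sqrt(n (n - 2d))) / 2.
    return (n - math.sqrt(n * (n - 2 * d))) / 2

def johnson_generic(n, d):
    # Large-q approximation: n - sqrt(n (n - d)).
    return n - math.sqrt(n * (n - d))

n, d = 255, 100
# The general formula at q = 2 reduces to the binary formula,
# and the binary bound exceeds the generic approximation.
print(johnson_binary(n, d) > johnson_generic(n, d))  # True
```

Setting $q = 2$ in the general formula recovers the binary one exactly, which is a useful sanity check on the two expressions.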

#### Homomorphic encryption

Gentry's breakthrough paper [46] achieved
*fully homomorphic encryption*, albeit in a quite theoretical way. The
property of such schemes is that operations on the ciphertexts
correspond to the same operations on the plaintexts. This enables
powerful applications, including querying encrypted databases.
But Gentry's scheme, although widely publicised, appears to be
quite impractical, since it involves huge ciphertexts.

Daniel Augot, Ludovic Perret, and Frederik Armknecht have devised a code-based encryption scheme built on evaluation codes, instantiated in particular with q-ary Reed–Muller codes. Although our scheme is secret-key, it still enables the desirable applications envisioned by Gentry, and it is much more efficient with respect to ciphertext size and the computational complexity of the encryption operations. A paper has been submitted to the Eurocrypt 2010 conference [18].
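To illustrate the homomorphic property itself, here is a minimal Python sketch of a toy additively homomorphic secret-key scheme, built from a simple additive mask modulo a public prime. It is purely illustrative and is not the code-based scheme described above; it only shows how an operation on ciphertexts carries over to the plaintexts:

```python
import random

# Toy additively homomorphic secret-key scheme: a one-time additive
# mask modulo a public prime P. Illustrative only -- NOT the
# evaluation-code scheme discussed in the text.
P = 2**31 - 1

def keygen():
    return random.randrange(1, P)

def encrypt(key, m):
    return (m + key) % P   # c = m + key (mod P)

def decrypt(key, c):
    return (c - key) % P

k1, k2 = keygen(), keygen()
c1, c2 = encrypt(k1, 20), encrypt(k2, 22)
# The sum of two ciphertexts decrypts, under the combined key,
# to the sum of the plaintexts: the additive homomorphic property.
print(decrypt((k1 + k2) % P, (c1 + c2) % P))  # 42
```

This toy scheme is homomorphic only for addition; the appeal of fully homomorphic constructions is that they support both additions and multiplications on ciphertexts.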