**The overall objective of AriC is, through computer
arithmetic, to improve computing at large, in terms of performance,
efficiency, and reliability.** We work on arithmetic algorithms (integer and floating-point arithmetic, complex arithmetic, multiple-precision arithmetic, finite-field arithmetic) and their implementation, approximation methods, Euclidean lattices and cryptology, certified computing and computer algebra.
Specifically, we focus on the following domains:

**Floating-point arithmetic:** The IEEE 754-2008 standard specifies the behavior of floating-point arithmetic.
We are interested in preparing future evolutions of the standard, in implementing it efficiently on embedded processors,
in exploring its “low level” properties for better numerical analysis (for instance by finding certified and tight error bounds
of numerical algorithms), and in building correctly rounded mathematical function programs. We are also interested in
designing efficient algorithms and software for multiple-precision arithmetic and complex arithmetic.

**Certified computing and computer algebra:** We are interested in computing certified approximations
using computer algebra and formal proof systems, in analyzing the fundamental algorithms of semi-numerical computation,
in finding best or nearly best approximations under special constraints, and in designing efficient algorithms for exact linear algebra.
Also, we are working on the development and standardization of interval arithmetic.

**Cryptography and lattices:** Lattice-based cryptography (LBC) is a fast-developing field, raising fascinating questions
both on cryptography and on lattices. Lattice algorithmics is an established research area that is being revived by its remarkable
application to LBC and by the new tools and concepts that LBC has introduced. We aim to contribute to a major technological switch,
from conventional to lattice-based cryptography. This would help remove the privacy concerns
that are the main obstacle to the expansion of the cloud economy. Further, thanks to the ubiquity of lattices, our work may significantly impact several other fields,
including coding theory, computer algebra, and computer arithmetic.

Lattice-based cryptography (LBC) is a highly promising, attractive (and competitive) research area in cryptography, thanks to a combination of unmatched properties:

**Improved performance.** LBC primitives have low asymptotic costs, but remain cumbersome in practice (e.g., for parameters achieving security against adversaries performing up to 2^100 bit operations). To address this limitation, a whole branch of LBC has evolved where security relies on the restriction of lattice problems to a family of more structured lattices called *ideal lattices*. Primitives based on such lattices can have quasi-optimal costs (i.e., quasi-constant amortized complexities), outperforming all contemporary primitives. This asymptotic performance sometimes translates into practice, as exemplified by NTRUEncrypt.

**Improved security.** First, lattice problems seem to remain hard even for quantum computers. Moreover, the security of most of LBC holds under the assumption that standard lattice problems are hard in the worst case. By contrast, contemporary cryptography assumes that specific problems are hard with high probability, for some precise input distributions; many of these problems were artificially introduced to serve as a security foundation for new primitives.
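The worst-case flavor of these assumptions can be made concrete with the Learning With Errors (LWE) problem itself: given many noisy random inner products with a secret vector, recover the secret. The sketch below generates such samples; the modulus, dimension, and noise magnitude are illustrative toy values, far below cryptographic size.

```python
import random

def lwe_samples(s, q, m, noise=1, seed=0):
    # Each sample is (a, b) with b = <a, s> + e (mod q), where a is uniform
    # and the error e is small. Recovering s from such pairs is the LWE problem.
    rng = random.Random(seed)
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in s]
        e = rng.randint(-noise, noise)
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return samples

# Toy parameters: a real scheme would use, e.g., dimension >= 256 and a larger q.
q, s = 97, [3, 14, 15, 9]
for a, b in lwe_samples(s, q, m=5):
    e = (b - sum(ai * si for ai, si in zip(a, s))) % q
    assert min(e, q - e) <= 1  # the planted error is tiny; finding s without it is hard
```

Without the noise e, Gaussian elimination would recover s immediately; it is precisely the small error that makes the problem (conjecturally) hard, even for quantum computers.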

**Improved flexibility.** The master primitives (encryption, signature) can all be realized based on worst-case (ideal) lattice assumptions. More evolved primitives such as ID-based encryption (where the public key of a recipient can be publicly derived from their identity) and group signatures, which were the playground of pairing-based cryptography (a subfield of elliptic curve cryptography), can also be realized in the LBC framework, although less efficiently and with restricted security properties. More intriguingly, lattices have enabled long-wished-for primitives. The most notable example is homomorphic encryption, which enables computations on encrypted data. It is the appropriate tool to securely outsource computations, and will help overcome the privacy concerns that are slowing down the rise of the cloud.

We will work in three directions, detailed below.

All known lattice reduction algorithms follow the same design principle: perform a sequence of small elementary steps transforming a current basis of the input lattice, where these steps are driven by the Gram-Schmidt orthogonalisation of the current basis.
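As a concrete illustration of this design principle, here is a minimal (and deliberately inefficient) LLL-style reduction in Python: it alternates Gram-Schmidt-driven size-reduction steps with Lovász-condition swaps, recomputing the orthogonalisation exactly over the rationals. This is a didactic sketch, not the algorithms discussed here, which work with floating-point approximations of the Gram-Schmidt data.

```python
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * b for a, b in zip(u, v))

def gram_schmidt(basis):
    # Exact Gram-Schmidt orthogonalisation over the rationals:
    # returns the orthogonal vectors and the mu coefficients.
    gs, mu = [], []
    for i, b in enumerate(basis):
        row, v = [], [Fraction(x) for x in b]
        for j in range(i):
            m = dot(b, gs[j]) / dot(gs[j], gs[j])
            row.append(m)
            v = [vi - m * gj for vi, gj in zip(v, gs[j])]
        gs.append(v)
        mu.append(row)
    return gs, mu

def lll(basis, delta=Fraction(3, 4)):
    basis = [list(b) for b in basis]
    k = 1
    while k < len(basis):
        # Small elementary steps: size-reduce b_k against b_{k-1}, ..., b_0.
        for j in range(k - 1, -1, -1):
            gs, mu = gram_schmidt(basis)
            q = round(mu[k][j])
            if q:
                basis[k] = [bk - q * bj for bk, bj in zip(basis[k], basis[j])]
        gs, mu = gram_schmidt(basis)
        # The Lovász condition drives the swap steps.
        if dot(gs[k], gs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(gs[k - 1], gs[k - 1]):
            k += 1
        else:
            basis[k], basis[k - 1] = basis[k - 1], basis[k]
            k = max(k - 1, 1)
    return basis

reduced = lll([[1, 1], [2, 0]])
```

On the toy basis [[1, 1], [2, 0]] this returns the reduced basis [[1, 1], [1, -1]], both vectors of squared norm 2.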

In the short term, we will fully exploit this paradigm, and hopefully lower the cost of reduction algorithms with respect to the lattice dimension. We aim at asymptotically fast algorithms with complexity bounds closer to those of basic and normal form problems (matrix multiplication, Hermite normal form). In the same vein, we plan to investigate the parallelism potential of these algorithms.

Our long term goal is to go beyond the current design paradigm, to reach better trade-offs between run-time and shortness of the output bases. To reach this objective, we first plan to strengthen our understanding of the interplay between lattice reduction and numerical linear algebra (how far can we push the idea of working on approximations of a basis?), to assess the necessity of using the Gram-Schmidt orthogonalisation (e.g., to obtain a weakening of LLL-reduction that would work up to some stage, and save computations), and to determine whether working on generating sets can lead to more efficient algorithms than manipulating bases. We will also study algorithms for finding shortest non-zero vectors in lattices, and in particular look for quantum accelerations.

We will implement and distribute all algorithmic improvements, e.g., within the fplll library. We are interested in high performance lattice reduction computations (see application domains below), in particular in connection/continuation with the HPAC ANR project (algebraic computing and high performance consortium).

Our long-term goal is to demonstrate the superiority of lattice-based cryptography over contemporary public-key cryptographic approaches. For this, we will (1) strengthen its security foundations, (2) drastically improve the performance of its primitives, and (3) show that lattices make it possible to devise advanced and elaborate primitives.

The practical security foundations will be strengthened by the improved understanding of the limits of lattice reduction algorithms (see last section). On the theoretical side, we plan to attack two major open problems: Are ideal lattices (lattices corresponding to ideals in rings of integers of number fields) computationally as hard to handle as arbitrary lattices? What is the quantum hardness of lattice problems?

Lattice-based primitives involve two types of operations: sampling from discrete Gaussian distributions
(with lattice supports), and arithmetic in polynomial rings such as Z_q[x]/(x^n + 1).
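Both ingredients can be sketched compactly. The snippet below shows a naive rejection sampler for a discrete Gaussian over the integers, and a schoolbook multiplication in the negacyclic ring Z_q[x]/(x^n + 1) that is standard in ring-based LBC; the parameters are illustrative, and production code must be far more careful (constant-time sampling, FFT-based multiplication).

```python
import math
import random

def sample_discrete_gaussian(sigma, seed=None):
    # Rejection sampling from the discrete Gaussian D_{Z, sigma}
    # (illustrative: correct up to the tail cut, but neither
    # constant-time nor optimized).
    rng = random.Random(seed)
    tail = int(10 * sigma) + 1
    while True:
        z = rng.randint(-tail, tail)
        if rng.random() < math.exp(-z * z / (2 * sigma * sigma)):
            return z

def negacyclic_mul(f, g, q):
    # Schoolbook product in Z_q[x]/(x^n + 1): x^n wraps around with a sign flip.
    n = len(f)
    h = [0] * n
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            k = i + j
            if k < n:
                h[k] = (h[k] + fi * gj) % q
            else:
                h[k - n] = (h[k - n] - fi * gj) % q
    return h

# (x) * (x) = x^2 = -1 in Z_17[x]/(x^2 + 1)
assert negacyclic_mul([0, 1], [0, 1], 17) == [16, 0]
```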

Our main objective in terms of cryptographic functionality will be to determine the extent to which lattices can help secure cloud services. For example, is there a way for users to delegate computations on their outsourced dataset while minimizing what the server eventually learns about their data? Can servers compute on encrypted data in an efficiently verifiable manner? Can users retrieve their files and query remote databases anonymously, provided they hold appropriate credentials? Lattice-based cryptography is the only approach so far that has allowed progress in these directions. We will investigate the practicality of the current constructions, the extension of their properties, and the design of more powerful primitives, such as functional encryption (allowing the recipient to learn only a function of the plaintext message). To achieve these goals, we will in particular focus on cryptographic multilinear maps.

This research axis of AriC is gaining strength thanks to the recruitment of Benoît Libert. We will be particularly interested in practical and operational impact, and for this reason we envision a collaboration with an industrial partner.

Diophantine equations. Lattice reduction algorithms can be used to solve Diophantine equations, and in particular to find simultaneous rational approximations to real numbers. We plan to investigate the interplay between this algorithmic task, the task of finding integer relations between real numbers, and lattice reduction. A related question is to devise LLL-reduction algorithms that exploit specific shapes of input bases. This will be done within the ANR DynA3S project.

Communications. We will continue our collaboration with Cong Ling on the use of lattices in communications. We plan to work on the wiretap channel over a fading channel (modeling cell phone communications in a fast-moving environment). The current approaches rely on ideal lattices, and we hope to find new approaches thanks to the expertise on ideal lattices we have acquired through their use in lattice-based cryptography. We will also tackle the problem of sampling vectors from Gaussian distributions with lattice support, for a very small standard deviation parameter. This would significantly improve current lattice-based communication schemes, as well as several cryptographic primitives.

Cryptanalysis of variants of RSA. Lattices have been used extensively
to break variants of the RSA encryption scheme, via Coppersmith's method to
find small roots of polynomials. We plan to work with Nadia Heninger (U. of Pennsylvania)
on improving these attacks, to make them more practical. This is an excellent test
of the practicality of LLL-type algorithms. Nadia Heninger has extensive
experience in large-scale cryptanalysis based on Coppersmith's method (http://

We plan to focus on the generation of certified and efficient approximations for solutions of linear differential equations. These functions cover many classical mathematical functions, and many more can be built by combining them. One classical target area is the numerical evaluation of elementary or special functions. This is currently performed by code handcrafted for each function. The computation of approximations and the error analysis are major steps of this process that we want to automate, in order to reduce the probability of errors, to make it possible to implement “rare functions”, and to quickly adapt a function library to a new context: a new processor, or new requirements in terms of speed or accuracy.

In order to significantly extend the current range of functions under consideration, several methods originating from approximation theory have to be considered (divergent asymptotic expansions; Chebyshev or generalized Fourier expansions; Padé approximants; fixed-point iterations for integral operators). We have done preliminary work on some of them. Our plan is to revisit them all from the points of view of effectivity and computational complexity (exploiting linear differential equations to obtain efficient algorithms), as well as of their ability to produce provable error bounds. This work should constitute major progress towards the automatic generation of code for moderate- or arbitrary-precision evaluation with good efficiency. Other useful, if not critical, applications are certified quadrature, the determination of certified trajectories of space objects, and many more important questions in optimal control theory.

As computer arithmeticians, a wide and important target for us is the design of efficient and certified linear filters in digital signal processing (DSP). Following the advent of Matlab as the major tool for filter design, DSP experts now systematically delegate to Matlab the parts of the design related to numerical issues. Yet, various key Matlab routines are neither optimized nor certified. There is therefore much room for enhancing numerous DSP numerical implementations, and several promising approaches to do so.

The first important challenge that we want to address is the development and the implementation of optimal methods for rounding the coefficients involved in the design of the filter. If done in a naive way, this rounding may lead to a significant loss of performance. We will study in particular FIR and IIR filters.

There is a clear demand for hardest-to-round cases, and several computer manufacturers recently contacted us to obtain new cases. These hardest-to-round cases are a precious help for building libraries of correctly rounded mathematical functions. The current code, based on Lefèvre's algorithm, will be rewritten and formal proofs will be done. We plan to use uniform polynomial approximation and Diophantine techniques in order to tackle the case of the IEEE quad precision, and analytic number theory techniques (exponential sum estimates) for counting the hardest-to-round cases.
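In tiny precisions, the brute-force version of this search is easy to express: scan all inputs and measure how close the exact function value comes to a rounding breakpoint. The sketch below does this for exp at precision 8, using binary64 as a stand-in for the exact value (harmless at such a low target precision); Lefèvre's algorithm replaces this exhaustive scan with far subtler polynomial techniques.

```python
import math

def hardest_to_round(func, p=8, lo=1.0, hi=2.0):
    # Scan all precision-p floats in [lo, hi) and measure how close func(x),
    # scaled to a precision-p significand, comes to a rounding breakpoint
    # (a midpoint between two representable values).
    ulp = 2.0 ** (1 - p)  # spacing of precision-p floats in [1, 2)
    worst, worst_dist = None, 1.0
    x = lo
    while x < hi:
        y = func(x)
        e = math.floor(math.log2(abs(y)))
        m = abs(y) / 2.0 ** (e + 1 - p)  # representable values become integers
        dist = abs(m - math.floor(m) - 0.5)  # distance to the nearest midpoint
        if dist < worst_dist:
            worst, worst_dist = x, dist
        x += ulp
    return worst, worst_dist

x, d = hardest_to_round(math.exp)
```

The returned input x is the one whose image is hardest to round to nearest at precision 8; the smaller d is, the more intermediate accuracy a correctly rounded implementation needs at that point.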

The main theme here is the study of fundamental operations (“kernels”) on a hierarchy of symbolic or numeric data types spanning integers, floating-point numbers, polynomials, power series, as well as matrices of all these. Fundamental operations include basic arithmetic (e.g., how to multiply or how to invert) common to all such data, as well as more specific ones (change of representation/conversions, GCDs, determinants, etc.). For such operations, which are ubiquitous and at the very core of computing (be it numerical, symbolic, or hybrid numeric-symbolic), our goal is to ensure both high-performance and reliability.

On the symbolic side, we have so far obtained fast algorithms for basic operations on both polynomial matrices and structured matrices, but in a rather independent way. Both types turn out to have much in common, but this is sometimes not reflected by the complexities obtained, especially for applications in cryptology and coding theory. Our long term goal in this area is thus to explore these connections further, to provide a more unified treatment and bridge these complexity gaps, and to produce associated efficient implementations. A first step towards this goal will be the design and implementation of enhanced algorithms for various generalizations of Hermite-Padé approximation; in the context of list decoding, this should in particular make it possible to improve over the structured-matrix approach, which is so far the fastest known.

On the numerical side, we will continue to revisit and improve the classical error bounds of numerical analysis in the light of all the subtleties of IEEE floating-point arithmetic. These aspects will be developed jointly with the “symbolic floating-point” approach presented in the next paragraph. A complementary approach will also be studied, based on the estimation (possibly via automatic differentiation) of condition numbers in order to identify inputs leading to large backward errors. Finally, concerning interval arithmetic, a thorough analysis of the accuracy of several representations, such as mid-rad, is also to be done.

Our work on the analysis of algorithms in floating-point arithmetic leads us to manipulate floating-point data in their greatest generality, that is, as symbolic expressions in the base and the precision. A long-term goal here is to develop theorems as well as efficient data structures and algorithms for handling such quantities by computer rather than by hand as we do now. This is a completely new direction, whose main outcome will be a “symbolic floating-point toolbox” distributed in computer algebra systems like Sage or Maple. In particular, such a toolbox will provide a way to check automatically the certificates of optimality we have obtained on the error bounds of various numerical algorithms. A PhD student started on this subject in September 2014.

Many numerical problems require higher precision than the conventional floating-point (single, double) formats. One solution is to use multiple-precision libraries such as GNU MPFR, which allow the manipulation of very high precision numbers; but their generality (they are able to handle numbers with millions of digits) makes them a rather heavyweight solution when high performance is needed. Our objective is to design a multiple-precision arithmetic library for problems where a precision of a few hundred bits is sufficient but performance requirements are strong. Applications include the long-term iteration of chaotic dynamical systems, ranging from the classical Hénon map to calculations of planetary orbits. The designed algorithms will be formally proved. We are in close contact with Warwick Tucker (Uppsala University, Sweden) and Mioara Joldes (LAAS, Toulouse) on this topic. A PhD student funded by a Région Rhône-Alpes grant started on this topic in September 2014.
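The basic building block of such a library is the error-free addition: a value carried as an unevaluated sum of two floating-point numbers. A minimal binary64 sketch, assuming round-to-nearest (the dd_add variant here is a simple didactic one, not necessarily the algorithm the library will use):

```python
def two_sum(a, b):
    # Knuth's error-free transform: a + b = s + e exactly, with s = fl(a + b).
    s = a + b
    bp = s - a
    ap = s - bp
    return s, (a - ap) + (b - bp)

def dd_add(xh, xl, yh, yl):
    # Double-double addition: the sum is carried as an unevaluated pair (hi, lo).
    sh, sl = two_sum(xh, yh)
    th, tl = two_sum(xl, yl)
    sl += th
    sh, sl = two_sum(sh, sl)  # renormalize
    sl += tl
    return two_sum(sh, sl)

# 1 + 2^-80 is not representable in binary64, but the pair keeps the low part exactly.
h, l = dd_add(1.0, 0.0, 2.0**-80, 0.0)
assert (h, l) == (1.0, 2.0**-80)
```

Chaining such pairs (and triples, quadruples, ...) is what yields a few hundred bits of precision while staying on fast hardware floating-point operations.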

We will work on the interplay between floating-point and integer arithmetics, and especially on how to make the best use of both integer and floating-point basic operations when designing floating-point numerical kernels for embedded devices. This will be done in the context of the Metalibm ANR project and of our collaboration with STMicroelectronics. In addition, our work on the IEEE 1788 standard leads naturally to the development of associated reference libraries for interval arithmetic. A first direction will be to implement IEEE 1788 interval arithmetic using the fixed-precision hardware available for IEEE 754-2008 floating-point arithmetic. Another one will be to provide efficient support for multiple-precision intervals, in mid-rad representation and by developing MPFR-based code-generation tools aimed at handling families of functions.

So far, we have investigated how specific instructions like the fused multiply-add (FMA) impact the accuracy of computations, and have proposed several highly accurate FMA-based algorithms. The FMA being available on several recent architectures, we now want to understand its impact on such algorithms in terms of practical performance. This should be a medium-term project, leading to FMA-based algorithms with the best speed/accuracy/robustness trade-offs. On the other hand (and in the long term), a major issue is how to exploit the various levels of parallelism of recent and upcoming architectures to ensure simultaneously high performance and reliability. A first direction will be to focus on the SIMD parallelism offered by instruction sets via vector instructions. This kind of parallelism should be key for small numerical kernels like elementary functions, complex arithmetic, or low-dimensional matrix computations. A second direction will be at the multi-core processor level, especially for larger numerical or algebraic problems (and in conjunction with SIMD parallelism when handling sub-problems of small enough dimension). Finally, we will work on aspects of automatic adaptation (auto-tuning) to such architectural features, not only for speed, but also for accuracy. This could be done via the design and implementation of heuristics capable of inserting more accurate codes, based for example on error-free transforms, whenever needed.
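A typical error-free transform of the kind mentioned above is Dekker's product, which recovers the exact rounding error of a multiplication using only additions and multiplications; on an FMA-equipped machine the same error term is obtained in a single instruction, as fma(a, b, -a*b). A binary64 sketch, assuming round-to-nearest and no overflow:

```python
def split(a):
    # Veltkamp splitting for binary64: a = hi + lo, each half fitting in 26 bits.
    c = 134217729.0 * a  # constant 2^27 + 1
    hi = c - (c - a)
    return hi, a - hi

def two_prod(a, b):
    # Dekker's error-free product: a * b = p + e exactly.
    # With a hardware FMA, this whole routine collapses to e = fma(a, b, -p).
    p = a * b
    ah, al = split(a)
    bh, bl = split(b)
    e = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, e

# (1 + 2^-30)^2 = 1 + 2^-29 + 2^-60: the 2^-60 term is lost by the rounded
# product but recovered exactly in the error term.
p, e = two_prod(1.0 + 2.0**-30, 1.0 + 2.0**-30)
assert p == 1.0 + 2.0**-29 and e == 2.0**-60
```

Compensated algorithms insert such error terms back into the computation, which is one way the auto-tuning heuristics above could raise accuracy only where needed.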

The application domains of hardware arithmetic operators are

digital signal processing;

image processing;

embedded applications;

reconfigurable computing;

cryptography.

Our expertise on validated numerics is useful for analyzing, improving, and guaranteeing the quality of numerical results in a wide range of applications including:

scientific simulation;

global optimization;

control theory.

Much of our work, in particular the development of correctly rounded elementary functions, is critical to the reproducibility of floating-point computations.

Lattice reduction algorithms have direct applications in

public-key cryptography.

Another interesting field of application is

communications theory.

AriC software realizations are
accessible from the web page
http://

GNU MPFR is an efficient multiple-precision floating-point library with
well-defined semantics (copying the good ideas from the IEEE-754 standard),
in particular correct rounding in 5 rounding modes. GNU MPFR provides about
80 mathematical functions, in addition to utility functions (assignments,
conversions...). Special data (*Not a Number*, infinities, signed
zeros) are handled like in the IEEE-754 standard.

MPFR was one of the main pieces of software developed by the old SPACES team at Loria. Since late 2006, with the departure of Vincent Lefèvre to Lyon, it has become a joint project between the Caramel (formerly SPACES then CACAO) and the AriC (formerly Arénaire) project-teams. MPFR has been a GNU package since 26 January 2009.

An MPFR-MPC developers meeting took place from 20 to 22 January 2014 in Nancy. There was no new release this year, but various developments were done in the trunk.

The main work done in the AriC project-team:

Changed the behavior of the `mpfr_set_exp` function to avoid
undefined behavior in some cases (this change mainly impacted the
internal usage).

Bug fixes and various improvements (portability, efficiency, etc.).

The `mpfr_sum` function is being rewritten (`new-sum`
branch); see Section .

**URL:**
http://

GNU MPFR is on the Black Duck Open Hub community platform for free and
open source software:
https://

ACM: D.2.2 (Software libraries), G.1.0 (Multiple precision arithmetic), G.4 (Mathematical software).

AMS: 26-04 Real Numbers, Explicit machine computation and programs.

APP: no longer applicable (copyright transferred to the Free Software Foundation).

License: LGPL version 3 or later.

Type of human computer interaction: C library, callable from C or other languages via third-party interfaces.

OS/Middleware: any OS, as long as a C compiler is available.

Required library or software: GMP.

Programming language: C.

Documentation: API in texinfo format (and other formats via conversion); algorithms are also described in a separate document.

The search for the worst cases for the correct rounding
(hardest-to-round cases) of mathematical functions continued.

The Perl scripts have been improved (in particular, for the interaction with Grid Engine).

fplll contains several algorithms on lattices that rely on floating-point computations. This includes implementations of the floating-point LLL reduction algorithm, offering different trade-offs between speed and guarantees. It contains a “wrapper” that chooses the estimated best sequence of variants in order to provide a guaranteed output as fast as possible; the succession of variants is transparent to the user. It also includes a rigorous floating-point implementation of the Kannan-Fincke-Pohst algorithm, which finds a shortest non-zero lattice vector, and of the BKZ reduction algorithm.

The fplll library is used by, or has been adapted for integration within, several mathematical computation systems such as Magma, Sage, and PARI/GP. It is also used for cryptanalytic purposes, to test the resistance of cryptographic primitives.

This year, several improvements to the BKZ (block Korkine Zolotarev) algorithm
have been implemented. Further, the library is now hosted on `github`.

**URL:**
https://

ACM: D.2.2 (Software libraries), G.4 (Mathematical software)

APP: Procedure started

License: LGPL v2.1

Type of human computer interaction: C++ library, callable from any C++ program.

OS/Middleware: any, as long as a C++ compiler is available.

Required library or software: MPFR and GMP.

Programming language: C++.

Documentation: available in html format on
**URL:**
https://

Sipe is a mini-library in the form of a C header file, to perform radix-2 floating-point computations in very low precisions with correct rounding, either to nearest or toward zero. The goal of such a tool is to do proofs of algorithms/properties or computations of tight error bounds in these precisions by exhaustive tests, in order to try to generalize them to higher precisions. The currently supported operations are addition, subtraction, multiplication (possibly with the error term), fused multiply-add/subtract (FMA/FMS), and miscellaneous comparisons and conversions. Sipe provides two implementations of these operations, with the same API and the same behavior: one based on integer arithmetic, and a new one based on floating-point arithmetic.
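The core operation behind such a tool is correct rounding to a tiny precision. As an illustration (a reference model using exact rational arithmetic, not Sipe's integer- or floating-point-based implementations), here is round-to-nearest-even at precision p for positive rationals:

```python
import math
from fractions import Fraction

def round_to_precision(x, p):
    # Round the positive rational x to the nearest precision-p binary
    # floating-point number, ties to even (a reference model, not Sipe's API).
    x = Fraction(x)
    e = x.numerator.bit_length() - x.denominator.bit_length()
    if Fraction(2) ** e > x:  # adjust so that 2^e <= x < 2^(e+1)
        e -= 1
    scaled = x / Fraction(2) ** (e - p + 1)  # significand in [2^(p-1), 2^p)
    n = math.floor(scaled)
    frac = scaled - n
    if frac > Fraction(1, 2) or (frac == Fraction(1, 2) and n % 2 == 1):
        n += 1  # round up: above the midpoint, or exact tie with odd significand
    return n * Fraction(2) ** (e - p + 1)

# With p = 3 the representable numbers near 1 are 1, 1.25, 1.5, ...
assert round_to_precision(Fraction(11, 10), 3) == 1
assert round_to_precision(Fraction(14, 10), 3) == Fraction(3, 2)
```

Exhaustively checking an identity over all precision-p inputs against such a reference is exactly the kind of experiment Sipe is designed to make fast.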

New in 2014:

`sipe_to_mpfr` function;

support for `__float128` from GCC/libquadmath (implementing
the binary128 format);

some corrections.

**URL:**
https://

ACM: D.2.2 (Software libraries), G.4 (Mathematical software).

AMS: 26-04 Real Numbers, Explicit machine computation and programs.

License: LGPL version 2.1 or later.

Type of human computer interaction: C header file.

OS/Middleware: any OS.

Required library or software: GCC compiler.

Programming language: C.

Documentation: comment at the beginning of the code and Research report Inria RR-7832.

Gfun is a Maple package for the manipulation of linear recurrence or differential equations. It provides tools for guessing a sequence or a series from its first terms, and for rigorously manipulating solutions of linear differential or recurrence equations, using the equation as a data structure. This year, the implementation effort was focused on speeding up the guessing routines in the case of sequences with symbolic parameters that come up in general hypergeometric identities.

Linear (order-one) function evaluation schemes, such as bipartite and multipartite tables, are usually effective for low precision approximations. For high output precision, the lookup table size is often too large for practical use. Dong Wang and Milos Ercegovac (UC Los Angeles) and Nicolas Brisebarre and Jean-Michel Muller investigate the so-called

Many numerical problems require a higher computing precision than that offered by common floating point (FP) formats.
One common way of extending the precision is to represent numbers in a *multiple component* format.
With so-called *floating point expansions*, numbers are represented as the unevaluated sum of standard machine precision FP numbers.
This format offers the simplicity of using directly available and highly optimized FP operations, and is used by multiple-precision libraries such as
Bailey's QD or its Graphics Processing Unit tuned analogue, GQD.
Mioara Joldes (LAAS), Jean-Michel Muller, and Valentina Popescu introduced a new algorithm for computing
the reciprocal of an FP expansion.

The accuracy analysis of complex floating-point multiplication done by
Brent, Percival, and Zimmermann [*Math. Comp.*, 76:1469–1481, 2007]
is extended by Peter Kornerup (Odense Univ. Denmark), Claude-Pierre Jeannerod, Nicolas Louvet, and Jean-Michel Muller
to the case where a fused multiply-add (FMA) operation is available.
Considering floating-point arithmetic with rounding to nearest and unit roundoff u, they established error bounds for several FMA-based multiplication algorithms.

In their book *Scientific Computing on Itanium-based Systems*, Cornea, Harrison, and Tang introduced an accurate algorithm for evaluating
expressions of the form ab + cd. Its relative error admits a sharp bound when rounding is *to nearest even*, and a simpler bound when rounding is *to nearest away*; moreover, there exist floating-point inputs showing that these bounds are asymptotically optimal.

Stef Graillat (Paris 6 University), Vincent Lefèvre, and Jean-Michel Muller improved the usual relative error bound for the computation of integer powers x^n by iterated multiplications in floating-point arithmetic.
When computing matrix factorizations and solving linear systems in floating-point arithmetic,
classical rounding error analyses provide backward error bounds whose leading terms have the form c n u, with c a small constant, n the problem dimension, and u the unit roundoff.

Rounding error analyses of numerical algorithms are most often carried out via repeated applications
of the so-called standard models of floating-point arithmetic. Given a round-to-nearest function fl and unit roundoff u, the first such model states that fl(a op b) = (a op b)(1 + δ) with |δ| ≤ u, for each basic operation op.

In collaboration with Christoph Lauter and Marc Mezzarobba (LIP6 laboratory, Paris), Nicolas Brisebarre and Jean-Michel Muller introduce an algorithm to compare a binary floating-point (FP) number and a decimal FP number, assuming the “binary encoding” of the decimal formats is used, and with a special emphasis on the basic interchange formats specified by the IEEE 754-2008 standard for FP arithmetic. It is a two-step algorithm: a first pass, based on the exponents only, quickly eliminates most cases; then, when the first pass does not suffice, a more accurate second pass is performed. They provide an implementation of several variants of their algorithm, and compare them.
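The two-step structure can be illustrated with a toy version for positive operands: a cheap exponent comparison settles most cases, and only near-ties fall through to an exact (expensive) rational comparison. The threshold and the exact fallback below are illustrative simplifications, not the actual algorithm.

```python
import math
from fractions import Fraction

def compare_binary_decimal(x, d_sig, d_exp):
    # Compare a positive binary64 float x with the positive decimal
    # d_sig * 10^d_exp, returning -1, 0, or 1.
    d = Fraction(d_sig) * Fraction(10) ** d_exp
    ex = math.frexp(x)[1]  # binary exponent of x
    ed = d.numerator.bit_length() - d.denominator.bit_length()  # ~ exponent of d
    if abs(ex - ed) > 2:   # first pass: exponents far apart settle the question
        return 1 if ex > ed else -1
    fx = Fraction(x)       # second pass: exact rational comparison
    return (fx > d) - (fx < d)

# The binary64 number closest to 0.1 is slightly above the decimal 0.1.
assert compare_binary_decimal(0.1, 1, -1) == 1
assert compare_binary_decimal(0.5, 5, -1) == 0
```

The efficiency of the real algorithm comes from making the second pass both rare and much cheaper than exact rational arithmetic.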

Vincent Lefèvre has designed a new algorithm to compute the correctly rounded sum of several floating-point numbers, each having its own precision and the output having its own precision, as in GNU MPFR. At the same time, the `mpfr_sum` function is being reimplemented (not finished yet). While the old algorithm was just an application of Ziv's method, and thus had exponential worst-case time and memory complexity (e.g., for the sum of a huge number and a tiny number), the new algorithm computes the sum by blocks (reiterations being needed only in case of cancellations), taking such holes between numbers into account.
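The pathological case mentioned above is easy to exhibit in binary64, where math.fsum plays the role of a correctly rounded sum (mpfr_sum generalizes this to operands and result of mixed precisions):

```python
import math

# A "huge plus tiny" sum: naive left-to-right summation in binary64 loses
# the tiny terms entirely, while a correctly rounded sum keeps their effect.
xs = [2.0**60, 2.0**-60, 2.0**-60, -2.0**60]
assert sum(xs) == 0.0              # naive: the tiny terms vanished
assert math.fsum(xs) == 2.0**-59   # correctly rounded: they did not
```

Handling the "hole" between 2^60 and 2^-60 without materializing it is precisely what the block-based algorithm achieves.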

The IEEE 1788 working group is devoted to the standardization of interval arithmetic. V. Lefèvre and N. Revol are very active in this group. This year has been devoted to a ballot on the whole text of the standard, and to editorial work to make it compliant with IEEE rules. The final remaining step is the so-called “Sponsor ballot”, which should be completed in 2015.

For the product of matrices with interval coefficients, fast approximate algorithms have been developed by Philippe Théveny: they compute an enclosure of the exact product. These algorithms rely on the representation of intervals by their midpoints and radii. This representation allows one to use optimized routines for the multiplication of matrices with floating-point coefficients. The quality of the approximation of several algorithms is established, accounting for roundoff errors and not only method errors. A new algorithm is proposed, which requires even fewer calls (only 2) to a floating-point routine and still offers a good approximation quality, for a well-specified type of input matrices. Three of the studied algorithms have been implemented on a multi-core architecture. To avoid the problems identified in this analysis and to offer good performance, Philippe Théveny developed optimizations. The resulting implementations exhibit good performance: guaranteed results are obtained with an overhead of less than 3, high numerical intensity, and good scalability.
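The mid-rad representation turns the enclosure formula into a pair of ordinary matrix products, which is what lets optimized floating-point routines be reused. A sketch of the method error only (a real implementation, as in this work, must additionally bound roundoff, e.g., with directed roundings):

```python
def midrad_matmul(Am, Ar, Bm, Br):
    # Enclosure of an interval matrix product in midpoint-radius form:
    #   C_mid = Am * Bm,   C_rad = |Am| * Br + Ar * |Bm| + Ar * Br.
    # Each term of C_rad is itself a plain matrix product, so optimized
    # floating-point routines apply directly.
    n, p, q = len(Am), len(Bm), len(Bm[0])
    Cm = [[sum(Am[i][k] * Bm[k][j] for k in range(p))
           for j in range(q)] for i in range(n)]
    Cr = [[sum(abs(Am[i][k]) * Br[k][j]
               + Ar[i][k] * abs(Bm[k][j])
               + Ar[i][k] * Br[k][j] for k in range(p))
           for j in range(q)] for i in range(n)]
    return Cm, Cr

# [1, 3] * [2, 4] = [2, 12]: midpoints 2 and 3, radii 1 and 1.
Cm, Cr = midrad_matmul([[2]], [[1]], [[3]], [[1]])
assert (Cm[0][0] - Cr[0][0], Cm[0][0] + Cr[0][0]) == (0, 12)
```

The computed enclosure [0, 12] contains the exact product [2, 12]; the overestimation is inherent to the mid-rad product formula, and is part of the approximation quality studied here.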

What is called *numerical reproducibility* is the problem of getting the same
result when the scientific computation is run several times, either on the same machine
or on different machines.
In , the focus is on interval computations using floating-point arithmetic:
Nathalie Revol and Philippe Théveny identified implementation issues that may invalidate the inclusion property,
and presented several ways to preserve this inclusion property.
This work has also been placed in the larger context of numerical validation.

Muhammad Chowdhury (U. Western Ontario), Claude-Pierre Jeannerod, Vincent Neiger (ENS de Lyon), Éric Schost (U. Western Ontario), and Gilles Villard proposed a fast algorithm for interpolating multivariate polynomials with multiplicities. This algorithm relies on a reduction to a problem of simultaneous polynomial approximations, which is then solved using fast structured linear algebra techniques. It leads to the best known complexity bounds for the interpolation step of the list-decoding of Reed-Solomon codes, Parvaresh-Vardy codes, and folded Reed-Solomon codes. In the special case of Reed-Solomon codes, it accelerates the interpolation step of Guruswami and Sudan's list-decoding by a factor (list size)/(multiplicity).

M. Bardet (U. Rouen), J.-C. Faugère (PolSys team), and B. Salvy studied the complexity of Gröbner bases computation, in particular in the generic situation where the variables are in simultaneous Noether position with respect to the system. They gave a bound on the number of polynomials of each degree in a Gröbner basis computed by Faugère's F5 algorithm.

Colleagues from the LAAS (Toulouse) and B. Salvy provided a new method for computing the probability of collision between two spherical space objects involved in a short-term encounter. In this specific framework of conjunction, classical assumptions reduce the probability of collision to the integral of a 2-D normal distribution over a disk shifted from the peak of the corresponding Gaussian function. Both the integrand and the domain of integration directly depend on the nature of the short-term encounter. Thus the inputs are the combined sphere radius, the mean relative position in the encounter plane at reference time, as well as the relative position covariance matrix representing the uncertainties. The method they presented is based on an analytical expression for the integral. It has the form of a convergent power series whose coefficients verify a linear recurrence. It is derived using the Laplace transform and properties of D-finite functions. The new method has been intensively tested on a series of test cases and compares favorably to other existing methods.
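The evaluation principle at work here — a convergent power series whose coefficients satisfy a linear recurrence, so that each term is obtained cheaply from the previous one — can be illustrated on a simpler D-finite function, the error function. This is not the collision-probability series itself, only a sketch of the same mechanism.

```python
import math

def erf_series(x, tol=1e-15):
    """Evaluate erf(x) from its Taylor series
    erf(x) = (2/sqrt(pi)) * sum_n (-1)^n x^(2n+1) / (n! (2n+1)).
    Each term follows the linear recurrence
    t_{n+1} = -t_n * x^2 * (2n+1) / ((n+1)(2n+3))."""
    t = x        # t_0 = x
    s = t
    n = 0
    while abs(t) > tol:
        t *= -x * x * (2 * n + 1) / ((n + 1) * (2 * n + 3))
        s += t
        n += 1
    return 2.0 / math.sqrt(math.pi) * s

print(erf_series(0.5))  # agrees with math.erf(0.5) to high accuracy
```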

Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes along with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. Adeline Langlois and Damien Stehlé defined the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. They proved that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves bridge arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE.
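For readers unfamiliar with LWE, a toy instance can be generated as follows. The parameters below are assumptions chosen for readability only and are far too small for any security.

```python
import random

def lwe_samples(n=8, q=97, m=16, noise=(-1, 0, 1), seed=0):
    """Generate m toy LWE samples (a_i, <a_i, s> + e_i mod q) for a
    random secret s. Parameters are illustrative, not secure."""
    rng = random.Random(seed)
    s = [rng.randrange(q) for _ in range(n)]        # the secret vector
    samples = []
    for _ in range(m):
        a = [rng.randrange(q) for _ in range(n)]    # uniform public vector
        e = rng.choice(noise)                       # small error term
        b = (sum(ai * si for ai, si in zip(a, s)) + e) % q
        samples.append((a, b))
    return s, samples

s, samples = lwe_samples()
```

Ring-LWE and Module-LWE compress such samples by giving the vectors `a` algebraic structure (elements of a polynomial ring, or short vectors of such elements), which is exactly the efficiency/assumption trade-off discussed above.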

Cong Ling (Imperial College, UK), Laura Luzzi (ENSEA), Jean-Claude Belfiore (Telecom ParisTech) and Damien Stehlé proposed a new wiretap lattice coding scheme that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in their security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. They not only introduced the notion of secrecy-good lattices, but also proposed the flatness factor as a design criterion for such lattices. Both the modulo-lattice Gaussian channel and the genuine Gaussian channel are considered. In the latter case, they proposed a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in their proposed schemes.
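The discrete Gaussian distribution over a lattice, central to this coding scheme, can be illustrated in the one-dimensional case (the lattice Z) with a naive rejection sampler. This is a sketch of the distribution only; practical samplers are considerably more careful about precision and efficiency.

```python
import math, random

def discrete_gaussian(sigma, center=0.0, tail=10, rng=random.Random(0)):
    """Sample from the discrete Gaussian D_{Z,sigma,center}: pick a
    uniform integer in a truncated range, accept with probability
    proportional to exp(-(x-c)^2 / (2 sigma^2))."""
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        x = rng.randint(lo, hi)
        rho = math.exp(-(x - center) ** 2 / (2 * sigma ** 2))
        if rng.random() < rho:
            return x

xs = [discrete_gaussian(3.0) for _ in range(1000)]
```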

The Garg-Gentry-Halevi (GGH) Graded Encoding Scheme, based on ideal
lattices, is the first plausible approximation to a cryptographic
multilinear map. Unfortunately, the scheme requires very large
parameters to provide security for its underlying encoding
re-randomization process. Adeline Langlois, Damien Stehlé and Ron
Steinfeld (Monash University, Australia) formalized, simplified and improved the
efficiency and the security analysis of the re-randomization process
in the GGH construction. This results in a new construction that they
called GGHLite. In particular, they first lowered the size of a standard
deviation parameter of the GGH re-randomization process from
exponential to polynomial in the security parameter. This first
improvement is obtained via a finer security analysis of the
so-called drowning step of re-randomization, in which they applied the Rényi
divergence instead of the conventional statistical distance as a
measure of distance between distributions. Their second improvement is
to reduce the number of randomizers needed to 2, independently
of the dimension of the underlying ideal lattices. These two
contributions allowed them to significantly decrease the bit size of the
public parameters.
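The Rényi divergence used in this finer analysis can be computed directly for finite distributions. The sketch below, with assumed toy parameters, shows how close two slightly shifted discrete Gaussians are under this measure (a value of exactly 1 means identical distributions), which is why moderate "drowning" noise already suffices.

```python
import math

def renyi_divergence(P, Q, a=2.0):
    """Renyi divergence of order a between two finite distributions,
    given as dictionaries {outcome: probability} on the same support."""
    s = sum(p ** a / Q[x] ** (a - 1) for x, p in P.items())
    return s ** (1.0 / (a - 1))

def gaussian_on_range(sigma, center, lo, hi):
    """Discrete Gaussian weights on the integer range [lo, hi], normalized."""
    w = {x: math.exp(-(x - center) ** 2 / (2 * sigma ** 2))
         for x in range(lo, hi + 1)}
    z = sum(w.values())
    return {x: v / z for x, v in w.items()}

# Two discrete Gaussians whose centers differ by 1: for a large standard
# deviation the order-2 divergence is barely above 1.
P = gaussian_on_range(50.0, 0.0, -500, 500)
Q = gaussian_on_range(50.0, 1.0, -500, 500)
print(renyi_divergence(P, Q))
```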

San Ling (NTU, Singapore), Duong Hieu Phan (LAGA), Damien Stehlé and
Ron Steinfeld (Monash University, Australia) introduced the k-LWE
problem, a variant of the LWE problem, and used it as a foundation for
lattice-based traitor tracing. They also introduced the notion of
projective sampling family, in which each sampling function is keyed
and, with a projection of the key on a well chosen space, one can
simulate the sampling function in a computationally indistinguishable
way. The construction of a projective sampling family from k-LWE
yields public traceability at almost no additional cost.

Support of membership revocation is a desirable functionality for any
group signature scheme. Among the known revocation approaches,
verifier-local revocation (VLR) seems to be the most flexible one,
because it only requires the verifiers to possess some up-to-date
revocation information, but not the signers. All of the contemporary
VLR group signatures operate in the bilinear map setting, and all of
them will be insecure once quantum computers become a reality. Adeline
Langlois, San Ling, Khoa Nguyen and Huaxiong Wang (NTU, Singapore)
introduced the first lattice-based VLR group signature, and thus the
first such scheme that is believed to be quantum-resistant. In
comparison with existing lattice-based group signatures, this scheme
has several noticeable advantages: support of membership revocation,
logarithmic-size signatures, and a weaker security assumption. In the
random oracle model, their scheme is proved to be secure based on the
hardness of the Shortest Independent Vector Problem with a small
approximation factor.

Julien Devigne (Orange Labs), Eleonora Guerrini (Univ. Montpellier 2, LIRMM) and Fabien Laguillaumie adapted the primitive of proxy re-encryption, which allows a user to decide that, in case of unavailability, one (or several) particular users, the delegatees, will be able to read his confidential messages. They modified it so that a sender can choose which among many potential delegatees will be able to decrypt his messages, and proposed a simple and efficient scheme which is secure under chosen-plaintext attack under a standard algorithmic assumption in a bilinear setting. They also investigated the possibility of adding traceability of the proxy, so that one can detect whether it has leaked some re-encryption keys.

Ronan Lashermes (SAS-ENSMSE, PRISM), Marie Paindavoine, Nadia El Mrabet (Univ. P8, LIASD), Jacques Fournier (SAS-ENSMSE) and Louis Goubin (UVSQ, PRISM) described practical implementations of fault attacks against the Miller algorithm, which computes pairing evaluations on algebraic curves. These implementations validate common fault models used against pairings. In the light of the implemented fault attacks, they showed that some blinding techniques proposed to protect the algorithm against side-channel analyses cannot be used as countermeasures against the implemented fault attacks.

Verifiability is central to building protocols and systems with
integrity. Initially, efficient methods employed the Fiat-Shamir
heuristics. Since 2008, the Groth-Sahai techniques have been the most
efficient in constructing non-interactive witness indistinguishable
and zero-knowledge proofs for algebraic relations in the standard
model. For the important task of proving membership in linear
subspaces, Jutla and Roy (Asiacrypt 2013) gave significantly more
efficient proofs in the quasi-adaptive setting (QA-NIZK). For
membership in the row space of a matrix, Benoît Libert and his
co-authors gave QA-NIZK proofs consisting of a *constant* number of
group elements
– regardless of the number of equations or the number of variables –
and additionally proved them *unbounded* simulation-sound. Unlike
previous unbounded simulation-sound Groth-Sahai-based proofs, their
construction does not involve quadratic pairing product equations and
does not rely on a chosen-ciphertext-secure encryption
scheme. Instead, they built on structure-preserving signatures with
homomorphic properties. They applied their methods to design new and
improved CCA2-secure encryption schemes. In particular, they built the
first efficient threshold CCA-secure keyed-homomorphic encryption
scheme (*i.e.*, where homomorphic operations can only be carried
out using a dedicated evaluation key) with publicly verifiable
ciphertexts.

Threshold cryptography is a fundamental distributed computational
paradigm for enhancing the availability and the security of
cryptographic public-key schemes. It does so by dividing private keys
into several shares, handed out to distinct servers, so that a quorum
of servers is needed to perform private-key operations;
*i.e.*, it adds quorum control to traditional cryptographic
services and introduces redundancy. However, most practical
threshold signature schemes have a number of demerits: they have been
analyzed in a static corruption model (where the set of corrupted
servers is fixed at the very beginning of the attack), they require
interaction, they assume a trusted dealer in the key generation phase
(so that the system is not fully distributed), or they suffer from
certain overheads in terms of storage (large share sizes).
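The share-splitting idea can be sketched with Shamir's secret sharing, the classical way of adding quorum control. This is a toy sketch of the general paradigm, not of any particular threshold signature scheme discussed here.

```python
import random

P = 2**61 - 1  # a prime modulus (toy field size)

def share(secret, t, n, rng=random.Random(1)):
    """Split `secret` into n shares so that any t of them recover it:
    evaluate a random degree-(t-1) polynomial with constant term `secret`."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Recover the secret by Lagrange interpolation at 0 from t shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789  # any 3 of the 5 shares suffice
```

Fewer than t shares reveal nothing about the secret, which is the redundancy/quorum property the paragraph above describes.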

To gain strong confidence in the security of a public-key scheme, it
is most desirable for the security proof to feature a tight reduction
between the adversary and the algorithm solving the underlying hard
problem. Recently, Chen and Wee (Crypto '13) described the first
Identity-Based Encryption scheme with almost tight security under a
standard assumption. Here, “almost tight” means that the security
reduction only loses a factor depending on the security parameter,
rather than on the number of adversarial queries.

Two studies were conducted for Bosch (Stuttgart) on the numerical aspects of embedded computing. In the first one, Florent de Dinechin and Jean-Michel Muller dealt with the issue of the choice of an adequate representation of numbers (fixed-point or floating-point) for embedded systems. In the second one, Claude-Pierre Jeannerod reported on the stability and accuracy issues of linear system solving in finite-precision arithmetic.

INTEL made a $20000 donation in recognition of our work on the correct rounding of functions.

Nicolas Brunie has been supported by a CIFRE PhD grant (from 15/04/2011 to 14/04/2014) from Kalray. The purpose was the study of a tightly coupled reconfigurable accelerator to be embedded in the Kalray multicore processor.

Marie Paindavoine is supported by an Orange Labs PhD Grant (from October 2013 to November 2016). She works on privacy-preserving encryption mechanisms.

The PhD grant of Valentina Popescu is funded by Région Rhône-Alpes through the ARC6 programme.

“High-performance Algebraic Computing” (HPAC) is a four-year ANR
project that started in January 2012.
The Web page of the project is
http://

The overall ambition of HPAC is to provide international reference high-performance libraries for exact linear algebra and algebraic systems on multi-processor architectures and to influence parallel programming approaches for algebraic computing. The central goal is to extend the efficiency of the LinBox and FGb libraries to emerging parallel architectures such as clusters of multi-processor systems and graphics processing units, in order to tackle a broader class of problems in lattice-based cryptography and algebraic cryptanalysis. HPAC conducts research along three axes:

A domain specific parallel language (DSL) adapted to high-performance algebraic computations;

Parallel linear algebra kernels and higher-level mathematical algorithms and library modules;

Library composition, their integration into state-of-the-art software, and innovative high performance solutions for cryptology challenges.

Dyna3s is a four-year ANR project that started in October 2013. The Web page of the project is http://

The aim is to study algorithms that compute the greatest common divisor (gcd) from the point of view of dynamical systems. A gcd algorithm is considered as a discrete dynamical system by focusing on integer inputs. We are mainly interested in the computation of the gcd of several integers. Another motivation comes from discrete geometry, a framework where the understanding of basic primitives, discrete lines and planes, relies on algorithms of the Euclidean type.
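Viewed as a dynamical system, the Euclidean algorithm simply iterates a map on pairs of integers; a minimal sketch, extended to the gcd of several integers:

```python
def gcd2(a, b):
    """Euclid's algorithm as a discrete dynamical system: iterate the
    map (a, b) -> (b, a mod b) until the second component vanishes."""
    while b:
        a, b = b, a % b
    return a

def gcd_many(values):
    """gcd of several integers, obtained by folding the two-argument gcd."""
    g = 0
    for v in values:
        g = gcd2(g, v)
    return g

print(gcd_many([12, 18, 30]))  # 6
```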

FastRelax stands for “Fast and Reliable Approximation”. It is a four-year ANR project that started in October 2014.
The web page of the project is http://

The aim of this project is to develop computer-aided proofs of numerical values, with certified and reasonably tight error bounds, without sacrificing efficiency. Applications to zero-finding, numerical quadrature or global optimization can all benefit from using our results as building blocks. We expect our work to initiate a “fast and reliable” trend in the symbolic-numeric community. This will be achieved by developing interactions between our fields, designing and implementing prototype libraries and applying our results to concrete problems originating in optimal control theory.

“Quarenum” is an abbreviation for *Qualité et Reproductibilité Numériques dans le Calcul Scientifique Haute Performance*.
This project focuses on the numerical quality of scientific software, more precisely
of high-performance numerical codes.
Numerical validation is one aspect of the project, the second one regards numerical reproducibility.

QOLAPS (Quantifier elimination, Optimization, Linear Algebra and Polynomial Systems) is an Associate Team between the Symbolic Computation Group at North Carolina State University (USA), the PolSys team at LIP6, Paris 6, and the AriC team. Participants: Clément Pernet, Nathalie Revol, Gilles Villard.

Our international academic collaborators are from Courant Institute of Mathematical Sciences (USA), Hamburg University of Technology (Germany), Imperial College (UK), Macquarie University (Australia), Mc Gill University (Canada), Monash University (Australia), Nanyang Technological University (Singapore), North Carolina State University (USA), Technical University of Cluj-Napoca (Romania), University of California, Los Angeles (USA), University of Delaware (USA), University of Southern Denmark (Denmark), University of Western Ontario (Canada), University of Waterloo (Canada), Uppsala University (Sweden).

We also collaborate with Intel (Portland, USA).

PICS CANTaL (Cryptography, Algorithmic Number Theory and Lattices). This is a collaborative project involving several AriC members (Nicolas Brisebarre, Guillaume Hanrot, Fabien Laguillaumie, Adeline Langlois and Damien Stehlé), and collaborators in several Australian universities: Christophe Doche (Macquarie University), Igor Shparlinski (UNSW) and Ron Steinfeld (Monash University). It was funded by the International office of the CNRS, for 2012, 2013 and 2014.

IEEE P1788 working group for the standardization of interval arithmetic.
We contributed to the creation in 2008 of this working group
http://

Vincent Lefèvre actively participated in various discussions, either in the mailing-list or in small subgroups.

Many colleagues from all over the world visit us regularly for seminars and collaborations. We list only long visits here.

Jie Chen (assistant professor at ECNU, China) visited us for a month, in November. He collaborated with Fabien Laguillaumie, Benoît Libert and Damien Stehlé on functional encryption.

Jung Hee Cheon (professor at SNU, South Korea) and Changmin Lee (PhD student at SNU, South Korea) visited us for a month, in August. They collaborated with Damien Stehlé on the approximate greatest common divisor problem and its applications in homomorphic cryptography.

Mihai-Ioan Popescu (ENS de Lyon) did a Master 1 internship from May to July, under the supervision of Damien Stehlé. He worked on heuristic algorithms for short lattice vector enumeration.

François Colas (U. Grenoble) did a Master 2 internship from March to June, under the supervision of Damien Stehlé. He worked on lattice-based homomorphic encryption.

Catalin Cocis (ENS de Lyon) did a Master 2 internship from February to June under the supervision of Fabien Laguillaumie. He worked on the implementation of multilinear maps.

Laura Chira (Technological U. of Cluj, Romania) did an L3 Summer internship from July to September 2014. This internship was supervised by Benoît Libert and devoted to the implementation of pseudo-random functions based on hard algorithmic problems in lattices.

Thomas Grégoire (ENS de Lyon) did a Master 2 internship from February to June under the supervision of Nicolas Brisebarre. He designed some tools for the certified approximation of functions in various orthogonal bases.

Saurabh Yadav (2nd year student, Indian Institute of Technology Delhi, India) did a Summer internship supervised by Benoît Libert in July and August 2014. The goal was to study and survey the applications of a cryptographic primitive built on top of multi-linear maps and called “indistinguishability obfuscation.”

Guillaume Hanrot has been director of the LIP laboratory (Laboratoire de l'Informatique du Parallélisme) since April 1, 2014; he was previously deputy director of the LIP;

Jean-Michel Muller is co-director of the Groupement de Recherche (GDR) *Informatique Mathématique* of CNRS;

Gilles Villard was director of the LIP laboratory until April 1, 2014.

Damien Stehlé is a member of the steering committee of the PQCrypto conference series. He is also a member of the steering committee of the Cryptography and Coding French research grouping (C2).

Claude-Pierre Jeannerod is a member of the scientific committee of JNCF (Journées Nationales de Calcul Formel).

Bruno Salvy was one of the organizers of a workshop “Challenges in 21st Century Experimental Mathematical Computation”, at ICERM, Providence, Rhode Island.

Nicolas Brisebarre and Jean-Michel Muller organized a one week workshop “Formal Proof, Symbolic Computation and Computer Arithmetic” which took place from February 3 to February 7 (50 participants), in the framework of a whole month devoted to “Mathematical Structures of Computation” in Lyon.

Jean-Michel Muller was a member of the program committees of ASAP'2014 and ARITH'2015.

Damien Stehlé was a member of the program committees of LATINCRYPT'14, PQCrypto'14, and ACISP'14.

Bruno Salvy was a member of the program committee of AofA'2014.

Fabien Laguillaumie was a member of the program committee of Africacrypt'14.

Benoît Libert was a member of the program committee of ACM-CCS'14.

Jean-Michel Muller is a member of the editorial board of the *IEEE Transactions on Computers.* He is a member of the board of foundation editors of the *Journal for Universal Computer Science*. He was co-guest editor of a special issue of the journal *Science of Computer Programming* .

Bruno Salvy is a member of the editorial boards of the *Journal
of Symbolic Computation*, of the *Journal of Algebra* (section
Computational Algebra) and of the collections *Texts and Monographs
in Symbolic Computation* (Springer) and *Mathématiques et
Applications* (SMAI-Springer).

Gilles Villard is a member of the editorial board of the *Journal
of Symbolic Computation*.

Master: Jean-Michel Muller, *Floating-Point Arithmetic and Formal Proof* (8h + coordination of the 24h course), ENS de Lyon.

Master: Nicolas Brisebarre, *Introduction to Effective Approximation Theory* (24h), Hanoi Institute of Mathematics (Vietnam).

Master: Claude-Pierre Jeannerod, Nicolas Louvet, Nathalie Revol, *Algorithmique numérique et fiabilité des calculs en arithmétique flottante* (24h), M2 ISFA (Institut de Science Financière et d'Assurances), Université Claude Bernard Lyon 1.

Master: Vincent Lefèvre, *Arithmétique des ordinateurs* (20h), M2 ISFA (Institut de Science Financière et d'Assurances), Université Claude Bernard Lyon 1.

Professional teaching: Nathalie Revol, *Contrôler et améliorer la qualité numérique d'un code de calcul industriel* (2h30), Collège de Polytechnique.

Master: Fabien Laguillaumie, Cryptography, Error Correcting Codes, 150h, Université Claude Bernard Lyon 1.

Master: Damien Stehlé, Cryptography, 24h, ENS de Lyon.

Master: Benoît Libert, Advanced cryptographic protocols, 24h, ENS de Lyon.

Research school: Adeline Langlois, Fabien Laguillaumie, Damien Stehlé, *Chiffrement avancé à partir du problème Learning With Errors* (4h30), École de printemps Codes et Crypto, Université de Grenoble.

Research school: Adeline Langlois, Fabien Laguillaumie, Damien Stehlé, *Chiffrement avancé à partir du problème Learning With Errors* (4h30), École Jeunes Chercheurs en Informatique Mathématique, Université de Caen.

Research school: Damien Stehlé, *Cryptographie reposant sur les réseaux euclidiens* (3h), Colloque Jeunes Chercheurs en Théorie des Nombres.

PhD : Nicolas Brunie, *Contribution à l'arithmétique des ordinateurs et applications aux systèmes embarqués*,
ENS Lyon, May 2014, co-supervised by Florent de Dinechin (and Renaud Ayrignac).

PhD : Adeline Langlois, *Lattice-Based Cryptography: Security Foundations and Constructions*,
ENS Lyon, October 2014, supervised by Damien Stehlé.

PhD : Philippe Théveny, *Numerical quality and high performance in interval linear algebra on multi-core Processors*,
ENS Lyon, October 2014, supervised by Nathalie Revol.

PhD in progress: Silviu Filip,
*Filtroptim : tools for an optimal synthesis of numerical filters*,
since September 2013, co-supervised by Nicolas Brisebarre and Guillaume Hanrot.

PhD in progress: Vincent Neiger,
*Multivariate interpolation in computer algebra: efficient algorithms and applications*,
since September 2013, co-supervised by Claude-Pierre Jeannerod and Gilles Villard (together with Éric Schost (Western University, London, Canada)).

PhD in progress: Marie Paindavoine,
*Méthodes de calculs sur des données chiffrées*,
since October 2013 (Orange Labs - UCBL), co-supervised by Fabien Laguillaumie (together with Sébastien Canard).

PhD in progress : Antoine Plet,
*Contribution à l'analyse d'algorithmes en arithmétique virgule flottante*,
since September 2014, co-supervised by Nicolas Louvet and Jean-Michel Muller.

PhD in progress : Valentina Popescu,
*Vers des bibliothèques multi-précision certifiées et performantes*,
since September 2014, co-supervised by Mioara Joldes (LAAS) and Jean-Michel Muller.

PhD in progress : Serge Torres,
*Some tools for the design of efficient and reliable function evaluation libraries*,
since September 2010, co-supervised by Nicolas Brisebarre and Jean-Michel Muller.

PhD in progress: Louis Dumont, *Algorithmique efficace pour les diagonales, applications en combinatoire, physique et théorie des nombres*, co-supervised by Alin Bostan (SpecFun team) and Bruno Salvy.

PhD in progress: Sébastien Maulat, *Évaluation efficace et certifiée de fonctions différentiellement finies en précision modérée*, since September 2014, supervised by Bruno Salvy.

PhD in progress: Stephen Melczer, *Effective analytic combinatorics in one and several variables*, co-supervised by George Labahn (U. Waterloo, Canada) and Bruno Salvy.

In 2014, Jean-Michel Muller was vice-chair of the “Comité d'évaluation scientifique mathématiques–informatique théorique” of the ANR (the French national research agency). He participated in the PhD committees for the defenses of Laurent Thévenoux (Univ. Perpignan) and Karim Bigou (Univ. Rennes 1). He participated in the Habilitation committees for the defenses of Christophe Denis (Univ. Perpignan), Sylvie Boldo (Paris Sud Univ.) and David Defour (Univ. Perpignan). He was a member of the Scientific Council of ENS de Lyon until June 2014, and he is a member of the Scientific Council of CERFACS (Toulouse).

Bruno Salvy was a member of the PhD committees of Julien Courtien (Bordeaux), Jules Svartz (UPMC), and Pierre Lairez (École polytechnique).

Nathalie Revol was a member of the PhD committee of Olivier Mullier (École polytechnique). Nathalie Revol was in the hiring committee for junior researchers (CR) of Inria Grenoble - Rhône-Alpes and in the hiring committee for an assistant professor position at Université Paris Sud.

Damien Stehlé was reviewer of the PhD theses of Robert Fitzpatrick (Royal Holloway, UK), Tancrède Lepoint (ENS Paris, Univ. du Luxembourg and CryptoExperts) and Nicola di Pietro (Univ. Bordeaux).

Fabien Laguillaumie was reviewer for the PhD thesis of Alain Patey (Telecom ParisTech). He was part of the HDR committee of Damien Vergnaud (ENS).

Nicolas Louvet was a member of a hiring committee for an associate professor position at Université Montpellier 2.

Damien Stehlé gave an invited talk *Secure lattice codes for the gaussian wiretap channel*, at the Algebra, Codes and Networks symposium, Bordeaux, France, in June 2014;

Damien Stehlé gave an invited talk *The Learning with Errors problem*, at the Oberwolfach workshop on combinatorial optimization, Oberwolfach, Germany, in November 2014;

Jean-Michel Muller gave two invited talks, *On the maximum relative error when computing ${x}^{n}$ in floating-point arithmetic* (in Tokyo) and

Jean-Michel Muller gave an invited talk *Getting tight error bounds in floating-point arithmetic: illustration with complex functions, and the real ${x}^{n}$ function*, at the workshop

Bruno Salvy gave an invited talk, *Algorithmic variations on linear differential equations*, at the MBM2014 day organized in Bordeaux in October, when Mireille Bousquet-Mélou received the silver medal of the CNRS. He was also invited to the Oberwolfach meeting on Enumerative Combinatorics in March, where he talked about *Multiple Binomial Sums*; he had presented an earlier version of this talk in February at the “Holonomy days” in Grenoble, where he was also an invited speaker.

Nathalie Revol gave invited talks at the workshop “Challenges in 21st Century Experimental Mathematical Computation” at ICERM, Providence, Rhode Island, and at French seminars: CEA-LIST, Inria comité des projets, Aristote.

Fabien Laguillaumie gave an invited talk *Anonymity-oriented Signatures based on Lattices* at the YACC'14 conference, Porquerolles, France, in June 2014.

Sylvie Boldo (Proval project) and Jean-Michel Muller wrote a popular science paper *Des ordinateurs capables de calculer plus juste* in the journal *La Recherche*.

Nicolas Brisebarre co-organizes a series of popular-science conferences, called «Éclats de sciences», at the Maison du Livre, de l'Image et du Son in Villeurbanne. About three such conferences take place each year.

Nathalie Revol gave talks for pupils at collèges and lycées, as an incentive to choose scientific careers:
lycée Camille Vernet (Valence, Drôme),
lycée Jérémie de la Ville (Charlieu, Loire),
lycée Gabriel Fauré (Annecy, Haute-Savoie),
collège Jean Renoir (Neuville-sur-Saône, Rhône).
During the “Week of mathematics”, she gave a 2-hour talk at lycée de la Côtière (La Boisse, Ain).
She gave the inaugural conference of the congress “Math en Jean's” in Lyon and the conference for the scientific camp “Math C2+”.