## Section: New Results

### Lattices and cryptography

#### Worst-Case to Average-Case Reductions for Module Lattices

Most lattice-based cryptographic schemes are built upon the assumed hardness of the Short Integer Solution (SIS) and Learning With Errors (LWE) problems. Their efficiency can be drastically improved by switching the hardness assumptions to the more compact Ring-SIS and Ring-LWE problems. However, this change of hardness assumptions comes with a possible security weakening: SIS and LWE are known to be at least as hard as standard (worst-case) problems on Euclidean lattices, whereas Ring-SIS and Ring-LWE are only known to be as hard as their restrictions to special classes of ideal lattices, corresponding to ideals of some polynomial rings. Adeline Langlois and Damien Stehlé defined the Module-SIS and Module-LWE problems, which bridge SIS with Ring-SIS, and LWE with Ring-LWE, respectively. They proved that these average-case problems are at least as hard as standard lattice problems restricted to module lattices (which themselves bridge arbitrary and ideal lattices). As these new problems enlarge the toolbox of the lattice-based cryptographer, they could prove useful for designing new schemes. Importantly, the worst-case to average-case reductions for the module problems are (qualitatively) sharp, in the sense that there exist converse reductions. This property is not known to hold in the context of Ring-SIS/Ring-LWE: ideal lattice problems could turn out to be easy without impacting the hardness of Ring-SIS/Ring-LWE [8].
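To make the relationship concrete, the following toy sketch (our illustration, not the paper's construction; all parameters are tiny and insecure) generates a Module-LWE sample over $R_q = \mathbb{Z}_q[x]/(x^n+1)$: rank $d = 1$ corresponds to Ring-LWE, while replacing ring elements by scalars ($n = 1$) recovers plain LWE.

```python
import random

def poly_mul(a, b, n, q):
    """Negacyclic convolution: multiply a and b in Z_q[x]/(x^n + 1)."""
    res = [0] * n
    for i in range(n):
        for j in range(n):
            k = i + j
            if k < n:
                res[k] = (res[k] + a[i] * b[j]) % q
            else:  # x^n = -1, so overflowing terms wrap with a sign flip
                res[k - n] = (res[k - n] - a[i] * b[j]) % q
    return res

def module_lwe_sample(s, n, q, d, noise=1):
    """One Module-LWE sample (a, b): a is a vector of d ring elements,
    and b = <a, s> + e for a small noise polynomial e.  d = 1 is Ring-LWE."""
    a = [[random.randrange(q) for _ in range(n)] for _ in range(d)]
    e = [random.randint(-noise, noise) % q for _ in range(n)]
    b = list(e)
    for i in range(d):
        prod = poly_mul(a[i], s[i], n, q)
        b = [(x + y) % q for x, y in zip(b, prod)]
    return a, b
```

Varying the module rank $d$ between $1$ and $n$ interpolates between the Ring-LWE and LWE extremes; the rank is the knob that trades algebraic structure for efficiency.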

#### Semantically Secure Lattice Codes for the Gaussian Wiretap Channel

Cong Ling (Imperial College, UK), Laura Luzzi (ENSEA), Jean-Claude Belfiore (Telecom ParisTech) and Damien Stehlé proposed a new scheme of wiretap lattice coding that achieves semantic security and strong secrecy over the Gaussian wiretap channel. The key tool in their security proof is the flatness factor, which characterizes the convergence of the conditional output distributions corresponding to different messages and leads to an upper bound on the information leakage. They not only introduced the notion of secrecy-good lattices, but also proposed the flatness factor as a design criterion for such lattices. They considered both the modulo-lattice Gaussian channel and the genuine Gaussian channel. In the latter case, they proposed a novel secrecy coding scheme based on the discrete Gaussian distribution over a lattice, which achieves the secrecy capacity to within a half nat under mild conditions. No a priori distribution of the message is assumed, and no dither is used in their proposed schemes [9].
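For context, the flatness factor of an $n$-dimensional lattice $\Lambda$ with volume $V(\Lambda)$ may be written as follows (a paraphrase of the standard definition; normalization conventions vary across the literature):

```latex
\epsilon_{\Lambda}(\sigma)
  \;=\; \max_{x \in \mathcal{R}(\Lambda)}
        \left| V(\Lambda)\, f_{\sigma,\Lambda}(x) - 1 \right|,
\qquad
f_{\sigma,\Lambda}(x)
  \;=\; \sum_{\lambda \in \Lambda}
        \frac{1}{(\sqrt{2\pi}\,\sigma)^{n}}
        \, e^{-\|x-\lambda\|^{2}/(2\sigma^{2})},
```

where $\mathcal{R}(\Lambda)$ is a fundamental region of $\Lambda$. A small flatness factor means that the Gaussian folded modulo the lattice is nearly uniform, which is what makes the eavesdropper's conditional output distributions nearly independent of the message.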

#### GGHLite: More Efficient Multilinear Maps from Ideal Lattices

The Garg-Gentry-Halevi (GGH) Graded Encoding Scheme, based on ideal lattices, is the first plausible approximation to a cryptographic multilinear map. Unfortunately, the scheme requires very large parameters to provide security for its underlying encoding re-randomization process. Adeline Langlois, Damien Stehlé and Ron Steinfeld (Monash University, Australia) formalized, simplified and improved the efficiency and the security analysis of the re-randomization process in the GGH construction. This results in a new construction that they called GGHLite. In particular, they first lowered the size of a standard deviation parameter of the GGH re-randomization process from exponential to polynomial in the security parameter. This first improvement is obtained via a finer security analysis of the so-called drowning step of re-randomization, in which they applied the Rényi divergence instead of the conventional statistical distance as a measure of distance between distributions. Their second improvement is to reduce the number of randomizers needed to 2, independently of the dimension of the underlying ideal lattices. These two contributions allowed them to decrease the bit size of the public parameters to $O(\lambda \log^2 \lambda)$ in GGHLite, with respect to the security parameter $\lambda$ (for a constant multilinearity parameter $\kappa$) [22].
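The divergence measure behind the drowning-step analysis can be illustrated numerically (a toy sketch of ours, following the order-$a$ Rényi divergence $R_a(P\|Q) = \big(\sum_x P(x)^a / Q(x)^{a-1}\big)^{1/(a-1)}$ as used in this line of work):

```python
def renyi_divergence(p, q, a=2.0):
    """Order-a Renyi divergence R_a(P || Q) of two discrete distributions,
    given as probability lists over the same support (requires a > 1)."""
    assert abs(sum(p) - 1) < 1e-9 and abs(sum(q) - 1) < 1e-9
    s = sum(pi ** a / qi ** (a - 1) for pi, qi in zip(p, q) if pi > 0)
    return s ** (1 / (a - 1))

def statistical_distance(p, q):
    """Total variation distance, for comparison."""
    return 0.5 * sum(abs(pi - qi) for pi, qi in zip(p, q))
```

Note that `renyi_divergence(p, p)` equals 1 for identical distributions, and that the Rényi divergence is multiplicative over independent samples where the statistical distance is only additive; this multiplicativity is the leverage exploited in the finer drowning-step analysis.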

#### LLL reducing with the most significant bits

Let $B$ be a basis of a Euclidean lattice, and $\tilde{B}$ an approximation thereof. Saruchi (IIT Delhi, India), Ivan Morel, Damien Stehlé and Gilles Villard gave a sufficient condition on the closeness between $\tilde{B}$ and $B$ so that an LLL-reducing transformation $U$ for $\tilde{B}$ remains valid for $B$. Further, they analysed an efficient reduction algorithm when $B$ is itself a small deformation of an LLL-reduced basis. Applications include speeding up reduction by keeping only the most significant bits of $B$, reducing a basis that is only approximately known, and efficiently batching LLL reductions for closely related inputs [30].
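The idea can be illustrated in dimension two, with Lagrange (Gauss) reduction standing in for LLL (a toy sketch of ours; the paper's closeness condition and algorithm are not reproduced here): reduce a truncated copy $\tilde{B}$ of the basis, record the unimodular transformation $U$, and apply $U$ to the original $B$.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def lagrange_reduce(b1, b2):
    """2D Lagrange (Gauss) reduction.  Returns (v1, v2, U) where the rows
    of U give the integer coordinates of v1, v2 in terms of b1, b2."""
    v1, v2, U = list(b1), list(b2), [[1, 0], [0, 1]]
    if dot(v1, v1) > dot(v2, v2):
        v1, v2, U = v2, v1, [U[1], U[0]]
    while True:
        mu = round(dot(v1, v2) / dot(v1, v1))
        v2 = [a - mu * b for a, b in zip(v2, v1)]
        U[1] = [a - mu * b for a, b in zip(U[1], U[0])]
        if dot(v2, v2) >= dot(v1, v1):
            return v1, v2, U
        v1, v2, U = v2, v1, [U[1], U[0]]

def truncate(v, bits):
    """Keep only the most significant bits: zero out the low `bits` bits."""
    return [(x >> bits) << bits for x in v]

# Reduce a truncated basis, then recycle the transformation on the original.
B = [[-4938, 176541], [-17283, 468513]]
Bt = [truncate(row, 6) for row in B]
_, _, U = lagrange_reduce(*Bt)
reduced = [[U[i][0] * B[0][j] + U[i][1] * B[1][j] for j in range(2)]
           for i in range(2)]
```

Because $U$ is unimodular, the rows of `reduced` still generate the lattice of $B$, and when $B$ and its truncation are close enough the output remains (nearly) reduced; quantifying "close enough" is precisely the sufficient condition established in the paper.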

#### Hardness of $k$-LWE and Applications in Traitor Tracing

San Ling (NTU, Singapore), Duong Hieu Phan (LAGA), Damien Stehlé and
Ron Steinfeld (Monash University, Australia) introduced the $k$-LWE
problem, a Learning With Errors variant of the $k$-SIS problem. The
Boneh-Freeman reduction from SIS to $k$-SIS suffers from an
exponential loss in $k$. Ling *et al.* improved and extended it
to an LWE to $k$-LWE reduction with a polynomial loss in $k$, by
relying on a new technique involving trapdoors for random integer
kernel lattices. Based on this hardness result, they presented the
first algebraic construction of a traitor tracing scheme whose
security relies on the worst-case hardness of standard lattice
problems. The proposed LWE traitor tracing is almost as efficient as
LWE encryption. Further, it achieves public traceability, i.e., it
allows the authority to delegate the tracing capability to untrusted
parties. To this aim, Ling *et al.* introduced the notion of a
projective sampling family, in which each sampling function is keyed
and, with a projection of the key on a well-chosen space, one can
simulate the sampling function in a computationally indistinguishable
way. The construction of a projective sampling family from $k$-LWE
allows them to achieve public traceability by publishing the projected
keys of the users [27].
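For readers unfamiliar with the efficiency baseline mentioned above, plain LWE encryption can be sketched as a toy Regev-style scheme (our illustration, with tiny, insecure parameters):

```python
import random

n, m, q = 8, 20, 257  # toy parameters, far too small for security

def keygen():
    s = [random.randrange(q) for _ in range(n)]
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
    e = [random.choice([-1, 0, 1]) for _ in range(m)]          # small errors
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return (A, b), s

def encrypt(pk, bit):
    A, b = pk
    r = [random.randint(0, 1) for _ in range(m)]   # random subset of samples
    c1 = [sum(r[i] * A[i][j] for i in range(m)) % q for j in range(n)]
    c2 = (sum(r[i] * b[i] for i in range(m)) + bit * (q // 2)) % q
    return c1, c2

def decrypt(sk, ct):
    c1, c2 = ct
    d = (c2 - sum(c1[j] * sk[j] for j in range(n))) % q
    return 1 if q // 4 < d < 3 * q // 4 else 0     # close to q/2 means bit 1
```

With these parameters the accumulated error satisfies $|r \cdot e| \le m = 20 < q/4$, so decryption is always correct here; real instantiations take $n$ in the hundreds.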

#### Lattice-Based Group Signatures Scheme with Verifier-local Revocation

Support of membership revocation is a desirable functionality for any group signature scheme. Among the known revocation approaches, verifier-local revocation (VLR) seems to be the most flexible one, because it only requires the verifiers to possess some up-to-date revocation information, but not the signers. All of the contemporary VLR group signatures operate in the bilinear map setting, and all of them will be insecure once quantum computers become a reality. Adeline Langlois, San Ling, Khoa Nguyen and Huaxiong Wang (NTU, Singapore) introduced the first lattice-based VLR group signature [21], and thus the first such scheme that is believed to be quantum-resistant. In comparison with existing lattice-based group signatures, this scheme has several noticeable advantages: support of membership revocation, logarithmic-size signatures, and a weaker security assumption. In the random oracle model, their scheme is proved secure based on the hardness of the Shortest Independent Vectors Problem with approximation factor $\gamma = \tilde{O}(n^{1.5})$, an assumption that is as weak as those of state-of-the-art lattice-based standard signatures. Moreover, the construction works without relying on encryption schemes, which is an intriguing feature for group signatures.

#### Proxy Re-Encryption Scheme Supporting a Selection of Delegatees

Julien Devigne (Orange Labs), Eleonora Guerrini (Univ. Montpellier 2, LIRMM) and Fabien Laguillaumie adapted the primitive of proxy re-encryption, which allows a user to decide that, in case of his unavailability, one (or several) particular users, the delegatees, will be able to read his confidential messages. They modified it so that a sender can choose who among many potential delegatees will be able to decrypt his messages, and proposed a simple and efficient scheme that is secure under chosen-plaintext attack under a standard algorithmic assumption in a bilinear setting. They also investigated the possibility of adding traceability of the proxy, so that one can detect whether it has leaked some re-encryption keys [17].

#### Practical validation of several fault attacks against the Miller algorithm

Ronan Lashermes (SAS-ENSMSE, PRISM), Marie Paindavoine, Nadia El Mrabet (Univ. P8, LIASD), Jacques Fournier (SAS-ENSMSE) and Louis Goubin (UVSQ, PRISM) described practical implementations of fault attacks against the Miller algorithm, which computes pairing evaluations on algebraic curves. These implementations validate common fault models used against pairings. In the light of the implemented fault attacks, they showed that some blinding techniques proposed to protect the algorithm against side-channel analyses cannot be used as countermeasures against the implemented fault attacks [23].

#### Non-Malleability from Malleability: Simulation-Sound Quasi-Adaptive NIZK Proofs and CCA2-Secure Encryption from Homomorphic Signatures

Verifiability is central to building protocols and systems with
integrity. Initially, efficient methods employed the Fiat-Shamir
heuristic. Since 2008, the Groth-Sahai techniques have been the most
efficient in constructing non-interactive witness indistinguishable
and zero-knowledge proofs for algebraic relations in the standard
model. For the important task of proving membership in linear
subspaces, Jutla and Roy (Asiacrypt 2013) gave significantly more
efficient proofs in the quasi-adaptive setting (QA-NIZK). For
membership in the row space of a $t\times n$ matrix, their QA-NIZK
proofs save $\Omega(t)$ group elements compared to Groth-Sahai.
In [26], Benoît Libert, Thomas Peters (UCL, Belgium), Marc Joye (Technicolor,
USA) and Moti Yung (Google and Columbia U, USA) gave
QA-NIZK proofs made of a *constant* number of group elements
– regardless of the number of equations or the number of variables –
and additionally proved them *unbounded* simulation-sound. Unlike
previous unbounded simulation-sound Groth-Sahai-based proofs, their
construction does not involve quadratic pairing product equations and
does not rely on a chosen-ciphertext-secure encryption
scheme. Instead, they built on structure-preserving signatures with
homomorphic properties. They applied their methods to design new and
improved CCA2-secure encryption schemes. In particular, they built the
first efficient threshold CCA-secure keyed-homomorphic encryption
scheme (*i.e.*, where homomorphic operations can only be carried
out using a dedicated evaluation key) with publicly verifiable
ciphertexts.

#### Born and Raised Distributively: Fully Distributed Non-Interactive Adaptively-Secure Threshold Signatures with Short Shares

Threshold cryptography is a fundamental distributed computational
paradigm for enhancing the availability and the security of
cryptographic public-key schemes. It does so by dividing private keys
into $n$ shares handed out to distinct servers. In threshold signature
schemes, a set of at least $t+1\le n$ servers is needed to produce a valid
digital signature. Availability is assured by the fact that any subset
of $t+1$ servers can produce a signature when authorized. At the same
time, the scheme should remain robust (in the fault tolerance sense)
and unforgeable (cryptographically) against up to $t$ corrupted servers;
*i.e.*, it adds quorum control to traditional cryptographic
services and introduces redundancy. Originally, most practical
threshold signatures had a number of demerits: they were
analyzed in a static corruption model (where the set of corrupted
servers is fixed at the very beginning of the attack), they required
interaction, they assumed a trusted dealer in the key generation phase
(so that the system was not fully distributed), or they suffered from
certain overheads in terms of storage (large share sizes).
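The $(t+1)$-out-of-$n$ availability property rests on classical Shamir secret sharing over a prime field, which can be sketched as follows (a toy illustration of ours, not the scheme of [24], which additionally provides non-interactive partial signatures and adaptive security):

```python
import random

P = 2**61 - 1  # a Mersenne prime modulus (toy choice)

def share(secret, t, n):
    """Split `secret` into n shares; any t+1 of them reconstruct it."""
    # Random degree-t polynomial with constant term equal to the secret.
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0, modulo P."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Any $t+1$ shares determine the degree-$t$ polynomial and hence the secret, while $t$ or fewer reveal nothing; threshold signature schemes apply the same interpolation "in the exponent" so that the key itself is never reassembled.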

In [24], Benoît Libert, Marc Joye
(Technicolor, USA) and Moti Yung (Google and Columbia U, USA)
constructed practical *fully distributed* (the private key is born
distributed), non-interactive schemes – where the servers can compute
their partial signatures without communication with other servers –
with adaptive security (*i.e.*, the adversary corrupts servers
dynamically based on its full view of the history of the
system). Their schemes are very efficient in terms of computation,
communication, and scalable storage (with private key shares of size
$O(1)$, whereas certain solutions incur $O(n)$ storage costs at each
server). Unlike other adaptively secure schemes, their schemes are
erasure-free (reliable erasure is a hard-to-assure and
hard-to-administer property in actual systems). Such a fully distributed
highly constrained scheme has been an open problem in the area. In
particular, and of special interest, is the fact that Pedersen's
traditional distributed key generation (DKG) protocol can be safely
employed in the initial key generation phase when the system is born
– although it is well-known not to ensure uniformly distributed
public keys. An advantage is that this protocol only takes one
round optimistically (in the absence of faulty players).

#### Concise Multi-challenge CCA-Secure Encryption and Signatures with Almost Tight Security

To gain strong confidence in the security of a public-key scheme, it is most desirable for the security proof to feature a tight reduction between the adversary and the algorithm solving the underlying hard problem. Recently, Chen and Wee (Crypto '13) described the first Identity-Based Encryption scheme with almost tight security under a standard assumption. Here, “almost tight” means that the security reduction only loses a factor $O(\lambda)$ – where $\lambda$ is the security parameter – instead of a factor proportional to the number of adversarial queries. Chen and Wee also gave the shortest signatures whose security almost tightly relates to a simple assumption in the standard model. Also recently, Hofheinz and Jager (Crypto '12) constructed the first CCA-secure public-key encryption scheme in the multi-user setting with tight security. These constructions give schemes that are significantly less efficient in length (and thus processing) when compared with the earlier schemes with loose reductions in their proof of security. Hofheinz and Jager's scheme has a ciphertext of a few hundred group elements, and they left open the problem of finding truly efficient constructions. Likewise, Chen and Wee's signatures and IBE schemes are somewhat less efficient than previous constructions with loose reductions from the same assumptions.

In [25], Benoît Libert, Thomas Peters (UCL, Belgium), Marc Joye (Technicolor, USA) and Moti Yung (Google and Columbia U, USA) considered space-efficient schemes with security almost tightly related to standard assumptions. As a step in solving the open question by Hofheinz and Jager, they constructed an efficient CCA-secure public-key encryption scheme whose chosen-ciphertext security in the multi-challenge, multi-user setting almost tightly relates to the DLIN assumption (in the standard model). Quite remarkably, the ciphertext size decreases to 69 group elements under the DLIN assumption, whereas the best previous solution required about 400 group elements. Their scheme is obtained by taking advantage of a new almost tightly secure signature scheme (in the standard model) that they developed, based on the recent concise proofs of linear subspace membership in the quasi-adaptive non-interactive zero-knowledge setting (QA-NIZK) defined by Jutla and Roy (Asiacrypt '13). The new signature scheme reduces the length of the previous such signatures (by Chen and Wee) by 37% under the Decision Linear assumption, by almost 50% under the K-LIN assumption, and it becomes only 3 group elements long under the Symmetric eXternal Diffie-Hellman assumption. Their signatures are obtained by carefully combining the proof technique of Chen and Wee and the above-mentioned QA-NIZK proofs.