Section: New Results
New Dissemination Paradigms
Participants: Walid Dabbous, Sebastien Faurite, Aurelien Francillon, Mohamed Ali Kaafar, Zainab Khallouf, Vincent Roca, Thierry Turletti.
Reliable multicast protocols
We are actively participating in the RMT working group at the IETF, in particular on the FLUTE (File Delivery over Unidirectional Transport) application. FLUTE was standardized as RFC 3926 in 2004 and is currently being revised. It has been included both in the 3GPP release 6 technical specification for the MBMS (Multimedia Broadcast/Multicast Service) service and in the DVB-H IP Datacasting technical specification.
A logical and physical file aggregation scheme for FLUTE (an INRIA-Nokia Internet-Draft) is currently under discussion at the IETF. This is a follow-up to work we started in 2004, and it should become a WG item.
We are also participating in the new FLUTE specification, which takes advantage of the experience gained over the past two years in operational environments (3GPP and DVB-H). The goal is to move from an "Experimental" RFC to a "Proposed Standard" RFC.
Security in group communications
Security has become a major requirement, in particular in the context of Content Delivery Protocols (CDP). We are therefore working on an instantiation of the TESLA source authentication and packet integrity building block tailored to the particular needs of CDPs, more specifically the ALC and NORM protocols. We are working on an implementation of TESLA, fully integrated in our MCLv3 FLUTE/ALC and NORM library, and are standardizing this instantiation at the IETF RMT and MSEC working groups.
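The core idea of TESLA, delayed disclosure of MAC keys drawn from a one-way hash chain, can be sketched as follows. This is a minimal illustration under simplified assumptions (a one-interval disclosure delay, SHA-256 everywhere, no timing checks); names and parameters are ours, not those of the RMT/MSEC specifications.

```python
import hashlib
import hmac

def hkey(k):
    """One-way function defining the key chain: K_{i-1} = H(K_i)."""
    return hashlib.sha256(k).digest()

def make_mac(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()

# --- Sender: derive the key chain K_0 <- K_1 <- ... <- K_n ---
n, d = 5, 1                                  # intervals, disclosure delay
keys = [hashlib.sha256(b"secret-seed").digest()]
for _ in range(n):
    keys.append(hkey(keys[-1]))
keys.reverse()                               # keys[i] == K_i
commitment = keys[0]                         # K_0, distributed authentically

# Packet for interval i carries: payload, MAC under K_i (not yet
# disclosed), and the key K_{i-d} used d intervals earlier.
packets = [(i, b"chunk-%d" % i, make_mac(keys[i], b"chunk-%d" % i), keys[i - d])
           for i in range(1, n + 1)]

# --- Receiver: buffer packets, authenticate once keys are disclosed ---
def key_is_genuine(k, j):
    """Check that k is K_j by hashing j times down to the commitment K_0."""
    for _ in range(j):
        k = hkey(k)
    return k == commitment

buffered, authenticated = {}, []
for i, payload, tag, disclosed in packets:
    buffered[i] = (payload, tag)
    j = i - d                                # interval whose key is disclosed
    if j in buffered and key_is_genuine(disclosed, j):
        p, t = buffered.pop(j)
        if hmac.compare_digest(t, make_mac(disclosed, p)):
            authenticated.append((j, p))

print(len(authenticated))
```

Note the asymmetry this buys: receivers only need the authenticated commitment K_0 and loose time synchronization, yet a forger cannot produce a valid MAC because the key for the current interval is still secret when the packet arrives.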
Another topic is the security of the multicast routing infrastructure. Multicast is a promising technology for the distribution of streaming media, bulk data and many other added-value applications, yet its deployment is still in its infancy. This work considers one of the most challenging aspects of multicast: security. More specifically, this thesis focuses on the security of the multicast routing infrastructure from the network operator's point of view. A pragmatic and easily deployable filtering solution has been designed, implemented and evaluated. This solution makes the routing infrastructure more robust against several known attacks that take advantage of group management protocols.
Finally, we have initiated a study within the IETF that aims at analyzing the security risks associated with CDPs. The first goal of this activity is to define the general security goals, i.e. what we want to protect: the network itself, the protocol, and/or the content. In a second step, we want to list the elementary security services that make it possible to fulfill these general security goals. Some of these services are generic (e.g. object and/or packet integrity), while others are specific to RMT protocols (e.g. security schemes specific to congestion control). In a third step, we want to list the technological building blocks and solutions that can provide the desired security services. Finally, we want to highlight the CDP specificities that impact security and define some use cases. Indeed, the set of solutions proposed to fulfill the security goals will be greatly impacted by the target use case.
Large block FEC codes for the erasure channel
Traditional small block Forward Error Correction (FEC) codes, such as the Reed-Solomon Erasure (RSE) code, are known to raise efficiency problems, in particular when applied to the ALC reliable multicast protocol. We identified a class of large block FEC codes, LDPC, capable of operating on source blocks that are several hundred megabytes long. We have designed an LDPC codec and performed intensive performance evaluations. We have shown that the two FEC codes we designed, LDPC-Staircase (already known in the domain) and LDPC-Triangle, are significantly more attractive than Reed-Solomon codes, both in terms of raw encoding/decoding speed (they are an order of magnitude faster) and erasure recovery capabilities (they offer better protection with large objects).
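The speed advantage comes from the fact that such codes use only sparse XOR parity equations, solved by iterative erasure decoding, instead of dense finite-field arithmetic. The toy sketch below illustrates that principle; the dimensions and the parity layout are illustrative, not those of the standardized LDPC-Staircase/Triangle codes.

```python
import os

K, R, SYMLEN = 8, 4, 16            # source symbols, parity symbols, bytes each
src = [os.urandom(SYMLEN) for _ in range(K)]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Each parity equation XORs a slice of the sources plus the previous
# parity symbol (the "staircase"); the members of an equation XOR to zero.
eqs, parity = [], []
for i in range(R):
    members = {j for j in range(K) if j % R == i} | {K + i}
    if i > 0:
        members.add(K + i - 1)
    p = bytes(SYMLEN)
    for m in members - {K + i}:
        p = xor(p, parity[m - K] if m >= K else src[m])
    parity.append(p)
    eqs.append(members)

# Channel: erase two source symbols.
recv = src + parity
recv[2] = recv[5] = None

# Iterative decoder: repeatedly solve any equation that has exactly
# one erased member, until no further progress is possible.
progress = True
while progress:
    progress = False
    for members in eqs:
        missing = [m for m in members if recv[m] is None]
        if len(missing) == 1:
            acc = bytes(SYMLEN)
            for m in members:
                if m != missing[0]:
                    acc = xor(acc, recv[m])
            recv[missing[0]] = acc
            progress = True

print(recv[:K] == src)             # both erasures recovered
```

Each decoding step costs only a handful of XORs over symbol-sized buffers, which is why such codes scale to source blocks of hundreds of megabytes where dense Reed-Solomon decoding becomes impractical.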
We are now working on the standardization of these LDPC codes at the IETF RMT Working Group.
We have also proposed the use of LDPC codes in the context of the DVB-H IP Datacasting service. To that end, a DVB-H channel simulator has been designed in order to precisely benchmark these codes in a realistic environment, and to compare the protection offered by our application-level FEC codes with that offered by the lower-level MPE-FEC protection based on Reed-Solomon codes.
Finally, even though Reed-Solomon is not a large block FEC code and suffers from the associated limitations, we have participated in the standardization of Reed-Solomon codes at the IETF RMT Working Group. Reed-Solomon codes for the erasure channel remain a widely used technology in the context of content broadcasting.
DVB-SH MAC layer
We are working on the definition of the MAC layer of the future DVB-SH (DVB - Satellite Handheld devices) standard, meant to extend the coverage area of digital TV broadcasting systems thanks to the use of hybrid satellite/terrestrial broadcasting techniques. In this context, due to the harsh packet loss conditions, the MPE-FEC MAC layer and the associated erasure correction capabilities (provided by erasure Reed-Solomon codes), designed for the particular case of (terrestrial) DVB-H systems, are not sufficient. We are therefore studying new techniques, based on large block codes and on dispersion and/or multi-MPE-frame encoding, to improve the reception capabilities of mobile devices. A simulator has been designed for this purpose and results are expected soon. This is joint work with STMicroelectronics, in close collaboration with the DVB-SH working group.
A Backup Tree Algorithm for Multicast Overlay Networks
Application Level Multicast is a promising approach to overcome the deployment problems of IP-level multicast. We have developed an algorithm to compute a set of n-1 backup multicast delivery trees from the default multicast tree. Each backup multicast tree is characterized by the fact that exactly one link of the default multicast tree is replaced by a backup link from the set of available links. The trees can be computed individually by each of the nodes. This so-called backup multicast tree algorithm can compute the whole set of trees with a complexity of O(m log n), identical to the complexity of well-known minimum spanning tree algorithms. The backup multicast tree algorithm is the basis for the reduced multicast tree algorithm, which computes the tree that results from the default multicast tree when a particular node is removed and the links of the removed node are replaced. Several mechanisms can be used to choose these explicit backup trees. This work has been done in collaboration with Prof. Torsten Braun from the University of Bern.
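The underlying idea can be shown with a deliberately naive sketch: for each edge of the default (spanning) tree, the backup tree replaces it with the cheapest available non-tree link that reconnects the two halves of the cut. The published algorithm achieves O(m log n) with shared data structures; the topology and costs below are invented for illustration.

```python
from collections import defaultdict

# (u, v, weight) overlay links; the first four form the default tree.
links = [(0, 1, 1), (1, 2, 1), (1, 3, 2), (3, 4, 1),
         (0, 2, 3), (2, 4, 4), (0, 3, 5)]
tree = links[:4]

def component(drop, start):
    """Nodes reachable from `start` in the tree without edge `drop`."""
    adj = defaultdict(set)
    for u, v, _ in tree:
        if (u, v) != drop[:2]:
            adj[u].add(v)
            adj[v].add(u)
    seen, stack = {start}, [start]
    while stack:
        x = stack.pop()
        for y in adj[x] - seen:
            seen.add(y)
            stack.append(y)
    return seen

backup_trees = {}
for e in tree:
    side = component(e, e[0])
    # non-tree links crossing the cut created by removing e
    candidates = [l for l in links if l not in tree
                  and (l[0] in side) != (l[1] in side)]
    if candidates:
        repl = min(candidates, key=lambda l: l[2])
        backup_trees[e[:2]] = [t for t in tree if t != e] + [repl]

print(sorted(backup_trees))        # one backup tree per default-tree edge
```

Because each backup tree differs from the default tree by exactly one link, a node that detects a single link failure can switch to the matching precomputed tree without any renegotiation with the other overlay nodes.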
Locate, Cluster and Conquer: A Scalable Topology-Aware Overlay Multicast
We have designed a novel, highly scalable locating algorithm for improving multicast overlay networks. Our mechanism initially directs newcomers to the closest set of existing nodes. Each newcomer sends requests to a few nodes to build its neighborhood information. On the basis of this locating process, we have built a two-level topology-aware scheme, namely LCC (Locate, Cluster and Conquer). We have compared the scalability and efficiency of LCC with those of initially randomly connected overlays. Results demonstrate the promising performance of LCC: locating-based overlays require 70% fewer link adjustments than initially randomly connected structures, with three times faster convergence. Moreover, while being accurate, the locating process requires modest resources and incurs low overhead during new node arrivals.
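The flavor of such a locating step, probing a small sample and greedily walking toward closer nodes, can be sketched as follows. This is only an illustration under strong assumptions: synthetic 2-D positions stand in for RTT measurements, and the sampling and termination rules of the actual LCC scheme are richer than this.

```python
import math
import random

random.seed(7)
existing = [(random.random(), random.random()) for _ in range(50)]

def rtt(a, b):
    """Pretend network RTT is proportional to Euclidean distance."""
    return math.dist(a, b)

def locate(newcomer, nodes, fanout=5, rounds=3):
    """Probe a few nodes, then repeatedly recurse toward the closest one's
    neighborhood (here approximated by its nearest peers)."""
    candidates = random.sample(nodes, fanout)
    for _ in range(rounds):
        best = min(candidates, key=lambda n: rtt(newcomer, n))
        candidates = sorted(nodes, key=lambda n: rtt(best, n))[:fanout]
    return sorted(candidates, key=lambda n: rtt(newcomer, n))

newcomer = (0.5, 0.5)
closest = locate(newcomer, existing)
```

The point of the greedy walk is that the newcomer only ever probes a handful of nodes per round, so the cost of joining stays small even as the overlay grows, which is what makes the scheme scalable.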
Attacks on Virtual Networks
The recently proposed coordinate-based systems for network positioning have been shown to be accurate, with very low distance prediction error. However, these systems often rely on cooperation among nodes and assume that the information reported by probed nodes is correct. We have identified different attacks against coordinate embedding systems and have studied the impact of such attacks on the recently proposed Vivaldi decentralized positioning system. We conducted a simulation study of "genesis" attacks, carried out by malicious nodes that provide biased coordinate information and delay measurement probes. We experimented with attack strategies that aim to (i) introduce disorder in the system, (ii) fool honest nodes into moving far away from their correct positions, and (iii) isolate a particular node in the system through collusion. Our findings confirm the susceptibility of the Vivaldi system to such attacks. We have extended this work to the "injection" context, where malicious nodes are introduced into a system that has already converged, in contrast with the "genesis" attack where the malicious nodes are present from the system's creation time. This new work considers not only Vivaldi but also the NPS system. Our study demonstrates that these attacks can seriously disrupt the operation of these systems, and therefore of the virtual networks and applications relying on them for distance measurements.
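The attack surface is easy to see from Vivaldi's spring-like update rule: each node moves its coordinate to reduce the gap between predicted and measured RTT, trusting both the reported remote coordinate and the measurement. The 2-D sketch below illustrates this; the constants, the loop and the adversary model are simplified assumptions, not our exact simulation setup.

```python
def vivaldi_update(pos, remote_pos, measured_rtt, delta=0.25):
    """Move `pos` to reduce the gap between predicted and measured RTT."""
    dx = [p - r for p, r in zip(pos, remote_pos)]
    dist = max((dx[0] ** 2 + dx[1] ** 2) ** 0.5, 1e-9)
    error = measured_rtt - dist          # >0: move away, <0: move closer
    unit = [d / dist for d in dx]
    return [p + delta * error * u for p, u in zip(pos, unit)]

# Honest phase: the node converges to a position 10 ms from its peer.
honest, peer = [3.0, 4.0], [10.0, 0.0]
for _ in range(50):
    honest = vivaldi_update(honest, peer, 10.0)

# Attack phase (injection-style): the peer now reports a false, distant
# coordinate and delays probes, dragging the converged victim off position.
attacked = honest[:]
for _ in range(50):
    attacked = vivaldi_update(attacked, [500.0, 500.0], 200.0)

displacement = ((attacked[0] - honest[0]) ** 2 +
                (attacked[1] - honest[1]) ** 2) ** 0.5
print(displacement)                      # far larger than any honest RTT
```

Because the update blindly trusts `remote_pos` and `measured_rtt`, a single lying peer can pull a victim arbitrarily far, and colluding liars can steer it to a chosen region, which is exactly the isolation strategy studied above.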
From Content Distribution Networks to Content Networks
In order to make multimedia content available to potentially large and geographically distributed consumer populations, Content Distribution Networks (CDNs) have been used for many years. The main task of current CDNs is the efficient delivery and increased availability of content to consumers. Modern CDN solutions additionally aim to automate CDN management. Furthermore, modern applications do not just retrieve or access content: they also create and modify content, actively place content at appropriate locations of the infrastructure, etc. When these operations are also supported by the distribution infrastructure, it is called a Content Network (CN) instead of a CDN. In order to solve the major challenges of future CNs, researchers from different communities have to collaborate on the basis of a common terminology. In this work we have summarized the state of the art, and we have identified and discussed the most important challenges for CNs. Our conception of these challenges is supported by the answers to a questionnaire we received from many leading European research groups in the field. This work has been done in the context of the E-Next Network of Excellence (NoE) with the participation of the University of Oslo, Darmstadt University of Technology, Lancaster University and Institut Eurecom.