Section: New Results
Data Centric Networking
Participants : Chadi Barakat, Mathieu Cunche, Walid Dabbous, Diego Dujovne, Aurélien Francillon, Amine Ismail, Mohamed Ali Kaafar, Mathieu Lacage, Naveed Bin Rais, Vincent Roca, Emna Salhi, Karim Sbai, Thierry Turletti.
The work on data centric architectures is a follow-up and federation of three of our previous activities (adaptive multimedia transmission protocols for heterogeneous networks, data dissemination paradigms and peer-to-peer systems). We present hereafter the results obtained in 2009 in this area.
Application-Level Forward Error Correction Codes (AL-FEC) and their applications to broadcast/multicast systems
With the advent of broadcast/multicast systems (e.g., DVB-H/SH), large scale content broadcasting is becoming a key technology. This type of data distribution scheme largely relies on the use of Application Level Forward Error Correction codes (AL-FEC), not only to recover from erasures but also to improve the content broadcasting scheme itself (e.g., with FLUTE/ALC).
After the publication of RFC 5170 in 2008, our specification of Reed-Solomon codes and their use was published in 2009 as RFC 5510 ("Proposed Standard" maturity level). We also performed a detailed performance comparison of LDPC-Staircase, Reed-Solomon and Raptor codes in , and studied the possibility of lightweight software decoding of Reed-Solomon codes in .
Another activity consisted in improving the decoding of AL-FEC codes thanks to an appropriate code structure. Indeed, the ML (Maximum Likelihood) decoding of LDPC codes (e.g., as specified in RFC 5170) is sooner or later limited by the algorithmic complexity of Gaussian elimination. The idea is therefore to design LDPC codes whose inner structure provides both good erasure recovery capabilities and high-speed decoding, under iterative decoding as well as ML decoding. This work has been published in .
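The contrast between iterative and ML decoding can be sketched on a toy erasure code. In the hypothetical Python example below, symbols are small integers and each parity equation is a set of symbol indices whose XOR is zero; the peeling decoder handles the easy erasures, and Gauss-Jordan elimination over GF(2) recovers the patterns on which peeling gets stuck. Real LDPC-Staircase decoders operate on large sparse binary matrices, so this only illustrates the principle:

```python
def peel(equations, symbols):
    """Iterative (peeling) decoding: repeatedly solve any parity
    equation that has exactly one erased symbol left."""
    changed = True
    while changed:
        changed = False
        for eq in equations:
            missing = [i for i in eq if symbols[i] is None]
            if len(missing) == 1:
                v = 0
                for i in eq:
                    if symbols[i] is not None:
                        v ^= symbols[i]
                symbols[missing[0]] = v
                changed = True
    return symbols

def ml_decode(equations, symbols):
    """ML erasure decoding: run the cheap peeling pass first, then
    Gauss-Jordan elimination over GF(2) on the residual unknowns."""
    peel(equations, symbols)
    unknowns = sorted({i for eq in equations for i in eq if symbols[i] is None})
    col = {u: k for k, u in enumerate(unknowns)}
    rows = []                      # each row: [bitmask over unknowns, rhs]
    for eq in equations:
        mask, rhs = 0, 0
        for i in eq:
            if symbols[i] is None:
                mask |= 1 << col[i]
            else:
                rhs ^= symbols[i]
        if mask:
            rows.append([mask, rhs])
    pivots = []
    for k in range(len(unknowns)):
        p = next((r for r in rows if r[0] >> k & 1), None)
        if p is None:
            continue               # rank deficiency: this symbol is lost
        rows.remove(p)
        for r in rows + [q for _, q in pivots]:
            if r[0] >> k & 1:
                r[0] ^= p[0]
                r[1] ^= p[1]
        pivots.append((k, p))
    for k, (mask, rhs) in pivots:
        if mask == 1 << k:
            symbols[unknowns[k]] = rhs
    return symbols
```

With source symbols s0, s1, s2 and parities s0^s1, s1^s2, s0^s1^s2, erasing all three source symbols leaves every equation with at least two unknowns, so peeling alone fails, while Gaussian elimination recovers them; this is exactly the gap that a good code structure tries to keep small while preserving fast decoding.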
We have also studied an extension of LDPC-Staircase codes in order to provide an object-level authentication service. The resulting system, called VeriFEC, enables a receiver to identify the vast majority of corrupted objects (the detection probability amounts to 99.86% in the case of a single random symbol corruption) almost for free. This work has been published in .
Application-Level Forward Error Correction Codes (AL-FEC) and their applications to Robust Streaming Systems
AL-FEC codes are known to be useful to protect time-constrained flows. The goal of the IETF FECFRAME working group is to design a generic framework to enable various kinds of AL-FEC schemes to be integrated within RTP/UDP (or similar) data flows. We have proposed the use of Reed-Solomon codes and LDPC-Staircase codes within the FECFRAME framework  ,  ,  . In parallel we have started an implementation of the FECFRAME framework in order to gain an in-depth understanding of the system.
In the context of robust streaming systems, we also contributed to the analysis of the Tetrys approach, in  .
A new File delivery application for broadcast/multicast systems
FLUTE has long been the one and only official file delivery application on top of the ALC reliable multicast transport protocol. However, FLUTE has several limitations (essentially because the object meta-data are transmitted independently of the objects themselves, in spite of their inter-dependency), is intrinsically complex, and is only available for ALC.
Therefore, we started the design of FCAST, a simple, lightweight file transfer application that works on top of both ALC and NORM. This work is carried out as part of the IETF RMT working group, in collaboration with B. Adamson (NRL). It has recently been accepted as a Working Group Item and the WG Last Call should begin shortly , , .
Security of the broadcast/multicast systems
We believe that sooner or later, broadcasting systems will require security services. This is all the more true as heterogeneous broadcasting technologies will be used, for instance hybrid satellite-based and terrestrial networks, some of them being by nature open, wireless networks (e.g., WiMAX, WiFi). Two key security services are therefore the authentication of the packet origin and the packet integrity check. A key point is the ability for the terminal to perform these checks easily (the terminal often has limited processing and energy capabilities), while being tolerant to packet losses.
The TESLA (Timed Efficient Stream Loss-tolerant Authentication) scheme fulfills these requirements. We are therefore standardizing the use of TESLA in the context of the ALC and NORM reliable multicast transport protocols, within the IETF MSEC working group. The document has been reviewed by IESG, comments addressed, and it is currently in the RFC Editor queue, which means it should soon be published as an RFC  ,  ,  .
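The core of TESLA, the delayed disclosure of keys drawn from a one-way hash chain, can be sketched as follows. This is a simplified illustration, not the ALC/NORM instantiation being standardized: time synchronization, interval scheduling and the packet-safety test are omitted, and the interval key is used directly as the MAC key rather than deriving one from it.

```python
import hashlib
import hmac

def make_key_chain(seed, n):
    """One-way key chain: K_n = seed, K_{i-1} = H(K_i).  chain[i] is the
    key of time interval i; chain[0] is the anchor, distributed
    authentically at session start (e.g., inside a signed packet)."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()
    return chain

def tag_packet(key, packet):
    # the real scheme derives a separate MAC key from K_i;
    # we use K_i directly to keep the sketch short
    return hmac.new(key, packet, hashlib.sha256).digest()

def key_is_authentic(anchor, k_i, i):
    """Receiver-side check: hashing the disclosed K_i i times
    must lead back to the anchor K_0."""
    h = k_i
    for _ in range(i):
        h = hashlib.sha256(h).digest()
    return h == anchor
```

In operation, the receiver buffers each packet received in interval i together with its MAC; when K_i is disclosed d intervals later, it first authenticates the key against the anchor, then recomputes the MAC, so a lost disclosure packet can still be compensated by any later key of the chain.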
In parallel, we have specified the use of simple authentication and integrity schemes (i.e., group MAC and digital signatures) in the context of the ALC and NORM protocols in  . This activity is also carried out within the IETF RMT working group.
Authorization management in Grids
This work, carried out as part of the HIPCAL project, proposes to combine the network and system virtualization with the SPKI/HIP/IPsec protocols, in order to help the Grid communities to build and share their own computing intensive systems. More specifically, the security and authorization management system relies on the Simple Public Key Infrastructure (SPKI) protocol, which enables the creation of a lightweight, dynamic and extensible, private authorization management system, that is in line with the requirements of Grid systems.
We have implemented an SPKI library, with an API that enables its use in the context of HIPCAL as well as in other use-cases. An in-depth analysis and performance evaluation is currently in progress. These ideas have been published in .
Optimizing the DVB-SH FEC Scheme for Efficient Erasure Recovery
DVB-SH is a new broadcasting standard offering a mobile TV service for handheld devices using a hybrid satellite/terrestrial-repeater solution. A new link-layer protection algorithm called Multi-Burst Sliding Encoding (MBSE) has recently been adopted to cope with the long fading periods introduced by the direct satellite link. We proposed a method to optimize the MBSE parameters, together with an analysis of the performance gain. Various sets of parameters are studied and optimized with respect to the link performance. Furthermore, we have designed an algorithm to compute the optimal values of the MBSE parameters under given constraints. We have implemented MBSE in two UDcast DVB-SH devices and validated the optimization method through intensive experiments with typical usage scenarios over a hardware-emulated wireless link .
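The exact MBSE algorithm is defined by the DVB-SH standard, but its key idea, spreading each FEC-protected block over several consecutive bursts so that a long fade erases only a fraction of every block, can be sketched with the hypothetical helper below (symbols are opaque values; real MBSE parameters also cover code rate and burst timing, which this ignores):

```python
def mbse_interleave(blocks, spread):
    """Spread the symbols of each FEC-encoded block round-robin over
    `spread` consecutive bursts.  A fade wiping out one full burst then
    erases only about 1/spread of any given block, an amount the
    erasure code can repair.  Returns the bursts as lists of
    (block_id, symbol) pairs."""
    bursts = [[] for _ in range(len(blocks) + spread - 1)]
    for b, block in enumerate(blocks):
        for s, symbol in enumerate(block):
            bursts[b + s % spread].append((b, symbol))
    return bursts
```

Choosing `spread` is exactly the kind of trade-off the optimization addresses: a larger spread tolerates longer fades but increases the end-to-end latency and receiver memory, since a block is complete only after `spread` bursts.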
Disruption Tolerant Networking
Communication networks are traditionally assumed to be connected. However, emerging wireless applications such as vehicular networks and pocket-switched networks, coupled with volatile links, node mobility, and power outages, require the network to operate despite frequent disconnections. To this end, opportunistic routing techniques have been proposed, where a node may store-and-carry a message for some time, until a new forwarding opportunity arises. Although a number of such algorithms exist, most focus on relatively homogeneous node populations. However, in many envisioned applications, participating nodes may include handhelds, vehicles, sensors, etc. These various classes have diverse characteristics and mobility patterns, and contribute quite differently to the routing process. We have addressed the problem of routing in intermittently connected wireless networks comprising multiple classes of nodes. We have shown in  that existing solutions, which perform well in homogeneous scenarios, are much less effective in this setting. We therefore proposed a class of routing schemes that can identify the nodes of highest utility for routing, improving the delay and delivery ratio by a factor of 4 to 5. Additionally, we proposed an analytical framework based on fluid models that can be used to analyze the performance of various opportunistic routing strategies in heterogeneous settings.
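A minimal sketch of utility-based relay selection in a heterogeneous population is given below. The utility here is simply a smoothed contact rate, which is an assumption made for illustration; the utility functions of the actual schemes are richer (they can combine mobility class, contact history and destination-specific information):

```python
from collections import defaultdict

class ContactRateUtility:
    """Rank nodes by a smoothed contact rate (exponential moving
    average).  Illustrative only: it merely shows how heterogeneous
    nodes (pedestrians, vehicles, sensors, ...) can be compared so that
    message copies flow towards the most useful relays."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha                  # EMA smoothing factor
        self.rate = defaultdict(float)      # node -> estimated contact rate

    def record_window(self, node, contacts_in_window):
        # update the estimate at the end of each observation window
        self.rate[node] = ((1 - self.alpha) * self.rate[node]
                           + self.alpha * contacts_in_window)

    def forward(self, carrier, candidate):
        # hand over a message copy only to a strictly more useful node
        return self.rate[candidate] > self.rate[carrier]
```

A bus that meets many nodes per window accumulates a higher utility than a slow pedestrian, so copies drift towards high-contact nodes, which is the intuition behind the 4-5x delay and delivery-ratio improvement reported above.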
In this research area, another work focuses on an efficient message delivery mechanism to enable distribution/dissemination of messages in an internetwork connecting heterogeneous networks and prone to disruptions in connectivity. We called our protocol MeDeHa, for Message Delivery in Heterogeneous, Disruption-prone Networks. MeDeHa stores data at the link layer, addressing heterogeneity at lower layers (e.g., when intermediate nodes do not support higher-layer protocols). It also takes advantage of network heterogeneity (e.g., nodes supporting more than one network) to improve message delivery. Another important feature of MeDeHa is that there is no need to deploy special-purpose nodes such as message ferries, data mules, or throwboxes in order to relay data to intended destinations, or to connect to the backbone network wherever infrastructure is available. The network is able to store data destined to temporarily unavailable nodes for some time, depending upon the existing storage as well as quality-of-service constraints such as delivery delay bounds imposed by the application. We have evaluated MeDeHa via simulations using indoor scenarios (e.g., convention centers, exposition halls, museums) and have shown a significant improvement in delivery ratio in the face of episodic connectivity.
We have then extended the MeDeHa framework to include ad hoc network support, as the earlier implementation only supported infrastructure mode. This is the first step towards achieving network heterogeneity. The implementation currently supports wired, infrastructure wireless, and ad hoc networks, and is built within the new Network Simulator 3 (ns-3), which allows both simulation and emulation. It will thus be helpful not only for analyzing different scenarios in the simulator, but also for testing the framework on real networks in the future.
These works result from collaborations with Thrasyvoulos Spyropoulos from ETH Zurich and Katia Obraczka from the University of California, Santa Cruz (UCSC). They are carried out in the context of the COMMUNITY Associated Team (http://planete.inria.fr/COMMUNITY/ ).
In DTNs, disconnections may occur frequently. In order to achieve data delivery in such challenging environments, researchers have proposed the use of store-carry-and-forward protocols: a node may store a message in its buffer and carry it along for long periods of time, until an appropriate forwarding opportunity arises. Multiple message replicas are often propagated to increase the delivery probability. This combination of long-term storage and replication imposes a high storage and bandwidth overhead. Thus, efficient scheduling and drop policies are necessary to decide (i) the order in which messages should be replicated when contact durations are limited, and (ii) which messages should be discarded when nodes' buffers operate close to their capacity.
We have proposed an efficient joint scheduling and drop policy that can optimize different performance metrics, such as the average delivery rate and the average delivery delay. First, we present an optimal policy using global knowledge about the network; then we introduce a distributed algorithm that collects statistics about the network history and uses appropriate estimators for the global knowledge required by the optimal policy in practice. In the end, we are able to associate with each message in the network a utility value that can be computed locally and that allows it to be compared with other messages for scheduling and buffer-congestion decisions. We pursue the research in this area by looking for methods to reduce the overhead of the history-collection plane, and by trying to cast existing standard policies within the framework of our study.
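The shape of such a per-message utility can be sketched as follows. The closed-form expression below (marginal gain of one more copy under exponentially distributed inter-contact times) is only an illustrative stand-in: the estimators actually used are computed from the collected network history, not assumed.

```python
import math

def replication_utility(copies, remaining_ttl, meeting_rate):
    """Illustrative per-message utility for the delivery-rate metric:
    the marginal gain, on the delivery probability, of spreading one
    more copy of the message.  Assumes exponential inter-contact times
    with rate `meeting_rate`; NOT the estimator of the actual policy."""
    p_not_delivered = math.exp(-copies * meeting_rate * remaining_ttl)
    return p_not_delivered * (1 - math.exp(-meeting_rate * remaining_ttl))

def pick_drop_victim(buffer, meeting_rate):
    # on buffer overflow, evict the message whose extra copy
    # contributes least to the delivery ratio
    return min(buffer, key=lambda m: replication_utility(
        m["copies"], m["ttl"], meeting_rate))
```

Scheduling uses the same ranking in reverse: when a contact is too short to exchange everything, the highest-utility messages are replicated first. Messages that are already widely replicated or close to expiry naturally end up with low utility and are dropped first.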
File sharing in wireless ad hoc networks
This activity started with the PURPURA COLOR project in conjunction with the LIA laboratory at the University of Avignon and continued within the ExpeShare ITEA European project. The latter started in February 2007 and ended in October 2009. Within this activity, we focus on file sharing over wireless ad hoc networks. File sharing protocols, typically BitTorrent, are known to perform very well over the wired Internet, where end-to-end performance is almost guaranteed. However, in wireless ad hoc networks the situation is different, due to topology constraints and the fact that nodes are at the same time peers and routers. For example, in a wireless ad hoc network running standard BitTorrent, sending pieces to distant peers incurs significant overhead, due to the resources consumed in intermediate nodes. Moreover, TCP performance is known to drop sharply with the number of hops. Running file sharing with its default configuration thus no longer guarantees the best performance. For instance, the neighbor and piece selection algorithms of BitTorrent need to be revisited in wireless ad hoc scenarios, since it is no longer efficient to select and trade with peers independently of their location. A potential solution could be to limit the scope of the neighborhood. In this case, TCP connections are fast, but pieces will very likely propagate in a unique direction, from the seed to distant peers. This would prohibit peers from reciprocating data and would result in low sharing ratios and suboptimal utilization of network resources. There is a need for a solution that minimizes the average download finish time per peer while encouraging peers to collaborate by enforcing a fair sharing of data.
Last year, we presented a first solution to this problem, which we refine in , . Unlike uni-metric approaches, our solution considers the relevant performance metrics together, namely throughput, sharing and routing overhead. We define a new neighbor selection strategy that balances sharing and diversification efforts and decides on the optimal neighboring scope of a node. We also consider the diversification incentives problem and evaluate the impact of node mobility on the P2P strategy to be adopted. Through extensive simulations, we show that our solution achieves both a better download time and a better sharing ratio than uni-metric solutions.
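The tension that the neighbor selection strategy resolves can be sketched with a hypothetical scoring function: nearby peers offer high multi-hop TCP throughput (sharing effort), while distant peers hold rarer pieces (diversification effort). The linear mix and the parameter names below are assumptions made for illustration; the actual strategy derives the optimal scope analytically rather than ranking peers with a fixed weight.

```python
def peer_score(hops, beta=0.5, max_hops=6):
    """Hypothetical score mixing the two conflicting goals:
    - sharing:   multi-hop TCP throughput decays roughly with distance;
    - diversity: far peers are more likely to hold rare pieces.
    beta sets the balance between the two efforts."""
    sharing = 1.0 / hops
    diversity = hops / max_hops
    return beta * sharing + (1 - beta) * diversity

def select_neighbors(peers, k, beta=0.5):
    """peers: {peer_id: hop distance}; keep the k best-scored peers."""
    ranked = sorted(peers, key=lambda p: peer_score(peers[p], beta),
                    reverse=True)
    return ranked[:k]
```

Setting beta close to 1 reproduces the "limited scope" extreme (fast but one-directional piece flow), while beta close to 0 reproduces location-blind BitTorrent (good diversity, heavy routing overhead); the useful operating points lie in between.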
To push our research further in this direction and give it a practical flavor, we have worked on the design and implementation of a new application that enables content sharing among spontaneous communities of mobile users over wireless multi-hop connections. Our application is called BitHoc, which stands for BitTorrent for wireless ad hoc networks. It is an open-source software distributed under the GPLv3 license, publicly available for download at http://planete.inria.fr/bithoc . It is intended to be the real testbed on which we evaluate our solutions for the support and optimization of file sharing in a mobile wireless environment where an infrastructure is not needed or may not exist. The BitHoc architecture includes two principal components: a membership management service and a content sharing service. As classical tracker-based BitTorrent membership management and peer discovery are infeasible in ad hoc networks, we designed the membership management service as a distributed tracker overlay that connects the peers involved in the same sharing session (see  for more details on how this membership management overlay is constructed). Using the membership information provided by the tracker overlay, the content sharing service schedules the data transfer connections among the session members by leveraging the multi-hop routing feature of wireless ad hoc networks. The testbed in its current form is composed of PDAs and smartphones equipped with WiFi adapters and running the Windows Mobile 6 operating system.
Efficient Wireless LAN Protocols
We have worked on two different areas to increase the performance of wireless LAN protocols. First, we have proposed an efficient aggregation mechanism for the upcoming IEEE 802.11n standard. Second, we have worked on efficient PHY rate selection mechanisms for IEEE 802.11 networks.
We have proposed the Aggregation with Fragment Retransmission (AFR) mechanism to achieve high efficiency at the MAC layer of IEEE 802.11n . In the AFR scheme, multiple packets are aggregated into, and transmitted in, a single large frame. If errors occur during the transmission, only the corrupted fragments of the large frame are retransmitted. An analytical model has been developed to evaluate the throughput and delay performance of AFR over noisy channels, and to compare AFR with similar schemes in the literature. Optimal frame and fragment sizes have been calculated using this model. Transmission delays are minimized by using a zero-waiting mechanism, where frames are transmitted immediately once the MAC wins a transmission opportunity; we prove that this mechanism achieves maximum throughput. As a complement to the theoretical analysis, we investigated by simulation the impact of AFR on the performance of realistic application traffic for diverse scenarios: TCP, VoIP and HDTV traffic. The AFR scheme was developed in the context of the IEEE 802.11n working group activities. It is the result of a collaboration with Tianji Li, David Malone and Douglas Leith from the Hamilton Institute in Ireland, Qiang Ni at Brunel University, England, and Yang Xiao from the Dept. of Computer Science at the University of Alabama.
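The retransmission logic of AFR can be illustrated with a toy simulation. The helper below is a sketch under simplified assumptions (independent per-fragment corruption, no MAC timing, no per-fragment headers), not the analytical model used in the study:

```python
import random

def afr_transfer(packets, frag_size, frag_error_rate, rng):
    """Toy AFR simulation: packets are cut into fixed-size fragments
    and sent together in one large frame; after each (re)transmission,
    only the fragments hit by an error are queued again.  Returns the
    number of frames needed to deliver everything.  frag_error_rate
    must be < 1 for the loop to terminate."""
    pending = []
    for pkt in packets:
        for off in range(0, len(pkt), frag_size):
            pending.append(pkt[off:off + frag_size])
    frames = 0
    while pending:
        frames += 1
        # keep only the fragments corrupted during this frame
        pending = [f for f in pending if rng.random() < frag_error_rate]
    return frames
```

Compared with legacy 802.11, where a single bit error forces the whole (aggregated) frame to be resent, each retransmitted frame here shrinks to the corrupted fragments only, which is where the efficiency gain of AFR on noisy channels comes from.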