Section: New Results
Data Centric Networking
The work on data centric architectures is a follow-up and federation of three of our previous activities (adaptive multimedia transmission protocols for heterogeneous networks, data dissemination paradigms and peer-to-peer systems). We present hereafter the results obtained in 2008 in this area.
Application-Level Forward Error Correction Codes (AL-FEC) and their applications to broadcast/multicast systems
With the advent of broadcast/multicast systems (e.g., DVB-H/SH), large scale content broadcasting is becoming a key technology. This type of data distribution scheme largely relies on the use of Application Level Forward Error Correction codes (AL-FEC), not only to recover from erasures but also to improve the content broadcasting scheme itself (e.g., with FLUTE/ALC).
We introduced in 2005, and standardized within the IETF RMT working group, the LDPC-staircase and LDPC-triangle large-block FEC codes. These specifications are now an IETF standard ("proposed standard" maturity level), RFC 5170  .
Another activity consisted in improving the erasure recovery capabilities of these codes. This was achieved by means of a hybrid scheme that combines iterative decoding with Maximum-Likelihood decoding (based on Gaussian elimination). Our LDPC codes are now extremely close to ideal codes in many circumstances, while keeping a high decoding speed. This work is described in  and  .
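The principle of this hybrid decoder can be illustrated with a minimal sketch. This is not our implementation: symbols here are small integers combined with XOR, whereas the real codes operate on packets, and the ML step uses an optimized Gaussian elimination. Each parity check is a set of symbol indices whose XOR must be zero; iterative "peeling" decoding recovers any check with a single erasure, and when it stalls, Gaussian elimination over GF(2) resolves the remaining erasures.

```python
def iterative_decode(checks, symbols):
    """Peeling decoder: repeatedly solve checks that have one erasure left."""
    progress = True
    while progress:
        progress = False
        for check in checks:
            erased = [i for i in check if symbols[i] is None]
            if len(erased) == 1:
                acc = 0
                for i in check:
                    if symbols[i] is not None:
                        acc ^= symbols[i]
                symbols[erased[0]] = acc
                progress = True
    return symbols

def ml_decode(checks, symbols):
    """ML fallback: Gaussian elimination over GF(2) on remaining erasures."""
    unknown = sorted({i for c in checks for i in c if symbols[i] is None})
    col = {idx: j for j, idx in enumerate(unknown)}
    rows = []
    for check in checks:
        mask, rhs = 0, 0
        for i in check:
            if symbols[i] is None:
                mask |= 1 << col[i]
            else:
                rhs ^= symbols[i]
        if mask:
            rows.append([mask, rhs])
    used, pivots = set(), {}
    for j in range(len(unknown)):
        k = next((k for k, r in enumerate(rows)
                  if k not in used and r[0] & (1 << j)), None)
        if k is None:
            continue  # this unknown cannot be resolved
        used.add(k)
        pivots[j] = rows[k]
        for r in rows:
            if r is not rows[k] and r[0] & (1 << j):
                r[0] ^= rows[k][0]
                r[1] ^= rows[k][1]
    for j, row in pivots.items():
        if row[0] == 1 << j:  # fully reduced: unique solution for this symbol
            symbols[unknown[j]] = row[1]
    return symbols

def hybrid_decode(checks, symbols):
    """Cheap iterative pass first; ML only on what remains."""
    return ml_decode(checks, iterative_decode(checks, symbols))
```

The speed advantage comes from the ordering: the iterative pass resolves most erasures at negligible cost, so the (cubic-cost) Gaussian elimination only runs on the small residual system.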
A new File delivery application for broadcast/multicast systems
FLUTE has long been the one and only official file delivery application on top of the ALC reliable multicast transport protocol. However, FLUTE has several limitations (essentially because the object meta-data are transmitted independently of the objects themselves, in spite of their inter-dependency), is intrinsically complex, and is only available for ALC. We therefore started the design of FCAST, a simple, lightweight file transfer application that works on top of both ALC and NORM and that, furthermore, bypasses the IPR claims of Nokia on FLUTE. This work is carried out as part of the IETF RMT Working Group, in collaboration with B. Adamson (NRL)  ,  ,  .
Security of the broadcast/multicast systems
We believe that sooner or later, broadcasting systems will require security services. This is all the more true as heterogeneous broadcasting technologies will be used, for instance hybrid satellite-based and terrestrial networks, some of them being by nature open, wireless networks (e.g., WiMAX, WiFi). In this context, two key security services are the authentication of the packet origin and the packet integrity check. A key point is the ability for the terminal to perform these checks easily (the terminal often has limited processing and energy capabilities), while being tolerant to packet losses. The TESLA (Timed Efficient Stream Loss-tolerant Authentication) scheme fulfills these requirements. We are therefore standardizing the use of TESLA in the context of the ALC and NORM reliable multicast transport protocols, within the IETF MSEC working group  ,  ,  ,  . The document passed Working Group Last Call in September-October 2008; a new revision has been submitted to address the comments received, and a second Working Group Last Call will be issued soon. In addition, an implementation of TESLA, integrated within our FLUTE/FCAST/ALC protocol stack, has been carried out as part of the HIPCAL project.
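The core TESLA mechanism can be sketched in a few lines (this is an illustration of the principle, not the standardized protocol, which adds time synchronization and interval scheduling): keys form a one-way hash chain, each packet is MAC'ed with a key that is disclosed only in a later packet, and a receiver authenticates a disclosed key by hashing it back to the initial commitment. Verification thus costs only a few hash operations, which suits constrained terminals, and works regardless of which packets were lost.

```python
import hashlib
import hmac

def make_key_chain(seed: bytes, n: int):
    """Build K[0..n] with K[i] = H(K[i+1]); K[0] is the public commitment."""
    chain = [seed]
    for _ in range(n):
        chain.append(hashlib.sha256(chain[-1]).digest())
    chain.reverse()
    return chain  # chain[0]: commitment, chain[i]: key of interval i

def mac(key: bytes, payload: bytes) -> bytes:
    return hmac.new(key, payload, hashlib.sha256).digest()

def verify_key(commitment: bytes, key: bytes, interval: int) -> bool:
    """Hash the disclosed key 'interval' times back to the commitment."""
    h = key
    for _ in range(interval):
        h = hashlib.sha256(h).digest()
    return h == commitment

# Sender: packet i carries (payload, MAC under K[i], disclosed key K[i-d]).
# Receiver: buffer packet i until K[i] is disclosed, then authenticate it.
def authenticate(commitment, interval, key, payload, tag) -> bool:
    return (verify_key(commitment, key, interval)
            and hmac.compare_digest(mac(key, payload), tag))
```

The loss tolerance is visible in `verify_key`: even if every packet carrying intermediate keys is lost, any later disclosed key still hashes back to the commitment.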
In parallel, we have specified the use of simple authentication and integrity schemes (i.e., group MAC and digital signatures) in the context of the ALC and NORM protocols in  , and we are discussing security aspects in general in  ,  ,  . These activities are also carried out within the IETF RMT working group.
Authorization management in Grids
This work, carried out as part of the HIPCAL project, combines network and system virtualization with the SPKI/HIP/IPsec protocols, in order to help Grid communities build and share their own computing-intensive systems. More specifically, the security and authorization management system relies on the Simple Public Key Infrastructure (SPKI) protocol, which enables the creation of a lightweight, dynamic and extensible private authorization management system that is in line with the requirements of Grid systems. An implementation of SPKI has been carried out and is currently being integrated into the HIPCAL system. This work is also described in a paper currently under review.
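The flavor of SPKI-style authorization can be sketched as follows (a simplified illustration, not our implementation: real SPKI certificates are signed S-expressions with validity dates and tag intersection): each certificate grants a subject some rights from an issuer, with a flag saying whether the subject may delegate further, and a request is authorized if a delegation chain links the resource owner to the requester.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cert:
    issuer: str        # key (here: name) of the granter
    subject: str       # key of the grantee
    rights: frozenset  # authorization tag, e.g. {"submit-job"}
    delegate: bool     # may the subject re-delegate this right?

def authorized(owner, requester, right, certs, _seen=frozenset()):
    """True if a certificate chain from 'owner' grants 'right' to 'requester'."""
    if owner == requester:
        return True
    for c in certs:
        if c.issuer == owner and right in c.rights and c.subject not in _seen:
            if c.subject == requester:
                return True  # final grant: no delegation flag needed
            # Intermediate hops must carry the delegation flag.
            if c.delegate and authorized(c.subject, requester, right,
                                         certs, _seen | {owner}):
                return True
    return False
```

The appeal for Grids is that no central authority is needed: any key holder can extend the chain by issuing a new certificate, and rights can only narrow along the chain.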
Enhanced MAC level Encoding scheme for Mobile Satellite TV Broadcasting
Protecting data against long fading periods is one of the greatest challenges posed by a satellite delivery system, like DVB-SH, that offers multimedia services to mobile devices. To deal with this challenge, several enhancements and modifications of the existing terrestrial mobile TV (DVB-H) physical and link layers are being considered. These solutions provide the required protection depth, but they do not take into account the specificities of mobile handheld devices, such as power consumption, memory constraints and chipset implementation costs. In addition to our work on application-level encoding schemes, we explored the design of a MAC-level scheme. We have proposed an innovative algorithm, called Multi Burst Sliding Encoding (MBSE), that extends the DVB-H intra-burst (MPE-FEC) protection to an inter-burst protection, so that complete burst losses can be recovered while taking into account the specificities of mobile handheld devices. Thanks to a careful organization of the data, our algorithm provides protection against long-term fading while still using the RS code implemented in DVB-H chipsets. We evaluated the performance of MBSE through both theoretical analysis and intensive simulations and experiments. The results show good performance in terms of protection, battery and memory savings. MBSE is now under standardization and is considered by the DVB Forum as the main solution for DVB-SH class terminals .
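The inter-burst idea can be illustrated with a toy sliding scheme (this is a stand-in, not the actual MBSE data organization: real MBSE reuses the RS decoder already present in DVB-H chipsets, while plain XOR parity is used here for brevity): data from the last W bursts is combined into parity carried with the current burst, so that one complete burst loss inside the window can be rebuilt.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_bursts(bursts, window):
    """Attach to each burst the XOR of the previous 'window' bursts."""
    out = []
    for i, burst in enumerate(bursts):
        parity = bytes(len(burst))  # all-zero accumulator
        for j in range(max(0, i - window), i):
            parity = xor_bytes(parity, bursts[j])
        out.append((burst, parity))
    return out

def recover_lost_burst(received, lost_index, window):
    """Rebuild a wholly lost burst from the parity of the next burst."""
    k = lost_index + 1
    if k >= len(received) or received[k] is None:
        return None  # loss not covered by a surviving parity
    _, parity = received[k]
    rebuilt = parity
    for j in range(max(0, k - window), k):
        if j != lost_index:
            rebuilt = xor_bytes(rebuilt, received[j][0])
    return rebuilt
```

The key property, shared with MBSE, is that protection spans bursts without new hardware: a receiver that misses an entire burst (e.g., during a long fade) recovers it from data it would have buffered anyway.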
Disruption Tolerant Networking
Communication networks are traditionally assumed to be connected. However, emerging wireless applications such as vehicular networks, pocket-switched networks, etc., coupled with volatile links, node mobility, and power outages, will require the network to operate despite frequent disconnections. To this end, opportunistic routing techniques have been proposed, where a node may store-and-carry a message for some time, until a new forwarding opportunity arises. Although a number of such algorithms exist, most focus on relatively homogeneous node populations. However, in many envisioned applications, participating nodes might include handhelds, vehicles, sensors, etc. These various classes have diverse characteristics and mobility patterns, and will contribute quite differently to the routing process. We have addressed the problem of routing in intermittently connected wireless networks comprising multiple classes of nodes. We have shown in  that proposed solutions, which perform well in homogeneous scenarios, do not perform as well in this setting. To this end, we proposed a class of routing schemes that can identify the nodes of highest utility for routing, improving the delay and delivery ratio by a factor of 4 to 5. Additionally, we proposed an analytical framework based on fluid models that can be used to analyze the performance of various opportunistic routing strategies in heterogeneous settings.
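A utility-based forwarding decision of this kind can be sketched as follows (the utility formula and the handover margin below are illustrative stand-ins, not the paper's exact scheme): a node's utility for a destination grows with how often and how recently it meets that destination, and a message is handed over only to relays of clearly higher utility, so that a few well-connected nodes (e.g., vehicles rather than sensors) end up carrying most of the traffic.

```python
import math

def utility(contact_rate, time_since_last_contact):
    """Higher for nodes meeting the destination often and recently."""
    return contact_rate * math.exp(-0.1 * time_since_last_contact)

def should_forward(my_rate, my_elapsed, peer_rate, peer_elapsed, margin=1.2):
    """Hand the message over only on a clear utility improvement.

    The margin avoids ping-ponging copies between near-equal relays.
    """
    return utility(peer_rate, peer_elapsed) > margin * utility(my_rate, my_elapsed)
```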
In this research area, another work focuses on an efficient message delivery mechanism that enables the distribution/dissemination of messages across an internetwork connecting heterogeneous networks and prone to disruptions in connectivity. We call our protocol MeDeHa, for Message Delivery in Heterogeneous, Disruption-prone Networks. MeDeHa stores data at the link layer, addressing heterogeneity at the lower layers (e.g., when intermediate nodes do not support higher-layer protocols). It also takes advantage of network heterogeneity (e.g., nodes supporting more than one network) to improve message delivery. Another important feature of MeDeHa is that there is no need to deploy special-purpose nodes such as message ferries, data mules, or throwboxes to relay data to the intended destinations or to connect to the backbone network wherever infrastructure is available. The network is able to store data destined to temporarily unavailable nodes for some time, depending on the available storage as well as quality-of-service constraints such as delivery delay bounds imposed by the application. We have evaluated MeDeHa via simulations of indoor scenarios (e.g., convention centers, exposition halls, museums) and have shown significant improvement in delivery ratio in the face of episodic connectivity  .
These works are the result of collaborations with Thrasyvoulos Spyropoulos from ETH Zurich and Katia Obraczka from University of California Santa Cruz (UCSC).
A third activity in this area is on efficient message delivery in DTNs. Delay Tolerant Networks are wireless networks where disconnections may occur frequently. In order to achieve data delivery in such challenging environments, researchers have proposed the use of store-carry-and-forward protocols: there, a node may store a message in its buffer and carry it along for long periods of time, until an appropriate forwarding opportunity arises. Multiple message replicas are often propagated to increase delivery probability. This combination of long-term storage and replication imposes a high storage and bandwidth overhead. Thus, efficient scheduling and drop policies are necessary to: (i) decide on the order by which messages should be replicated when contact durations are limited, and (ii) which messages should be discarded when nodes' buffers operate close to their capacity.
In  ,  , we propose an efficient joint scheduling and drop policy that can optimize different performance metrics, such as the average delivery rate and the average delivery delay. Using the theory of encounter-based message dissemination, we first derive an optimal policy based on global knowledge about the network. We then introduce a distributed algorithm that collects statistics about the network history and uses appropriate estimators to approximate, in practice, the global knowledge required by the optimal policy. Using simulations based on a synthetic mobility model and a real mobility trace, we show that our history-based statistical policy successfully approximates the performance of the optimal policy in all considered scenarios. At the same time, our optimal policy and its distributed variant outperform existing resource allocation schemes for DTNs, both in terms of average delivery ratio and average delivery delay.
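The shape of such a policy can be sketched as follows (the utility formula is a simplified stand-in for illustration, not the paper's derivation): each buffered message is assigned a marginal utility from its estimated number of copies in the network and its remaining lifetime; replication serves the highest-utility messages first, and when the buffer overflows, the lowest-utility message is evicted.

```python
import math

def delivery_utility(copies_estimate, remaining_ttl, meeting_rate):
    """Marginal gain, for delivery probability, of one extra copy.

    Under exponential inter-meeting times, P(miss) decays like
    exp(-rate * copies * ttl); one extra copy shaves off the factor
    below, so rare, long-lived messages rank highest.
    """
    p_miss = math.exp(-meeting_rate * copies_estimate * remaining_ttl)
    return p_miss * (1 - math.exp(-meeting_rate * remaining_ttl))

def schedule_for_replication(buffer, meeting_rate):
    """Replicate the most valuable messages first at a contact."""
    return sorted(buffer, key=lambda m: delivery_utility(
        m["copies"], m["ttl"], meeting_rate), reverse=True)

def drop_victim(buffer, meeting_rate):
    """On buffer overflow, evict the least valuable message."""
    return min(buffer, key=lambda m: delivery_utility(
        m["copies"], m["ttl"], meeting_rate))
```

In the distributed variant described above, `copies` and `meeting_rate` would not be known exactly: they are replaced by estimators built from each node's encounter history.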
File sharing in wireless ad hoc networks
This activity started with the PURPURA COLOR project, in conjunction with the LIA laboratory at the University of Avignon, and the ExpeShare ITEA European project. Within this activity, we focus on file sharing over wireless ad hoc networks. File sharing protocols, typically BitTorrent, are known to perform very well over the wired Internet, where end-to-end performance is almost guaranteed. However, in wireless ad hoc networks the situation is different, due to topology constraints and the fact that nodes are at the same time peers and routers. For example, in a wireless ad hoc network running standard BitTorrent, sending pieces to distant peers incurs a lot of overhead because of the resources consumed in intermediate nodes. Moreover, TCP performance is known to drop sharply as the number of hops increases. Clearly, running file sharing with its default configuration no longer guarantees the best performance. For instance, the neighbor and piece selection algorithms of BitTorrent need to be revisited in wireless ad hoc scenarios, since it is no longer efficient to select and trade with peers independently of their location. A potential solution could be to limit the scope of the neighborhood. In this case, TCP connections are fast, but pieces will very likely propagate in a unique direction, from the seed to distant peers. This could prevent peers from reciprocating data and might lead to low sharing ratios and suboptimal utilization of network resources. There is thus a need for a solution that minimizes the average download finish time per peer while encouraging peers to collaborate by enforcing a fair sharing of data.
In  we presented a first solution to this problem, which we are currently exploring further with the help of extensive simulations on more complex scenarios. Our main objective is to minimize the time to download digital contents while enforcing cooperation among peers. We observed that one can indeed achieve this objective by restricting the neighborhood, to reduce routing overhead and improve throughput, while establishing a few connections to remote peers to improve the diversity of information. With these enhancements to BitTorrent, one can significantly improve the completion time while fully profiting from the incentives implemented in BitTorrent to enforce fair sharing.
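The neighborhood restriction idea can be sketched as follows (parameter values and the helper name are illustrative, not our actual peer selection code): a peer keeps most of its BitTorrent connections within a small hop radius, where TCP performs well, plus a few long-range links to remote peers to keep piece diversity high.

```python
import random

def select_neighbors(peers_with_hops, max_hops=2, n_near=8, n_far=2, rng=random):
    """Pick neighbors from (peer_id, hop_count) pairs.

    Most connections stay within 'max_hops' (cheap, fast TCP);
    a few far links keep pieces flowing in both directions.
    """
    near = [p for p, h in peers_with_hops if h <= max_hops]
    far = [p for p, h in peers_with_hops if h > max_hops]
    chosen = rng.sample(near, min(n_near, len(near)))
    # A handful of remote links improves piece diversity without
    # flooding the network with long multi-hop TCP connections.
    chosen += rng.sample(far, min(n_far, len(far)))
    return chosen
```

Because nearby peers still exchange most of the traffic, the standard tit-for-tat incentives keep operating, while the few remote links prevent pieces from propagating in a single direction away from the seed.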
To push our research further in this direction and give it a practical flavor, we worked on the design and implementation of a new application that enables content sharing among spontaneous communities of mobile users using wireless multi-hop connections. Our application is called BitHoc, which stands for BitTorrent for wireless ad hoc networks. It is an open source software developed under the GPLv3 licence. A first version of BitHoc has been made public at http://planete.inria.fr/bithoc . We want BitHoc to be the real testbed over which we evaluate our solutions for the support and optimization of file sharing in a mobile wireless environment where no infrastructure is needed. The proposed BitHoc architecture includes two principal components: a membership management service and a content sharing service. As classical tracker-based BitTorrent membership management and peer discovery are infeasible in ad hoc networks, we designed the membership management service as a distributed tracker overlay that connects the peers involved in the same sharing session (see  for more details on how to optimally construct this membership management overlay). Using the membership information provided by the tracker overlay, the content sharing service schedules the data transfer connections among the session members by leveraging the multi-hop routing feature of wireless ad hoc networks. The testbed in its current form is composed of PDAs and smartphones equipped with WiFi adapters and running the Windows Mobile 6 operating system.
Efficient Wireless LAN Protocols
We have worked on two different areas to increase the performance of wireless LAN protocols. First, we have proposed an efficient aggregation mechanism for the upcoming IEEE 802.11n standard. Second, we have worked on efficient PHY rate selection mechanisms for IEEE 802.11 networks.
We have proposed the Aggregation with Fragment Retransmission (AFR) mechanism to achieve high efficiency at the MAC layer of IEEE 802.11n  . In the AFR scheme, multiple packets are aggregated into, and transmitted in, a single large frame. If errors occur during the transmission, only the corrupted fragments of the large frame are retransmitted. An analytic model has been developed to evaluate the throughput and delay performance of AFR over noisy channels, and to compare AFR with similar schemes in the literature. Optimal frame and fragment sizes have been calculated using this model. Transmission delays are minimized by using a zero-waiting mechanism where frames are transmitted immediately once the MAC wins a transmission opportunity. We prove that this mechanism achieves maximum throughput. As a complement to the theoretical analysis, we investigated by simulation the impact of AFR on the performance of realistic application traffic for diverse scenarios: TCP, VoIP and HDTV traffic. The AFR scheme was developed as part of the IEEE 802.11n working group activities. It is the result of a collaboration with Tianji Li, David Malone and Douglas Leith from the Hamilton Institute in Ireland, Qiang Ni from Brunel University, England, and Yang Xiao from the Department of Computer Science at the University of Alabama.
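The fragment-level retransmission principle can be sketched as follows (frame layout, fragment size, and the CRC choice are illustrative, not the AFR frame format): packets are split into fixed-size fragments, many fragments share one large frame, each fragment carries its own checksum, and only corrupted fragments are queued for retransmission instead of the whole frame.

```python
import zlib

FRAG_SIZE = 256  # illustrative fragment size in bytes

def fragment(packet: bytes, packet_id: int):
    """Split a packet into (packet_id, index, payload, crc) fragments."""
    frags = []
    for i in range(0, len(packet), FRAG_SIZE):
        payload = packet[i:i + FRAG_SIZE]
        frags.append((packet_id, i // FRAG_SIZE, payload, zlib.crc32(payload)))
    return frags

def build_frame(packets):
    """Aggregate the fragments of several packets into one large frame."""
    frame = []
    for pid, pkt in enumerate(packets):
        frame.extend(fragment(pkt, pid))
    return frame

def receive_frame(frame):
    """Per-fragment CRC check: keep good fragments, list corrupted ones."""
    good, retransmit = [], []
    for pid, idx, payload, crc in frame:
        if zlib.crc32(payload) == crc:
            good.append((pid, idx, payload))
        else:
            retransmit.append((pid, idx))  # only these are resent
    return good, retransmit
```

The efficiency gain is exactly this selectivity: on a noisy channel, one corrupted fragment costs a fragment-sized retransmission, not a frame-sized one, which is what makes large aggregated frames viable in the first place.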
The design of efficient IEEE 802.11 physical rate adaptation algorithms is a challenging research topic, and the issues surrounding their implementation on real 802.11 devices are usually not disclosed. The challenge for rate adaptation schemes is to adapt the physical transmission rate to channel-related losses only, i.e., collisions should not influence the choice of the rate. In  we presented a survey of existing physical rate adaptation mechanisms and discussed their advantages and drawbacks. In  we proposed a new rate adaptation algorithm that behaves like Auto Rate Fallback (ARF) but makes use of the RTS/CTS handshake, only when necessary, to decide whether the physical transmission rate should be changed. The main advantages of this algorithm are its simple implementation and the good performance it attains in the presence of collisions. We evaluated the performance of this new algorithm and compared it with that of other well-known algorithms using the new NS-3 simulator.
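The idea can be sketched as a small state machine (thresholds, rate set, and state handling below are illustrative, not the published algorithm): the sender behaves like ARF, stepping the rate down after consecutive failures and up after a run of successes, but before stepping down it probes with RTS/CTS. A loss that persists under RTS/CTS protection is channel-related and justifies a lower rate; one that disappears was a collision and should leave the rate untouched.

```python
RATES = [6, 12, 24, 36, 48, 54]  # Mbps, illustrative 802.11a/g rate set

class AdaptiveARF:
    def __init__(self):
        self.rate_idx = len(RATES) - 1  # start optimistic
        self.failures = 0
        self.successes = 0

    def on_tx_result(self, ok: bool, used_rts: bool) -> str:
        """Update state after a transmission; return the next action."""
        if ok:
            self.failures = 0
            self.successes += 1
            if self.successes >= 10 and self.rate_idx < len(RATES) - 1:
                self.rate_idx += 1   # probe a higher rate after a good run
                self.successes = 0
            return "send"
        self.successes = 0
        self.failures += 1
        if not used_rts:
            # First failure: retry under RTS/CTS to rule out a collision.
            return "retry_with_rts"
        if self.failures >= 2 and self.rate_idx > 0:
            self.rate_idx -= 1       # loss persists under RTS: bad channel
            self.failures = 0
        return "send"

    @property
    def rate(self):
        return RATES[self.rate_idx]
```

Since RTS/CTS is only invoked after a failure, its overhead is paid rarely, which is what keeps the scheme competitive with plain ARF on clean channels while avoiding ARF's spurious rate drops under contention.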