Section: New Results
Experimental environments for future Internet architectures
The Internet is relatively resistant to fundamental change (differentiated services, IP multicast, and secure routing protocols have not seen wide-scale deployment).
A major impediment to deploying such services is the need for coordination: an Internet service provider (ISP) that deploys a service garners little benefit until other domains follow suit. Researchers are also under pressure to justify their work in the context of a federated network by explaining how new protocols could be deployed one network at a time, but emphasizing incremental deployability does not necessarily lead to the best architecture. In fact, focusing on incremental deployment may lead to solutions where each step along the path makes sense, but the end state is wrong. Substantive improvements to the Internet architecture may require fundamental change that is not incrementally deployable.
Network virtualisation has been proposed to support realistic, large-scale shared experimental facilities such as PlanetLab and GENI. We are working on this topic in the context of the European OneLab project.
Testing on PlanetLab has become a nearly obligatory step for an empirical research paper on a new network application or protocol to be accepted into a major networking conference or by the most prestigious networking journals. If one wishes to test a new video streaming application, or a new peer-to-peer routing overlay, or a new active measurement system for geo-location of internet hosts, hundreds of PlanetLab nodes are available for this purpose. PlanetLab gives the researcher login access to systems scattered throughout the world, with a Linux environment that is consistent across all of them.
However, network environments are becoming ever more heterogeneous. Third generation telephony is bringing large numbers of handheld wireless devices into the Internet. Wireless mesh and ad-hoc networks may soon make it common for data to cross multiple wireless hops while being routed in unconventional ways. For these new environments, new networking applications will arise. For their development and evaluation, researchers and developers will need the ability to launch applications on endhosts located in these different environments.
It is sometimes unrealistic to implement new network technology, for reasons that can be technological (the technology is not yet available), economic (the technology is too expensive), or simply pragmatic (e.g., when actual mobility is key). For these kinds of situations, we believe it can be very convenient and powerful to resort to emulation techniques, in which real packets can be managed as if they had crossed, e.g., an ad hoc network.
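The emulation idea can be illustrated with a toy model: for each real packet, decide a delay and loss outcome as if the packet had traversed a multi-hop ad hoc path. This is a minimal sketch under assumed parameters (per-hop loss, per-hop delay, link bandwidth); the function name and model are illustrative and not part of any tool mentioned here.

```python
import random

def emulate_adhoc_path(packet_size_bytes, hops, loss_per_hop=0.02,
                       per_hop_delay_ms=2.0, bandwidth_mbps=5.0, rng=None):
    """Return an emulated one-way delay in ms for a packet, or None if dropped.

    Toy model: independent per-hop losses, plus per-hop transmission and
    processing delay. All parameter values are illustrative assumptions.
    """
    rng = rng or random.Random()
    for _ in range(hops):
        if rng.random() < loss_per_hop:
            return None  # packet lost on this hop
    # transmission delay of the packet on one hop, in milliseconds
    tx_ms = (packet_size_bytes * 8) / (bandwidth_mbps * 1e6) * 1000.0
    return hops * (tx_ms + per_hop_delay_ms)
```

In a real emulator, such a decision function would be attached to a packet-interception point (e.g. a virtual interface), so that unmodified applications exchange real packets while experiencing ad hoc network conditions.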
In the OneLab project, we work to provide a unified environment for the next generation of network experiments. Such a large scale, open, heterogeneous testbed should be beneficial to the whole networking academic and industrial community.
Federating Research Testbeds
In cooperation with Princeton University, which runs the PlanetLab research platform, we have developed the first prototype of a federation paradigm that provides a fully symmetric model, in which resources are locally managed and globally visible.
This mechanism was designed with operational objectives in mind, as it was a requirement for the OneLab project to operate the new PlanetLab Europe platform, which has been running since June 2007. This first prototype thus has some limitations, related to policy management and scalability.
This federation model relies essentially on database caching: the API that each testbed infrastructure (peer) provides has been kept unchanged, except for convenience and efficiency, and this approach was shown to be sufficient for this particular need.
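The caching approach can be sketched as follows: a local testbed periodically refreshes a cached copy of a remote peer's records, so that its unchanged local API can also answer queries about remote resources. This is a simplified illustration of the idea, not the actual PlanetLab federation code; class and parameter names are hypothetical.

```python
import time

class CachingPeerProxy:
    """Federation by caching: keep a local, time-stamped copy of a remote
    peer's records and refresh it when it becomes stale. Illustrative only."""

    def __init__(self, fetch_remote, ttl_seconds=600, clock=time.time):
        self.fetch_remote = fetch_remote   # callable returning the peer's records
        self.ttl = ttl_seconds             # how long a cached copy stays valid
        self.clock = clock                 # injectable clock, eases testing
        self._cache = None
        self._fetched_at = 0.0

    def records(self):
        """Serve records from the cache, refreshing from the peer if stale."""
        now = self.clock()
        if self._cache is None or now - self._fetched_at > self.ttl:
            self._cache = self.fetch_remote()
            self._fetched_at = now
        return self._cache
```

The key design point is that callers of the local API never know whether a record is local or cached from a peer, which is what keeps the per-testbed API unchanged.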
We plan to keep improving this functionality. Scalability is not an immediate concern yet, as there is at this time no deployed testbed with more than three peers. However, there are clear indications that federation could well become a strong trend in the future; it is, for instance, the paradigm behind the GENI initiative. Our next challenges will be to define a hierarchical namespace for all the objects involved in the system, à la DNS, and to take advantage of that tree structure to reduce the current quadratic (n-square) peering model to an essentially linear one. Policy management will also be improved once PlanetLab Europe has gathered sufficient feedback on the actual needs in this area.
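The intended benefit of a DNS-like namespace can be sketched with a small tree of naming authorities: if each authority peers only with its parent, n authorities need only n-1 peering links instead of O(n^2) pairwise agreements, and any object can be found by walking the tree. The class and the dotted names below are hypothetical illustrations, not the actual design.

```python
class NamespaceNode:
    """Sketch of a DNS-like hierarchical namespace for testbed objects.
    Each node is a naming authority that peers only with its parent."""

    def __init__(self, label, parent=None):
        self.label = label
        self.parent = parent
        self.children = {}

    def add(self, label):
        """Delegate a sub-namespace to a child authority."""
        child = NamespaceNode(label, self)
        self.children[label] = child
        return child

    def resolve(self, dotted):
        """Resolve a dotted name, e.g. 'europe.inria.node1', downward."""
        node = self
        for label in dotted.split('.'):
            node = node.children[label]
        return node

    def fqdn(self):
        """Fully qualified name of this object, root omitted."""
        parts, node = [], self
        while node.parent is not None:
            parts.append(node.label)
            node = node.parent
        return '.'.join(reversed(parts))
```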
Adding more heterogeneity to the PlanetLab testbed
As part of the OneLab project, we have created our own 'distribution' of the PlanetLab software, and have used this flexibility to add support for more heterogeneous experimental nodes, such as wireless (WiFi, UMTS) or multi-homed nodes.
Over time, the software development cooperation with Princeton University has moved from an upstream/downstream model to co-development. As a result, most of our contributions are expected to be natively integrated into the 4.2 release of the PlanetLab software, which is about to be issued.
Making experimentation easier
Evaluation of network protocols and architectures is at the core of research and can be performed using simulations, emulations, or experimental platforms. Simulations allow a fast evaluation process, fully controlled scenarios, and reproducibility; however, they lack realism, and the accuracy of the models implemented in the simulators is hard to assess. Emulation provides a controlled environment and reproducibility, but it also suffers from a lack of realism. Experiments on real platforms allow a more realistic environment and real implementations, but they lack reproducibility and are complex to perform. Wireless experiments are even more challenging to evaluate, due to the high variability of the channel characteristics and their sensitivity to interference. We are developing a tool called Wextool that aims to make wireless experiments easier to perform and analyze by automating some painful and menial tasks. Wextool is a flexible and scalable open-source tool that covers all the experimentation steps, from the definition of the experiment scenario to the generation and storage of results. In this way, researchers can better concentrate their efforts on the specific research and/or implementation issues of their experimental scenarios.
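The "scenario definition to result storage" workflow can be sketched generically: iterate over the repetitions a scenario demands, run each one, and store tagged result records for later analysis. This is an illustration of the kind of pipeline such a tool automates, under assumed scenario fields; it is not Wextool's actual API.

```python
def run_experiment(scenario, run_step, store):
    """Illustrative experiment pipeline (not Wextool's real interface):
    run every repetition of a scenario and store one tagged record each.

    scenario : dict with 'name', 'repetitions', and 'params' (assumed fields)
    run_step : callable(params, rep) -> one measurement
    store    : callable(record) persisting a result record
    """
    results = []
    for rep in range(scenario['repetitions']):
        measurement = run_step(scenario['params'], rep)
        record = {'scenario': scenario['name'], 'rep': rep, 'value': measurement}
        store(record)          # e.g. append to a file or a database
        results.append(record)
    return results
```

Automating this loop matters most for wireless experiments, where many repetitions are needed to average out channel variability.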
Enhancing network simulations
Our main problem with existing simulation tools is the lack of accuracy of the application, network, and MAC/PHY layers, which makes comparisons with real-world experiments very hard, if not impossible. The core of the issue is that none of the existing network simulators allows easy re-use of existing real-world network components, such as the TCP/IP stack of an operating system together with a real-world routing protocol and a full 802.11 MAC layer.
Our involvement in the development of ns-3 focused on three major areas this year: the stabilization of its core architecture and facilities for its first stable releases, incremental improvements of our WiMAX models, and the development of a POSIX implementation that allows us to run unmodified socket-based network applications within the simulator.
Although converging towards our first stable release in June 2008 took much longer than expected, the efforts we invested in the simulation core paid off: we were able to quickly integrate major new features during July and August 2008 and to release a second stable version in September 2008. This release featured third-party contributions such as Python bindings and the ability to run unmodified kernel-level network stacks within the simulator with the help of the Network Simulation Cradle. A third release, due to become official in December 2008, contains an ICMP stack we contributed in October 2008.
In parallel to our work on the simulation core, we started the development of a POSIX compatibility layer that allows us to run unmodified user-space socket-based applications within ns-3. Early versions of this technology were demonstrated at SIGCOMM in August, but work on this project did not stop there: we expect to be able to merge it into ns-3 proper sometime in February 2009, once we are able to run a few non-trivial network applications, such as BitTorrent clients and Quagga routing daemons.
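The interposition idea behind such a compatibility layer can be illustrated in miniature: the application keeps making its usual socket calls, but the factory that creates sockets is swapped so that traffic is handed to a simulator backend instead of the host kernel. This toy sketch only conveys the principle; class and function names are invented and this is not the ns-3 layer's actual implementation.

```python
class SimSocket:
    """Toy stand-in for a simulated socket: sent data goes into the
    simulator's per-destination queues rather than to the kernel."""

    def __init__(self, sim_state):
        self.sim = sim_state  # shared dict: destination -> list of payloads

    def sendto(self, data, addr):
        self.sim.setdefault(addr, []).append(data)
        return len(data)  # mimic the POSIX sendto return value

def make_socket_factory(sim_state):
    """Return a replacement for the application's socket constructor."""
    def socket(*_args, **_kwargs):
        # The application calls this exactly as before; only the
        # object it receives is backed by the simulator.
        return SimSocket(sim_state)
    return socket
```

In the real system the same effect is obtained at the POSIX API boundary, so that binaries need no source changes at all.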
Finally, we pursued our work on a set of WiMAX MAC and PHY models. With the recent emergence of broadband wireless networks, simulation support for such networks, and especially for IEEE 802.16 WiMAX, is becoming a necessity. We have implemented an IEEE 802.16 WiMAX module for the ns-3 simulator. The aim is to provide a standard-compliant and well-designed implementation of this standard. Our module implements fundamental functions of the convergence sublayer (CS) and of the MAC common-part sublayer (CPS), including QoS scheduling services, the bandwidth request/grant mechanism, and a simple uplink scheduler. The module provides two versions of the PHY layer. The first is a basic PHY implementation that simply forwards bursts received from the MAC layer, ignoring any underlying PHY-layer details. The second is a PHY layer based on the WirelessMAN-OFDM specification, developed by our colleagues at LIP6, France. The MAC module currently lacks a full implementation of the classifier, as well as support for fragmentation and defragmentation of PDUs. The simulation module is described in  .
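The flavor of a simple uplink scheduler can be conveyed with a round-robin sketch: the base station serves the stations' pending bandwidth requests one slot at a time until the uplink frame is full, granting no station more than it asked for. This is an illustrative simplification, not the code of our ns-3 module, and it ignores QoS service classes.

```python
def simple_uplink_scheduler(requests, frame_capacity):
    """Round-robin uplink grant allocation (illustrative sketch).

    requests       : dict mapping station id -> requested slots
    frame_capacity : total uplink slots available in this frame
    returns        : dict mapping station id -> granted slots
    """
    grants = {sid: 0 for sid in requests}
    remaining = dict(requests)   # slots each station still wants
    capacity = frame_capacity
    while capacity > 0 and any(r > 0 for r in remaining.values()):
        for sid in sorted(remaining):        # fixed order = fairness
            if capacity == 0:
                break
            if remaining[sid] > 0:
                grants[sid] += 1             # grant one slot at a time
                remaining[sid] -= 1
                capacity -= 1
    return grants
```

A QoS-aware scheduler, as in the actual module, would additionally order stations by service class (e.g. UGS before best-effort) before applying such a loop.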