Section: New Results
Quality of service and Transport Protocols for Future Networks
Bulk Data Transfer Scheduling and Dynamic Bandwidth Provisioning
Keywords : flow scheduling, bulk data transfers, dynamic bandwidth provisioning, optical networks.
As the Internet has evolved from a research project into a popular consumer technology, it may no longer be reasonable to assume that all end hosts cooperate fairly. In this context we are investigating new bandwidth sharing approaches.
For several years we have focused on different forms of flow scheduling.
In this work we propose to manage explicitly the movement of massive data sets between end points. We formulate the bulk data transfer scheduling problem and give an optimal solution that minimizes the network congestion factor of a dedicated network or an isolated traffic class. A solution satisfying the time and volume constraints of individual flows can be found in polynomial time and expressed as a set of multi-interval bandwidth allocation profiles. To enable a large-scale deployment of this approach, we propose, for the data plane, to combine a bandwidth profile enforcement mechanism with traditional transport protocols.
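The notions above can be illustrated with a minimal sketch (our own illustrative names, not the paper's implementation): a transfer profile is a set of (start, end, rate) intervals, a flow is feasible if the profile stays inside its time window and moves its volume, and the congestion factor is the peak aggregate rate over the link capacity.

```python
# Illustrative sketch of multi-interval bandwidth allocation profiles
# for bulk transfers (toy model, not the paper's algorithm).

def transferred_volume(profile):
    """Volume moved by a profile given as a list of (start, end, rate)."""
    return sum((end - start) * rate for start, end, rate in profile)

def satisfies(flow, profile):
    """Check a flow's time-window and volume constraints."""
    ok_window = all(flow["release"] <= s and e <= flow["deadline"]
                    for s, e, _ in profile)
    return ok_window and transferred_volume(profile) >= flow["volume"]

def congestion_factor(profiles, capacity):
    """Max over time of aggregate allocated rate divided by link capacity."""
    points = sorted({t for p in profiles for s, e, _ in p for t in (s, e)})
    worst = 0.0
    for s, e in zip(points, points[1:]):
        mid = (s + e) / 2
        load = sum(r for p in profiles for ps, pe, r in p if ps <= mid < pe)
        worst = max(worst, load / capacity)
    return worst

flow = {"release": 0, "deadline": 10, "volume": 40}
profile = [(0, 4, 5), (6, 10, 5)]    # two intervals, rate 5 each
print(satisfies(flow, profile))                    # True
print(congestion_factor([profile], capacity=10))   # 0.5
```

The scheduler's goal in the text corresponds to choosing the per-flow profiles so that this congestion factor is minimized.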
We pursue our exploration of the Bulk Data Transfer Scheduling Service (developed by INRIA and UIBK in the framework of the EU EC-GIN project (IST045256)), which operates at the control timescale. This service schedules and forwards packet aggregates to improve the predictability of massive data set transfer times. It introduces and exploits the time dimension and a fine-grain user control plane, and implements the virtual network resource reservation paradigm. Exploring this type of service-oriented network resource management at a large scale in a heterogeneous environment will help in understanding the limits and alternatives of this approach. It also gives better insight into fundamental issues, such as how the control plane interacts with the data plane, and how the abstraction layers session, transport and network (corresponding to different timescales and aggregation levels) interact. This will clarify the limits of the current abstractions and protocol architecture and validate the potential of new abstractions. The Bulk Data Transfer Scheduling approach relies either on in-advance knowledge of an application's resource requirements or on online estimation of these requirements. Signaling, real-time flow analysis and scalability issues are also explored.
On the other hand, optical fiber communication will be the predominant mechanism for data transmission in core networks, and perhaps also at the access. To address the anticipated terabit demands, dynamically reconfigurable optical networks are envisioned. This vision will be realized with the deployment of configurable optical components, which are now becoming economically viable. Since 2008, RESO has integrated this new perspective to understand how these optical components interact with electronic components and how to configure, control and tune them from end computers, in the context of our associated team with AIST (Japan) and the G-Lambda project, and in collaboration with Alcatel-Lucent in the context of the CARRIOCAS project.
The CARRIOCAS project studies and implements a high bit rate optical network capable of accommodating the requirements of data-intensive, high-performance distributed applications in terms of bandwidth, quality of service guarantees, and dynamic and automated service provisioning. The investigations are carried out under the constraint of supporting the applications on converged network infrastructures hosting other types of traffic. Distributed storage of massive volumes of data as well as collaborative high resolution remote visualization are under experimentation on a testbed. We analyzed the requirements brought by the applications on the network, compared different network architectures, presented the management architectures along with some resource selection optimization algorithms, and developed a demonstrator of the SRV (Scheduling, Reconfiguration and Virtualization) component.
The SRV entity handles service requests (bandwidth on demand, for example), aggregates them and triggers the provisioning of different types of resources accordingly. We proposed to adapt to the envisioned heterogeneous needs by multiplexing rigid and flexible requests as well as coarse and fine demands. The goal is to optimize both resource provisioning and utility functions. Considering the options of advance network bandwidth reservations and allocations, the optimization problem has been formulated. The impact of the malleability factor has been studied by simulation to assess the gain. Simulations show that the temporal parameters of requests (deadline and patience) are the dominant criteria and that even a small amount of malleability can improve performance significantly.
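The benefit of malleability can be seen on a toy example (our own illustration, not the SRV optimization model): a rigid request fixes a rate over a fixed interval, while a malleable request only fixes a volume and a window, letting the scheduler stretch it at a lower rate.

```python
# Toy illustration of rigid vs malleable (flexible) bandwidth requests.
# Link capacity is 10 units in this example.

def min_rate(volume, window):
    """Lowest constant rate that still moves `volume` inside `window`."""
    start, end = window
    return volume / (end - start)

# A rigid request occupies rate 8 during [0, 4), leaving residual 2 there.
# A malleable request must move volume 12 anywhere inside [0, 6).

# Squeezed into [0, 4) it would need rate 3, exceeding the residual 2:
print(min_rate(12, (0, 4)))   # 3.0

# Stretched over its full window, rate 2 suffices and everything fits:
print(min_rate(12, (0, 6)))   # 2.0
```

This is the intuition behind the simulation result quoted above: relaxing the rate (while keeping the deadline) gives the scheduler room to avoid rejections.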
Keywords : flow analysis, flow scheduling, sampling, quality of service, QoS, flow-aware, cross-protect, game theory.
This work is conducted in the context of the INRIA Bell Labs joint laboratory and in close collaboration with the MAESTRO team (Eitan Altman). Flows crossing IP networks are not equally sensitive to loss or delay variations because they do not have the same utility functions or the same final usage. For several years, research effort has been devoted to the problem of the heterogeneous performance needs of IP traffic. A class of solutions considers that the IP layer should provide more sophisticated services than simple best-effort to meet the applications' quality of service requirements. Quality of service has been studied in IP networks in the context of multimedia applications. Re-thinking the fundamental paradigm of packet switching in high speed networks is on the table. The idea is to go from a packet-level approach to a flow-oriented strategy. To cope with scalability issues, we work on disruptive algorithms within equipment, and on fully distributed (or localized) solutions. Problems that need to be explored concern flow identification and classification (see also the next research direction), flow admission control, flow routing, flow scheduling, interaction with transport protocols, and system stability.
Flow identification and classification (see also next research axis) The problem of traffic identification and classification has received considerable attention from the research community. Our interest here is in how to build an efficient global knowledge plane that can be used for taking local decisions on traffic identification. Traffic can be classified at the application level, trying to identify the specific application associated with the traffic. For better flexibility, the behaviour of traffic can be used for classification; in this way, the classification itself is independent of any new application type.
Besides looking at traffic at a coarse level, it is also useful to analyse traffic at a finer level. Interesting decisions can be taken based on flow characteristics. Important flow characteristics are size, age and rate. Decisions can be based on any one of these characteristics, or on a combination of them. In this direction, an exhaustive study of the current literature dealing with flow classification, with respect to flow size or underlying application, has been carried out. This bibliographic survey led us to retain:
the “Sample & Hold" technique and the “multi-stage filters", for early on-line differentiation between elephants and mice;
a supervised classifier (C4.5) based on a set of 248 packet-related parameters to discriminate among 12 application classes.
In both situations, we intensively tested the proposed classifiers, in order (i) to assess their performance in terms of misclassification and confusion rates, and (ii) to check possible on-line implementations.
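The Sample & Hold heuristic retained above can be sketched as follows (a simplified model of the technique, with illustrative parameters): packets of untracked flows are sampled with a small probability; once a flow is sampled, every subsequent packet of that flow is counted, so elephants are caught early while most mice never consume a counter.

```python
import random

# Sketch of the Sample & Hold heuristic for early elephant/mouse
# separation (simplified, byte-counting variant).

def sample_and_hold(packets, p, rng=random.Random(42)):
    """packets: iterable of (flow_id, size). Returns per-flow byte counters."""
    counters = {}
    for flow_id, size in packets:
        if flow_id in counters:
            counters[flow_id] += size      # "hold": count every later packet
        elif rng.random() < p:
            counters[flow_id] = size       # "sample": start tracking the flow
    return counters

# One elephant (1000 packets) mixed with 1000 single-packet mice.
packets = [("elephant", 1500)] * 1000 + [(f"mouse{i}", 1500) for i in range(1000)]
random.Random(0).shuffle(packets)
counters = sample_and_hold(packets, p=0.01)
print("elephant tracked:", "elephant" in counters)
print("mice tracked:", sum(1 for f in counters if f != "elephant"))
```

With p = 0.01, the elephant is tracked almost surely after a few dozen packets, while on average only about ten of the thousand mice occupy a counter.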
Since performing per-packet measurement for per-flow analysis is computationally challenging, there is growing interest in obtaining useful information on flow characteristics using sampling. Sampling reduces the processing required to obtain flow statistics.
Flow scheduling One of the actions that can be taken based on flow characteristics is scheduling. A tremendous amount of work has been done in the area of job scheduling and, of late, many researchers have applied it in the context of networking to schedule flows.
We incorporate the idea of sampling into flow scheduling so as to induce less processing overhead. We propose a simple and practical scheduling strategy, and analyse the mean response time of flows both when the classification is accurate and when the classification is performed based on sampled information.
Research on flow scheduling has led to the development of many queueing models, capitalizing on the heavy-tail property of the flow size distribution. Theoretical studies have shown that 'size-based' schedulers improve the delay of small flows with almost no performance degradation for large flows. On the practical side, the issues in taking such schedulers to implementation have hardly been studied. We looked into the practical aspects of making size-based scheduling feasible in the future Internet. In this context, we propose a flow scheduler architecture comprising three modules - size-based scheduling, threshold-based sampling and a knockout buffer policy - for improving the performance of flows in the Internet. Unlike earlier works, we analyze the performance using five different performance metrics, and through extensive simulations show the goodness of this architecture.
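A minimal sketch of the size-based idea (our own toy model, not the proposed architecture) uses attained service as a proxy for flow size: flows that have received fewer than a threshold number of packets get priority, so small flows finish quickly while large flows are served from the leftover capacity.

```python
# Toy threshold-based size-based scheduler: flows with attained service
# below THRESHOLD form the priority class.

THRESHOLD = 5  # packets of priority service per flow

def schedule(flows):
    """flows: {flow_id: remaining_packets}. Returns flow completion order."""
    attained = {f: 0 for f in flows}
    remaining = dict(flows)
    order = []
    while remaining:
        young = [f for f in remaining if attained[f] < THRESHOLD]
        pool = young if young else list(remaining)
        f = min(pool)                  # deterministic tie-break
        remaining[f] -= 1
        attained[f] += 1
        if remaining[f] == 0:
            order.append(f)
            del remaining[f]
    return order

# A 4-packet mouse completes before a 100-packet elephant even though
# both are present from the start.
print(schedule({"elephant": 100, "mouse": 4}))  # ['mouse', 'elephant']
```

The practical modules discussed in the text address exactly what this sketch glosses over: tracking attained service cheaply (via sampling) and managing buffer space (via the knockout policy).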
Admission control and flow routing One of the goals of flow-aware networking is to achieve quality of service guarantees at the flow level, which is the relevant granularity for more and more users, applications and services, such as video and audio streaming or image-guided surgery over long distances. By performing implicit differentiation between types of traffic and providing the best quality of service for all admitted flows even in overload situations, Cross-protect is promising. In this work, we have partly evaluated this architecture, and then proposed further evaluation and improvements concerning the implementation and failure tolerance, such as adaptive routing.
System stability and flow-aware approach Size-based scheduling is advocated to improve the response times of small flows. While researchers continue to explore different ways of giving preferential treatment to small flows without causing starvation of other flows, little attention has been paid to the stability of systems that deploy size-based scheduling mechanisms. The question of stability arises from the fact that users of such a system can exploit the scheduling mechanism to their advantage and split large flows into multiple small flows. Consequently, a large flow, in the disguise of small flows, may obtain the advantage aimed at small flows. As the number of misbehaving users can grow large, an operator would like to learn about system stability before deploying a size-based scheduling mechanism, to ensure that it will not lead to an unstable system. In this study, we analyse the criteria for the existence of equilibria and reveal the constraints that must be satisfied for the stability of equilibrium points. Our study shows that, in a two-player game where the operator strives for a stable system and the users of large flows behave so as to improve their delay, size-based scheduling does not achieve the goal of improving the response time of small flows.
Integrating very large packets in networks
Keywords : jumbo frames, queueing delay analysis.
This work is conducted in the context of the INRIA Bell Labs joint laboratory and in close collaboration with the MAESTRO team (Eitan Altman). Looking into the future, this work addresses the need for a larger packet size, called XLFrame (XLF), for an Internet that is soon to witness stupendous amounts of traffic to be processed and switched at ever-increasing line rates. Increasing the size of the basic transport unit in the Internet has far-reaching incentives that otherwise appear hard to achieve. For a variety of reasons, we foresee a future Internet that carries both packets (sand) and XLFs (rocks). As a first step, we analyse the effects of introducing XLFs in a network, and find the following: the amount of packet-header processing is greatly reduced, while the fair multiplexing of XLFs with standard packets can be achieved using more careful queue management in routers.
We also look into how we can make improvements through incremental research. In this direction, studying the effect of having large packets (of size much greater than the current MTU), called XLFrames (XLFs for short), in the current network is both important and useful. Some of the motivating reasons for having XLFs in a network are: (1) reducing power consumption in equipment by reducing the processing required, (2) achieving maximum throughput at increasing line rates, and (3) reducing the per-packet cost involved in protocol processing and interrupt handling at the end-hosts.
In this work, we find that, though XLFs greatly reduce the per-packet cost, flows using XLFs throttle packet-switched flows. Besides, XLF-switched flows experience higher loss rates. A solution to the unfairness comes in the form of Deficit Round Robin (DRR) scheduling, which can be deployed in equipment. DRR combined with ECN is seen to reduce the loss rates considerably.
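DRR's fairness property can be seen in a short sketch (a textbook rendering of the algorithm, not the paper's implementation): each queue accumulates a per-round quantum of credit and may send only packets that fit within its deficit, so a queue of large XLFs and a queue of small packets end up with equal byte shares.

```python
from collections import deque

# Sketch of Deficit Round Robin, the scheduler suggested in the text for
# fair multiplexing of XLFs with standard packets.

def drr(queues, quantum, rounds):
    """queues: {name: deque of packet sizes}. Returns the dequeue order."""
    deficit = {q: 0 for q in queues}
    sent = []
    for _ in range(rounds):
        for name, q in queues.items():
            if not q:
                deficit[name] = 0          # idle queues keep no credit
                continue
            deficit[name] += quantum
            while q and q[0] <= deficit[name]:
                size = q.popleft()
                deficit[name] -= size
                sent.append((name, size))
    return sent

# An XLF flow sends 9000-byte frames, a standard flow 1500-byte packets.
queues = {"xlf": deque([9000] * 3), "std": deque([1500] * 18)}
sent = drr(queues, quantum=1500, rounds=12)
xlf_bytes = sum(s for n, s in sent if n == "xlf")
std_bytes = sum(s for n, s in sent if n == "std")
print(xlf_bytes, std_bytes)   # 18000 18000
```

The XLF queue saves credit for six rounds before releasing each frame, so over 12 rounds both flows transmit exactly 18000 bytes despite the 6x size difference.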
Keywords : optical networks, resource virtualization, virtual infrastructure, VXDL.
With the expansion and convergence of computing and communication, the dynamic provisioning of customized processing and networking infrastructures as well as resource virtualization are appealing concepts and technologies. Therefore, new models and tools are needed to allow users to create, trust and exploit such on-demand virtual infrastructures within wide area distributed environments. These ideas are investigated with the INRIA Planete and Grand Large teams, CNRS I3S and IBCP in the context of the ANR HIPCAL project. RESO is implementing them in the HIPerNet framework, enabling the creation and management of customized confined execution environments in a large scale context. We also investigate them in the context of the CARRIOCAS project. We are currently industrializing and transferring the knowledge, the know-how, the software and the associated patents to the RESO spinoff, which will be launched in 2010. This year we explored different issues:
Network virtualisation and security In this context we proposed to combine network and system virtualization with cryptographic identification and SPKI/HIP principles to help user communities build and securely share their own resource reservoirs. Based on the example of biomedical applications, we studied the security model of the HIPerNet system and developed the key aspects of our distributed security approach. We then examined how HIPerNet solutions fulfill the security requirements of applications through different scenarios.
Network virtualisation and application mapping Optimally designing customized virtual execution infrastructures and mapping them onto a physical substrate remains a complex problem. We propose to exploit the expertise of both the application and workflow developers to ease this process while improving end user satisfaction as well as infrastructure usage. We study in particular how this knowledge can be captured and abstracted in the intermediate VXDL language, our language for specifying and describing virtual infrastructures. Based on the example of a specific biomedical application and workflow engine, we study the different optimisation strategies enabled by such an approach. Comparisons of executions run on different virtual infrastructures managed by our HIPerNet system show how exploiting the application semantics can improve the overall process. All the experiments use the Grid'5000 testbed substrate.
Network virtualisation and dynamic resource provisioning To adjust the provisioning of resources to end-user demand variations, new infrastructure capabilities have to be supported. These capabilities have to take into account the business requirements of telecom networks. In this work we propose a service framework offering Internet service providers dynamic access to extensible virtual private execution infrastructures, through on-demand and in-advance bandwidth and resource reservation services. This virtual infrastructure service concept is being studied in the CARRIOCAS project and implemented thanks to the SRV component.
A language for describing virtual resources and interconnection networks VXDL was developed to help users, applications or middleware specify, model and represent virtual components. Basically, this language enables the description of virtual infrastructures composed of i) virtual resources, ii) a virtual network topology and iii) virtual time. Using these three sets of features, it is possible to represent the composition of a virtual infrastructure (describing resources individually and in groups), to detail the desired network topology (through link configurations and virtual routers), and to specify the execution timeline of each set of resources and links. Each component (resource or group) can have different parameters, allowing the configuration of size, software, hardware, location and functionality. In addition, VXDL can express configurations specific to virtual infrastructures, such as the number of virtual machines that can be allocated on a physical resource, the location (anchor) of a resource, and virtual router usage. VXDL is defined using both a BNF notation and the XML standard, allowing its use in frameworks (or middleware) for managing virtual environments. In this context, different systems can use VXDL to exchange information about virtual infrastructures. This year we continued to develop and validate this language. We are also working on it within the OGF NML WG.
Validation of the HIPerNET virtual infrastructure manager: We investigate the benefit obtained with HIPerNET for the reservation and isolation of experimental slices on the Grid'5000 test environment. The slice design, the integration of the network description in a session reservation, as well as the automatic deployment of all control software pieces, are key aspects that are being investigated.
A Performance Evaluation Framework for Fair Solutions in Wireless Multihop Networks
Keywords : history-dependent utility functions, quality of service, performance evaluation.
Fairness in multihop wireless networks has received considerable attention in the literature. Many schemes have been proposed that attempt to compute the “optimal" bit rates of the transmitting mobile nodes so that a certain fairness criterion is met. As the related literature indicates, there is a trade-off between fairness and efficiency, since fairness schemes typically reduce channel utilization. It is also questionable whether certain fairness schemes have a positive or negative impact on the QoS of certain user services. So far, there has been limited research on the impact of the varying short-term allocations of these protocols, due to their inherent features and also to node mobility, on the user-perceived QoS (and social welfare) for services of long duration.
In this work, we introduce an assessment framework based on history-dependent utility functions that can be used as a holistic performance evaluation tool for these fairness schemes. These functions quantify the satisfaction that ad hoc users obtain from the way their long-lived service sessions are allocated bandwidth, as a result of the behavior of the MANET fairness schemes. This way we can unambiguously compare the performance of various fair solutions whose maximization goals are inherently different (max-min fairness, proportional fairness, etc.). Finally, we demonstrate the usefulness of this framework by applying it to different protocols. This framework could also be used in any kind of network.
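One toy form of a history-dependent utility (our own illustration, not the paper's exact functions) keeps an exponentially smoothed memory of past service and caps the instantaneous satisfaction by that memory, so an oscillating allocation with the same mean rate as a steady one scores lower.

```python
import math

# Toy history-dependent utility: satisfaction at each step is limited by
# both the current rate and a smoothed memory of past rates, so sessions
# that alternate feast and famine are penalized.

def history_utility(rates, alpha=0.5):
    memory, total = 0.0, 0.0
    for r in rates:
        memory = alpha * memory + (1 - alpha) * r   # smoothed service history
        total += math.log(1 + min(r, memory))       # capped by the history
    return total / len(rates)

steady = [4.0] * 8
bursty = [8.0, 0.0] * 4          # same mean rate, highly variable
print(history_utility(steady) > history_utility(bursty))  # True
```

This captures the point made in the text: two fairness schemes with identical long-run shares can yield very different user satisfaction once the allocation history is taken into account.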
Auction-based Bandwidth Allocation Mechanisms for Wireless Future Internet
Keywords : bandwidth allocation, auction theory, heterogeneous wireless networks.
An important aspect of the Future Internet is the efficient utilization of (wireless) network resources. In order to provide the Future Internet services, which are demanding in terms of QoS, the current trend is evolving towards an “integrated" wireless network access model that enables users to enjoy mobility, seamless access and high quality of service in an all-IP network on an “Anytime, Anywhere" basis. The term “integrated" is used to denote that the Future Internet wireless “last mile" is expected to comprise multiple heterogeneous geographically coexisting wireless networks, each having a different capacity and coverage radius. The efficient management of the wireless access network resources is crucial due to their scarcity, which renders wireless access a potential bottleneck for the provision of high quality services.
In this work, we propose an auction mechanism for allocating the bandwidth of such a network so that efficiency is attained, i.e. social welfare is maximized. In particular, we propose an incentive-compatible, efficient auction-based mechanism of low computational complexity. We define a repeated game to address user utility and incentive issues. Subsequently, we extend this mechanism so that it can also accommodate multicast sessions. We also analyze the computational complexity and message overhead of the proposed mechanism. We then show how user bids can be replaced by weights generated by the network, transforming the auction into a cooperative mechanism capable of prioritizing certain classes of services and emulating DiffServ and time-of-day pricing schemes. The theoretical analysis is complemented by simulations that assess the proposed mechanisms' properties and performance. We finally provide some concluding remarks and directions for future research.
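The flavor of an incentive-compatible, efficient bandwidth auction can be sketched with the classic unit-demand case (a generic VCG-style example, not the paper's mechanism): with C identical bandwidth units, the C highest bidders win and each pays the highest losing bid, which makes truthful bidding a dominant strategy and maximizes social welfare.

```python
# Generic VCG-style sketch: selling `capacity` identical bandwidth units
# to single-unit bidders at a uniform price equal to the highest losing bid.

def bandwidth_auction(bids, capacity):
    """bids: {user: bid}. Returns (list of winners, uniform price)."""
    ranked = sorted(bids, key=bids.get, reverse=True)
    winners = ranked[:capacity]
    losers = ranked[capacity:]
    price = bids[losers[0]] if losers else 0.0
    return winners, price

bids = {"a": 9.0, "b": 7.0, "c": 4.0, "d": 2.0}
winners, price = bandwidth_auction(bids, capacity=2)
print(winners, price)   # ['a', 'b'] 4.0
```

Because a winner's payment does not depend on her own bid, shading the bid can only risk losing a unit worth more than the price, never lower the payment; this is the incentive-compatibility property the text refers to.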
Adaptive Mechanisms for Bandwidth Sharing in Multihop Wireless Networks
Keywords : bandwidth sharing, QoS and Best Effort flows.
Participant : Isabelle Guérin Lassous.
In this work, we have designed a new cross-layer protocol which guarantees the bandwidth of QoS flows by effectively and dynamically adapting the throughput of best-effort transmissions when necessary. Our protocol relies on an estimation of the available bandwidth, differentiated according to the type of packets (QoS or best-effort data packets), and on a proportional fair bandwidth sharing between best-effort flows. With these features, this solution increases the acceptance rate of QoS flows while ensuring an efficient and fair use of the remaining bandwidth among best-effort flows.
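The sharing principle can be sketched as follows (our own illustration of the general idea, not the protocol's distributed implementation): QoS flows receive their guaranteed rates first, and the residual capacity is divided among best-effort flows in proportion to their weights.

```python
# Toy model of the bandwidth split: QoS guarantees first, then a
# weighted proportional share of the residual for best-effort flows.

def share_bandwidth(capacity, qos_demands, be_weights):
    reserved = sum(qos_demands.values())
    assert reserved <= capacity, "admission control should prevent this"
    residual = capacity - reserved
    total_w = sum(be_weights.values())
    be_rates = {f: residual * w / total_w for f, w in be_weights.items()}
    return dict(qos_demands), be_rates

qos, be = share_bandwidth(100.0, {"voip": 10.0, "video": 30.0},
                          {"ftp": 2.0, "web": 1.0})
print(qos)   # {'voip': 10.0, 'video': 30.0}
print(be)    # {'ftp': 40.0, 'web': 20.0}
```

In the wireless multihop setting the hard part, addressed by the protocol, is estimating the available bandwidth per packet type in the first place; this sketch only shows the target allocation once that estimate is known.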
Towards a User-Oriented Benchmark for Transport Protocols Comparison in very High Speed Networks
Keywords : Protocol Benchmark, TCP, Performance evaluation, High Speed transport, High Speed networks.
Standard TCP faces performance limitations in very high speed wide area networks, mainly due to its long end-to-end feedback loop and its conservative behaviour with respect to congestion. Many TCP variants have been proposed to overcome these limitations. However, TCP is a complex protocol with many user-configurable parameters and a range of different implementations. It is therefore important to define measurement methods so that transport services and protocols can evolve guided by scientific principles and can be compared quantitatively. Users of these variants need performance parameters that describe protocol capabilities so that they can develop and tune their applications. The goal of this work is to take some steps towards a user-oriented test suite and benchmark, called HSTTS, for high speed transport protocol comparison. We first identified useful metrics. We then isolated the infrastructure parameters and traffic factors which influence protocol behaviour. This enabled us to define classes of representative applications and scenarios capturing and synthesising comprehensive and useful properties. We finally evaluated this proposal on the Grid'5000 experimental environment, and presented it to the IRTF TMRG working group.