Section: New Results
Keywords : TCP, high-speed/scalable TCP, MIMD algorithm, FEC, fairness, estimation of traffic characteristics.
Quantitative analysis of protocols
Participants : Ahmad Al Hanbali, Sara Alouf, Eitan Altman, Konstantin Avrachenkov, Dhiman Barman, Olivier Gandouet, Alain Jean-Marie, Arzad Alam Kherani, Grigoriy Miller, Daniele Miorandi, Philippe Nain, Balakrishna Prabhu.
High-speed congestion control
Due to the rapid increase of bandwidth in networks (with links of up to 10Gbps), the mechanisms by which most existing versions of TCP adapt to the available bandwidth turn out to be too slow, sometimes needing hours to grab the available bandwidth. New versions of TCP have recently been proposed that implement more aggressive mechanisms for adapting to the available throughput. High Speed TCP (proposed by S. Floyd) and Scalable TCP (proposed by T. Kelly) are two such mechanisms.
In  , E. Altman, K. Avrachenkov, A. A. Kherani and B. Prabhu, in collaboration with C. Barakat ( Inria project-team Planete ), study Scalable TCP, which uses a Multiplicative Increase Multiplicative Decrease (MIMD) algorithm for the window size evolution. The authors present a mathematical analysis of the MIMD congestion control algorithm in the presence of random losses. Random losses are typical of wireless networks, but can also be used to model losses in wireline networks with a high bandwidth-delay product. The approach is based on showing that the logarithm of the window size evolves like the workload process in a standard G/G/1 queue. The Laplace-Stieltjes transform of the equivalent queue is then shown to directly provide the throughput of the congestion control algorithm as well as the higher moments of the window size.
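The queueing analogy can be made concrete with a small simulation: in log scale the MIMD dynamics are additive, each acknowledged packet adding log(1+a) and each loss adding log(1-b). The sketch below is only illustrative; the parameters a, b, p and the receiver cap w_max are our own choices, not values from the paper.

```python
import math
import random

def mimd_log_window(a=0.01, b=0.125, p=1e-3, w_max=1e4, steps=100_000, seed=1):
    """Simulate log(window) of an MIMD (Scalable-TCP-like) flow.

    Each ACKed packet multiplies the window by (1 + a); a loss, hitting
    each packet independently with probability p, multiplies it by (1 - b);
    the receiver clamps the window at w_max.  In log scale the dynamics are
    additive, which is what maps the process onto a queue workload.
    """
    rng = random.Random(seed)
    cap = math.log(w_max)
    log_w = 0.0
    trace = []
    for _ in range(steps):
        if rng.random() < p:
            log_w += math.log(1.0 - b)           # multiplicative decrease
        else:
            log_w = min(log_w + math.log(1.0 + a), cap)  # clamped increase
        trace.append(log_w)
    return trace

trace = mimd_log_window()
mean_w = sum(math.exp(x) for x in trace) / len(trace)  # average window size
```

The trace is an additive random walk reflected at the cap, which is exactly the structure exploited when relating the window evolution to a G/G/1 workload.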
In  , E. Altman, K. Avrachenkov and B. Prabhu analyze fairness among sessions sharing a common bottleneck link when at least one session uses the MIMD algorithm. Both synchronous and asynchronous losses are considered. In the asynchronous case, only one session suffers a loss at a loss instant. Two models are then considered to determine which source loses a packet: a rate-dependent model, in which the packet loss probability of a session is proportional to its rate at the congestion instant, and a rate-independent loss model. The authors first study how two MIMD sessions share the capacity in the presence of general combinations of synchronous and asynchronous losses. They show that, in the presence of rate-dependent losses, the capacity is fairly shared, whereas rate-independent losses lead to high unfairness. They also study inter-protocol fairness: how the capacity is shared in the presence of synchronous losses among sessions, some of which use Additive Increase Multiplicative Decrease (AIMD) protocols while the others use MIMD protocols.
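The dichotomy can be illustrated with a toy simulation (the parameter values and the renormalization shortcut below are our own simplifications, not the model of the paper): with rate-dependent losses the share of each flow is pulled back towards one half, whereas with rate-independent losses the log-ratio of the two rates performs an unbiased random walk and the shares drift towards the extremes.

```python
import random

def near_half_fraction(rate_dependent, b=0.125, n_events=200_000, seed=7):
    """Two MIMD flows sharing a bottleneck of capacity 1.

    Between congestion events both windows grow by the same multiplicative
    factor, so only the split matters; we renormalise the pair to sum 1
    after each loss.  At each congestion event one flow is cut by (1 - b),
    chosen either with probability proportional to its rate or uniformly.
    Returns the fraction of events at which flow 1 holds 40-60% of capacity.
    """
    rng = random.Random(seed)
    w1, w2 = 0.9, 0.1                 # deliberately unequal start
    near_half = 0
    for _ in range(n_events):
        p1 = w1 / (w1 + w2) if rate_dependent else 0.5
        if rng.random() < p1:
            w1 *= 1.0 - b             # flow 1 suffers the loss
        else:
            w2 *= 1.0 - b             # flow 2 suffers the loss
        s = w1 + w2                   # growth phase restores full capacity
        w1, w2 = w1 / s, w2 / s
        near_half += 0.4 <= w1 <= 0.6
    return near_half / n_events

fair = near_half_fraction(rate_dependent=True)
unfair = near_half_fraction(rate_dependent=False)
```

With rate-dependent losses the bigger flow is hit more often, a restoring force towards the fair split; with uniform choice no such force exists, in line with the unfairness result of the paper.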
In  , E. Altman, K. Avrachenkov, A. A. Kherani and B. Prabhu study an Adaptive Window Protocol (AWP) with general increase and decrease profiles in the presence of window-dependent random losses. The authors derive a steady-state Kolmogorov equation, and then obtain its solution in analytic form for particular TCP versions proposed for high-speed networks, such as Scalable TCP and HighSpeed TCP. They also relate the window evolution process under an AWP to the workload process in queueing systems. This observation provides a way to compare various AWP protocols.
In  , the same authors present an approximate expression for the throughput of a long-lived Scalable TCP session when the losses are i.i.d. and due to window-dependent errors. In the second part of the paper, they analyze the case when losses are due to Markovian window-independent errors.
Analysis of TCP at different granularities
TCP is frequently modeled as a fluid whose amount increases smoothly until a loss occurs, at which point the fluid drops instantaneously to a lower level. This is the well-known saw-tooth model of the TCP sending rate evolution. An instance of this model can be found in  , where E. Altman and K. Avrachenkov, in collaboration with C. Barakat ( Inria project-team Planete ), have revisited their Sigcomm 2000 paper with more detailed discussions of the ergodicity conditions for the TCP sending rate evolution, the derivation of the second moment of the TCP sending rate, and the numerical experiments.
Other fluid models have been used (in particular by F. P. Kelly) in which the increase and decrease parts of the TCP dynamics are replaced by their average value, so that the TCP dynamics are described by a differential equation (referred to below as the smooth fluid model). In  , E. Altman, in collaboration with R. Márquez and S. Sole-Alvarez (both from the University of Los Andes, Merida, Venezuela), has shown that the smooth fluid model provides a good approximation of the saw-tooth model.
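The quality of the approximation can be checked numerically with a minimal sketch (illustrative parameters, and a loss model of our own choosing: losses arrive with intensity proportional to the rate and each loss halves the rate). The time average of the saw-tooth process then stays close to the equilibrium x* = sqrt(2a/p) of the smooth fluid equation dx/dt = a - p*x*(x/2).

```python
import math
import random

def sawtooth_mean(a=1.0, p=0.01, dt=0.01, t_end=5000.0, seed=3):
    """Time-average rate of the saw-tooth model: linear increase at rate a,
    halving at loss instants arriving with intensity p * x (Euler scheme)."""
    rng = random.Random(seed)
    x, acc, n = 1.0, 0.0, 0
    for _ in range(int(t_end / dt)):
        if rng.random() < p * x * dt:
            x *= 0.5                  # multiplicative decrease at a loss
        else:
            x += a * dt               # additive increase between losses
        acc += x
        n += 1
    return acc / n

mean_rate = sawtooth_mean()
# Equilibrium of the smooth fluid model dx/dt = a - p * x * (x/2):
smooth_eq = math.sqrt(2 * 1.0 / 0.01)
```

The two numbers agree to within roughly ten percent, which is the sense in which the smooth model approximates the saw-tooth model well.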
Optimal dynamics of TCP/IP networks
Optimal evolution of TCP sending rate
In  , K. Avrachenkov and G. Miller, in collaboration with B. Miller and K. Stepanyan (both from the Institute for Information Transmission Problems, Moscow, Russia), consider a nonlinear optimal stochastic control problem. The state of the link is described by a controlled hidden Markov process, while the arrival of congestion notifications is described by a counting process whose intensity depends on the current transmission rate (the control) and on the unobserved link state. The aim of the control is to maximize a utility function that takes into account the losses of transmitted information. A necessary optimality condition is derived in the form of a stochastic maximum principle, which allows the authors to obtain explicit analytic expressions for the optimal control in some particular cases. It is shown that for a Markovian path the optimal sending-rate evolution is indeed piecewise deterministic, as is the case in current TCP implementations. However, it is also shown that the optimal increase is not linear and the optimal multiplicative decrease is not proportional to the instantaneous sending rate. The optimal control takes advantage of a priori information about the path characteristics. In particular, this allows congestion control to achieve a much smaller variance of the sending rate evolution and, at the same time, a gain in average throughput.
Optimal buffer size for Internet routers
In  , K. Avrachenkov, in collaboration with U. Ayesta ( Cwi , The Netherlands) and A. Piunovskiy (University of Liverpool, UK), studies the optimal choice of the buffer size in Internet routers. The objective is to determine the minimum buffer size required to fully utilize the link capacity. There are several empirical rules for the choice of the buffer size. The best-known rule of thumb states that the buffer size should be set to the bandwidth-delay product of the network. Several recent works suggest that, as a consequence of traffic aggregation, the buffer size should be set to smaller values. The analytical results in  provide further evidence that the buffer size should indeed be reduced in the presence of traffic aggregation. Furthermore, their result states that the minimum required buffer is smaller than what previous studies suggested.
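The rules mentioned above are easy to state numerically. The sketch below computes the classical bandwidth-delay product and, for comparison, the 1/sqrt(N) refinement proposed in the small-buffer literature for N aggregated flows; neither formula is the bound derived in the paper, which is smaller still.

```python
import math

def bdp_buffer(capacity_bps, rtt_s):
    """Classical rule of thumb: buffer = bandwidth-delay product, in bits."""
    return capacity_bps * rtt_s

def small_buffer(capacity_bps, rtt_s, n_flows):
    """Refinement suggested for aggregated traffic: BDP / sqrt(N), in bits."""
    return capacity_bps * rtt_s / math.sqrt(n_flows)

# Example: a 10 Gbps link with a 100 ms round-trip time.
bdp = bdp_buffer(10e9, 0.1)              # 1e9 bits, i.e. 125 MB of buffer
small = small_buffer(10e9, 0.1, 10_000)  # with 10,000 flows: 1e7 bits
```

The two-orders-of-magnitude gap between the two rules is what motivates studying the minimum buffer analytically.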
Loss policies for parallel TCP sessions
Consider permanent TCP sessions that share an intelligent bottleneck router. When the sum of the TCP rates reaches the link capacity, a loss (or a congestion notification) occurs. It is assumed that the router decides which of the sessions will receive a notification (or will suffer a loss). Examples of such policies include (i) independent marking, (ii) choosing the flow with the largest throughput for marking, (iii) choosing the flow with the smallest throughput, and (iv) choosing the flow for marking with a probability proportional to its rate. Surprisingly, it has been shown that no matter which policy is used, the sum of the TCP throughputs is the same. However, the second moments do depend on the marking policy used. The above results have been obtained for two connections  by E. Altman, in collaboration with R. El Azouzi (University of Avignon, France), D. Ros and B. Tuffin (both from Inria project-team Armor ), and extended to any number of connections  by E. Altman and D. Barman, in collaboration with B. Tuffin and M. Vojnovic (Microsoft Research, Cambridge, UK).
TCP in wireless networks
TCP was designed to provide reliable end-to-end delivery of data over unreliable networks. In practice, most TCP deployments have been carefully optimized in the context of wired networks. Ignoring the characteristics of wireless Mobile Ad Hoc Networks (MANETs), such as high bit error rates, path asymmetry, network partitions, route failures and power constraints, leads to TCP implementations with poor performance. In  , A. Al Hanbali, E. Altman and P. Nain review proposals recently made to improve the performance of TCP in MANET environments.
In  , E. Altman, in a collaborative work with A. Chockalingam, J. V. K. Murthy and R. Kumar (all from IISc , Bangalore, India), proposes a cross-layer design for TCP over a wireless channel. As resources are scarce and more expensive in radio communications, it becomes advantageous to invest in a cross-layer network design. The authors study how to jointly optimize the modulation scheme, the retransmission policy, the amount of forward error correction, etc., so as to optimize the data-transfer goodput obtained with TCP. This is done by using the well-known TCP throughput formula to express the dependence of the throughput on the loss probabilities and the round-trip delay, and by then expressing the loss rates and delays as functions of the physical, link and network layer parameters.
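The flavour of the trade-off can be seen with the widely used first-order square-root formula T ≈ (MSS/RTT)·sqrt(3/(2p)): stronger FEC lowers the residual loss probability p at the price of overhead and extra delay, yet can still raise TCP goodput. The numbers below are purely illustrative, and this simplified formula stands in for whichever throughput expression the paper actually uses.

```python
import math

def tcp_goodput(mss_bytes, rtt_s, p):
    """Simplified square-root TCP throughput formula, in bytes per second:
    T ~ (MSS / RTT) * sqrt(3 / (2 p)), valid for small loss probability p."""
    return (mss_bytes / rtt_s) * math.sqrt(3.0 / (2.0 * p))

# Hypothetical operating points for a wireless link:
# weak FEC -> high residual loss, slightly shorter RTT;
# strong FEC -> much lower residual loss, slightly longer RTT.
low_fec = tcp_goodput(1460, 0.120, 0.02)
high_fec = tcp_goodput(1460, 0.150, 0.002)
```

Despite the larger round-trip time, the strong-FEC point wins because goodput scales as 1/sqrt(p), which is the kind of dependence the joint optimization exploits.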
FEC (Forward Error Correction)
Participant : Alain Jean-Marie.
A. Jean-Marie, T. Alemu and Y. Calas have pursued their investigation of the interaction between FEC and queue management algorithms, specifically RED (Random Early Detection) and standard Drop Tail (DT). The problem is to derive quantitative rules for deciding when one scheme performs better than the other. The critical object here is the loss process. Several models, including a batch-Poisson process, have been analyzed. The latter allows the authors to derive asymptotic laws, involving the moments of the loss run length, which explain the presence of a cross-over phenomenon when the packet loss rate increases. Simulations of networking situations illustrate the qualitative validity of the predictions  ,  ,  .
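Why the loss run length matters for FEC can be seen with a toy erasure-code model (our own simplification, not the batch-Poisson model of the papers): an (n, k) block code recovers a block iff at most n-k of its packets are lost, so bursty losses at the same average rate defeat the code far more often than independent losses.

```python
import random

def block_failure_rate(n, k, p, burst=1, blocks=20_000, seed=11):
    """Fraction of (n, k) FEC blocks that cannot be recovered.

    Losses come in fixed runs of `burst` consecutive packets; a run starts
    independently with probability p / burst per packet, so the average
    packet loss rate is p regardless of the burst length.  A block fails
    when more than n - k of its n packets are erased.
    """
    rng = random.Random(seed)
    fails = pending = 0               # pending = remaining packets of a run
    for _ in range(blocks):
        erased = 0
        for _ in range(n):
            if pending:
                pending -= 1
                erased += 1
            elif rng.random() < p / burst:
                pending = burst - 1
                erased += 1
        if erased > n - k:
            fails += 1
    return fails / blocks

# Same 5% average loss rate, isolated losses vs runs of length 4:
iid = block_failure_rate(n=12, k=10, p=0.05, burst=1)
bursty = block_failure_rate(n=12, k=10, p=0.05, burst=4)
```

The bursty case fails far more often, which is the qualitative mechanism behind the run-length moments appearing in the asymptotic laws.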
Estimation of traffic characteristics
In the framework of the FLUX project (funded by the ACI Masses of Data), A. Jean-Marie and O. Gandouet have focused on the problem of recognizing long-lived and short-lived data flows (elephants and mice) using algorithms with a (very) small memory. Modeling the problem reduces it to the estimation of certain characteristics of multi-sets, such as moments of the frequencies or the number of elements with specific properties. Algorithms based on probabilistic counting seem to offer this possibility. From there, they have explored two directions.
On one side, they have studied the intrinsic algorithmic difficulty of the question. Using results from the theory of ``communication complexity'', they have proved that no online algorithm can solve the problem in general with substantially less memory than an exact algorithm. This result holds for deterministic as well as probabilistic algorithms.
On the other side, an algorithm for estimating the number of elephants in a flow of packets (an elephant being defined as sending more than k times as many packets as a mouse) has been constructed, based on the LogLog algorithm of Durand and Flajolet. For multi-sets in a specific (yet practical) family, this algorithm indeed uses much less memory than an exact algorithm.
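For reference, the underlying LogLog estimator of Durand and Flajolet can be sketched in a few lines. This shows only the basic cardinality estimator, not the elephant-counting extension built on top of it; the constant 0.39701 is the asymptotic bias-correction constant of the algorithm.

```python
import hashlib

def loglog_estimate(items, m=256):
    """LogLog cardinality estimator: m buckets (m a power of two), each
    keeping the maximum position of the leading 1-bit seen in the hash
    suffix; the estimate is alpha * m * 2**(mean of the bucket maxima)."""
    k = m.bit_length() - 1                 # number of bucket-index bits
    M = [0] * m
    for it in items:
        # 64-bit hash; top k bits pick the bucket, the rest give the rank.
        h = int.from_bytes(hashlib.sha1(str(it).encode()).digest()[:8], "big")
        j = h >> (64 - k)
        w = h & ((1 << (64 - k)) - 1)
        rank = (64 - k) - w.bit_length() + 1   # position of leading 1-bit
        M[j] = max(M[j], rank)
    return 0.39701 * m * 2 ** (sum(M) / m)

est = loglog_estimate(range(100_000))      # within a few percent of 100,000
```

With m buckets the relative error is about 1.3/sqrt(m), and duplicates do not change the estimate at all, which is what makes such sketches usable with a (very) small memory.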
In secure multicast communications (e.g., TV over IP, confidential videoconferences), a hierarchical organization of encryption keys is often adopted to minimize the cost of updating the data encryption key at all members. However, volatile members still induce a high number of updates needed to ensure data confidentiality. It has been proposed to isolate these members in order to improve the quality of service offered to long-lived members. In  , S. Alouf, A. Jean-Marie and P. Nain develop a stochastic model of the system, the objective being to optimally tune its parameters. In particular, the joint distribution of disjoint populations in an M/G/∞ system is derived, yielding the joint distribution of the populations at hand (ongoing work).
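The M/G/∞ ingredient is easy to check in a small simulation (arbitrary arrival rate and service distribution, chosen here for illustration only): once the system has run longer than the service times, the population is Poisson with mean λE[S], here 5 × 2 = 10.

```python
import random

def mginf_mean_pop(lam=5.0, mean_s=2.0, t=50.0, trials=2000, seed=13):
    """Empirical mean population of an M/G/infinity system at time t.

    Customers arrive as a Poisson process of rate lam on [0, t]; each gets
    an independent service time, here uniform on [0, 2 * mean_s] (an
    arbitrary 'G').  A customer is present at t iff it is still in service.
    """
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        clock, n = 0.0, 0
        while True:
            clock += rng.expovariate(lam)      # next arrival instant
            if clock > t:
                break
            if clock + rng.uniform(0.0, 2 * mean_s) > t:
                n += 1                         # still in service at time t
        total += n
    return total / trials

avg = mginf_mean_pop()   # theory: lam * mean_s = 10, for any service law
```

The insensitivity of the mean (and in fact of the whole Poisson distribution) to the service-time law is what makes M/G/∞ models tractable for the membership populations studied here.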
Reliable multicast protocol for unidirectional satellite (RMUS)
A. Jean-Marie and L. M. Ngo have analyzed one aspect of the RMUS protocol, which has been proposed for flow control in the satellite-based AI3 system (Asian Internet Interconnection Initiatives Project). This study led to the conclusion that the acknowledgment implosion problem may impede the scalability of the protocol. The results will appear in  .