Section: New Results
Network Dynamics
Participants: François Baccelli, Ana Bušić, Giovanna Carofiglio, Sergey Foss, Bruno Kauffmann, Marc Lelarge.
This traditional research topic of TREC has several new threads, such as perfect simulation, active probing and Markov decision processes.
Queueing Theory and Active Probing
Inverse Problems.
Active probing began by measuring end-to-end path metrics, such as delay and loss, in a direct measurement process which did not require inference of internal network parameters. The field has since progressed to measuring network metrics, from link capacities to available bandwidth and cross traffic itself, which reach deeper and deeper into the network and require increasingly complex inversion methodologies. In [7] , we formulate this line of thought as a set of inverse problems in queueing theory. Queueing theory is typically concerned with the solution of direct problems, where the trajectory of the queueing system, and laws thereof, are derived based on a complete specification of the system, its inputs and initial conditions. Inverse problems aim to deduce unknown parameters of the system based on partially observed trajectories. We provide a general definition of the inverse problems in this class and map out the key variants: the analytical methods, the statistical methods and the design of experiments. We also show how this inverse problem viewpoint translates to the design of concrete Internet probing applications.
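As a toy illustration of this inverse viewpoint (not taken from [7]), consider the simplest possible setting: an M/M/1 queue with known service rate but unknown arrival rate, observed only through the sojourn times of its customers. The sketch below simulates the direct problem and then inverts the mean sojourn time formula T = 1/(mu - lambda) to recover the arrival rate; all names and parameter values are illustrative.

```python
# Illustrative inverse problem (not the setting of [7]): recover the unknown
# arrival rate of an M/M/1 queue from observed sojourn times, mu being known.
import random

def mm1_sojourn_times(lam, mu, n, seed=0):
    """Simulate n customers of an M/M/1 FIFO queue and return their sojourn times."""
    rng = random.Random(seed)
    t_arrival, t_depart_prev, sojourns = 0.0, 0.0, []
    for _ in range(n):
        t_arrival += rng.expovariate(lam)            # Poisson arrivals
        start = max(t_arrival, t_depart_prev)        # FIFO: wait for the previous departure
        t_depart_prev = start + rng.expovariate(mu)  # exponential service
        sojourns.append(t_depart_prev - t_arrival)
    return sojourns

# Direct problem: simulate the queue; inverse problem: recover lambda from delays.
mu, lam_true = 10.0, 7.0                             # hypothetical capacity and arrival rate
obs = mm1_sojourn_times(lam_true, mu, n=200_000)
lam_hat = mu - 1.0 / (sum(obs) / len(obs))           # invert T = 1 / (mu - lambda)
print(f"true lambda = {lam_true}, estimated lambda = {lam_hat:.2f}")
```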
Internet Tomography.
Most active probing techniques suffer from the “bottleneck” limitation: all characteristics of the path after the bottleneck link are erased and unreachable. We are currently investigating a new tomography technique, based on the measurement of the fluctuations of end-to-end delays, which allows one to gain insight into the residual available bandwidth along the whole path. For this, we combine classical queueing theory models with statistical analysis to obtain estimators of the residual bandwidth on all links of the path. These estimators are proved to be tractable, consistent and efficient. In [8] we evaluate their performance with simulation and trace-based experiments.
This method has recently been generalized in [56] to probing a multicast tree instead of a single path.
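As a hedged illustration of the single-link special case (this is not the estimator of [8]), the sketch below estimates the residual bandwidth of one bottleneck from probe delay observations: by the PASTA property, the fraction of Poisson probes that find the queue empty estimates 1 - rho, hence C(1 - rho) estimates the available bandwidth. All parameter values are assumptions.

```python
# Single-link sketch (not the estimator of [8]): Poisson probes observe the
# workload of a FIFO link fed by Poisson cross traffic; the fraction of probes
# finding an empty queue estimates 1 - rho, hence the residual bandwidth.
import random

def simulate_probe_waits(lam_cross, capacity, n_probes, seed=1):
    """Workload seen by Poisson probes at a FIFO link with Poisson cross traffic."""
    rng = random.Random(seed)
    workload, waits = 0.0, []
    while len(waits) < n_probes:
        dt_cross = rng.expovariate(lam_cross)      # next cross-traffic arrival
        dt_probe = rng.expovariate(0.5)            # sparse, independent probing stream
        dt = min(dt_cross, dt_probe)
        workload = max(0.0, workload - dt)         # link drains work at unit rate
        if dt_probe < dt_cross:
            waits.append(workload)                 # probe sees the current workload (PASTA)
        else:
            workload += rng.expovariate(capacity)  # cross packet brings exponential work
    return waits

capacity, lam_cross = 10.0, 6.0                    # packets/s, so rho = 0.6
waits = simulate_probe_waits(lam_cross, capacity, n_probes=50_000)
rho_hat = sum(w > 0 for w in waits) / len(waits)
print(f"estimated available bandwidth: {capacity * (1 - rho_hat):.2f} packets/s "
      f"(true: {capacity * (1 - lam_cross / capacity):.2f})")
```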
Perfect Simulation
Perfect simulation, introduced by Propp and Wilson in 1996, is a simulation algorithm that uses coupling arguments to produce an unbiased sample from the stationary distribution of a Markov chain on a finite state space. In the general case, the algorithm starts trajectories from all states at some time in the past and runs them until time t = 0. If the end state is the same for all trajectories, then the chain has coupled and the end state has the stationary distribution of the Markov chain. Otherwise, the simulations are started further in the past. The complexity of the algorithm depends on the cardinality of the state space, which is prohibitive for most applications.
This simulation technique becomes efficient if the Markov chain is monotone, as the monotonicity makes it possible to consider only the extremal trajectories of the system. In the non-monotone case, it is possible to avoid generating all the trajectories by considering bounding processes (Huber, 2004). The construction of these bounding processes is model-dependent and in general not straightforward. In a recent work [39] we proposed an algorithm to construct bounding processes, called envelopes, for the case of a finite Markov chain with a lattice structure. We show that this algorithm is efficient for some classes of non-monotone queueing networks, such as networks of queues with batch arrivals, queues with fork and join nodes, and/or queues with negative customers.
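For concreteness, here is a minimal sketch of coupling from the past for a monotone chain, the embedded chain of an M/M/1/K queue: since the update rule is monotone, only the two extremal trajectories, started from the empty and the full states and driven by the same randomness, need to be simulated. This illustrates the monotone case only; it is not the envelope algorithm of [39], and all parameter values are illustrative.

```python
# Minimal sketch of coupling from the past (Propp-Wilson) for a monotone chain:
# the embedded chain of an M/M/1/K queue. The update x -> min(x+1, K) (arrival)
# or x -> max(x-1, 0) (departure) is monotone in x, so it suffices to run the
# two extremal trajectories with the same randomness.
import random

K, P_ARRIVAL = 10, 0.45          # buffer size; probability that the next event is an arrival

def update(x, u):
    """Monotone random mapping driven by the common uniform u."""
    return min(x + 1, K) if u < P_ARRIVAL else max(x - 1, 0)

def cftp(seed=0):
    rng = random.Random(seed)
    randomness = []                               # u's for times -1, -2, -3, ...
    horizon = 1
    while True:
        while len(randomness) < horizon:          # extend the past, reusing earlier u's
            randomness.append(rng.random())
        low, high = 0, K                          # extremal states at time -horizon
        for u in reversed(randomness[:horizon]):  # run from time -horizon up to time 0
            low, high = update(low, u), update(high, u)
        if low == high:                           # coalescence: exact stationary sample
            return low
        horizon *= 2                              # otherwise restart further in the past

samples = [cftp(seed=s) for s in range(5000)]
print("empirical mean of exact stationary samples:", sum(samples) / len(samples))
```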
Perfect Sampling of Piece-wise Space Homogeneous Markov Chains.
In an ongoing work with B. Gaujal [INRIA Rhône-Alpes], F. Pin [ENS Paris] and J.-M. Vincent [Université Joseph Fourier], we are extending these results to a more general framework of piece-wise space homogeneous Markov chains (each event divides the state space into a few zones and the transition of the event is constant within each zone).
Probabilistic cellular automata, invariant measures, and perfect simulation.
Cellular automata were first introduced as deterministic functions F : A^E → A^E, where A is a finite alphabet and E a discrete space (Z^d or (Z/nZ)^d). Their particularity is to be characterized by a local transition function f : A^N → A for some finite neighborhood N, through the relation F(x)_k = f((x_{k+v})_{v ∈ N}). In an ongoing work with J. Mairesse [LIAFA, CNRS and Université Paris 7] and I. Marcovici [ENS Lyon], we consider probabilistic cellular automata (PCA), which are defined by a local function f : A^N → M(A), where M(A) denotes the set of probability measures on A. We study some properties of their invariant measures, and propose an algorithm allowing for perfect sampling of the stationary distribution of an ergodic PCA; this algorithm is an extension of the envelope algorithm of [39].
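To make the definition concrete, the following sketch implements one synchronous step of a simple PCA (a noisy majority rule on the ring Z/nZ with alphabet {0, 1}); it is purely illustrative and is not the PCA family studied in the ongoing work.

```python
# Concrete instance of the PCA definition above (purely illustrative): alphabet
# A = {0, 1}, space E = Z/nZ, neighborhood N = {-1, 0, +1}. The local function f
# maps the three neighboring symbols to a probability measure on A: the majority
# value with probability 1 - EPS, the minority value with probability EPS.
# All cells are updated simultaneously and independently.
import random

EPS = 0.1

def local_rule(left, center, right, rng):
    """f : A^N -> M(A), sampled: noisy majority of the neighborhood."""
    majority = 1 if (left + center + right) >= 2 else 0
    return majority if rng.random() > EPS else 1 - majority

def pca_step(config, rng):
    """One synchronous update of the PCA on the ring Z/nZ."""
    n = len(config)
    return [local_rule(config[(k - 1) % n], config[k], config[(k + 1) % n], rng)
            for k in range(n)]

rng = random.Random(42)
x = [rng.randint(0, 1) for _ in range(50)]   # random initial configuration
for _ in range(100):
    x = pca_step(x, rng)
print("density of 1s after 100 steps:", sum(x) / len(x))
```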
Markov Reward and Markov Decision Processes
Numerical methods for solving Markov chains are in general inefficient if the state space of the chain is very large (or infinite) and lacks a simple repeating structure. An alternative to solving such chains is to construct models that are simple to analyze and that provide bounds for a reward function of interest. In a recent work [28] we presented a new bounding method for Markov chains inspired by Markov reward theory; our method constructs bounds by redirecting selected sets of transitions, facilitating an intuitive interpretation of the modifications of the original system. Redirecting sets of transitions is based on an extension of precedence relations to sets of states (van Houtum et al., 1998), and allows one to design more accurate bounds (e.g. bounds having the same mean behavior). We show that our method is compatible with strong aggregation of Markov chains; thus we can obtain bounds for the initial chain by analyzing a much smaller chain. We apply the precedence relations on sets of states, combined with aggregation, to prove bounds on order fill rates for an inventory system of service tools with joint demands/returns. We are currently extending these results to Markov decision processes.
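The following toy example conveys the flavor of the bounding idea, although it does not reproduce the precedence relations on sets of states nor the aggregation step of [28]: in a small stochastically monotone chain, redirecting a transition towards a "higher" state yields an upper bound on the stationary mean of an increasing reward, which the sketch verifies numerically. All states, rewards and probabilities are illustrative.

```python
# Toy illustration of bounding by transition redirection (not the method of [28]):
# a reflected random walk on {0,...,4}; redirecting the down-transition out of
# state 2 into a self-loop gives a chain whose stationary mean of an increasing
# reward upper-bounds that of the original chain.
def stationary(P, iters=10_000):
    """Stationary distribution by power iteration (pure Python, small chains)."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return pi

def birth_death(n=5, up=0.4, down=0.6):
    P = [[0.0] * n for _ in range(n)]
    for x in range(n):
        P[x][min(x + 1, n - 1)] += up
        P[x][max(x - 1, 0)] += down
    return P

P = birth_death()
P_bound = [row[:] for row in P]
P_bound[2][1] -= 0.6      # redirect the down-transition out of state 2 ...
P_bound[2][2] += 0.6      # ... into a self-loop (a "higher" destination)

reward = [0, 1, 2, 3, 4]  # increasing reward: here, the state itself
for name, Q in [("original", P), ("bounding", P_bound)]:
    pi = stationary(Q)
    print(name, "stationary mean reward:", sum(p * r for p, r in zip(pi, reward)))
```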
In an ongoing work with I.M.H. Vliegen [Technische Universiteit Eindhoven, The Netherlands] and A. Scheller-Wolf [Carnegie Mellon University, USA], we apply these results to an optimization problem for the base stock levels of a service tools inventory. The first results of this work were published as part of the PhD thesis of I. Vliegen (defended in November 2009) [57].
Stochastic Stability
Bipartite Matching Queueing Model
In an ongoing work with V. Gupta [Carnegie Mellon University, USA] and J. Mairesse, Ana Bušić studies the bipartite matching model of customers and servers, a queueing model introduced by Caldentey, Kaplan and Weiss (Adv. in Appl. Probab., 2009).
Let C and S be the sets of customer and server classes.
At each time step, a pair consisting of one customer and one server arrives according to a joint probability measure μ on C × S. Also, a matched customer/server pair, if any exists, departs from the system. Authorized matchings are given by a fixed bipartite graph G = (C, S, E), where E ⊂ C × S is the set of allowed (customer class, server class) pairs. The evolution of the model can be described by a discrete time Markov chain, where the state of the chain is given by two equal-length words of unmatched customers and servers. The stability properties are studied under various BF (Buffer First) matching policies, i.e. policies that give priority to customers/servers already present in the buffer. This class includes the following policies: FIFO, priorities, MLQ (Match the Longest Queue), and MSQ (Match the Shortest Queue). Assume that the model cannot be decomposed into two independent submodels. Denoting by μ_C and μ_S the marginals of μ, by S(U) the set of server classes that can be matched with customers in U, and by C(V) its dual, the necessary stability conditions read: μ_C(U) < μ_S(S(U)) for every nonempty proper subset U of C, and μ_S(V) < μ_C(C(V)) for every nonempty proper subset V of S.
The notion of extremal facet is introduced. For models with only extremal facets, the stability region is maximal for any BF policy, i.e. the conditions above are also sufficient. For models with non-extremal facets, the situation is more intricate. The MLQ policy has a maximal stability region, and in the case of a tree there is a static priority policy that has a maximal stability region.
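The following sketch simulates one possible buffer-first implementation of the model under the MLQ policy on a small tree-shaped matching graph; the matching mechanics, graph and arrival measure are illustrative assumptions and are not taken from the ongoing work. With the chosen measure the necessary conditions above hold strictly, and the buffer content indeed remains stable.

```python
# Minimal simulation of the bipartite matching model under MLQ (one possible
# buffer-first implementation, purely illustrative). Customer classes C = {0, 1},
# server classes S = {0, 1}, authorized matchings E = {(0,0), (0,1), (1,1)}.
import random
from collections import Counter

EDGES = {(0, 0), (0, 1), (1, 1)}
MU = {(0, 0): 0.2, (0, 1): 0.4, (1, 0): 0.1, (1, 1): 0.3}  # joint arrival measure

def mlq_pick(item, other_queues, side):
    """Compatible class with the longest queue on the other side, or None."""
    compatible = [k for k, q in other_queues.items() if q > 0 and
                  ((item, k) in EDGES if side == "customer" else (k, item) in EDGES)]
    return max(compatible, key=lambda k: other_queues[k]) if compatible else None

def simulate(steps=100_000, seed=0):
    rng = random.Random(seed)
    cust, serv = Counter(), Counter()          # unmatched customers / servers per class
    pairs, probs = zip(*MU.items())
    sizes = []
    for _ in range(steps):
        c, s = rng.choices(pairs, probs)[0]    # arriving (customer, server) pair
        m = mlq_pick(c, serv, "customer")      # arriving customer vs servers in buffer
        if m is not None:
            serv[m] -= 1
        else:
            cust[c] += 1
        m = mlq_pick(s, cust, "server")        # arriving server vs customers in buffer
        if m is not None:
            cust[m] -= 1
        else:
            serv[s] += 1
        sizes.append(sum(cust.values()))
    return sum(sizes) / len(sizes)

print("average number of unmatched customers:", simulate())
```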
Spatial Queues
In a joint work with S. Foss [Heriot–Watt University, UK] [37], we consider a queue where the server is the Euclidean space, the customers are random closed sets of the Euclidean space arriving according to a Poisson rain, and where the discipline is a hard exclusion rule: no two intersecting random closed sets can be served at the same time. We use the max-plus algebra and Lyapunov exponents to show that, under first-come-first-served assumptions, this queue is stable for a sufficiently small arrival intensity. We also discuss the percolation properties of the stationary regime of the random closed sets in the queue.
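As a hedged illustration (not the construction of [37]), the following sketch simulates a one-dimensional simplification, random arcs of the unit circle arriving as a Poisson process, using one natural reading of the FCFS dynamics as a max-plus recursion: a customer starts service once it has arrived and all earlier intersecting customers have departed. All parameter values are illustrative.

```python
# Simplified spatial queue sketch: customers are random arcs of the unit circle,
# arriving as a Poisson process, served FCFS under hard exclusion via the
# recursion B_n = max(A_n, max{D_m : m < n, S_m intersects S_n}), D_n = B_n + sigma_n.
import random

def arcs_intersect(a, b):
    """Do two arcs (start, length) of the unit circle overlap?"""
    (u, lu), (v, lv) = a, b
    d = (v - u) % 1.0                       # position of b's start relative to a's start
    return d < lu or (u - v) % 1.0 < lv     # b starts inside a, or a starts inside b

def simulate(lam=0.5, mean_len=0.1, mean_service=1.0, n=20_000, seed=0):
    rng = random.Random(seed)
    t, recent, waits = 0.0, [], []          # recent = (arc, departure time) pairs
    for _ in range(n):
        t += rng.expovariate(lam)                                 # Poisson arrivals
        arc = (rng.random(), rng.expovariate(1.0 / mean_len))     # random arc
        blocked_until = max([d for a, d in recent if arcs_intersect(a, arc)],
                            default=0.0)
        start = max(t, blocked_until)                             # FCFS + hard exclusion
        depart = start + rng.expovariate(1.0 / mean_service)
        waits.append(start - t)
        recent = [(a, d) for a, d in recent if d > t] + [(arc, depart)]
    return sum(waits[n // 2:]) / (n - n // 2)                     # mean wait, 2nd half

print("mean waiting time before service:", simulate())
```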
Flow and Congestion Control
The main topics covered in 2009 concern transport equations for Scalable TCP and for Split TCP.
Split TCP
The idea of Split TCP is to replace a multihop, end-to-end TCP connection by a cascade of shorter TCP connections using intermediate nodes as proxies, thus achieving higher throughput. In the model that we developed with G. Carofiglio [Bell Laboratories, Alcatel–Lucent] and S. Foss, we consider two long-lived TCP-Reno flows traversing two links with different medium characteristics in cascade. A buffer at the end of the first link prevents the loss of packets that cannot be immediately forwarded on the second link by storing them temporarily. The target of our study is the characterization of the TCP throughput on both links as well as of the buffer occupancy. In [22] we establish the partial differential equations governing the throughput dynamics jointly with those of the buffer occupancy in the proxy, we determine the stability conditions by exploiting some intrinsic monotonicity and continuity properties of the system, and we derive tail asymptotics for the buffer occupancy in the proxy and the end-to-end delays.
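The following sketch is a crude Euler discretization of a fluid caricature of this system (two AIMD rates coupled through the proxy buffer); it is not the PDE model of [22], and the loss probabilities, round-trip times and coupling rule are illustrative assumptions. With the chosen parameters the second connection is the faster one, so the proxy buffer remains small, in line with the intuition behind the stability conditions.

```python
# Crude Euler discretization of a fluid caricature of Split TCP (not the PDE
# model of [22]): two AIMD rates coupled through the proxy buffer, with losses
# on each link approximated by a Poisson process of rate p_i * x_i.
import random

def simulate(T=2000.0, dt=0.01, rtt1=0.2, rtt2=0.05, p1=4e-4, p2=1e-4, seed=0):
    rng = random.Random(seed)
    x1, x2, q = 10.0, 10.0, 0.0           # rates (pkts/s) and proxy buffer (pkts)
    qs = []
    for _ in range(int(T / dt)):
        # AIMD fluid dynamics: additive increase of one packet per RTT per RTT,
        # multiplicative decrease when a loss occurs on the corresponding link.
        x1 += dt / rtt1 ** 2
        x2 += dt / rtt2 ** 2
        if rng.random() < p1 * x1 * dt:
            x1 /= 2.0
        if rng.random() < p2 * x2 * dt:
            x2 /= 2.0
        # Proxy buffer: fed by connection 1, drained by connection 2;
        # connection 2 cannot forward more than the buffer currently holds.
        drain = x2 if q > 0 else min(x1, x2)
        q = max(0.0, q + (x1 - drain) * dt)
        qs.append(q)
    return sum(qs) / len(qs)

print("average proxy buffer occupancy (pkts):", simulate())
```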
Scalable TCP
The unsatisfactory performance of TCP in high speed wide area networks has led to several TCP variants, such as HighSpeed TCP, Fast TCP, Scalable TCP and CUBIC, all aimed at speeding up the window update algorithm. In a joint work with G. Carofiglio [13], we focus on Scalable TCP, which belongs to the class of Multiplicative Increase Multiplicative Decrease (MIMD) congestion control protocols. We present a new stochastic model for the evolution of the instantaneous throughput of a single Scalable TCP flow in the Congestion Avoidance phase, under the assumption of a constant per-packet loss probability. This model allows one to derive several closed-form expressions for the key stationary distributions associated with this protocol: we characterize the throughput obtained by the flow, the time separating Multiplicative Decrease events, the number of bits transmitted over certain time intervals, and the size of rate decreases. Several applications leveraging these closed-form expressions are considered, with a particular emphasis on QoS guarantees in the context of dimensioning.
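As a hedged companion to this analysis (the simulation below is not the model of [13] itself), one can simulate the MIMD window dynamics under a constant per-packet loss probability: each RTT the window is multiplied by 1 + a if none of its packets is lost, and by b otherwise. The values a = 0.01 and b = 0.875 are the usual Scalable TCP defaults; the loss probability is an arbitrary illustration.

```python
# MIMD window dynamics under a constant per-packet loss probability (hedged
# sketch, not the model of [13]): each RTT, the window of roughly W packets
# sees at least one loss with probability 1 - (1 - p)^W.
import math
import random

A, B, P = 0.01, 0.875, 1e-5       # MI factor, MD factor, per-packet loss probability

def simulate(rounds=500_000, w0=100.0, seed=0):
    rng = random.Random(seed)
    w, log_ws = w0, []
    for _ in range(rounds):
        if rng.random() < 1.0 - (1.0 - P) ** w:   # at least one of the ~w packets lost
            w *= B                                 # multiplicative decrease
        else:
            w *= 1.0 + A                           # multiplicative increase
        log_ws.append(math.log(w))
    return log_ws

logs = simulate()
print("geometric mean of the window:", math.exp(sum(logs) / len(logs)))
```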
Rare Events in Stochastic Networks
In [11], we analyze the behavior of Generalized Processor Sharing (GPS) queues with heavy-tailed service times. We compute the exact tail asymptotics of the stationary workload of an individual class and give new conditions for reduced-load equivalence and induced burstiness to hold. We also show that both phenomena can occur simultaneously. Our proofs rely on the single big event theorem and on new fluid limits obtained for the GPS system, which may be of independent interest.