Team Cépage


Section: Scientific Foundations

Theoretical Validation

In order to analyze the performance of the proposed algorithms, we first need to define a metric adapted to the targeted platform. In particular, since resource performance and topology may change over time, the metric should be defined with respect to the optimal performance of the platform at each time step. For instance, if throughput maximization is the objective, the goal is to establish, for the proposed algorithm, an approximation ratio with respect to $\int_{SimulationTime} OptThroughput(t)\,dt$, or at least with respect to $\min_{t \in SimulationTime} OptThroughput(t)$.
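The two baselines above can be sketched on a discrete trace. The following is a minimal illustration, assuming a hypothetical per-step trace of the platform's optimal throughput; the trace values and the `approximation_ratio` helper are ours, not from the report.

```python
# Hypothetical trace: optimal platform throughput OptThroughput(t),
# one value per time step of the simulation.
opt_throughput = [12.0, 9.5, 11.0, 8.0, 10.5]

# Discrete analogue of the integral of OptThroughput over SimulationTime.
integral_baseline = sum(opt_throughput)

# Worst-case optimal throughput over the whole simulation.
min_baseline = min(opt_throughput)

def approximation_ratio(achieved_total, baseline):
    """Ratio of the total throughput achieved by an algorithm to a baseline."""
    return achieved_total / baseline
```

An algorithm shipping 40 units in total over this trace would have ratio `approximation_ratio(40.0, integral_baseline)` against the integral baseline.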

For instance, Awerbuch and Leighton  [57] , [58] developed a very elegant distributed algorithm for computing multicommodity flows. The algorithm proposed in  [58] associates, with each commodity at each node, queues and a potential for all incoming and outgoing edges. These queues store the flow that has not yet reached its destination. In this simple and natural framework, flow moves from high-potential areas (the sources) to low-potential areas (the sinks). The algorithm is fully decentralized, since each node makes its decisions based only on its own state (the size of its queues), the state of its neighbors (the size of their queues), and the capacity of the adjacent links.

The remarkable property of this algorithm is that if, at every time step, the network is able to ship $(1 + \epsilon)d_i$ flow units of each commodity $i$, then the algorithm ships at least $d_i$ units of flow at steady state. The proof of this property relies on the overall potential of all the queues in the network, which remains bounded over time.
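The local balancing step at the heart of such potential-based schemes can be sketched for a single commodity and a single edge. This is an illustrative simplification, not the exact rule of [58]: the "push half the height difference" choice and all names are our own.

```python
def push_over_edge(queue_u, queue_v, capacity):
    """One potential-driven balancing step on edge (u, v).

    Flow moves from the higher queue to the lower one, bounded by the
    edge capacity. Returns (new_queue_u, new_queue_v, shipped).
    """
    diff = queue_u - queue_v
    if diff <= 0:
        return queue_u, queue_v, 0.0  # no potential gradient: nothing moves
    shipped = min(diff / 2.0, capacity)  # balance the queues, capped by capacity
    return queue_u - shipped, queue_v + shipped, shipped
```

Note that the decision uses only the two queue heights and the edge capacity, which is exactly the local information available to a node and its neighbor in the decentralized setting described above.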

It is worth noting that this algorithm is quasi-optimal for the metric defined above, since the overall throughput can be made arbitrarily close to $\min_{t \in SimulationTime} OptThroughput(t)$.

In this context, the approximation result holds under an adversary model: the adversary can change both the topology and the performance of the communication resources between any two steps, provided that the network remains able to ship $(1 + \epsilon)d_i$ flow units of each commodity $i$.

Most scheduling problems are NP-complete, and inapproximability results exist in online settings, especially when resources are heterogeneous. We therefore need to rely on simplified communication models (see next section) to prove theoretical results. In this context, resource augmentation techniques are very useful. The idea is to identify a weak parameter (one whose value can be slightly increased without breaking any strong modeling constraint) and to compare the solution produced by a polynomial-time algorithm under this relaxed constraint with the optimal solution of the NP-complete problem without resource augmentation. This technique is both pertinent in a difficult setting and useful in practice.
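A concrete instance of resource augmentation, chosen by us for illustration (it is not an example from the report), is speed augmentation on identical machines: run greedy list scheduling on machines sped up by a factor $(1 + \epsilon)$ and compare its makespan to a classical lower bound on the un-augmented optimum.

```python
import heapq

def list_schedule_makespan(jobs, m, speedup=1.0):
    """Greedy list scheduling: assign each job to the least-loaded machine.

    Machines run at speed `speedup`, so a job of size p takes p / speedup.
    Returns the makespan of the resulting schedule.
    """
    loads = [0.0] * m
    heapq.heapify(loads)
    for p in jobs:
        load = heapq.heappop(loads)
        heapq.heappush(loads, load + p / speedup)
    return max(loads)

def opt_lower_bound(jobs, m):
    """Lower bound on the optimal makespan without augmentation:
    the longest job, or the average load per machine."""
    return max(max(jobs), sum(jobs) / m)
```

For `jobs = [3, 3, 2, 2, 2]` on `m = 2` machines, the greedy makespan without augmentation is 7, above the lower bound of 6 on the optimum; with a modest speedup the augmented greedy schedule already beats that un-augmented bound, which is the kind of comparison resource augmentation formalizes.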

