## Section: New Results

### Quantitative aspects of distributed systems

Participants : Bruno Sericola, Romaric Ludinard.

This work is a collaboration with the Inria project-team Asap. In [20] we proposed a fully decentralized algorithm that provides each node of a distributed system with a value reflecting its connectivity quality. Comparing these values across nodes yields a local approximation of a global characteristic of the graph. Our algorithm relies on an anonymous probe visiting the network in an unbiased random fashion. Each node records the time elapsed between visits of the probe, called the return time of the random walk. Computing the standard deviation of these return times makes it possible to approximate the conductance of the graph. Typically, this information may be used by nodes to assess their position, and therefore whether they are critical, in a graph exhibiting low conductance.
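The mechanism can be sketched as follows. The graph, the function names, and the Metropolis-Hastings correction used here to make the walk unbiased are illustrative assumptions for this sketch, not necessarily the construction of [20]: a single probe walks the network, each node logs the gaps between its visits, and the dispersion of those gaps serves as the local connectivity indicator.

```python
import random
import statistics

def next_hop(adjacency, u, rng):
    # Metropolis-Hastings correction so that the walk's stationary
    # distribution is uniform over nodes (an "unbiased" walk); this is a
    # standard construction, assumed here for illustration.
    v = rng.choice(adjacency[u])
    if rng.random() < min(1.0, len(adjacency[u]) / len(adjacency[v])):
        return v
    return u  # rejected move: the probe stays in place

def record_return_times(adjacency, steps, start=0, rng=None):
    # Each node records the time elapsed between consecutive visits of
    # the probe (the return times of the random walk).
    rng = rng or random.Random(42)
    last_visit = {}                      # node -> step of previous visit
    returns = {v: [] for v in adjacency}
    node = start
    for t in range(steps):
        if node in last_visit:
            returns[node].append(t - last_visit[node])
        last_visit[node] = t
        node = next_hop(adjacency, node, rng)
    return returns

def connectivity_score(return_times):
    # The standard deviation of a node's return times: more dispersed
    # return times indicate a poorly connected position, which relates
    # to low conductance.
    return {v: statistics.pstdev(ts)
            for v, ts in return_times.items() if len(ts) >= 2}
```

Running this on a small graph of two triangles joined by one edge, the nodes adjacent to the bridge tend to show the most dispersed return times, matching the intuition that they are the critical ones.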

We continue our collaboration with the Inria project-teams Adept and Ipso. It is well known that peer-to-peer overlay networks can survive Byzantine attacks only if malicious nodes are unable to predict what the topology of the network will be for a given sequence of join and leave operations. In [13] and [35], we investigate adversarial strategies by following specific games. Our analysis first demonstrates that an adversary can very quickly subvert DHT-based overlays simply by never triggering leave operations. We then show that when all nodes (honest and malicious ones) are subject to a limited lifetime, the system eventually reaches a stationary regime in which the ratio of polluted clusters is bounded, independently of the initial amount of corruption in the system. These results have been obtained using Markov models.

In [14] and [34], we consider the behavior of a stochastic system composed of several identically distributed, but not independent, discrete-time absorbing Markov chains competing at each instant for a transition. The competition consists in determining at each instant, using a given probability distribution, the only Markov chain allowed to make a transition. We analyze the first time at which one of the Markov chains reaches its absorbing state. We obtain its distribution and its expectation, and we propose an algorithm to compute these quantities. We also exhibit the asymptotic behavior of the system when the number of Markov chains goes to infinity. This problem arises in the analysis of large-scale distributed systems, and we show how our results apply to this domain.
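The competition between absorbing chains can be illustrated by a small Monte-Carlo sketch. The three-state chain (two transient states, one absorbing) and the uniform selection rule below are illustrative assumptions; [14] and [34] derive the distribution and expectation of the first absorption time exactly, whereas this sketch only estimates them by simulation.

```python
import random

# Illustrative transition structure: states 0 and 1 are transient,
# state 2 is absorbing. Each entry maps a state to (next state, prob) pairs.
P = {
    0: [(0, 0.5), (1, 0.4), (2, 0.1)],
    1: [(0, 0.3), (1, 0.4), (2, 0.3)],
    2: [(2, 1.0)],                      # absorbing state
}
ABSORBING = 2

def step(state, rng):
    # Sample the next state of one chain from its transition row.
    r, acc = rng.random(), 0.0
    for s, p in P[state]:
        acc += p
        if r < acc:
            return s
    return state

def first_absorption_time(n_chains, rng):
    # n identical chains share the clock; at each instant the selection
    # distribution (uniform here, as an assumption) picks the single
    # chain allowed to make a transition. Returns the first instant at
    # which any chain reaches the absorbing state.
    states = [0] * n_chains             # all chains start in state 0
    t = 0
    while ABSORBING not in states:
        t += 1
        i = rng.randrange(n_chains)     # the only chain allowed to move
        states[i] = step(states[i], rng)
    return t

def expected_first_absorption(n_chains, runs=5000, seed=1):
    # Monte-Carlo estimate of the expectation analyzed in [14], [34].
    rng = random.Random(seed)
    return sum(first_absorption_time(n_chains, rng)
               for _ in range(runs)) / runs
```

With a single chain the competition is vacuous and the estimate should approach the chain's own expected absorption time (about 5.56 for the matrix above, by solving the standard linear system for mean absorption times).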