## Section: New Results

### Random Graphs and Combinatorial Optimization

Participants: Hamed Amini, Emilie Coupechoux, Marc Lelarge, Justin Salez.

#### Belief Propagation for the Random Assignment Problem

Belief propagation is a decentralized, iterative (and in general non-rigorous) algorithmic strategy for solving complex optimization problems on huge graphs by purely local propagation of messages along their edges. Its remarkable performance in domains of application ranging from statistical physics to image processing and error-correcting codes has motivated much theoretical work on the crucial question of the convergence of beliefs despite the presence of cycles, and in particular on how convergence behaves as the size of the underlying graph grows to infinity. However, a complete and rigorous understanding of these remarkable emergence phenomena (general conditions for convergence, asymptotic speed, influence of the initialization) is still missing. A new idea consists in using the topological notion of local weak convergence of random geometric graphs to define a limiting local structure as the number of vertices grows to infinity, and then to replace the asymptotic study of the phenomenon by its direct analysis on the infinite graph.

This method has already allowed us to establish
asymptotic convergence at constant speed in the special case of the
famous optimal assignment problem, resulting in a distributed algorithm
with asymptotic complexity O(n^{2}), compared to O(n^{3}) for the
best-known exact algorithm. This is joint work with Devavrat Shah (MIT);
it has been published in Mathematics of
Operations Research [12] and appeared in SODA'09 [21].
We hope
this method will be easily extended to other optimization problems on
tree-like graphs and will become a
powerful tool in the fascinating quest for a general mathematical
understanding of Belief Propagation.
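The flavour of such a message-passing scheme can be conveyed by a minimal min-sum sketch for the minimum-cost assignment problem on the complete bipartite graph. This is an illustrative toy implementation, not the exact algorithm analyzed in [12]; the cost matrix and iteration count below are arbitrary, and convergence of the decoding is only guaranteed when the optimal assignment is unique.

```python
def bp_assignment(c, iters=20):
    """Min-sum belief-propagation sketch for min-cost assignment on K_{n,n}.

    c[i][j] is the cost of assigning row i to column j.  Two scalar
    messages travel along each edge:
      q_rc[i][j]: row i -> column j,    q_cr[j][i]: column j -> row i,
    updated as q^{t+1} = min over the other neighbours of (cost - q^t).
    """
    n = len(c)
    q_rc = [[0.0] * n for _ in range(n)]   # row -> column messages
    q_cr = [[0.0] * n for _ in range(n)]   # column -> row messages
    for _ in range(iters):
        new_rc = [[min(c[i][k] - q_cr[k][i] for k in range(n) if k != j)
                   for j in range(n)] for i in range(n)]
        new_cr = [[min(c[k][j] - q_rc[k][j] for k in range(n) if k != i)
                   for i in range(n)] for j in range(n)]
        q_rc, q_cr = new_rc, new_cr
    # decode: each row keeps the column minimising its adjusted cost
    return [min(range(n), key=lambda j: c[i][j] - q_cr[j][i])
            for i in range(n)]
```

For instance, on the cost matrix [[1, 5, 3], [4, 2, 6], [7, 8, 1]] the unique optimum is the identity assignment of total cost 4, and the decoding stabilizes on it after a few iterations.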

#### Dynamic Programming Optimization over Random Data: the Scaling Exponent for Near-optimal Solutions

A very simple example of an algorithmic problem solvable by dynamic
programming is to maximize, over subsets B of {1, …, n} containing no two
consecutive indices, the
objective function ∑_{i ∈ B} ξ_{i} for
given weights ξ_{i} > 0. This problem, with random (ξ_{i}), provides a
test example for studying the relationship between optimal and
near-optimal solutions of combinatorial optimization problems. In [4]
we
showed that, amongst solutions differing from the optimal solution in a
small proportion δ of places, we can find near-optimal
solutions whose objective function value differs from the optimum by a
factor of order δ^{2} but not of smaller order. We conjecture this
relationship holds widely in the context of dynamic programming over
random data, and Monte Carlo simulations for the Kauffman-Levin NK
model are consistent with the conjecture. This work is a technical
contribution to a broad program initiated in Aldous-Percus (2003) of
relating such scaling exponents to the algorithmic difficulty of
optimization problems.
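Under the formulation above (a maximum-weight subset with no two consecutive indices), the optimum itself is computed by a two-state, linear-time dynamic program. A minimal sketch, with an arbitrary choice of random weights for illustration:

```python
import random

def max_weight_spaced_subset(xi):
    """Maximise sum_{i in B} xi[i] over subsets B of {0, ..., n-1}
    containing no two consecutive indices.

    `take` is the best value of a feasible subset containing the current
    index, `skip` the best value of one avoiding it.
    """
    take, skip = 0.0, 0.0
    for x in xi:
        take, skip = skip + x, max(take, skip)
    return max(take, skip)

# a random instance, as in the probabilistic analysis of the section
xi = [random.random() for _ in range(10**5)]
opt = max_weight_spaced_subset(xi)
```

Near-optimal solutions in the sense of the section are feasible subsets whose value is within a factor (1 - O(δ^{2})) of `opt` while differing from the optimal subset in a proportion δ of places.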

#### The Rank of Diluted Random Graphs

In [26], with Charles Bordenave (CNRS, Université de Toulouse), we investigate the rank of the adjacency matrix of large diluted random graphs: for a sequence of graphs converging locally to a tree, we give new formulas for the asymptotics of the multiplicity of the eigenvalue 0. In particular, the result depends only on the limiting tree structure, showing that the normalized rank is "continuous at infinity". Our work also gives a new formula for the mass at zero of the spectral measure of a Galton-Watson tree. Our proof techniques borrow ideas from the analysis of algorithms, random matrix theory, statistical physics, and the analysis of Schrödinger operators on trees.
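On a finite tree, the multiplicity of the eigenvalue 0 has a classical combinatorial expression: the rank of the adjacency matrix of a forest equals twice the size of a maximum matching, which greedy leaf removal computes exactly. The sketch below illustrates this identity (it is not the method of [26]); the random recursive tree and its size are arbitrary choices, and the result is cross-checked against exact Gaussian elimination over the rationals.

```python
import random
from fractions import Fraction

def forest_max_matching(adj):
    """Maximum matching of a forest: repeatedly match a leaf to its
    unique neighbour, then delete both endpoints."""
    adj = {v: set(ws) for v, ws in adj.items()}   # local copy
    matched = 0
    leaves = [v for v, ws in adj.items() if len(ws) == 1]
    while leaves:
        u = leaves.pop()
        if u not in adj or len(adj[u]) != 1:
            continue                    # stale entry: deleted or isolated
        v = adj[u].pop()
        for w in adj[v]:
            if w != u:
                adj[w].discard(v)
                if len(adj[w]) == 1:
                    leaves.append(w)
        del adj[u], adj[v]
        matched += 1
    return matched

def rank_exact(M):
    """Rank over the rationals by exact Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, r = len(M), 0
    for col in range(len(M[0]) if M else 0):
        piv = next((i for i in range(r, n) if M[i][col]), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, n):
            if M[i][col]:
                f = M[i][col] / M[r][col]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# random recursive tree on n vertices
n, rng = 40, random.Random(0)
adj = {v: set() for v in range(n)}
for v in range(1, n):
    u = rng.randrange(v)
    adj[u].add(v); adj[v].add(u)
A = [[1 if u in adj[v] else 0 for u in range(n)] for v in range(n)]
nullity = n - 2 * forest_max_matching(adj)   # multiplicity of eigenvalue 0
assert nullity == n - rank_exact(A)
```

The results of [26] show, among other things, how the normalized version of this quantity passes to the local weak limit.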

#### Bootstrap Percolation in Random Networks

The bootstrap percolation model has been used in several related applications. In [23], we consider bootstrap percolation in living neural networks. Recent experimental studies of living neural networks reveal that the global activation of a neural network induced by electrical stimulation can be explained using the concept of bootstrap percolation on a directed random network. The experiment consists in externally activating an initial random fraction of the neurons and observing the firing process until it reaches equilibrium. The final fraction of active neurons depends in a non-linear way on the initial fraction. Our main result in [23] is a theorem which enables us to find the final proportion of fired neurons in the asymptotic regime, when the interacting network is modeled as a random directed graph with given node degrees. This gives a rigorous mathematical proof of a phenomenon observed by physicists in neural networks [44].
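The activation dynamics themselves are straightforward to simulate. A sketch, under the illustrative convention that a node fires as soon as at least θ of its in-neighbours are active (the graphs and threshold used here are toy examples, not the experimental model):

```python
def bootstrap_percolation(out_nbrs, seeds, theta):
    """Final active fraction of the directed bootstrap dynamics.

    out_nbrs[u] lists the heads of the edges leaving node u; nodes in
    `seeds` are activated externally, every other node fires once at
    least `theta` of its in-neighbours are active.
    """
    n = len(out_nbrs)
    active = [False] * n
    hits = [0] * n                    # active in-neighbours seen so far
    stack = list(seeds)
    for v in seeds:
        active[v] = True
    while stack:
        u = stack.pop()
        for w in out_nbrs[u]:
            hits[w] += 1
            if not active[w] and hits[w] >= theta:
                active[w] = True
                stack.append(w)
    return sum(active) / n
```

On the chain 0 → 1 → 2 with θ = 1 and seed {0}, the whole network fires; raising θ blocks the cascade beyond the seeds, which is the non-linear dependence on the initial fraction alluded to above.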

#### Epidemics over Random Hypergraphs

In [30], we adapt the model of [48], which is defined on graphs, to an equivalent model on hypergraphs. To this end, we generalize a result of Darling and Norris [45] on the k-core of a random hypergraph. The proof of this result was the subject of the master's internship report of E. Coupechoux [30]. We are now trying to deduce from this result new results on the giant component of random hypergraphs.
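One convenient way to picture a hypergraph k-core is by peeling. The sketch below uses an illustrative convention (a vertex survives only while at least k of its hyperedges do, and a hyperedge is destroyed as soon as one of its vertices is removed) and is not the precise object studied in [45]:

```python
def hypergraph_k_core(n, edges, k):
    """Peel a hypergraph on vertices 0..n-1: repeatedly delete every
    vertex incident to fewer than k surviving hyperedges, together
    with all hyperedges containing a deleted vertex.

    Convention assumed here: a hyperedge dies with any of its vertices.
    Returns the surviving vertices and hyperedges.
    """
    alive_v = set(range(n))
    alive_e = [set(e) for e in edges]
    while True:
        deg = {v: 0 for v in alive_v}
        for e in alive_e:
            for v in e:
                deg[v] += 1
        doomed = {v for v in alive_v if deg[v] < k}
        if not doomed:
            return alive_v, alive_e
        alive_v -= doomed
        alive_e = [e for e in alive_e if not (e & doomed)]
```

When every hyperedge has size 2 this reduces to ordinary graph peeling: on a triangle with a pendant edge, the 2-core is the triangle.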

#### Efficient Control of Epidemics over Random Networks

Motivated by the modeling of the spread of viruses or epidemics with coordination among agents, we introduce in [20] a new model generalizing both the basic contact model and bootstrap percolation. We analyze this percolated threshold model when the underlying network is a random graph with a fixed degree distribution. Our main results unify many results in the random graphs literature. In particular, we provide a necessary and sufficient condition under which a single node can trigger a large cascade. We then quantify the possible impact of an attacker against a degree-based vaccination and an acquaintance vaccination, and define a security metric that allows the different vaccination strategies to be compared. The acquaintance vaccination requires no knowledge of the node degrees or any other global information, and is shown to be much more efficient than the uniform vaccination in all cases.
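The intuition behind acquaintance vaccination (vaccinate a random neighbour of a uniformly chosen node, a rule that is automatically biased toward high-degree nodes) can be seen on a toy star network. The graph and sample sizes below are illustrative only:

```python
import random

def acquaintance_picks(adj, k, rng):
    """Pick k nodes to vaccinate: each time, draw a uniform node with at
    least one neighbour, then vaccinate a uniform neighbour of it."""
    nodes = [v for v in range(len(adj)) if adj[v]]
    return [rng.choice(adj[rng.choice(nodes)]) for _ in range(k)]

# star network: hub 0 joined to 50 leaves -- an extreme degree distribution
n = 51
adj = [list(range(1, n))] + [[0] for _ in range(n - 1)]
deg = [len(ws) for ws in adj]

rng = random.Random(0)
uniform = [rng.randrange(n) for _ in range(500)]       # uniform vaccination
acq = acquaintance_picks(adj, 500, rng)                # acquaintance rule

def avg_degree(vs):
    return sum(deg[v] for v in vs) / len(vs)

# the acquaintance rule lands on the hub almost every time
assert avg_degree(acq) > avg_degree(uniform)
```

No global degree information is used by `acquaintance_picks`, yet the sampled nodes have far higher average degree than a uniform sample, which is the mechanism behind its efficiency in [20].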