Section: New Results
Dependability and extensions
Participants : Gerardo Rubino, Samira Saggadi, Bruno Sericola, Bruno Tuffin.
We maintain a research activity in different areas related to dependability, performability and vulnerability analysis of communication systems. In 2009 our focus was on evaluation techniques using both the Monte Carlo and the quasi-Monte Carlo approaches, following the cooperative research action (ARC) RARE we led in 2006-2007. Monte Carlo methods are often the only tool available to solve complex problems, and rare event simulation requires special attention: the occurrence of the event must be accelerated while still obtaining an unbiased estimator with a sufficiently small relative variance. We have published a book on rare event simulation [79]. In this book, we authored or co-authored several chapters: the introductory one presenting the main issues and the bases of the solution methods [74], the chapters on the two main techniques, importance sampling (IS) [73] and splitting [72], as well as the one on robustness properties of estimators and confidence intervals with respect to rarity [70]. While applications in physics, queueing, transport or biology were tackled by colleagues, we treated dependability problems, both for static [68] and dynamic [75] models.
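As a simple illustration of the importance sampling idea discussed in these chapters, the sketch below estimates a rare-event probability, P(X > 20) for X ~ Exp(1), by sampling from an exponential proposal with a smaller rate and reweighting by the likelihood ratio. The model, the proposal rate and all names are illustrative choices made here, not material from the cited chapters.

```python
import math
import random

def crude_mc(threshold, n, rng):
    # Crude Monte Carlo: average of the indicator of {X > threshold}, X ~ Exp(1).
    # For threshold = 20 the event has probability exp(-20) ~ 2e-9, so this
    # almost surely returns 0.0 at any practical sample size.
    hits = sum(1 for _ in range(n) if rng.expovariate(1.0) > threshold)
    return hits / n

def is_estimate(threshold, n, rng, rate=None):
    # Importance sampling: draw from Exp(rate) with a smaller rate, so the
    # rare region is hit often, and reweight each hit by the likelihood
    # ratio f(x)/g(x) = exp(-x) / (rate * exp(-rate * x)).
    if rate is None:
        rate = 1.0 / threshold  # heuristic: proposal mean equals the threshold
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(rate)
        if x > threshold:
            total += math.exp(-x) / (rate * math.exp(-rate * x))
    return total / n  # unbiased estimator of P(X > threshold)

rng = random.Random(42)
threshold = 20.0  # exact probability: exp(-20)
print(crude_mc(threshold, 100_000, rng))   # typically 0.0: the event is never seen
print(is_estimate(threshold, 100_000, rng), math.exp(-threshold))
```

With this proposal the estimator keeps a relative error of a few percent at 10^5 samples, whereas the crude estimator is useless at any affordable sample size.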
Another book on Monte Carlo simulation, addressed to graduate students and practitioners, is [77]. It presents all the basic notions, from random number generation and output analysis to variance reduction techniques, and shows how they can be applied to the computation of integrals or sums, and to the solution of equations or optimization problems. We also presented a tutorial on Monte Carlo methods for rare event analysis, using material from [79], at QEST'09 [59].
Novel results in simulation during 2009 can be divided into two subsets: results on rare event simulation, and results on quasi-Monte Carlo methods. On rare event simulation, we discussed in [16] the importance of designing estimators that remain efficient as the probability of the considered event decreases to zero. While robustness properties generally look at the second moment only, we discuss the importance of investigating higher-order moments, and define related properties. An efficient application of importance sampling to highly reliable Markovian systems is obtained in [17], where we approach the so-called zero-variance change of measure by exploiting the knowledge of its general form and approximating its parameters, which are unknown in practice. Two other methods, specific to the evaluation of the probability that a graph is disconnected, are described in [69] and [38]. Randomized quasi-Monte Carlo (RQMC) methods estimate the expectation of a random variable by the average of n dependent realizations of it. In general, due to the strong dependence involved, the estimation error may not obey a central limit theorem. Analyses of RQMC methods have so far focused mostly on the convergence rates of asymptotic worst-case error bounds and variance bounds as n tends to infinity, but little is known about the limiting distribution of the error. We analyzed that asymptotic distribution in [46], [45] for the special case of a randomly-shifted lattice rule applied to a smooth integrand. In dimension 1, we show that the limiting distribution is uniform over a bounded interval if the integrand is non-periodic, and has a square-root form over a bounded interval if the integrand is periodic. In higher dimensions, for linear functions, the distribution function of the properly standardized error converges to a spline of degree equal to the dimension. An efficient application of RQMC to discrete choice models is also realized in [60].
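The randomly-shifted lattice rule studied in [46], [45] can be sketched as follows: a rank-1 lattice with n points and generating vector z is shifted by independent uniform vectors modulo 1, and each shift yields one unbiased estimate of the integral. The integrand, the Fibonacci generating vector (n = 987, z = (1, 610)) and all names below are illustrative choices, not parameters taken from the cited papers.

```python
import math
import random

def shifted_lattice_estimates(f, n, z, d, n_shifts, rng):
    # Rank-1 lattice rule: points u_i = frac(i * z / n), i = 0..n-1,
    # randomized by independent uniform shifts modulo 1.  Each random
    # shift gives one unbiased estimate of the integral of f over [0,1)^d.
    estimates = []
    for _ in range(n_shifts):
        shift = [rng.random() for _ in range(d)]
        total = 0.0
        for i in range(n):
            u = [((i * z[j] / n) + shift[j]) % 1.0 for j in range(d)]
            total += f(u)
        estimates.append(total / n)
    return estimates

# Smooth (non-periodic) test integrand on [0,1)^2; exact integral is (e-1)^2.
f = lambda u: math.exp(u[0] + u[1])

rng = random.Random(1)
# Fibonacci lattice in dimension 2: n = 987, z = (1, 610).
ests = shifted_lattice_estimates(f, n=987, z=[1, 610], d=2, n_shifts=10, rng=rng)
mean = sum(ests) / len(ests)
print(mean, (math.e - 1.0) ** 2)
```

Averaging over the shifts gives both a point estimate and an empirical variance; it is the distribution of the per-shift error, for smooth integrands such as this one, that is characterized in [46], [45].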
We started a collaboration with the Inria team-project Ipso on the evaluation of the moments of cumulative reward in Markov models [14]. We studied the convergence of the normalized moments and, based on this convergence, we developed a new algorithm to compute them. We also analyzed these moments and gave a probabilistic interpretation of the quantities arising in the algorithm. We also obtained in [13] an improvement of an algorithm we developed a few years ago for the computation of the distribution of the cumulative reward in Markov chains.
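The algorithms of [14], [13] are not reproduced here; as a rough illustration of the quantity they compute, the sketch below estimates the first two moments of the cumulative reward Y(t) = ∫₀ᵗ r(X_s) ds by plain simulation of a small Markov chain. The two-state up/down model, its rates and its rewards are invented for this example.

```python
import random

def simulate_reward(rates, rewards, x0, t_end, rng):
    # Simulate a continuous-time Markov chain (rates[i][j] = rate i -> j)
    # up to time t_end, returning the accumulated reward
    # Y = integral over [0, t_end] of rewards[X_s] ds.
    x, t, y = x0, 0.0, 0.0
    while True:
        out = sum(rates[x])
        dwell = rng.expovariate(out) if out > 0 else float('inf')
        if t + dwell >= t_end:
            return y + rewards[x] * (t_end - t)
        y += rewards[x] * dwell
        t += dwell
        # choose the next state with probability proportional to its rate
        u = rng.random() * out
        for j, q in enumerate(rates[x]):
            u -= q
            if u < 0:
                x = j
                break

# Two-state up/down model: failure rate 1, repair rate 2; reward 1 when up,
# so Y(t) is the cumulative up time (interval availability times t).
rates = [[0.0, 1.0], [2.0, 0.0]]
rewards = [1.0, 0.0]
rng = random.Random(7)
samples = [simulate_reward(rates, rewards, 0, 5.0, rng) for _ in range(20_000)]
m1 = sum(samples) / len(samples)                 # first moment of Y(5)
m2 = sum(y * y for y in samples) / len(samples)  # second moment of Y(5)
print(m1, m2)
```

For this model the first moment is available in closed form (about 3.444 for t = 5, starting up), which makes such toy examples convenient sanity checks; the interest of the algorithmic work in [14], [13] is precisely to avoid simulation and compute these moments, and the full distribution, numerically.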
Last, in [54] we analyzed some issues related to the way the Mean Up Times of components are estimated, and the impact of this estimation when deriving results for the whole system.