Section: Scientific Foundations
Management of Quantitative Behavior
Participants: Benedikt Bollig, Thomas Chatain, Paul Gastin, Stefan Haar, Serge Haddad.
Besides the logical functionalities of programs, the quantitative aspects of component behavior and interaction play an increasingly important role.
Real-time properties cannot be neglected even if time is not an explicit functional issue, since transmission delays, parallelism, etc., can trigger time-outs and thus change even the logical course of processes. Again, this phenomenon arises in telecommunications and web services, but also in transport systems.
In the same contexts, probabilities need to be taken into account, for many diverse reasons such as unpredictable functionalities, or because the outcome of a computation may be governed by race conditions.
Last but not least, constraints on cost cannot be ignored, be it in terms of money or any other limited resource, such as memory space or available CPU time.
Traditional mainframe systems were proprietary and (essentially) localized; the impact of delays, unforeseen failures, etc., could therefore be considered under the control of the system manager. It was thus natural, in the verification and control of systems, to focus entirely on functional behavior. With the increase in the size of computing systems and the growing degree of compositionality and distribution, quantitative factors enter the stage:
calling remote services and transmitting data over the web creates delays;
remote or non-proprietary components are not “deterministic”, in the sense that their behavior is uncertain.
Time and probability are thus parameters that the management of distributed systems must be able to handle; in addition, the cost of operations is often subject to restrictions, or its minimization is at least desired. The mathematical treatment of these features in distributed systems is an important challenge, which MExICo is addressing; the following describes our activities concerning probabilistic and timed systems. Note that cost optimization is not a current activity but enters the picture in several planned activities.
Probabilistic distributed Systems
Participants: Stefan Haar, Serge Haddad.
Non-sequential probabilistic processes.
Practical fault diagnosis requires selecting explanations of maximal likelihood; this raises the question of what the probability of a given partially ordered execution is. In Benveniste et al.  ,  , we presented a model of stochastic processes whose trajectories are partially ordered, based on local branching in Petri net unfoldings; an alternative and complementary model based on Markov fields is developed in  , which takes a different view of the semantics and overcomes the first model's restrictions on applicability.
Both approaches abstract away from real-time progress and randomize choices in logical time. On the other hand, the relative speeds of the system's local processes, and thus, indirectly, their real-time behavior, are crucial factors determining the outcome of probabilistic choices, even if non-determinism is absent from the system.
Recently, we started a new line of research with Anne Bouillard, Sidney Rosario, and Albert Benveniste in the DistribCom team at INRIA Rennes, studying the likelihood of occurrence of non-sequential runs under random durations in a stochastic Petri net setting.
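The influence of relative speeds on choice outcomes can be illustrated by the race policy of stochastic Petri nets: when two transitions become enabled together, the one whose randomly sampled duration elapses first fires. The following sketch (a Monte Carlo estimate with invented rates, not taken from the cited work) shows how firing rates alone determine the probability of each outcome of the race.

```python
import random

def race_probability(rate_a, rate_b, trials=200_000, seed=0):
    """Monte Carlo estimate of the probability that transition a fires
    before transition b, when both become enabled at the same instant and
    their durations are exponentially distributed (race policy).
    Analytically this probability is rate_a / (rate_a + rate_b)."""
    rng = random.Random(seed)
    wins = sum(rng.expovariate(rate_a) < rng.expovariate(rate_b)
               for _ in range(trials))
    return wins / trials

# A transition twice as fast wins the race about two thirds of the time.
estimate = race_probability(2.0, 1.0)
```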
Once the properties of the probability measures thus obtained are understood, it will be interesting to relate them to the two above models in logical time and to understand their differences. Another mid-term goal, in parallel, is the transfer to diagnosis, with possible cooperation with René Boel's group in Ghent, Belgium.
Distributed Markov Decision Processes
Distributed systems featuring non-deterministic and probabilistic aspects are usually hard to analyze and, more specifically, to optimize. Furthermore, high theoretical lower bounds on complexity have been established for models like partially observed Markov decision processes and distributed partially observed Markov decision processes. We believe that these negative results are consequences of the choice of models rather than of the intrinsic complexity of the problems to be solved. Thus we plan to introduce new models in which the associated optimization problems can be solved more efficiently. More precisely, we start by studying connection protocols weighted by costs, and we look for online and offline strategies for optimizing the mean cost of achieving the protocol. We cooperate on this subject with Eric Fabre in the DistribCom team at INRIA Rennes, in the context of the DISC project.
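To make the optimization setting concrete, the following sketch runs value iteration on a tiny, entirely invented "connection protocol" MDP: from an idle state, a cheap but unreliable send action competes with an expensive but reliable one, and we minimize the expected discounted cost of reaching a connected state. This is a textbook method applied to a hypothetical example, not the models or strategies studied in the project.

```python
# transitions[state][action] = list of (probability, next_state, cost);
# all states, actions, probabilities, and costs here are illustrative.
transitions = {
    "idle": {
        "send_cheap":  [(0.6, "connected", 1.0), (0.4, "idle", 1.0)],
        "send_robust": [(0.9, "connected", 3.0), (0.1, "idle", 3.0)],
    },
    "connected": {},  # absorbing: connection established, no further cost
}

def value_iteration(transitions, gamma=0.95, eps=1e-9):
    """Iterate the Bellman optimality operator (minimizing expected
    discounted cost) until the values stabilize."""
    values = {s: 0.0 for s in transitions}
    while True:
        delta = 0.0
        for s, actions in transitions.items():
            if not actions:          # absorbing state keeps value 0
                continue
            best = min(
                sum(p * (c + gamma * values[t]) for p, t, c in outcomes)
                for outcomes in actions.values()
            )
            delta = max(delta, abs(best - values[s]))
            values[s] = best
        if delta < eps:
            return values

def greedy_policy(transitions, values, gamma=0.95):
    """Pick, in each state, the action minimizing expected cost-to-go."""
    return {
        s: min(actions, key=lambda a: sum(
            p * (c + gamma * values[t]) for p, t, c in actions[a]))
        for s, actions in transitions.items() if actions
    }

values = value_iteration(transitions)
policy = greedy_policy(transitions, values)
```

With these invented numbers the cheap action is optimal despite its lower success probability, since its expected cost satisfies V = 1 + 0.38 V, i.e. V = 1/0.62.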
Real-time distributed systems
Nowadays, software systems largely depend on complex timing constraints and usually consist of many interacting local components. Railway crossings, traffic control units, mobile phones, computer servers, and many other safety-critical systems are subject to strict quality standards. It is therefore becoming increasingly important to look at networks of timed systems, which allow real-time systems to operate in a distributed manner.
Timed automata are a well-studied formalism to describe reactive systems that come with timing constraints. For modeling distributed real-time systems, networks of timed automata have been considered, where the local clocks of the processes usually evolve at the same rate   . It is, however, not always adequate to assume that distributed components of a system obey a global time. Actually, there is generally no reason to assume that different timed systems in the network refer to the same time or evolve at the same rate. Each component is rather governed by local influences such as temperature and workload.
Distributed timed systems with independently evolving clocks
Participants: Benedikt Bollig, Paul Gastin.
A first step towards formal models of distributed timed systems with independently evolving clocks was taken in  . As the precise evolution of local clock rates is often too complex or even unknown, the authors study different semantics of a given system: the existential semantics exhibits all those behaviors that are possible under some time evolution; the universal semantics captures only those behaviors that are possible under all time evolutions. While emptiness and universality of the universal semantics are in general undecidable, the existential semantics is always regular and offers a way to check a given system against safety properties. A decidable under-approximation of the universal semantics, called reactive semantics, is introduced to check a system for liveness properties. It assumes the existence of a global controller that allows the system to react upon local time evolutions. A short-term goal is to investigate a distributed reactive semantics where controllers are located at processes and only have local views of the system behaviors.
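The gap between the existential and the universal semantics can be illustrated in a drastically simplified setting (a single drifting clock and one guard, with invented bounds; the cited work treats full networks of timed automata). If a local clock's rate stays within an interval, its value after global time t ranges over an interval too, and a guard is existentially enabled if some rate puts the clock inside the guard, universally enabled if every rate does.

```python
def clock_range(rate_lo, rate_hi, t):
    """Possible local clock values after global time t, for a drifting
    clock whose (unknown) rate stays within [rate_lo, rate_hi]."""
    return (rate_lo * t, rate_hi * t)

def existentially_enabled(rate_lo, rate_hi, t, g_lo, g_hi):
    """Guard [g_lo, g_hi] is satisfiable under SOME time evolution:
    the reachable clock interval intersects the guard."""
    lo, hi = clock_range(rate_lo, rate_hi, t)
    return lo <= g_hi and g_lo <= hi

def universally_enabled(rate_lo, rate_hi, t, g_lo, g_hi):
    """Guard [g_lo, g_hi] is satisfied under ALL time evolutions:
    the reachable clock interval is contained in the guard."""
    lo, hi = clock_range(rate_lo, rate_hi, t)
    return g_lo <= lo and hi <= g_hi
```

For a clock drifting within rates [0.9, 1.1], a guard [1, 2] is existentially but not universally enabled at global time 1, while the wider guard [1, 3] is universally enabled at global time 2.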
Several questions, however, have not yet been tackled in this previous work or remain open. In particular, we plan to exploit the power of synchronization via local clocks and to investigate the synthesis problem: for which (global) specifications can we generate a distributed timed system with independently evolving clocks (over some given system architecture) such that both the reactive and the existential semantics of the generated system coincide precisely with the semantics of the specification? In this context, it will be favorable to have partial-order based specification languages and a partial-order semantics for distributed timed systems. The fact that clocks are not shared may allow us to apply partial-order reduction techniques.
If, on the other hand, a system is already given and complemented with a specification, then one is usually interested in controlling the system in such a way that it meets its specification. The interaction between the actual system and the environment (i.e., the local time evolution) can now be understood as a 2-player game: the system's goal is to guarantee a behavior that conforms with the specification, while the environment aims at violating the specification. Thus, building a controller for a system actually amounts to computing winning strategies in imperfect-information games with infinitely many states, where the unknown or unpredictable evolution of time constitutes the imperfect information about the environment. Only a few efforts have been made to tackle these kinds of games. One reason might be that, in the presence of imperfect information and infinitely many states, one is quickly confronted with undecidability of basic decision problems.
Equivalences between Models with Time and Concurrency (EMoTiCon)
Participants: Thomas Chatain, Stefan Haar, Serge Haddad.
This is the subject of a project of the Farman Institute at ENS Cachan, in collaboration with LURPA (the laboratory for automated production at ENS Cachan).
Due to the dramatic development of techniques that aim at improving the safety of automated systems (synthesis, verification, test, ...), several classes of models are often needed to study a complex system, either to give several views of the system or to study the same aspect of the system using several techniques. Thus one often needs to transform a model from one formalism to another, or to compare models written in different formalisms (time Petri nets, networks of timed automata, ...) that share common features: they allow one to model both (dense) time and concurrency. These transformations are usually done by hand and rely on natural equivalences between the basic components of the models. For instance, a state of an automaton corresponds intuitively to a place of a Petri net; a transition of a Petri net corresponds to a tuple of synchronized transitions in a network of automata; the interval of possible delays associated with a transition of a time Petri net corresponds to an invariant/guard pair in a timed automaton. But these natural equivalences do not apply easily to general models. And since the transformations are usually built on case studies and for ad hoc reasons, no effort is made to generalize them, and most often the relations between the initial model and the transformed one are not formalized.
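The interval-to-invariant/guard correspondence mentioned above can be sketched for the simplest possible case: a single, permanently enabled time Petri net transition with firing interval [a, b] maps to a timed automaton edge guarded by x >= a, with invariant x <= b on the source location forcing the transition to fire by b. The data types and the clock name x are invented for illustration; general translations must additionally handle clock resets upon enabling, multiple enabled transitions, and concurrency.

```python
from dataclasses import dataclass

@dataclass
class TPNTransition:
    """A time Petri net transition with static firing interval
    [earliest, latest] (illustrative fragment of a full TPN model)."""
    name: str
    earliest: int
    latest: int

@dataclass
class TAEdge:
    """A timed automaton edge: guard on the edge, invariant on the
    source location (illustrative fragment of a full TA model)."""
    name: str
    guard: str
    invariant: str

def translate(t: TPNTransition, clock: str = "x") -> TAEdge:
    # Interval [a, b] becomes: guard 'x >= a' (firing allowed from a on)
    # and invariant 'x <= b' (time cannot pass b without firing).
    return TAEdge(name=t.name,
                  guard=f"{clock} >= {t.earliest}",
                  invariant=f"{clock} <= {t.latest}")
```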
Nevertheless, we see clearly that the transformations on case studies tend naturally to preserve concurrency. Moreover, this property is appreciated because it improves the readability of the transformation and makes the transformed model faithful to the initial one and to the modeled system. But these ad hoc transformations are difficult to generalize. Thus, not surprisingly, the first works on the formal comparison of the expressiveness of different models  ,  ,   ,  ,  ,  ,  did not take preservation of concurrency into account. These works make extensive use of tricks that destroy concurrency and focus only on the preservation of the sequential (interleaving) timed semantics.
In contrast, we aim at formalizing and automating translations that preserve both the timed semantics and the concurrent semantics. This effort is crucial for extending concurrency-oriented methods, originally developed for logical time, to real time. In fact, validation and management, in a broad sense, of distributed systems is not realistic in general without understanding and controlling their real-time dependent features; the link between real-time and logical-time behaviors is thus crucial for many aspects of MExICo's work.
Weighted Automata and Weighted Logics
Participants: Benedikt Bollig, Paul Gastin.
Time and probability are only two facets of quantitative phenomena. A generic concept of adding weights to qualitative systems is provided by the theory of weighted automata  . They allow one to treat probabilistic or reward models in a unified framework. Unlike finite automata, which are based on the Boolean semiring, weighted automata build on more general structures such as the natural or real numbers (equipped with the usual addition and multiplication) or the probabilistic semiring. Hence, a weighted automaton associates with any possible behavior a weight beyond the usual Boolean classification of "acceptance" or "non-acceptance". Weighted automata have given rise to a well-established theory and come, e.g., with a characterization in terms of rational expressions, which generalizes the famous theorem of Kleene from the unweighted setting. Equipped with a solid theoretical basis, weighted automata finally found their way into numerous application areas such as natural language processing and speech recognition, or digital image compression.
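The semiring semantics just described can be made concrete with the usual matrix formulation: each letter carries a matrix of transition weights, the weight of a run is the product (semiring multiplication) of its transition weights, and the weight of a word sums (semiring addition) over all runs. The sketch below is generic in the two semiring operations; the two-state automaton and its probabilities are invented for illustration. Instantiating plus/times/zero with or/and/False recovers plain Boolean acceptance.

```python
from functools import reduce

def word_weight(word, weights, init, final, plus, times, zero):
    """Weight of `word` in a weighted automaton given by per-letter
    transition matrices: the `plus`-sum over all runs of the `times`-product
    of transition weights, computed by iterated vector-matrix products."""
    n = len(init)
    vec = init
    for a in word:
        m = weights[a]
        vec = [reduce(plus, (times(vec[i], m[i][j]) for i in range(n)), zero)
               for j in range(n)]
    return reduce(plus, (times(vec[j], final[j]) for j in range(n)), zero)

# Probabilistic semiring (+, *) on a hypothetical two-state automaton:
# on letter 'a', state 0 stays with probability 0.5 or moves to the
# accepting state 1 with probability 0.5; state 1 is absorbing.
weights = {"a": [[0.5, 0.5],
                 [0.0, 1.0]]}
prob = word_weight("aa", weights,
                   init=[1.0, 0.0], final=[0.0, 1.0],
                   plus=lambda x, y: x + y,
                   times=lambda x, y: x * y,
                   zero=0.0)
```

After reading "aa", the automaton is in the accepting state with probability 0.5 + 0.5 * 0.5 = 0.75, which is the computed word weight.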
What is still missing in the theory of weighted automata are satisfactory connections with verification-related issues such as (temporal) logic and bisimulation that could lead to a general approach to the corresponding satisfiability and model-checking problems. A first step towards a more satisfactory theory of weighted systems was taken in  . That paper, however, does not give final solutions to all the aforementioned problems; it identifies directions for future research that we will be tackling.