Section: New Results
Dependable Distributed Real-Time Embedded Systems
Participants: Javier Cámara Moreno, Gwenaël Delaval, Pascal Fradet, Alain Girault [contact person], Gregor Gössler, Bertrand Jeannet, Emil Dumitrescu.
Static Multiprocessor Scheduling with Tradeoff Between Performance and Reliability
We have extended our work on bicriteria (length, reliability) scheduling [55], [59] in two directions. The first direction takes power consumption into account as a third criterion to be minimized. We have designed a scheduling heuristic called TSH that, given a software application graph and a multiprocessor architecture, produces a static multiprocessor schedule optimizing three criteria: its length (crucial for real-time systems), its reliability (crucial for dependable systems), and its power consumption (crucial for autonomous systems). Our tricriteria scheduling heuristic, TSH, uses the active replication of the operations and the data-dependencies to increase the reliability, and uses dynamic voltage scaling to lower the power consumption. By setting a bound on the minimal reliability and a bound on the maximal power consumption, and by varying these two bounds, TSH produces a Pareto surface of the best compromises found in the 3D space (length, reliability, power consumption). TSH is implemented within the SynDEx tool. This work is conducted in collaboration with Hamoudi Kalla (University of Batna, Algeria) and Ismail Assayad (University of Casablanca, Morocco).
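The bound-sweeping construction of the Pareto surface can be sketched as follows. This is a minimal illustration, not the TSH algorithm itself: `schedule_length` is a hypothetical stand-in for one run of the scheduler under a pair of bounds, and the cost model inside it is invented purely for the demo.

```python
import itertools

def schedule_length(rel_bound, pow_bound):
    """Hypothetical stand-in for one scheduler run: best schedule
    length found under a minimal-reliability bound and a maximal-power
    bound, or None when the two bounds cannot both be met."""
    if rel_bound >= 0.999 and pow_bound <= 2.0:
        return None  # too demanding: no feasible schedule
    # tighter bounds cost schedule length (illustrative model only)
    return 10.0 + 50.0 * rel_bound + 10.0 / pow_bound

def pareto_surface(rel_bounds, pow_bounds):
    """Sweep both bounds, as TSH does, and keep the feasible
    (length, reliability, power) points of the 3D surface."""
    return [(schedule_length(r, p), r, p)
            for r, p in itertools.product(rel_bounds, pow_bounds)
            if schedule_length(r, p) is not None]

surface = pareto_surface([0.9, 0.99, 0.999], [2.0, 4.0, 8.0])
```

Each feasible point of `surface` is one best compromise; infeasible bound combinations are simply absent from the surface.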
The second direction studies the mapping of chains of tasks on multiprocessor platforms. We have proposed mapping-by-interval techniques, where the chain of tasks is divided into a sequence of intervals, each interval being executed on a different processor in a pipelined manner, and each processor executing no more than one interval. Because of this pipelined execution, we have two antagonistic criteria: the input/output latency and the period. Then, to increase the reliability, we replicate the intervals by mapping them onto several processors. We have proved that, for homogeneous platforms, computing a mapping that optimizes the reliability alone is polynomial, but that optimizing both the reliability and the period is NP-complete, as is optimizing both the reliability and the latency. For heterogeneous platforms, we have proved that optimizing the reliability alone is already NP-complete, and hence all the multicriteria mapping problems that include the reliability among their criteria are also NP-complete [16]. Finally, we have proposed heuristics to find solutions in the NP-complete cases. This work is done in collaboration with Anne Benoit, Fanny Dufossé, and Yves Robert (ENS Lyon and Graal team).
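The three criteria of an interval mapping can be evaluated as sketched below. This is a simplified model, not our actual cost functions: it assumes a homogeneous platform (unit speed) and a constant per-replica failure probability per interval, with the period being the longest interval time, the latency their sum, and the reliability the product over intervals of the probability that at least one replica succeeds.

```python
def interval_metrics(task_costs, intervals, replicas, fail_prob):
    """Evaluate a pipelined interval mapping of a task chain.
    intervals: consecutive (start, end) index pairs into task_costs;
    replicas[i]: number of processors interval i is replicated on."""
    times = [sum(task_costs[s:e]) for s, e in intervals]
    reliability = 1.0
    for k in replicas:
        # interval succeeds if at least one of its k replicas does
        reliability *= 1.0 - fail_prob ** k
    return max(times), sum(times), reliability

period, latency, rel = interval_metrics(
    [2, 3, 1, 4, 2, 2],   # execution cost of each task in the chain
    [(0, 3), (3, 6)],     # two consecutive intervals
    [2, 1],               # first interval replicated on 2 processors
    0.01)                 # assumed per-replica failure probability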
Unlike most work found in the literature, all our contributions are truly bicriteria, in the sense that users can gain several orders of magnitude in the reliability of their schedule thanks to the active replication of tasks onto processors. In contrast, most other algorithms do not replicate tasks, and hence have a very limited impact on the reliability.
Automating the Addition of Fault Tolerance with Discrete Controller Synthesis
We have defined a new framework for the automatic design of fault-tolerant embedded systems, based on discrete controller synthesis (DCS), a formal approach based on the same state-space exploration algorithms as model-checking [80]. Its interest lies in the ability to obtain automatically systems that satisfy, by construction, formal properties specified a priori. Our aim is to demonstrate the feasibility of this approach for fault tolerance. We start with a fault-intolerant program, modeled as the synchronous parallel composition of finite labeled transition systems. We formally specify a fault hypothesis, state fault-tolerance requirements, and use DCS to obtain automatically a program that has the same behavior as the initial fault-intolerant one in the absence of faults, and that satisfies the fault-tolerance requirements under the fault hypothesis. Our original contribution resides in the demonstration that DCS can be elegantly used to design fault-tolerant systems, with guarantees on key properties of the obtained system, such as the fault-tolerance level, the satisfaction of quantitative constraints, and so on. We have shown with numerous examples taken from case studies that our method can address different kinds of failures (crash, value, or Byzantine) affecting different kinds of hardware components (processors, communication links, actuators, or sensors). Besides, we have shown that our method also offers an optimality criterion, very useful to synthesize fault-tolerant systems compliant with the constraints of embedded systems, such as power consumption. In summary, our framework for fault tolerance has the following advantages [58]:

The automation, because DCS automatically produces a fault-tolerant system from an initial fault-intolerant one.

The separation of concerns, because the fault-intolerant system can be designed independently from the fault-tolerance requirements.

The flexibility, because, once the system is entirely modeled, it is easy to try several fault hypotheses, several environment models, several fault-tolerance goals, several degraded modes, and so on.

The safety, because, in case of a positive result obtained by DCS, the specified fault-tolerance properties are guaranteed by construction on the controlled system.

The optimality, when optimal synthesis is used, modulo potential numerical equalities (hence a non-strict optimality).
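At the heart of safety-oriented DCS is a greatest-fixpoint computation over the state space: states from which an uncontrollable move (e.g. a fault) can escape the safe set are pruned, and the controller disables the controllable transitions that leave the remaining set. The sketch below is a deliberately tiny enumerative version of this idea (real DCS tools work symbolically); the three-state fault example is invented for illustration.

```python
def synthesize(states, bad, trans):
    """Safety DCS on an explicit transition system.
    trans maps a state to a list of (controllable, successor)."""
    safe = set(states) - set(bad)
    changed = True
    while changed:
        changed = False
        for s in list(safe):
            # prune s if an uncontrollable move can leave the safe set
            if any(nxt not in safe
                   for ctrl, nxt in trans.get(s, [])
                   if not ctrl):
                safe.discard(s)
                changed = True
    # the controller keeps only controllable moves staying safe
    controller = {s: [nxt for ctrl, nxt in trans.get(s, [])
                      if ctrl and nxt in safe]
                  for s in safe}
    return safe, controller

# toy fault model: the fault 'ok' -> 'degraded' is uncontrollable;
# the repair and the risky reconfiguration are controllable
trans = {'ok':       [(False, 'degraded'), (True, 'ok')],
         'degraded': [(True, 'ok'), (True, 'crash')]}
safe, controller = synthesize(['ok', 'degraded', 'crash'],
                              ['crash'], trans)
```

Here the synthesized controller forbids the reconfiguration `'degraded' -> 'crash'` while leaving the fault itself unconstrained, which is exactly the "same behavior in the absence of faults" guarantee described above, in miniature.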
In collaboration with Emil Dumitrescu (INSA Lyon), Hervé Marchand (Vertecs team from Rennes), and Éric Rutten (Sardes team from Grenoble), we have extended this work in the direction of optimal synthesis considering weights cumulated along bounded-length paths, and its application to the control of sequences of reconfigurations. We have adapted our models to take into account the additive costs of, e.g., execution time or power consumption, and we have adapted the synthesis algorithms to support the association of costs with transitions and the handling of these new weight functions in the optimal synthesis. We therefore combine, on the one hand, guarantees on the safety of the execution by tolerating faults, and, on the other hand, guarantees on the worst-case cumulated consumption of the resulting dynamically reconfiguring fault-tolerant system [19].
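The bounded-length weight functions can be pictured as a Bellman-style recursion: the controller picks the controllable move minimizing the worst cumulated cost over a finite horizon, while uncontrollable moves are resolved adversarially. This sketch is only illustrative (it further assumes each state offers either only controllable or only uncontrollable moves, and the example system and costs are invented).

```python
def worst_case_cost(trans, cost, horizon, state):
    """Worst-case cumulated weight over `horizon` steps, with the
    controller minimizing and the environment maximizing.
    trans[s]: list of (controllable, successor);
    cost[(s, n)]: additive weight (e.g. energy) of the transition."""
    if horizon == 0:
        return 0.0
    succ = [(c, cost[(state, n)]
             + worst_case_cost(trans, cost, horizon - 1, n))
            for c, n in trans[state]]
    unc = [v for c, v in succ if not c]
    # adversary resolves uncontrollable moves; controller minimizes
    return max(unc) if unc else min(v for c, v in succ)

trans = {'a': [(True, 'b'), (True, 'c')],   # controller's choice
         'b': [(False, 'a')],               # environment moves
         'c': [(False, 'a')]}
cost = {('a', 'b'): 1.0, ('a', 'c'): 3.0,
        ('b', 'a'): 2.0, ('c', 'a'): 1.0}
bound = worst_case_cost(trans, cost, 2, 'a')
```

Optimal synthesis then retains, in each state, the controllable moves achieving this minimal worst-case bound, yielding the guarantee on worst-case cumulated consumption mentioned above.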
In collaboration with Tolga Ayav (University of Izmir, Turkey), we are also working on an AOP approach for fault tolerance. This is described in detail in Section 6.6.3.
Synthesis of Switching Controllers using Approximately Bisimilar Multiscale Abstractions
The use of discrete abstractions for continuous dynamics has become standard in hybrid systems design (see e.g. [85] and the references therein). The main advantage of this approach is that it offers the possibility to leverage controller synthesis techniques developed in the areas of supervisory control of discrete-event systems [80] or algorithmic game theory [32]. The first attempts to compute discrete abstractions for hybrid systems were based on traditional behavioral relationships between systems, such as simulation or bisimulation [78], initially proposed for discrete systems, most notably in the area of formal methods. These notions require inclusion or equivalence of observed behaviors, which is often too restrictive when dealing with systems observed over metric spaces. For such systems, a more natural abstraction requirement is to ask for closeness of observed behaviors. This leads to the notions of approximate simulation and bisimulation introduced in [53].
These notions enabled the computation of approximately equivalent discrete abstractions for several classes of dynamical systems, including nonlinear control systems with or without disturbances, and switched systems. These approaches are based on a sampling of time and space, where the sampling parameters must satisfy some relation in order to obtain abstractions of a prescribed precision. In particular, the smaller the time sampling parameter, the finer the lattice used for approximating the state space; this may result in abstractions with a very large number of states when the sampling period is small. However, there are a number of applications where sampling has to be fast, even though this is generally necessary only on a small part of the state space.
In [17] we have presented a novel class of multiscale discrete abstractions for incrementally stable switched systems that allows us to deal with fast switching while keeping the number of states in the abstraction at a reasonable level. We assume that the controller of the switched system has to decide the control input and the time period during which it will be applied before the controller executes again. In this context, it is natural to consider abstractions where transitions have various durations. For transitions of longer duration, it is sufficient to consider abstract states on a coarse lattice. For transitions of shorter duration, it becomes necessary to use finer lattices. These finer lattices are effectively used only on a restricted area of the state space where the fast switching occurs.
These abstractions allow us to use multiscale iterative approaches for controller synthesis as follows. An initial controller is synthesized based on the dynamics of the abstraction at the coarsest scale, where only transitions of longer duration are enabled. An analysis of this initial controller allows us to identify regions of the state space where transitions of shorter duration may be useful (e.g., to improve the performance of the controller). Then, the controller is refined by enabling transitions of shorter duration in the identified regions. The last two steps can be repeated until we are satisfied with the obtained controller.
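The synthesize/analyze/refine loop just described can be sketched as a skeleton. The three callbacks stand in for the actual abstraction-based tools and are entirely hypothetical, as is the toy demo at the end, which performs one refinement step before the analysis reports no further useful regions.

```python
def multiscale_synthesis(synthesize, analyze, refine, max_scale):
    """Iterative coarse-to-fine controller synthesis skeleton.
    synthesize: build the initial controller at the coarsest scale;
    analyze: return the regions where shorter transitions would help;
    refine: enable finer-scale transitions in those regions."""
    controller = synthesize(scale=0, regions=None)
    for scale in range(1, max_scale + 1):
        regions = analyze(controller)  # where finer switching pays off
        if not regions:
            break                      # satisfied with the controller
        controller = refine(controller, scale, regions)
    return controller

# toy demo with placeholder callbacks (hypothetical, for shape only)
demo = multiscale_synthesis(
    synthesize=lambda scale, regions: {'scale': 0},
    analyze=lambda c: ['fast-region'] if c['scale'] == 0 else [],
    refine=lambda c, scale, regions: {'scale': scale},
    max_scale=3)
```

The key point is that the finer scales are only ever enabled inside the regions returned by the analysis, keeping the abstraction small outside them.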
Modular Discrete Controller Synthesis
Discrete controller synthesis (DCS) [80] makes it possible to design programs in a mixed imperative/declarative way. Starting from a program with some degrees of freedom left by the programmer (e.g., free controllable variables), and a temporal property to enforce, which is not a priori satisfied by the initial program, DCS tools automatically compute, offline, a controller that constrains the program (e.g., by giving values to the controllable variables) such that, whatever the values of the inputs from the environment, the controlled program satisfies the temporal property.
Our motivation w.r.t. DCS concerns its modular application, improving the scalability of the technique by using contract enforcement and abstraction of components. Moreover, our aim is to integrate DCS into a compilation chain, and thereby improve its usability by programmers who are not experts in discrete control. This work has been implemented in the Heptagon/BZR language and compiler [49]. This work is done in collaboration with Hervé Marchand (Vertecs team from Rennes) and Éric Rutten (Sardes team from Grenoble).
The implemented tool allows the generation of the synthesized controller in the form of a Heptagon node, which can in turn be analyzed and compiled together with the Heptagon source from which it has been generated. This full integration allows this method to target different languages (currently C, Java, or VHDL), and enables its integrated use in different contexts.
Several case studies are currently being explored. In [18], we show how Heptagon/BZR can be used in an autonomic-systems context: system administrators have to manage the trade-off between system performance and energy-saving goals. Autonomic computing is a promising approach to automate the control of the QoS and of the energy consumed by a system. This paper investigates precisely the use of synchronous programming and discrete controller synthesis to automate the generation of a controller that enforces the required coordination between QoS and energy managers. We illustrate our approach by describing the coordination between an admission controller and an energy controller.