
## Section: New Results

### Real-Time multicore programming

Participants: Pascal Fradet, Alain Girault, Gregor Goessler, Xavier Nicollin, Sophie Quinton.

#### Dynamicity in dataflow models

Recent dataflow programming environments support applications whose behavior is characterized by dynamic variations in resource requirements. The high expressive power of the underlying models (e.g., Kahn Process Networks or the CAL actor language) makes it challenging to ensure predictable behavior. In particular, checking liveness (i.e., no part of the system will deadlock) and boundedness (i.e., the system can be executed in finite memory) is known to be hard or even undecidable for such models. This situation is troublesome for the design of high-quality embedded systems. In the past few years, we have proposed several parametric dataflow models of computation.

We have written a survey that provides a comprehensive description of the existing parametric dataflow MoCs (constructs, constraints, properties, static analyses) and compares them using a common example [10]. The main objectives are to help designers of streaming applications to choose the most suitable model for their needs and to pave the way for the design of new parametric MoCs.

We have studied symbolic analyses of dataflow graphs [11]. Symbolic analyses express the system performance as a function of parameters (e.g., input and output rates, execution times). Such functions can be quickly evaluated for each configuration or checked against different quality-of-service requirements. These analyses are useful for parametric MoCs, for partially specified graphs, and even for completely static SDF graphs. Our analyses compute the maximal throughput of acyclic synchronous dataflow (SDF) graphs, the minimum buffer sizes for which as-soon-as-possible (ASAP) scheduling achieves this throughput, and the corresponding input-output latency of the graph.
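As a small, concrete illustration of the kind of static analysis involved, the sketch below solves the SDF balance equations to compute the repetition vector of a graph, the usual first step before throughput and buffer analyses. The graph encoding (actor names and `(src, dst, prod, cons)` edge tuples) is our own toy representation, not the formalism of [11].

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def repetition_vector(actors, edges):
    """Solve the SDF balance equations r[u]*p == r[v]*c for every edge
    (u, v, p, c); return the smallest positive integer solution, or
    None if the graph is inconsistent or disconnected."""
    rate = {actors[0]: Fraction(1)}
    pending = list(edges)
    while pending:
        progress = False
        for e in list(pending):
            u, v, p, c = e
            if u in rate and v in rate:
                if rate[u] * p != rate[v] * c:
                    return None            # inconsistent: no bounded schedule
                pending.remove(e)
                progress = True
            elif u in rate:                # propagate rate along the edge
                rate[v] = rate[u] * Fraction(p, c)
                progress = True
            elif v in rate:
                rate[u] = rate[v] * Fraction(c, p)
                progress = True
        if not progress:
            return None                    # disconnected: not handled here
    # Scale the rational solution to the smallest integer vector.
    lcm_den = reduce(lambda a, b: a * b // gcd(a, b),
                     (rate[a].denominator for a in actors), 1)
    ints = [int(rate[a] * lcm_den) for a in actors]
    g = reduce(gcd, ints)
    return {a: n // g for a, n in zip(actors, ints)}
```

For the chain A →(2,3)→ B →(1,2)→ C, the smallest consistent solution fires A three times, B twice, and C once per iteration.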

We have proposed an original method to deal with lossy communication channels in dataflow graphs. Lossy channels intrinsically violate the dataflow model of computation, yet many real-life applications, for instance IoT applications, encounter some form of lossy channel. The challenge is then to manage the retransmissions of lost or corrupted tokens. The solution we have proposed decomposes the execution of the dataflow graph into three phases: (i) an upstream phase where all the actors before the lossy channel are executed as usual; (ii) a lossy phase where only the two actors linked by the lossy channel are executed, as many times as required until all the tokens are correctly transmitted; and (iii) a downstream phase where all the actors after the lossy channel are executed as usual. When a graph includes several lossy channels, the situation becomes more complex. We rely on the Boolean parameters of BPDF [32] to encode enabling conditions on channels so that the execution follows this upstream-lossy-downstream semantics [12].
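The three-phase semantics can be illustrated by a deliberately simple simulation (our own toy, not the BPDF encoding of [12]): a single chain with one lossy channel, where the lossy phase retransmits each token until it gets through.

```python
import random

def run_iteration(tokens, loss_prob, rng, max_retries=100):
    """One iteration of a chain  src --(lossy)--> dst, executed in three
    phases: (i) upstream, src produces its tokens; (ii) lossy, each token
    is resent until it is transmitted uncorrupted; (iii) downstream, dst
    consumes them.  Returns the number of transmission attempts used."""
    produced = list(range(tokens))            # (i) upstream phase
    received, attempts = [], 0
    for tok in produced:                      # (ii) lossy phase
        for _ in range(max_retries):
            attempts += 1
            if rng.random() >= loss_prob:     # token transmitted correctly
                received.append(tok)
                break
        else:
            raise RuntimeError("retry budget exhausted")
    assert received == produced               # (iii) downstream phase may fire
    return attempts
```

With a perfect channel the lossy phase uses exactly one attempt per token; with losses, the downstream phase still starts only once every token has arrived.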

We are now studying models allowing dynamic reconfigurations of the topology of dataflow graphs. This would be of interest for C-RAN and 5G telecommunication applications. This is one of the research topics of Arash Shafiei's PhD, carried out in collaboration with Orange Labs.

#### Synthesis of switching controllers using approximately bisimilar multiscale abstractions

The use of discrete abstractions for continuous dynamics has become standard in hybrid systems design (see e.g. [71] and the references therein). The main advantage of this approach is that it makes it possible to leverage controller synthesis techniques developed in the area of supervisory control of discrete-event systems [66]. The first attempts to compute discrete abstractions for hybrid systems were based on traditional behavioral relationships between systems, such as simulation or bisimulation, initially proposed for discrete systems, most notably in the area of formal methods. These notions require inclusion or equivalence of observed behaviors, which is often too restrictive when dealing with systems observed over metric spaces. For such systems, a more natural abstraction requirement is closeness of observed behaviors. This leads to the notions of approximate simulation and bisimulation introduced in [45].

These approaches are based on a sampling of time and space, where the sampling parameters must satisfy some relation in order to obtain abstractions of a prescribed precision. In particular, the smaller the time sampling parameter, the finer the lattice used for approximating the state space; this may result in abstractions with a very large number of states when the sampling period is small. However, in a number of applications sampling has to be fast, even though this is generally necessary only on a small part of the state space. We have been exploring two approaches to overcome this state-space explosion [4].

We are currently investigating an approach using mode sequences of given length as symbolic states for our abstractions. By using mode sequences of variable length we are able to adapt the granularity of our abstraction to the dynamics of the system, so as to automatically trade off precision against controllability of the abstract states.
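The idea of variable-length mode sequences can be illustrated on a toy scalar switched system (our own example, with assumed contraction rates; this is a loose sketch, not the actual multiscale construction of [4]): a sequence is extended, i.e. the symbolic state is refined, only while the interval over-approximating its reachable set is too coarse.

```python
A = {0: 0.5, 1: 0.25}    # per-mode contraction rate (assumption)
B = {0: 1.0, 1: -1.0}    # per-mode affine drift (assumption)
X0 = (-2.0, 2.0)         # initial interval of states

def cell(seq):
    """Interval over-approximating the states reachable from X0 after
    firing the modes of `seq` in order (A[m] > 0, so interval endpoints
    map to endpoints under x -> A[m]*x + B[m])."""
    lo, hi = X0
    for m in seq:
        lo, hi = A[m] * lo + B[m], A[m] * hi + B[m]
    return lo, hi

def adapt(eps):
    """Use mode sequences as symbolic states: extend (refine) a sequence
    only while its cell is wider than eps.  The result is a set of
    variable-length states, coarse where the dynamics contract strongly
    and finer elsewhere."""
    leaves, done = [(0,), (1,)], []
    while leaves:
        w = leaves.pop()
        lo, hi = cell(w)
        if hi - lo <= eps:
            done.append(w)
        else:
            leaves += [w + (0,), w + (1,)]   # one-mode extensions
    return done
```

Because mode 1 contracts faster than mode 0 here, sequences through mode 1 stay short while sequences of repeated mode 0 must be extended, which is exactly the precision/state-count trade-off mentioned above.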

#### Schedulability of weakly-hard real-time systems

We have developed an extension of sensitivity analysis for budgeting in the design of weakly-hard real-time systems [18]. During design, it often happens that some parts of a task set are fully specified while other parameters, e.g., regarding recovery or monitoring tasks, will be available only much later. In such cases, sensitivity analysis can help anticipate how these missing parameters influence the behavior of the whole system, so that a resource budget can be allocated to them. Our extension derives task budgets for systems with both hard and weakly-hard requirements. The approach has been validated on synthetic test cases and on a realistic case study provided by our partner Thales.
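In its simplest form, sensitivity analysis searches for the largest value of an unknown parameter that keeps the system schedulable. The sketch below (our own simplification, using the classic fixed-priority response-time test rather than the weakly-hard analysis of [18]) binary-searches the WCET budget of a yet-unspecified task.

```python
from math import ceil

def response_time(i, tasks):
    """Classic FPP response-time iteration R = C_i + sum ceil(R/T_j)*C_j
    over higher-priority tasks.  `tasks` is a list of (C, T) pairs sorted
    by decreasing priority, with deadline = period.  Returns None on a
    deadline miss."""
    C, T = tasks[i]
    R = C
    while True:
        R_new = C + sum(ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
        if R_new == R:
            return R
        if R_new > T:
            return None
        R = R_new

def schedulable(tasks):
    return all(response_time(i, tasks) is not None for i in range(len(tasks)))

def wcet_budget(tasks, slot, period, lo=0, hi=None):
    """Largest integer WCET budget for a task inserted at priority
    position `slot` with the given period, found by binary search on the
    schedulability test (schedulability is monotone in the WCET)."""
    hi = hi if hi is not None else period
    while lo < hi:
        mid = (lo + hi + 1) // 2
        trial = tasks[:slot] + [(mid, period)] + tasks[slot:]
        if schedulable(trial):
            lo = mid
        else:
            hi = mid - 1
    return lo
```

For instance, with tasks (C=1, T=4) and (C=1, T=6), a lowest-priority monitoring task of period 12 can be granted a budget of up to 7 time units.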

Finally, in collaboration with TU Braunschweig and Daimler, we have worked on the application to the automotive industry of the Logical Execution Time (LET) paradigm, according to which data are read and written at predefined time instants. Specifically, we have bridged the gap between LET as originally proposed [59] and its current use in the automotive industry. One interesting outcome of this research is that LET combines nicely with the use of TWCA. This work has not been published yet.
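The key property of LET can be shown in a few lines (a minimal sketch under our own assumptions, with an arbitrary producer period of 5): a value computed by a job becomes visible only at the end of that job's logical interval, independently of when the job actually finished.

```python
T = 5  # producer period (assumption for this toy example)

def visible(t, values):
    """values[k] is the output of the producer job released at k*T.
    Under LET this output is published exactly at time (k+1)*T, so a
    read at time t observes the job whose interval closed last,
    regardless of execution jitter inside the interval."""
    k = t // T - 1          # index of the last interval already closed
    return values[k] if k >= 0 else None
```

This time-determinism of reads and writes is what makes LET attractive for multi-rate automotive software, since data flow no longer depends on scheduling details.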

#### A Markov Decision Process approach for energy minimization policies

In the context of independent real-time sporadic jobs running on a single-core processor equipped with Dynamic Voltage and Frequency Scaling (DVFS), we have proposed a Markov Decision Process (MDP) approach to compute the scheduling policy that dynamically chooses the voltage and frequency level of the processor such that each job meets its deadline and the total energy consumption is minimized. We distinguish two cases: the finite case (there is a fixed time horizon) and the infinite case. In the finite case, several offline solutions exist, all of which use complete knowledge of the jobs that will arrive within the time horizon [74], i.e., their sizes and deadlines. This is clearly unrealistic in the embedded context, where the characteristics of the jobs are not known in advance. For that context, an online policy called Optimal Available (OA) has been proposed in [30]. Our goal was to improve on this result by taking into account the statistical characteristics of the upcoming jobs. When such information is available (for instance by profiling the jobs based on execution traces), we have proposed several speed policies that optimize the expected energy consumption. We have shown that this general constrained optimization problem can be modeled as an unconstrained MDP by choosing a proper state space that also encodes the constraints of the problem. In particular, this implies that the optimal speed at each time can be computed by a dynamic programming algorithm, and that the optimal speed at any time $t$ is a deterministic function of the current state at time $t$ [21]. This is the topic of Stephan Plassart's PhD, funded by the CASERM Persyval project.
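The finite-horizon case can be illustrated by a deliberately tiny model (our own toy, not the actual formulation of [21]): unit jobs with relative deadline 2 arrive with a fixed probability, the processor picks an integer speed each step, and energy follows an assumed cubic power model. Backward dynamic programming then yields the optimal expected energy and a deterministic speed policy per state.

```python
from itertools import product

POWER = lambda s: s ** 3   # convex power model (assumption)
P_ARR = 0.5                # probability a unit job (deadline 2) arrives
H = 10                     # finite time horizon

def optimal_policy():
    """Backward dynamic programming on the MDP whose state (u1, u2) holds
    the pending work due at the end of this step (u1) and of the next
    step (u2).  The speed s must cover u1 (hard deadline); leftover work
    slides one deadline closer; a new unit job may arrive.  Returns the
    expected-energy value function V and the speed policy at time 0."""
    states = list(product(range(3), range(2)))
    V = {x: 0.0 for x in states}              # terminal value
    policy = {}
    for t in reversed(range(H)):
        V_new, pol_t = {}, {}
        for (u1, u2) in states:
            best = None
            for s in range(u1, u1 + u2 + 1):  # deadline forces s >= u1
                left = u1 + u2 - s            # slides to the next deadline
                cost = POWER(s) + sum(
                    pa * V[(left, arr)]
                    for pa, arr in ((1 - P_ARR, 0), (P_ARR, 1)))
                if best is None or cost < best[0]:
                    best = (cost, s)
            V_new[(u1, u2)], pol_t[(u1, u2)] = best
        V, policy = V_new, pol_t
    return V, policy
```

The returned policy is exactly a deterministic function of the current state, as stated above; states where the deadline leaves no slack (e.g., all pending work due now) force the corresponding speed.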

#### Formal proofs for schedulability analysis of real-time systems

We have started to lay the foundations for computer-assisted formal verification of schedulability analysis results. Specifically, we contribute to Prosa  [26], a foundational Coq library of reusable concepts and proofs for real-time schedulability analysis. A key scientific challenge is to achieve a modular structure of proofs for response time analysis. We intend to use this library for:

1. a better understanding of the role played by some assumptions in existing proofs;

2. a formal comparison of different analysis techniques; and

3. the verification of proof certificates generated by instrumenting (existing and efficient) analysis tools.

Two schedulability analyses for uniprocessor systems have been formalized and mechanically verified in Coq for:

• sporadic task sets scheduled according to the Time Division Multiple Access (TDMA) policy.

• periodic task sets with offsets scheduled according to the Fixed Priority Preemptive (FPP) policy [15].

The analysis for TDMA mainly served to familiarize ourselves with the Prosa library. Schedulability analysis in the presence of offsets is a non-trivial problem with a high computational complexity: in contrast to the traditional (offset-oblivious) analysis, many scenarios must be tested and compared to identify the worst case. We have formalized and proved in Coq the basic analysis presented by Tindell [72]. This has allowed us to: (1) underline implicit assumptions made in Tindell's informal analysis; (2) ease the generalization of the verified analysis; and (3) generate a certifier and an analyzer. We are evaluating these two tools in terms of computational complexity and implementation effort, in order to provide a good solution for guaranteeing the schedulability of industrial systems.
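Why offsets force the exploration of many scenarios can be seen with a brute-force stand-in for Tindell's analytical test (our own sketch, exact only for the simulated window): simulating FPP over a couple of hyperperiods examines every activation pattern that the offsets produce, instead of a single synchronous critical instant.

```python
from functools import reduce
from math import gcd

def wcrt_with_offsets(tasks, horizon=None):
    """Worst-case response times of synchronous periodic tasks with
    offsets under FPP, by unit-time simulation over two hyperperiods.
    `tasks` is a list of (C, T, O) triples sorted by decreasing priority
    (index = priority).  Jobs still pending at the horizon are ignored."""
    H = reduce(lambda a, b: a * b // gcd(a, b), (T for _, T, _ in tasks))
    horizon = horizon or 2 * H
    jobs = []
    for prio, (C, T, O) in enumerate(tasks):  # release all jobs in window
        t = O
        while t < horizon:
            jobs.append({"prio": prio, "rel": t, "rem": C})
            t += T
    wcrt = [0] * len(tasks)
    for t in range(horizon):                  # run highest-priority pending job
        ready = [j for j in jobs if j["rel"] <= t and j["rem"] > 0]
        if ready:
            j = min(ready, key=lambda j: j["prio"])
            j["rem"] -= 1
            if j["rem"] == 0:
                wcrt[j["prio"]] = max(wcrt[j["prio"]], t + 1 - j["rel"])
    return wcrt
```

For tasks (C=1, T=4, O=0) and (C=2, T=6, O=1), the offset keeps the low-priority task's worst case at 3, reached only at some releases, which is precisely the scenario comparison the verified analysis performs analytically.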

In parallel, we have worked on a Coq formalization of Typical Worst Case Analysis (TWCA). We aim to provide certified generic results for weakly-hard real-time systems in the form of $\left(m,k\right)$ guarantees (a task may miss at most $m$ deadlines out of $k$ consecutive activations). So far, we have adapted the initial TWCA to arbitrary schedulers. The proof relies on a practical definition of the concept of busy window, which amounts to being able to perform a local response-time analysis. We provide such an instantiation for Fixed Priority Preemptive (FPP) schedulers, as in the original paper. Future work includes making the state-of-the-art TWCA suitable for formal proofs, exploring more complex systems (e.g., bounded buffers), and providing instantiations of our results for other scheduling policies.
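The $(m,k)$ property itself is easy to state operationally; the sketch below (a minimal illustration, unrelated to the Coq development) checks it on a concrete trace of deadline outcomes with a sliding window.

```python
def satisfies_mk(misses, m, k):
    """Check an (m, k) weakly-hard guarantee on a trace: in every window
    of k consecutive activations, at most m deadlines are missed.
    `misses` is a sequence of booleans (True = deadline miss)."""
    window = sum(misses[:k])            # misses in the first window
    if window > m:
        return False
    for i in range(k, len(misses)):     # slide the window one step
        window += misses[i] - misses[i - k]
        if window > m:
            return False
    return True
```

What TWCA provides, and what the formalization certifies, is an a priori bound of this form for all traces of the system, rather than an a posteriori check of a single trace.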