Section: New Results
Real-time multicore programming
Participants : Pascal Fradet, Alain Girault, Gregor Goessler, Xavier Nicollin, Sophie Quinton.
Dynamicity in dataflow models
Recent dataflow programming environments support applications whose behavior is characterized by dynamic variations in resource requirements. The high expressive power of the underlying models (e.g., Kahn Process Networks or the CAL actor language) makes it challenging to ensure predictable behavior. In particular, checking liveness (i.e., no part of the system will deadlock) and boundedness (i.e., the system can be executed in finite memory) is known to be hard or even undecidable for such models. This situation is troublesome for the design of high-quality embedded systems. In the past few years, we have proposed several parametric dataflow models of computation.
We have written a survey that provides a comprehensive description of the existing parametric dataflow MoCs (constructs, constraints, properties, static analyses) and compares them using a common example [10]. The main objectives are to help designers of streaming applications to choose the most suitable model for their needs and to pave the way for the design of new parametric MoCs.
We have studied symbolic analyses of dataflow graphs [11]. Symbolic analyses express the system performance as a function of parameters (i.e., input and output rates, execution times). Such functions can be quickly evaluated for each different configuration or checked w.r.t. different quality-of-service requirements. These analyses are useful for parametric MoCs, partially specified graphs, and even for completely static SDF graphs. Our analyses compute the maximal throughput of acyclic synchronous dataflow graphs, the minimum buffer sizes for which as-soon-as-possible (ASAP) scheduling achieves this throughput, and finally the corresponding input-output latency of the graph.
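As a concrete illustration of the kind of computation underlying such analyses, here is a minimal sketch (in Python; all names are hypothetical and this is not the tool from [11]) of solving the SDF balance equations to obtain the repetition vector, the standard first step before any throughput analysis:

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, num_actors):
    # edges: list of (src, dst, prod_rate, cons_rate).
    # Solve the balance equations r[src] * prod = r[dst] * cons over the
    # rationals, then scale to the smallest positive integer solution.
    r = [None] * num_actors
    r[0] = Fraction(1)
    changed = True
    while changed:
        changed = False
        for (s, d, p, c) in edges:
            if r[s] is not None and r[d] is None:
                r[d] = r[s] * p / c
                changed = True
            elif r[d] is not None and r[s] is None:
                r[s] = r[d] * c / p
                changed = True
    scale = lcm(*(x.denominator for x in r))
    return [int(x * scale) for x in r]
```

For example, a chain A -(2:3)-> B has repetition vector [3, 2]: three firings of A produce six tokens, exactly what two firings of B consume.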
We have proposed an original method to deal with lossy communication channels in dataflow graphs. Lossy channels intrinsically violate the dataflow model of computation. Yet, many real-life applications, for instance IoT applications, encounter some form of lossy channel. The challenge is how to manage retransmissions in case of lost or corrupted tokens. The solution that we have proposed decomposes the execution of the dataflow graph into three phases: (i) an upstream phase where all the actors before the lossy channel are executed as usual; (ii) a lossy phase where only the two actors linked by the lossy channel are executed, as many times as required until all the tokens are correctly transmitted; and (iii) a downstream phase where all the actors after the lossy channel are executed as usual. When a graph includes several lossy channels, things become more complex. We rely on the Boolean parameters of BPDF [32] to encode enabling conditions on channels so that the execution follows this upstream-lossy-downstream semantics [12].
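The three-phase semantics can be sketched as a small simulation. This is an illustrative reconstruction for a single lossy edge, not the BPDF encoding of [12]; all names are hypothetical:

```python
def execute_with_lossy_edge(produce, transmit_ok, consume, n_tokens):
    # Phase (i), upstream: run the producer side of the graph as usual.
    tokens = [produce(i) for i in range(n_tokens)]
    # Phase (ii), lossy: fire only the two actors around the lossy
    # channel, retransmitting each token until it gets through.
    attempts = 0
    received = []
    for tok in tokens:
        while True:
            attempts += 1
            if transmit_ok():
                received.append(tok)
                break
    # Phase (iii), downstream: run the consumer side as usual.
    return [consume(tok) for tok in received], attempts
```

In practice `transmit_ok` would model the channel, e.g. `lambda: random.random() > loss_prob`; the point is that phases (i) and (iii) never interleave with the retransmission loop.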
We are now studying models allowing dynamic reconfigurations of the topology of dataflow graphs. This would be of interest for CRAN and 5G telecommunication applications. This is one of the research topics of Arash Shafiei's PhD, in collaboration with Orange Labs.
Synthesis of switching controllers using approximately bisimilar multiscale abstractions
The use of discrete abstractions for continuous dynamics has become standard in hybrid systems design (see e.g., [71] and the references therein). The main advantage of this approach is that it offers the possibility to leverage controller synthesis techniques developed in the area of supervisory control of discrete-event systems [66]. The first attempts to compute discrete abstractions for hybrid systems were based on traditional behavioral relationships between systems, such as simulation or bisimulation, initially proposed for discrete systems, most notably in the area of formal methods. These notions require inclusion or equivalence of observed behaviors, which is often too restrictive when dealing with systems observed over metric spaces. For such systems, a more natural abstraction requirement is to ask for closeness of observed behaviors. This leads to the notions of approximate simulation and bisimulation introduced in [45].
These approaches are based on sampling of time and space, where the sampling parameters must satisfy some relation in order to obtain abstractions of a prescribed precision. In particular, the smaller the time sampling parameter, the finer the lattice used for approximating the state-space; this may result in abstractions with a very large number of states when the sampling period is small. However, there are a number of applications where sampling has to be fast, though this is generally necessary only on a small part of the state-space. We have been exploring two approaches to overcome this state-space explosion [4].
We are currently investigating an approach using mode sequences of given length as symbolic states for our abstractions. By using mode sequences of variable length we are able to adapt the granularity of our abstraction to the dynamics of the system, so as to automatically trade off precision against controllability of the abstract states.
Schedulability of weakly-hard real-time systems
We focus on the problem of computing tight deadline miss models for real-time systems, which bound the number of potential deadline misses in a given sequence of activations of a task. In practical applications, such guarantees are often sufficient because many systems are in fact not hard real-time [3].
We have developed an extension of sensitivity analysis for budgeting in the design of weakly-hard real-time systems [18]. During design, it often happens that some parts of a task set are fully specified, while other parameters, e.g., regarding recovery or monitoring tasks, will be available only much later. In such cases, sensitivity analysis can help anticipate how these missing parameters can influence the behavior of the whole system, so that a resource budget can be allocated to them. Our extension derives task budgets for systems with hard and weakly-hard requirements. This approach has been validated on synthetic test cases and on a realistic case study provided by our partner Thales.
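The kind of budget search that sensitivity analysis performs can be sketched as a binary search over the budget of a placeholder task. This is only an illustration of the principle: the utilization-bound test below is a deliberately simplistic stand-in for the actual weakly-hard analysis of [18], and all names are hypothetical:

```python
def max_budget(base_tasks, period, is_schedulable, hi=1000):
    # Largest WCET budget C of a placeholder task with the given period
    # such that base_tasks + [(C, period)] remains schedulable.
    lo = 0
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if is_schedulable(base_tasks + [(mid, period)]):
            lo = mid          # budget mid is feasible, try larger
        else:
            hi = mid - 1      # budget mid breaks schedulability
    return lo

def util_test(tasks):
    # Toy schedulability test: total utilization at most 1.
    return sum(C / T for (C, T) in tasks) <= 1.0
```

Binary search is sound here because schedulability is monotone in the budget: if a budget C breaks the system, so does any larger one.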
A second contribution in this area is the application of our method for computing deadline miss models, called Typical Worst-Case Analysis (TWCA), to systems with finite queue capacity [9]. Finite ready queues, implemented by buffers, are a system reality in embedded real-time computing systems and networks. The dimensioning of queues is subject to constraints in industrial practice, and often the queue capacity is sufficient for typical system behavior but not in peak overload conditions. This may lead to overflow and consequently to the discarding of jobs. In this work, we have explored whether finite queue capacity can also be used as a design parameter to reduce workload peaks and thus shorten a transient overload phase. We have proposed an analysis method which is, to the best of our knowledge, the first one able to give (a) worst-case response-time guarantees as well as (b) weakly-hard guarantees for tasks executed on a computing system with finite queues. Experimental results show that finite queue capacity may have only a weak overload-limiting effect. This unexpected outcome can be explained by the system behavior in worst-case corner cases. The analysis nevertheless shows that a trade-off between weakly-hard guarantees and queue sizes is possible.
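The overflow mechanism at stake can be illustrated by a toy discrete-time simulation (hypothetical names; this is not the analysis of [9], only the system model it reasons about):

```python
from collections import deque

def simulate_queue(arrivals, service, capacity):
    # arrivals[t]: number of jobs released at step t; `service`: jobs
    # completed per step. Jobs arriving at a full queue are discarded.
    q = deque()
    dropped = 0
    for t, n in enumerate(arrivals):
        for _ in range(n):
            if len(q) < capacity:
                q.append(t)       # remember release time of queued job
            else:
                dropped += 1      # overflow: job is discarded
        for _ in range(min(service, len(q))):
            q.popleft()
    return dropped, len(q)
```

A burst of three jobs into a queue of capacity two loses one job but leaves the backlog, and hence the transient overload phase, shorter, which is exactly the trade-off studied in [9].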
Finally, in collaboration with TU Braunschweig and Daimler, we have worked on the application to the automotive industry of the Logical Execution Time (LET) paradigm, according to which data are read and written at predefined time instants. Specifically, we have bridged the gap between LET as it was originally proposed [59] and its current use in the automotive industry. One interesting outcome of this research is that LET can nicely be combined with TWCA. This work has not been published yet.
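The fixed read and write instants that define LET can be sketched as follows, for periodic tasks whose LET interval equals their period (an assumption of this sketch; names are hypothetical):

```python
def let_schedule(tasks, horizon):
    # Logical Execution Time: each task logically reads its inputs at the
    # start of its period and publishes its outputs exactly at the end,
    # regardless of when the computation actually runs in between.
    events = []
    for (name, period) in tasks:
        for k in range(0, horizon, period):
            events.append((k, name, "read"))
            events.append((k + period, name, "write"))
    return sorted(events)
```

Because the communication instants are fixed by construction, end-to-end data propagation becomes independent of scheduling jitter, which is what makes LET attractive for automotive software integration.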
A Markov Decision Process approach for energy minimization policies
In the context of independent real-time sporadic jobs running on a single-core processor equipped with Dynamic Voltage and Frequency Scaling (DVFS), we have proposed a Markov Decision Process (MDP) approach to compute the scheduling policy that dynamically chooses the voltage and frequency level of the processor such that each job meets its deadline and the total energy consumption is minimized. We distinguish two cases: the finite case (there is a fixed time horizon) and the infinite case. In the finite case, several offline solutions exist, which all use the complete knowledge of all the jobs that will arrive within the time horizon [74], i.e., their size and deadlines. But clearly this is unrealistic in the embedded context, where the characteristics of the jobs are not known in advance. For that setting, an online policy called Optimal Available (OA) has been proposed in [30]. Our goal was to improve on this result by taking into account the statistical characteristics of the upcoming jobs. When such information is available (for instance by profiling the jobs based on execution traces), we have proposed several speed policies that optimize the expected energy consumption. We have shown that this general constrained optimization problem can be modeled as an unconstrained MDP by choosing a proper state space that also encodes the constraints of the problem. In particular, this implies that the optimal speed at each time can be computed using a dynamic programming algorithm, and that the optimal speed at any time $t$ is a deterministic function of the current state at time $t$ [21]. This is the topic of Stephan Plassart's PhD, funded by the CASERM Persyval project.
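The backward dynamic programming computation over such a state space can be sketched on a simplified deterministic instance: a single job with known remaining work and deadline, discrete speeds, and a quadratic power model (the convex power model and all names are assumptions of this sketch, not the model of [21]). The stochastic case replaces the recursion body with an expectation over job arrivals:

```python
from functools import lru_cache

SPEEDS = (0, 1, 2)        # available processor speeds (assumption)

def energy(s):
    # Convex power model, here quadratic (assumption).
    return s * s

def min_energy(work, horizon):
    # Minimal energy to complete `work` units of execution within
    # `horizon` time steps, by backward dynamic programming.
    INF = float("inf")

    @lru_cache(maxsize=None)
    def v(w, t):
        if w <= 0:
            return 0.0        # job finished: no more energy needed
        if t == 0:
            return INF        # deadline missed: infeasible state
        # Bellman recursion: try every speed for the current step.
        return min(energy(s) + v(max(0, w - s), t - 1) for s in SPEEDS)

    return v(work, horizon)
```

The convexity of the power model is what makes spreading the work out (two steps at speed 1, energy 2) cheaper than rushing (one step at speed 2, energy 4).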
Formal proofs for schedulability analysis of real-time systems
We have started to lay the foundations for computerassisted formal verification of schedulability analysis results. Specifically, we contribute to Prosa [26], a foundational Coq library of reusable concepts and proofs for realtime schedulability analysis. A key scientific challenge is to achieve a modular structure of proofs for response time analysis. We intend to use this library for:

a better understanding of the role played by some assumptions in existing proofs;

the verification of proof certificates generated by instrumenting (existing and efficient) analysis tools.
Two schedulability analyses for uniprocessor systems have been formalized and mechanically verified in Coq for:

sporadic task sets scheduled according to the Time Division Multiple Access (TDMA) policy.

periodic task sets with offsets scheduled according to the Fixed Priority Preemptive (FPP) policy [15].
The analysis for TDMA has mainly served to familiarize ourselves with the Prosa library. Schedulability analysis in the presence of offsets is a non-trivial problem with a high computational complexity. In contrast to the traditional (offset-oblivious) analysis, many scenarios must be tested and compared to identify which one represents the worst-case scenario. We have formalized and proved in Coq the basic analysis presented by Tindell [72]. This has allowed us to: (1) underline implicit assumptions made in Tindell's informal analysis; (2) ease the generalization of the verified analysis; (3) generate a certifier and an analyzer. We are evaluating these two tools in terms of computational complexity and implementation effort, in order to provide a good solution for guaranteeing schedulability of industrial systems.
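For reference, the classical offset-oblivious FPP response-time analysis that the offset-aware one generalizes is a simple fixed-point iteration; the offset-aware analysis must additionally enumerate candidate worst-case scenarios. A sketch with hypothetical names:

```python
import math

def response_time(tasks, i, limit=10_000):
    # tasks: list of (C, T) pairs sorted by decreasing priority,
    # C = worst-case execution time, T = period; task i is analyzed.
    # Iterate R = C_i + sum_{j higher prio} ceil(R / T_j) * C_j
    # until it converges (or exceeds a divergence bound).
    C_i = tasks[i][0]
    R = C_i
    while True:
        nxt = C_i + sum(math.ceil(R / T_j) * C_j for (C_j, T_j) in tasks[:i])
        if nxt == R:
            return R          # fixed point reached: worst-case response time
        if nxt > limit:
            return None       # diverges: not schedulable within the bound
        R = nxt
```

The fixed point is the worst-case response time under the critical-instant assumption (all tasks released simultaneously), which is precisely the pessimism that offset-aware analyses such as Tindell's remove.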
In parallel, we have worked on a Coq formalization of Typical Worst-Case Analysis (TWCA). We aim to provide certified generic results for weakly-hard real-time systems in the form of $(m,k)$ guarantees (a task may miss at most $m$ deadlines out of $k$ consecutive activations). So far, we have adapted the initial TWCA to arbitrary schedulers. The proof relies on a practical definition of the concept of busy window, which amounts to being able to perform a local response-time analysis. We provide such an instantiation for Fixed Priority Preemptive (FPP) schedulers, as in the original paper. Future work includes making the state-of-the-art TWCA suitable for formal proofs, exploring more complex systems (e.g., bounded buffers), and providing instantiations of our results for other scheduling policies.
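Operationally, checking an $(m,k)$ guarantee on a concrete execution trace is a sliding-window test; a minimal checker (hypothetical names, for illustration only):

```python
def satisfies_mk(misses, m, k):
    # misses: boolean sequence, True meaning the corresponding
    # activation missed its deadline. The (m, k) guarantee holds iff
    # every window of k consecutive activations has at most m misses.
    return all(sum(misses[i:i + k]) <= m
               for i in range(len(misses) - k + 1))
```

For instance, a trace alternating hit and miss satisfies $(1,2)$ but two consecutive misses violate it; the certified TWCA results bound such windows for all possible traces rather than checking one.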