We work on the safe design of real-time control systems, an area at the intersection of control theory and computer science. Application domains are typically safety-critical, as in transportation (avionics, railways), production, medical, or energy production systems. Needed are both formal models and methods for the construction of correct systems, and their implementation in computer-assisted design tools targeted at specialists of the applications. We contribute solutions all along the design flow, from specification to implementation: we develop techniques for the specification and automated generation of safe real-time executives for control systems, as well as static analysis techniques to check additional properties on the generated systems. Our research themes concern:

implementations of synchronous reactive programs, generated automatically by compilation, particularly from the point of view of automatic distribution (in relation with the Heptagon and Lucid Synchrone languages);

high-level design and programming methods, with support for automated code generation, including the automated generation of correct controllers using discrete control synthesis (in relation with the Sigali synthesis tool);

static analysis and abstract interpretation techniques, which are applied both to low-level synchronous models/programs and to more general imperative programs; this includes the verification of general safety properties and the absence of runtime errors.

Our applications are in embedded systems, typically in the robotics, automotive, and telecommunications domains, with a special emphasis on dependability issues (*e.g.*, fault tolerance, availability).
International and industrial relations feature:

an IST European FP7 network of excellence: ArtistDesign;

an FP7 European STREP project: Combest;

an Artemisia European project: Cesar;

three ANR French projects: Asopt (on static analysis), AutoChem (on chemical programming), and Vedecy (on cyber-physical systems);

a Minalogic Pôle de Compétitivité project: OpenTLM, dedicated to the design flow for next-generation SoC and SystemC;

an Inria large-scale action, Synchronics, on a language platform for embedded system design;

an Inria associated team with the University of Auckland (New Zealand), called Afmes.

The context of our work is the area of embedded real-time control systems, at the intersection between control theory and computer science. Our contribution consists of methods and tools for their safe design. The systems we consider are intrinsically safety-critical because of the interaction between the embedded, computerized controller, and a physical process having its own dynamics. What is important is to analyze and design the safe behavior of the whole system, which introduces an inherent complexity. This is even more crucial in the case of systems whose malfunction can have catastrophic consequences, for example in transport systems (avionics, trains), production, medical, or energy production systems.

Therefore, there is a need for methods and tools for the design of safe systems. The definition of adequate mathematical models of system behavior allows the definition of formal calculi; these in turn form a basis for the construction of algorithms for the analysis of specifications, but also for their transformation towards an implementation. The algorithms can then be implemented in software environments made available to users. A necessary complement is the setting-up of software engineering, programming, modeling, and validation methodologies. These problems motivate significant research activity internationally, in particular in the European IST network of excellence ArtistDesign (Advanced Real-Time Systems).

The state of the art upon which we base our contributions is twofold.

From the point of view of discrete control, there is a body of theoretical results and tools, in particular in the synchronous approach, often founded on finite or infinite labeled transition systems. Over the past years, methodologies for formal verification, control synthesis, and compilation, as well as extensions to timed and hybrid systems, have been developed. Asynchronous models consider the interleaving of events or messages, and are often applied in the field of telecommunications, in particular for the study of protocols. A well-known formalism for reactive systems is StateCharts, which can be encoded in a synchronous model.

From the point of view of verification, we use the methods and tools of symbolic model checking and of abstract interpretation. From symbolic model checking, we reuse BDD techniques for manipulating Boolean functions and sets, and their MTBDD extension for more general functions. Abstract interpretation is used to formalize complex static analyses, in particular when one wants to analyze the possible values of the variables and pointers of a program. Abstract interpretation is a theory of the approximate solving of fix-point equations applied to program analysis. Most program analysis problems, among which reachability analysis, come down to solving a fix-point equation on the state space of the program. The exact computation of such an equation is generally impossible for undecidability (or complexity) reasons. The fundamental principles of abstract interpretation are: (i) to substitute for the state space of the program a simpler domain, and to transpose the equation accordingly (static approximation); and (ii) to use extrapolation (widening) to force the convergence of the iterative computation of the fix-point in a finite number of steps (dynamic approximation). Examples of static analyses based on abstract interpretation are linear relation analysis and shape analysis.
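The two principles can be illustrated concretely. The sketch below (plain Python, not one of the analyzers mentioned here; all names are illustrative) runs an interval analysis of the loop `x = 0; while x < 100: x += 1`, using widening to force convergence of the fix-point iteration:

```python
# Sketch: interval abstract interpretation of `x = 0; while x < 100: x += 1`.
# Sets of concrete values of x are abstracted by intervals (lo, hi);
# widening extrapolates unstable bounds to infinity so that the iterative
# fix-point computation terminates in a finite number of steps.

INF = float("inf")

def join(a, b):                      # least upper bound of two intervals
    return (min(a[0], b[0]), max(a[1], b[1]))

def widen(a, b):                     # extrapolate bounds that keep growing
    lo = a[0] if a[0] <= b[0] else -INF
    hi = a[1] if a[1] >= b[1] else INF
    return (lo, hi)

def step(x):                         # abstract effect of one loop iteration
    lo, hi = x
    hi = min(hi, 99)                 # meet with the guard x < 100
    return (lo + 1, hi + 1)          # abstract increment

x = (0, 0)                           # abstract value at the loop head
while True:
    nxt = widen(x, join(x, step(x)))
    if nxt == x:
        break
    x = nxt
print(x)   # (0, inf): stable after two iterations instead of 100
```

Real analyzers usually follow widening with a descending (narrowing) phase to recover the tighter loop-head invariant 0 ≤ x ≤ 100.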

The synchronous approach

The design of safe real-time control systems is difficult for several reasons: the complexity stemming from the number of interacting components, their parallelism, the different time scales involved (continuous or discrete), and the distance between the various theoretical concepts and results that allow the study of different aspects of their behaviors and the design of controllers.

A currently very active research direction focuses on the models and techniques that allow the automatic use of formal methods. In the field of verification, this concerns in particular the technique of model checking. The verification takes place after the design phase, and requires, in case of problematic diagnostics, expensive backtracks on the specification. We want to provide a more constructive use of formal models, employing them to derive correct executives by formal computation and synthesis, integrated in a compilation process. We therefore use models throughout the design flow from specification to implementation, in particular by automatic generation of embeddable executives.

Applicative needs initially come from the fields of safety-critical systems (avionics, energy) and complex systems (telecommunications), embedded in an environment with which they strongly interact (comprising aspects of computer science and control theory). Fields with less criticality, or which support variable degrees of quality of service, such as in the multi-media domain, can also take advantage of methodologies that improve the quality and reliability of software, and reduce the costs of test and correction in the design.

Industrial acceptance, dissemination, and deployment of formal techniques inevitably depend on the usability of such techniques by specialists in the application domain, and not in the formal techniques themselves, and also on their integration in the whole design process, which involves very different problems and techniques. Application domains where the actors are ready to employ specialists in formal methods or advanced control theory are still uncommon. Even there, design methods based on the systematic application of these theoretical results are not yet mature. In fields like industrial control, where the use of PLCs (Programmable Logic Controllers) is dominant, this question can be decisive.

Essential elements in this direction are the proposal of realistic formal models, validated by experiments, of the usual entities in control theory, and of functionalities (*i.e.*, algorithms) that correspond to services genuinely useful for the designer: for example, compilation and optimization taking into account the execution platforms, possible failures, or the interactions between the specified automatic control and its implementation. A notable sign of industrial need is the activity of the Athys company (now belonging to Dassault Systèmes) concerning the development of a specialized programming environment, CellControl, which integrates synchronous tools for compilation and verification, tailored to the application domain. In these areas, there are functionalities that commercial tools do not have yet, and to which our results contribute.

We are proposing effective trade-offs between, on the one hand, expressiveness and formal power, and on the other hand, usability and automation. We focus on the area of specification and construction of correct real-time executives for discrete and continuous control, while keeping an interest in tackling major open problems, relating to the deployment of formal techniques in computer science, especially at the border with control theory. Regarding the applications, we propose new automated functionalities, to be provided to the users in integrated design and programming environments.

The objective of the Pop Art team is the **safe design of real-time control systems**. This area is related to control theory as well as computer science. Application domains are typically safety-critical systems, as in transportation (avionics, railways), production, medical, or energy production systems. Both methods and formal models for the construction of correct systems are needed. Such methods must be implemented in computer-assisted design tools, targeted at specialists of the application domains.

Our contribution is to propose solutions covering the entire design flow, from the specification to the implementation. We develop techniques for the specification and automated generation of safe real-time executives for control systems, as well as static analysis techniques to check additional properties on the generated systems.

The integration of formal methods in an automated process of generation/compilation is founded on the formal modeling of the considered mechanisms. This modeling is the basis for automation, which operates on models well suited to efficient exploitation by analysis and synthesis techniques that are otherwise difficult for end-users to apply directly.

The creation of easily usable models aims at giving the user the role of a pilot rather than of a mechanic, *i.e.*, at offering her/him pre-defined functionalities that respond to concrete demands, for example in the generation of fault-tolerant or distributed executives, through dedicated environments and languages.

The proposal of validated models with respect to their faithful representation of the application domain is done through case studies in collaboration with our partners, where the typical multidisciplinarity of questions across control theory and computer science is exploited.

The overall consistency of our approach comes from the
fact that the main research directions address, under
different aspects, the specification and generation of safe
real-time control executives based on
*formal models*.

We explore this field by linking the techniques we use, on the one hand, with the functionalities we want to offer, on the other. We are interested in questions related to the following areas.

We investigate two main directions: (i) compositional analysis and design techniques; (ii) adapter synthesis and converter verification.

Programming for embedded real-time systems is considered within Pop Art along three axes: (i) synchronous programming languages, (ii) aspect-oriented programming, (iii) static analysis (type systems, abstract interpretation, ...).

Here we address the following research axes: (i) static multiprocessor scheduling for fault-tolerance, (ii) multi-criteria scheduling for reliability, (iii) automatic program transformations, (iv) formal methods for fault-tolerant real-time systems.

Component-based construction techniques are crucial to overcome the complexity of embedded systems design. However, two major obstacles need to be addressed: the heterogeneous nature of the models, and the lack of results to guarantee the correctness of the composed system.

The heterogeneity of embedded systems comes from the need to integrate components using different models of computation, communication, and execution, on different levels of abstraction and different time scales. The BIP component framework has been designed, in cooperation with Verimag, to support this heterogeneous nature of embedded systems.

Our work focuses on the underlying analysis and construction algorithms, in particular compositional techniques and approaches ensuring correctness by construction (adapter synthesis, strategy mapping). This work is motivated by the strong need for formal, heterogeneous component frameworks in embedded systems design.

Programming for embedded real-time systems is considered along three directions: (i) synchronous programming languages to implement real-time systems; (ii) aspect-oriented programming to specify non-functional properties separately from the base program; (iii) abstract interpretation to ensure safety properties of programs at compile time. We advocate the need for well-defined programming languages to design embedded real-time systems with correct-by-construction guarantees, such as bounded-time and bounded-memory execution. Our original contribution resides in programming languages inheriting features from both synchronous languages and functional languages. In collaboration with Marc Pouzet (ENS Ulm, Parkas team), we have designed the programming language Heptagon, the key features of which are: a data-flow formal synchronous semantics, strong typing, and modular compilation. In particular, we are working on type systems for the clock calculus and on spatial modular distribution.
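As an analogy (plain Python, not Heptagon syntax or its compiled code), a data-flow node with a `fby` (followed-by) delay can be seen as a step function with bounded internal memory, computing one logical instant per call:

```python
# Sketch of synchronous dataflow execution: each call to `step` computes one
# logical instant with bounded memory, mimicking a node written in a
# Heptagon-like style:
#   node counter(reset : bool) returns (n : int)
#     n = 0 fby (if reset then 0 else n + 1);
# (the node and its semantics here are an illustrative analogy only)

class Counter:
    def __init__(self):
        self.state = 0                 # the single memory introduced by `fby`

    def step(self, reset: bool) -> int:
        n = self.state                 # output = value delayed by one instant
        self.state = 0 if reset else n + 1
        return n

node = Counter()
outs = [node.step(r) for r in [False, False, True, False, False]]
print(outs)   # [0, 1, 2, 0, 1]
```

Each `step` call corresponds to one synchronous reaction; the only memory is the single integer introduced by the delay, which is what makes bounded-memory execution checkable at compile time.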

The goal of Aspect-Oriented Programming (AOP) is to isolate aspects (such as security, synchronization, or error handling) that cross-cut the program's basic functionality and whose implementation usually yields tangled code. In AOP, such aspects are specified separately and integrated into the program by an automatic transformation process called weaving. Although this paradigm has great practical potential, it still lacks formalization, and undisciplined uses make reasoning on programs very difficult. Our work on AOP addresses these issues by studying its foundations (semantics, analysis, verification) and by considering domain-specific aspects (availability or fault-tolerance aspects) as formal properties.

Finally, the aim of the verification activity in Pop Art is to check (safety) properties on programs, with emphasis on the analysis of the values of data variables (numerical variables, memory heap), mainly in the context of embedded and control-command systems, which exhibit concurrency features. The applications are not only the proof of functional properties on programs, but also test selection and generation, program transformation, controller synthesis, and fault tolerance. Our approach is based on abstract interpretation, which consists in inferring properties of the program by solving semantic equations on abstract domains. Much effort is spent on implementing the developed techniques in tools, for experimentation and dissemination.

Embedded systems must often satisfy safety critical constraints. We address this issue by providing methods and algorithms to design embedded real-time systems with guarantees on their fault-tolerance and/or reliability level.

A first research direction concerns the static multiprocessor scheduling of an application specification on a distributed target architecture. We increase the fault-tolerance level of the system by replicating the computations and the communications, and we schedule the redundant computations according to the faults to be tolerated. We also optimize the schedule *w.r.t.* several criteria, including the schedule length, the reliability, and the power consumption.

A second research direction concerns fault-tolerance management by reconfiguring the system (for instance by migrating the tasks that were running on a processor upon the failure of this processor), following objectives of fault tolerance, consistent execution, functionality fulfillment, and boundedness and optimality of response time. We base such formal methods on discrete controller synthesis.

A third research direction concerns the use of AOP to weave fault-tolerance aspects into programs and electronic circuits (seen as synthesizable HDL programs), as mentioned in the previous section.

Our applications are in the embedded systems area, typically: robotics, automotive, telecommunications, and systems on chip (SoC). In some areas, safety is critical and motivates the investment in formal methods and techniques for design. But even in less critical contexts, like telecommunications and multimedia, these techniques can be beneficial in improving the efficiency and quality of designs, as well as reducing the design, production, and test costs themselves.

Industrial acceptance of formal techniques, as well as their deployment, necessarily goes through their usability by specialists of the application domain, rather than of the formal techniques themselves. Hence our orientation towards the proposal of domain-specific (but generic) realistic models, validated through experience (*e.g.*, control task systems), based on formal techniques with a high degree of automation (*e.g.*, synchronous models), and tailored for concrete functionalities (*e.g.*, code generation).

The commercially available design tools (such as UML with real-time extensions, or Matlab/Simulink/dSpace) do not yet offer the formal functionalities we target.

Regarding the synchronous approach, commercial tools are available: Scade (based on Lustre) and Esterel Studio.

Regarding applications and case studies with industrial end-users of our techniques, we cooperate with STMicroelectronics on two topics: (i) compositional analysis and abstract interpretation for the TLM-based System-on-Chip design flow, and (ii) dynamic models of computation for streaming applications.

NBac (Numerical and Boolean Automaton Checker)

NBac is connected to two input languages: the synchronous dataflow language Lustre, and a symbolic automaton-based language, AutoC/Auto, in which a system is defined by a set of symbolic hybrid automata communicating via valued channels. It can perform reachability analysis, co-reachability analysis, and combinations of these analyses. The result of an analysis is either a verdict to a verification problem, or a set of states together with a necessary condition to stay in this set during an execution. NBac is founded on the theory of abstract interpretation: sets of states are approximated by abstract values belonging to an abstract domain, on which fix-point computations are performed.

It has been used for the verification and debugging of Lustre programs, and it is connected to the Lustre toolset.

The BIP component model (Behavior, Interaction, Priority) has been designed to support the construction of heterogeneous embedded systems involving different models of computation, communication, and execution, on different levels of abstraction. By separating the notions of behavior, interaction model, and execution model, it enables both heterogeneous modeling and separation of concerns.

The verification and design tool Prometheus implements the BIP component framework. Prometheus is regularly updated to implement new developments in the framework and the analysis algorithms. It has allowed us to carry out several complex case studies from the system-on-chip and bioinformatics domains.

We have been cooperating for several years with the Inria team Aoste (Inria Sophia-Antipolis and Rocquencourt) on the topic of fault tolerance and reliability of safety-critical embedded systems. In particular, we have implemented several new heuristics for fault tolerance and reliability within their software SynDEx. Our second scheduling heuristic is multi-criteria: it produces a static multiprocessor schedule such that the reliability is maximized, the power consumption is minimized, and the execution time is minimized. Our results on fault tolerance are summarized in a web page.

The Apron library aims to provide:

a uniform API for existing numerical abstract domains;

a higher-level interface to the client tools, by factorizing functionalities that are largely independent of abstract domains.

From an abstract domain implementor's point of view, the benefits of the Apron library are:

the ability to focus on core, low-level functionalities;

the help of generic services adding higher-level services for free.

For the client static analysis community, the benefits are a unified, higher-level interface that allows experimenting with, comparing, and combining abstract domains.

In 2010, the Taylor1plus domain, which is the underlying abstract domain of the tool Fluctuat, was integrated in Apron.

The BDDApron library handles both finite-type variables (*e.g.*, Booleans, enumerated types, n-bit integers) and numerical variables (integers, rationals, floating-point numbers). It first allows the manipulation of expressions that freely mix, using BDDs and MTBDDs, finite-type and numerical Apron expressions and conditions. It then provides abstract domains that combine BDDs and Apron abstract values for representing invariants holding on both finite-type variables and numerical variables.

The Apron library is written in ANSI C, with an object-oriented and thread-safe design. Both multi-precision and floating-point numbers are supported. A wrapper for the OCaml language is available, and a C++ wrapper is on the way. It has been distributed since June 2006 under the LGPL license and is available at http://

The BDDApron library is written in OCaml, using the polymorphism features of OCaml to make it generic. It is also thread-safe. It provides two different implementations of the same domain, each one presenting pros and cons depending on the application. It is currently used by the ConcurInterproc interprocedural and concurrent program analyzer.

We have developed a software tool chain allowing the specification of models, controller synthesis, and the execution or simulation of the results. It is based on existing synchronous tools, and thus consists primarily in the use and integration of Sigali.

Useful component templates and relevant properties can be materialized, on the one hand, by libraries of task models and, on the other hand, by properties and synthesis objectives.

Rapture is a verification tool for probabilistic systems described as communicating processes *à la* CSP. Processes can also manipulate local and global variables of finite type. Probabilistic reachability properties are specified by defining two sets of initial and final states together with a probability bound. The originality of the tool is to provide two reduction techniques that limit the state-space explosion problem: automatic abstraction and refinement algorithms, and the so-called essential-states reduction.

We also develop and maintain smaller libraries of general use for people working in the static analysis and abstract interpretation community.

ConcurInterproc extends Interproc with concurrency, for the analysis of multithreaded programs interacting via shared global variables. It is also deployed through a web interface.

Another variant extends Interproc with pointers to local variables. It is also deployed through a web interface.

Heptagon is a dataflow synchronous language, inspired from Lucid Synchrone.

Heptagon has been used to build BZR, which extends it with contract constructs. These contracts allow expressing dynamic temporal properties on the inputs and outputs of a Heptagon node. These properties are then enforced, within the compilation of a BZR program, by discrete controller synthesis, using the Sigali tool.

We have extended our work on bicriteria (length, reliability) scheduling in two directions. The first direction takes the power consumption into account as a third criterion to be minimized. We have designed a scheduling heuristic called TSH that, given a software application graph and a multiprocessor architecture, produces a static multiprocessor schedule optimizing three criteria: its *length* (crucial for real-time systems), its *reliability* (crucial for dependable systems), and its *power consumption* (crucial for autonomous systems). Our tricriteria scheduling heuristic TSH uses the *active replication* of the operations and the data-dependencies to increase the reliability, and uses *dynamic voltage scaling* to lower the power consumption. By setting a bound on the minimal reliability and a bound on the maximal power consumption, and by making these two bounds vary, we are able to produce with TSH a Pareto surface of the best compromises found in the 3D space (length, reliability, power consumption). TSH is implemented within the SynDEx tool. This work is conducted in collaboration with Hamoudi Kalla (University of Batna, Algeria) and Ismail Assayad (University of Casablanca, Morocco).
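The Pareto surface mentioned above is the set of non-dominated compromises. A minimal illustrative filter (not the TSH heuristic itself; the candidate triples are made up) over (length, failure probability, power) triples, all to be minimized:

```python
# Sketch: keep only Pareto-optimal schedules among candidate triples
# (length, failure_prob, power), all criteria to be minimized.

def dominates(a, b):
    """a dominates b if a is no worse on every criterion and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# invented candidate schedules: (length, failure probability, power)
candidates = [(10, 1e-4, 5.0), (12, 1e-5, 5.0), (10, 1e-4, 7.0), (15, 1e-3, 4.0)]
print(pareto_front(candidates))   # the third triple is dominated by the first
```

Varying the reliability and power bounds, as TSH does, amounts to sampling such a front point by point.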

The second direction studies the mapping of chains of tasks onto multiprocessor platforms. We have proposed *mapping by interval* techniques, where the chain of tasks is divided into a sequence of intervals, each interval being executed on a different processor in a pipelined manner, and each processor executing no more than one interval. Because of this pipelined execution, we have two antagonistic criteria: the input-output latency and the period. Then, to increase the reliability, we replicate the intervals by mapping them onto several processors. We have proved that, for homogeneous platforms, computing a mapping that optimizes the reliability alone is *polynomial*, but that optimizing both the reliability and the period is *NP-complete*, as is optimizing both the reliability and the latency. For heterogeneous platforms, we have proved that optimizing the reliability alone is already *NP-complete*, and hence all the multi-criteria mapping problems that include the reliability among their criteria are also *NP-complete*. Finally, we have proposed heuristics to find solutions in the NP-complete cases. This work is done in collaboration with Anne Benoit, Fanny Dufossé, and Yves Robert (ENS Lyon and Graal team).
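On a toy cost model (homogeneous processors, no communication costs; all numbers invented), the two antagonistic criteria of an interval mapping can be computed directly: the period is the load of the heaviest interval, and the latency is the sum of all interval loads.

```python
# Sketch: latency and period of a pipelined interval mapping of a task chain.
# costs[i] = execution time of task i; intervals = list of (start, end) index
# ranges, each interval running on its own processor (toy model, no comms).

def interval_loads(costs, intervals):
    return [sum(costs[s:e]) for (s, e) in intervals]

def period(costs, intervals):        # throughput bottleneck: heaviest stage
    return max(interval_loads(costs, intervals))

def latency(costs, intervals):       # input-output delay: sum of all stages
    return sum(interval_loads(costs, intervals))

costs = [2, 3, 1, 4, 2]
mapping = [(0, 2), (2, 4), (4, 5)]   # three intervals, three processors
print(period(costs, mapping), latency(costs, mapping))   # 5 12
```

Merging intervals lowers the latency contribution of pipelining but raises the period, which is the antagonism discussed above; replication for reliability then adds the third dimension.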

Unlike most work found in the literature, all our contributions are truly bicriteria, in the sense that the user can gain several orders of magnitude on the reliability of their schedule thanks to the active replication of tasks onto processors. In contrast, most of the other algorithms do not replicate the tasks, and hence have a very limited impact on the reliability.

We have defined a new framework for the *automatic* design of fault-tolerant embedded systems, based on discrete controller synthesis (DCS), a formal approach based on the same state-space exploration algorithms as model checking. Its interest lies in the ability to obtain automatically systems satisfying, by construction, formal properties specified *a priori*. Our aim is to demonstrate the feasibility of this approach for fault tolerance. We start with a fault-intolerant program, modeled as the synchronous parallel composition of finite labeled transition systems. We formally specify a fault hypothesis, state fault-tolerance requirements, and use DCS to obtain automatically a program having the same behavior as the initial fault-intolerant one in the absence of faults, and satisfying the fault-tolerance requirements under the fault hypothesis. Our original contribution resides in the demonstration that DCS can be elegantly used to design fault-tolerant systems, with guarantees on key properties of the obtained system, such as the fault-tolerance level, the satisfaction of quantitative constraints, and so on. We have shown with numerous examples taken from case studies that our method can address different kinds of failures (crash, value, or Byzantine) affecting different kinds of hardware components (processors, communication links, actuators, or sensors). Besides, we have shown that our method also offers an optimality criterion, very useful to synthesize fault-tolerant systems compliant with the constraints of embedded systems, like power consumption. In summary, our framework for fault tolerance has the following advantages:

The
**automation**, because DCS automatically produces a
fault-tolerant system from an initial fault-intolerant
one.

The
**separation of concerns**, because the fault
intolerant system can be designed independently from
the fault tolerance requirements.

The
**flexibility**, because, once the system is
entirely modeled, it is easy to try several fault
hypotheses, several environment models, several fault
tolerance goals, several degraded modes, and so on.

The
**safety**, because, in case of positive result
obtained by DCS, the specified fault tolerance
properties are guaranteed by construction on the
controlled system.

The
**optimality**, when optimal synthesis is used, modulo
potential numerical equalities (hence a non-strict
optimality).
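A minimal sketch of the DCS principle (an explicit-state toy, not Sigali, which works symbolically): compute the greatest set of states from which uncontrollable events, here modeling faults, cannot force the system out of the safe set, then disable the controllable transitions that leave this set.

```python
# Sketch of discrete controller synthesis on an explicit transition system.
# trans: (state, event) -> successor; events are controllable unless listed
# as uncontrollable. Greatest fix-point: a state stays "winning" if none of
# its uncontrollable moves leaves the winning set and it is not reduced to
# moves that all leave it.

def synthesize(states, trans, uncontrollable, safe):
    winning = set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(winning):
            succs = {e: t for (x, e), t in trans.items() if x == s}
            bad_unc = any(t not in winning
                          for e, t in succs.items() if e in uncontrollable)
            has_move = any(t in winning for t in succs.values()) or not succs
            if bad_unc or not has_move:
                winning.discard(s)
                changed = True
    # controller: allow only transitions staying inside the winning set
    allowed = {(s, e) for (s, e), t in trans.items()
               if s in winning and t in winning}
    return winning, allowed

trans = {("ok", "go"): "ok", ("ok", "risk"): "near", ("near", "fail"): "err",
         ("near", "back"): "ok"}
winning, allowed = synthesize({"ok", "near", "err"}, trans,
                              uncontrollable={"fail"}, safe={"ok", "near"})
print(sorted(winning))   # ['ok']
```

Here the uncontrollable `fail` event makes `near` losing, so the synthesized controller forbids the controllable `risk` move: the controlled system stays in `ok` by construction, which is exactly the safety-by-construction argument above.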

In collaboration with Emil Dumitrescu (INSA Lyon), Hervé Marchand (Vertecs team from Rennes), and Éric Rutten (Sardes team from Grenoble), we have extended this work in the direction of optimal synthesis considering weights accumulated along bounded-length paths, and its application to the control of sequences of reconfigurations. We have adapted our models in order to take into account additive costs such as execution time or power consumption, and adapted the synthesis algorithms in order to support the association of costs with transitions and the handling of these new weight functions in the optimal synthesis. We therefore combine, on the one hand, guarantees on the safety of the execution by tolerating faults, and on the other hand, guarantees on the worst cumulated consumption of the resulting dynamically reconfiguring fault-tolerant system.

In collaboration with Tolga Ayav (University of Izmir, Turkey), we are also working on an AOP approach to fault tolerance. This is described in detail in Section .

The use of discrete abstractions for continuous dynamics has become standard in hybrid systems design. The main advantage of this approach is that it offers the possibility to leverage controller synthesis techniques developed in the areas of supervisory control of discrete-event systems and algorithmic game theory. The first attempts to compute discrete abstractions for hybrid systems were based on traditional behavioral relationships between systems, such as simulation or bisimulation, initially proposed for discrete systems, most notably in the area of formal methods. These notions require inclusion or equivalence of observed behaviors, which is often too restrictive when dealing with systems observed over metric spaces. For such systems, a more natural abstraction requirement is to ask for closeness of observed behaviors. This leads to the notions of approximate simulation and bisimulation.

These notions enabled the computation of approximately equivalent discrete abstractions for several classes of dynamical systems, including nonlinear control systems with or without disturbances, and switched systems. These approaches are based on the sampling of time and space, where the sampling parameters must satisfy some relation in order to obtain abstractions of a prescribed precision. In particular, the smaller the time sampling parameter, the finer the lattice used for approximating the state space; this may result in abstractions with a very large number of states when the sampling period is small. However, there are a number of applications where sampling has to be fast, even though this is generally necessary only on a small part of the state space.

These abstractions allow us to use multiscale iterative approaches for controller synthesis as follows. An initial controller is synthesized based on the dynamics of the abstraction at the coarsest scale where only transitions of longer duration are enabled. An analysis of this initial controller allows us to identify regions of the state-space where transitions of shorter duration may be useful (e.g. to improve the performance of the controller). Then, the controller is refined by enabling transitions of shorter duration in the identified regions. The last two steps can be repeated until we are satisfied with the obtained controller.
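This coarse-to-fine loop can be sketched on an invented finite abstraction (three states, moves of ±2 at the coarse scale and ±1 at the fine one; in the actual work the abstractions and their precision guarantees are computed from the continuous dynamics):

```python
def safety_controller(states, safe, trans):
    # Greatest fixpoint of the safety game: keep the states from which
    # some enabled transition leads back into the winning set.
    W = set(states) & set(safe)
    changed = True
    while changed:
        changed = False
        for s in list(W):
            if not any(t in W for t in trans[s].values()):
                W.discard(s)
                changed = True
    return W

states, safe = [0, 1, 2], {0, 1, 2}
# Coarsest scale: only transitions of duration 2 (moves of +/-2);
# actions are (input, duration) pairs.
coarse = {s: {(+2, 2): s + 2, (-2, 2): s - 2} for s in states}
W_coarse = safety_controller(states, safe, coarse)       # {0, 2}

# Refine only where the coarse controller fails: enable the
# shorter-duration transitions in the identified (losing) region.
losing = safe - W_coarse                                 # {1}
multiscale = {s: dict(coarse[s]) for s in states}
for s in losing:
    multiscale[s].update({(+1, 1): s + 1, (-1, 1): s - 1})
W_multi = safety_controller(states, safe, multiscale)    # {0, 1, 2}
```

The fine transitions are added only in the losing region, so the refined abstraction stays small while the controlled (winning) region grows.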

Our motivation w.r.t. DCS concerns its modular application, improving the scalability of the technique by using contract enforcement and abstraction of components. Moreover, our aim is to integrate DCS into a compilation chain, and thereby improve its usability by programmers who are not experts in discrete control. This work has been implemented in the Heptagon/BZR language and compiler . It is done in collaboration with Hervé Marchand (Vertecs team, Rennes) and Éric Rutten (Sardes team, Grenoble).

The implemented tool allows the generation of the synthesized controller in the form of a Heptagon node, which can in turn be analyzed and compiled together with the Heptagon source from which it has been generated. This full integration allows this method to target different languages (currently C, Java, or VHDL), and to be used in different contexts.

Synchronous programming languages describe functionally centralized systems, where every value, input, output, or function is always directly available for every operation. However, most embedded systems are nowadays composed of several computing resources. The aim of this work is to provide a language-oriented solution to describe *functionally distributed reactive systems*. This research is conducted within the INRIA large scale action Synchronics and is joint work with Marc Pouzet (ENS, Parkas team from Saclay).

We are working on type systems to formalize, in a uniform way, both the clock calculus and the location calculus of a synchronous data-flow programming language (the Heptagon language, inspired by Lucid Synchrone ). On one hand, the clock calculus infers the clock of each variable in the program and checks clock consistency: e.g., a time-homogeneous function, like `+`, should not be applied to variables of different clocks. On the other hand, the location calculus infers the spatial distribution of computations and checks spatial consistency: e.g., a centralized operator, like `+`, should not be applied to variables located at different locations. Compared to the recent PhD of Gwenaël Delaval , , the goal is to achieve *modular* distribution. By modular, we mean that we want to compile each function of the program into a single function capable of running on any computing location. We make use of our uniform type system to express the computing locations as first-class abstract types, exactly like clocks, which allows us to compile a typed variable (typed by both the clock and the location calculi) into `if ... then ... else ...` structures, whose conditions are valuations of the clock and location variables.
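A rough sketch of this compilation scheme, with an invented node and invented location names (the real compiler works on typed Heptagon code and infers the locations): a node whose internal computations carry two abstract locations is compiled into a single function executable at either location, branching on the location value just as clocked code branches on clock values.

```python
# Hypothetical node `avg` whose two computations are typed with abstract
# locations "locA" and "locB".  Modular compilation yields ONE function,
# parameterized by the location it runs on.
def avg_step(here, x, y, channel):
    """One synchronous step, executed at location `here`.

    `channel` stands in for whatever communication primitive links the
    two locations (a plain dict here, for illustration only)."""
    if here == "locA":
        s = x + y             # computation placed on locA
        channel["s"] = s      # value emitted towards locB
        return None
    else:                     # here == "locB"
        s = channel["s"]      # value received from locA
        return s / 2          # computation placed on locB

channel = {}
avg_step("locA", 1, 3, channel)                  # run the locA slice
result = avg_step("locB", None, None, channel)   # run the locB slice
```

The same compiled function is deployed on every location; only the value of the location parameter differs, mirroring the treatment of clock variables.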

Model-based design (MBD) involves designing a model of a control system, simulating and debugging it with dedicated tools, and finally automatically generating code corresponding to this model. In the domain of embedded systems, it offers the huge advantage of avoiding the time-consuming and error-prone final coding phase. The main issue raised by MBD is the faithfulness of the generated code with respect to the initial model, the latter being defined by the simulation semantics. To bridge the gap between the high-level model and the low-level implementation, we use the synchronous programming language Lustre as an intermediate formal model . Concretely, starting from a high-level model specified in the de-facto standard Simulink, we proceed in two steps. First, we generate Lustre code along with some necessary structured “glue code”; this is based on new “meta-operators” that extend Lustre with the non-functional properties extracted from the Simulink model (related, e.g., to the activation conditions and to real time). Second, from this intermediate format written in Lustre with meta-operators, we generate embedded real-time code for the Xenomai RTOS.

We have continued our work on the Pret-C language (Precision Timed C) for predictable and lightweight multi-threading in C. Pret-C supports synchronous concurrency, preemption, and a high-level construct for logical time. In contrast to existing synchronous languages, Pret-C offers C-based shared-memory communication between concurrent threads that is guaranteed to be thread-safe. Thanks to the proposed synchronous semantics and to a Worst Case Reaction Time (WCRT) analyzer (not presented here), the mapping of logical time to physical time can be achieved much more easily than with plain C. Associated with the Pret-C programming language, we present a dedicated target architecture, called ARPRET, which combines a hardware accelerator with an existing softcore processor. This allows us to improve the throughput while preserving predictability. With extensive benchmarking, we have demonstrated that ARPRET not only achieves completely predictable execution of Pret-C programs, but also improves the throughput compared to the pure software execution of Pret-C. We have also shown that the Pret-C software approach is significantly more efficient than two other lightweight concurrent C variants (namely SC and Protothreads), as well as the well-known Esterel synchronous programming language , . This work has been done in collaboration with Partha Roop and Sidharta Andalam (University of Auckland).

We have studied the verification of hybrid systems built as the composition of a discrete software controller interacting with a physical environment exhibiting a continuous behavior. Our goal is to tackle the problem of the combinatorial explosion of discrete states that may happen when a complex software controller is considered. We propose to extend an existing abstract interpretation technique, namely dynamic partitioning, to hybrid systems. Dynamic partitioning, which shares some common principles with predicate abstraction, allows us to finely tune the tradeoff between precision and efficiency in the analysis.

We have extended the NBac tool (Section ) according to these principles, and shown the efficiency of the approach on a case study that combines a non-trivial controller specified in the synchronous dataflow programming language Lustre with its physical environment .

Acceleration methods are commonly used for computing precisely the effects of loops in the reachability analysis of counter machine models. Applying these methods to synchronous data-flow programs with Boolean and numerical variables, e.g., Lustre programs, first requires the enumeration of the Boolean states in order to obtain a control graph with numerical variables only. Second, acceleration methods have to deal with the non-determinism introduced by numerical input variables.

We addressed in the latter problem by extending the concept of abstract acceleration of Gonnord et al. , to numerical input variables. This extension raises some subtle points. We show how to accelerate loops composed of a translation with resets *and inputs*, provided that the guard of the loop constrains state and input variables separately, and we evaluate the gain in precision obtained with this method, compared to the more traditional approach based on widening. A journal version has been submitted to a special issue of the Journal of Symbolic Computation focusing on invariant generation and advanced techniques for reasoning about loops.
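The flavor of abstract acceleration with inputs can be conveyed by an interval toy version (the actual technique works on convex polyhedra and handles resets; the loop, bounds, and function name below are invented for illustration):

```python
def accelerate_translation(x0, guard_max, d):
    """Interval caricature of abstract acceleration for
           while x <= guard_max: x = x + input,   input in [d[0], d[1]]
    over the integers, assuming d[0] >= 1 and that the loop is entered
    (x0[0] <= guard_max).  Returns the loop invariant and the interval
    of x on loop exit, computed in one step instead of iterating with
    widening."""
    lo0, hi0 = x0
    dlo, dhi = d          # dlo kept for clarity of the signature
    # Reachable values: from the initial lower bound up to the last
    # guard-satisfying value plus the largest possible increment.
    inv = (lo0, max(hi0, guard_max + dhi))
    # On exit the guard is false: x > guard_max.
    exit_ = (guard_max + 1, max(hi0, guard_max + dhi))
    return inv, exit_

inv, exit_ = accelerate_translation((0, 0), 100, (1, 2))
# inv == (0, 102) and exit_ == (101, 102)
```

The accelerated transformer captures the effect of arbitrarily many iterations at once; a naive Kleene iteration with widening would first jump to an unbounded upper bound and then have to recover it by narrowing.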

We worked more recently on the first point. Our goal is to apply acceleration techniques to data-flow programs without resorting to an exhaustive enumeration of Boolean states. To this end, we are studying (1) methods for *applying abstract acceleration* to general control flow graphs, and (2) heuristics for *controlled partitioning*, i.e., partially unfolding the control structure in order to gain precision on numerical variables during the analysis while treating Boolean states symbolically as much as possible.

This work addresses the verification of properties of imperative programs with recursive procedure calls, heap-allocated storage, and destructive updating of pointer-valued fields, i.e., interprocedural shape analysis. It presents a way to apply some previously known approaches to interprocedural dataflow analysis — which in past work have been applied only to a much less rich setting — so that they can be applied to programs that use heap-allocated storage and perform destructive updating.

Our submission to ACM TOPLAS has been published this year . This work has been done in collaboration with T. Reps (Univ. of Wisconsin-Madison), M. Sagiv (Univ. of Tel Aviv), and A. Loginov (GrammaTech).

The purpose of shape analysis is to infer properties on the runtime structure of the memory heap. Like most static analyses, shape analyses perform approximations. One thus has to distinguish between the concrete memory model that a shape analysis tackles and the abstract memory model/representation used by the analysis to express properties. For instance, in and in , the concrete memory model is an unbounded 2-valued logical structure, and the abstract memory representation is a bounded 3-valued logical structure. Other analyses describe concrete (and abstract) memory models with separation-logic formulas .

These concrete models do actually abstract some properties, as they do not completely model the physical memory of a computer. For instance, the physical numerical addresses may be ignored, as is the case in , which cannot define the semantics of C pointer arithmetic.

We have studied the extension of the relational approach to the interprocedural analysis of sequential programs to concurrent programs composed of a fixed number of threads .

In the relational approach, a sequential program is analyzed by computing summaries of procedures and by propagating reachability information using these summaries. We propose an extension to concurrent programs, technically based on an instrumentation of the standard operational semantics, followed by an abstraction of tuples of call-stacks into sets. This approach allows us to extend relational interprocedural analysis to concurrent programs. We have implemented it for programs with scalar variables in the ConcurInterproc online analyzer (see § ).
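The summary-based relational view underlying this extension can be illustrated, in the sequential case, on a toy recursive procedure over a small finite domain (an invented example; explicit input/output pairs stand in for the abstract domains used by the actual analyzer):

```python
# Hypothetical recursive procedure:
#   proc f(n): if n <= 0 then 0 else f(n-1) + 1
def compute_summary(max_n):
    """Least fixpoint of the summary of f, represented extensionally as
    a set of (input, output) pairs over the domain [0, max_n]."""
    summary = set()
    changed = True
    while changed:
        changed = False
        for n in range(max_n + 1):
            if n <= 0:
                new_pairs = {(n, 0)}
            else:
                # Plug the current summary in at the recursive call site.
                new_pairs = {(n, r + 1) for (m, r) in summary if m == n - 1}
            if not new_pairs <= summary:
                summary |= new_pairs
                changed = True
    return summary

# The fixpoint discovers the relation output = input on [0, max_n];
# reachability information is then propagated using this summary
# instead of re-analyzing the procedure body at each call site.
```

In the concurrent extension, the summaries are combined with the set-of-stack-tuples abstraction described above.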

We have experimented with several classical synchronization protocols, both to investigate the precision of our technique and to analyze the approximations it performs.

This year a journal version has been submitted to SoSyM (Software and Systems Modeling) and is currently under revision. The journal version improves on the conference version with better notation and a generalization to backward analysis.

We also worked on new techniques for applying the widening extrapolation operator in the context of concurrent programs. This is the topic of the PhD of Lies Lakhdar-Chaouch, co-advised by Bertrand Jeannet and Alain Girault, and funded by OpenTLM. A conference paper is in preparation.

In a language with procedure calls and pointers as parameters, an instruction can modify memory locations anywhere in the call-stack. The presence of such side effects breaks most generic interprocedural analysis methods (such as the ones described in Sections and ), which assume that only the top of the stack may be modified.

We present a method that addresses this issue, based on the definition of an equivalent local semantics in which writing through pointers has a local effect on the stack. The idea of this local semantics, inspired by , is that a procedure works on local copies (called *external locations*) of the locations that it can reach through its pointer parameters. When the procedure returns to its caller, the side effects performed on such copies are propagated back to the corresponding locations in the caller, which may themselves be local or external w.r.t. their own caller.

Our second contribution in this context is an adequate representation of summary functions that model the effect of a procedure not only on the values of its scalar and pointer variables, but also on the values contained in pointed memory locations. Our implementation in the interprocedural analyzer PInterproc (see Section ) results in a verification tool that infers relational properties on the values of Boolean, numerical, and pointer variables.

This year we submitted a paper to the ESOP 2011 conference; it has been accepted .

The “right” way of writing and structuring compilers is well known. The situation is a bit less clear for static analysis tools. It seems to us that a static analysis tool is ideally decomposed into three building blocks: (1) a front-end, which parses programs, generates semantic equations, and supervises the analysis process; (2) a fixpoint equation solver, which takes equations and solves them; and (3) an abstract domain, on which equations are interpreted. The expected advantage of such a modular structure is the ability to share development efforts between analyzers for different languages, using common solvers and abstract domains. However, putting such ideal concepts into practice is not so easy, and some static analyzers merge, for instance, blocks (1) and (2).
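A minimal sketch of this three-block decomposition (a toy interval domain, a naive solver without widening, and hand-written equations standing in for the front-end; all names are invented):

```python
# Block (3): an abstract domain -- here a minimal interval domain,
# unaware of the analyzed language and of the solving strategy.
class IntervalDomain:
    BOT = None  # the empty interval

    def join(self, a, b):
        if a is None:
            return b
        if b is None:
            return a
        return (min(a[0], b[0]), max(a[1], b[1]))

    def leq(self, a, b):
        if a is None:
            return True
        if b is None:
            return False
        return b[0] <= a[0] and a[1] <= b[1]

# Block (2): a generic fixpoint solver, parameterized only by the
# equation system and the domain.  (A real solver such as Fixpoint
# would add widening; here the iteration sequence is finite.)
def solve(variables, equations, domain):
    env = {v: domain.BOT for v in variables}
    changed = True
    while changed:
        changed = False
        for v in variables:
            new = equations[v](env)
            if not domain.leq(new, env[v]):
                env[v] = domain.join(env[v], new)
                changed = True
    return env

# Block (1): a front-end would generate such equations from a program;
# these ones encode   x = 0; while x < 10: x = x + 1
dom = IntervalDomain()

def at_head(env):   # loop head: entry value joined with the back edge
    return dom.join((0, 0), env["body"])

def at_body(env):   # after the guard x < 10 and the increment x = x + 1
    h = env["head"]
    if h is None or h[0] > 9:
        return None
    return (h[0] + 1, min(h[1], 9) + 1)

inv = solve(["head", "body"], {"head": at_head, "body": at_body}, dom)
# inv["head"] is (0, 10): x ranges over [0, 10] at the loop head
```

Swapping in another domain or another front-end requires no change to the solver, which is the sharing argument made above.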

In , we describe how we instantiated these principles with three different static analyzers (addressing, respectively, imperative sequential programs (Interproc), imperative concurrent programs (ConcurInterproc), and synchronous dataflow programs (NBac)), a generic fixpoint solver (Fixpoint), and two different abstract domains (Apron and BddApron); see Sections , , and . We discuss our experience regarding the advantages and the limits of this approach compared to related work.

Protocol conversion deals with the automatic synthesis of an additional component or glue logic, often referred to as an *adaptor* or an *interface*, to bridge mismatches between interacting components, often referred to as *protocols*. A formal solution, called convertibility verification, has recently been proposed, which produces such a glue logic, termed a *converter*, so that the parallel composition of the protocols and the converter satisfies some desired specification. A converter is responsible for bridging different kinds of mismatches, such as *control* and *clock* mismatches. Mismatches are usually removed by the converter (similarly to controllers in supervisory control of discrete event systems) by *disabling undesirable paths* in the protocol composition. In , we defined a solution to the converter synthesis problem for control mismatches based on a refinement relation, called *Specification Enforcing Refinement* (SER), between a protocol composition and a desired specification.

We are currently working on a generalization of this convertibility verification problem in order to take data exchange — and hence, incompatibilities stemming from inconsistent protocol specifications of how data are exchanged — into account.

Contracts have first been introduced as a type system for classes : a method guarantees some post-condition under the assumption that its pre-condition is satisfied. In the component-based programming community, contracts are increasingly the focus of research as a means to achieve one of the main goals of the component paradigm, namely the deployment and reuse of components in different, *a priori* unknown contexts. As components may interact under various models of communication, the notion of contract has been generalized from pre- and post-conditions in the form of predicates to *behavioral interfaces* such as *interface automata* , allowing one to reason about the temporal behavior of the environments with which a component can be composed.

Typical embedded and distributed systems often encompass unreliable software or hardware components, as it may be technically or economically impossible to make a system entirely reliable. As a result, system designers have to deal with probabilistic specifications such as “the probability that this component fails at this point of its behavior is less than or equal to 10^{-4}”. More generally, uncertainty in the observed behavior is introduced by the abstraction of black-box — or simply too complex — behavior of components, the environment, or the execution platform.

In , we have introduced a framework for the design of correct systems from probabilistic, interacting components. To model components, we adopt the discrete-time Interactive Markov Chain (IMC) semantic model , which combines Labeled Transition Systems (LTS) and Markov Chains. Components communicate through interactions, that is, synchronized action transitions. Interactions are essential in component frameworks, as they allow the modeling of how components cooperate and communicate. We use the BIP framework to model interactions between components.

Since the deploying context of a component is not
known at design time, we use probabilistic
*contracts*to specify and reason about correct
behaviors of a component. Contracts allow us to specify
what a component can expect from its context, what it
must guarantee, and explicitly limit the responsibilities
of both.

The framework we have proposed allows us to model components, their interactions, and the uncertainty in their observed behavior. It supports different steps in a design flow: refinement and abstraction, parallel composition, and conjunction (shared refinement). We have proved that these operations satisfy the desired properties of *independent implementability* and *congruence* for parallel composition, and *soundness* for conjunction. Thus,

refinement is compositional, that is, contracts over different components can be refined and implemented independently;

the parallel composition of two contracts is satisfied by the parallel composition of any two implementations of the contracts; and

several contracts C_{i} over the same component may be used to independently specify different requirements, possibly over different subsets of the component interactions. The conjunction is a common refinement of all C_{i}. Conjunction of probabilistic specifications is non-trivial, as a straightforward approach would introduce spurious behaviors.

Establishing liabilities in case of litigation is generally a delicate matter. It becomes even more challenging when IT systems are involved. Generally speaking, a party can be declared liable for a damage if a fault can be attributed to that party and that fault has caused the damage. The two key issues are thus to establish convincing evidence with respect to (1) the occurrence of the fault and (2) the causality relation between the fault and the damage. The first issue concerns the technique used to log the relevant events of the system and to ensure that the logs can be produced (and have some value) in court. The second issue is especially complex when several faults are detected in the logs and the impact of these faults on the occurrence of the failure has to be assessed. In we have focused on this second issue and proposed a formal framework for reasoning about causality. A system based on this framework could be used to provide relevant information to the expert, the judge, or the parties themselves (in case of amicable settlement) to analyze the origin of the failure of an IT system.

The notion of causality has been studied for a long time in computer science, but with very different perspectives and goals. In the distributed systems community, causality (following Lamport's seminal paper ) is seen essentially as a temporal property. In our context, the temporal ordering contributes to the analysis, but it is obviously not sufficient to establish the *logical causality* required to rule on a matter of liability: the fact that an event e_{1} has occurred before an event e_{2} does not imply that e_{1} was the cause of e_{2} (or that e_{2} would not have occurred if e_{1} had not occurred).

Our formal model is based on components interacting according to well-identified *interaction models* . Each component is associated with an individual *contract* which specifies its expected behavior. The system itself is associated with a *global contract* which is assumed to be implied by the composition of the individual contracts.

We have defined several variants of logical causality.
The first variant,
*necessary causality*, characterizes cases when the
global contract would not have been violated if the local
contract had been fulfilled. The second variant,
*sufficient causality*, characterizes cases when the
global contract would have been violated even if all the
other components had fulfilled their contracts. In other
words, the violation of its contract by a single
component was sufficient to violate the global
contract.
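The two variants can be illustrated on a drastically simplified, invented scenario in which component behaviors are single numbers and contracts are bounds (the actual framework works on trace prefixes; the names and values below are purely hypothetical):

```python
from itertools import product

def necessary_cause(c, observed, compliant, global_ok):
    """c's contract violation is a necessary cause of the global
    violation if every contract-compliant alternative behavior of c
    (the others kept as observed) satisfies the global contract."""
    return all(global_ok({**observed, c: b}) for b in compliant[c])

def sufficient_cause(c, observed, compliant, global_ok):
    """c's violation is a sufficient cause if the global contract is
    violated no matter how the other components behave within their
    contracts (c kept as observed)."""
    others = [k for k in observed if k != c]
    return all(not global_ok({**observed, **dict(zip(others, combo))})
               for combo in product(*(compliant[k] for k in others)))

# Local contracts bound each output by 5, the global contract bounds
# the sum by 12; both components misbehave.
observed = {"A": 7, "B": 6}
compliant = {"A": range(6), "B": range(6)}
global_ok = lambda beh: beh["A"] + beh["B"] <= 12
```

On this scenario each violation is a necessary cause (repairing either one alone repairs the global contract) but neither is sufficient: the global violation needed both faults, a distinction that matters when apportioning liability.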

We have further shown that our definitions of causality are decidable in the introduced setting. We have also provided conditions for decidability on trace suffixes. Such a possibility is of great practical significance because it makes it possible to analyze traces back to a given point in the past. Indeed, the analysis of liability in real cases can hardly assume that all traces of the past can always be produced and analyzed.

In order to be able to trace the propagation of faults, we have defined *horizontal causality*, which relates prefixes of local traces of components at the same level of hierarchy. Horizontal causality allows us to analyze causality among violations of component contracts.

This work opens a number of new directions for further research, in particular, the generalization to different models of communication, and to a setting where the result of causality analysis is not Boolean but a probability.

This research has been conducted as part of the LISE project on liability issues in software engineering .

The goal of Aspect-Oriented Programming (AOP) is to isolate aspects (such as security, synchronization, or error handling) which cross-cut the program's basic functionality and whose implementation usually yields tangled code. In AOP, such aspects are specified separately and integrated into the program by an automatic transformation process called *weaving*.

Although this paradigm has great practical potential, it still lacks formalization, and undisciplined uses make reasoning about programs very difficult. Our work on AOP addresses these issues by studying foundational issues (semantics, analysis, verification) and by considering domain-specific aspects (availability or fault tolerance aspects) as formal properties.

Aspect Oriented Programming can arbitrarily distort the semantics of programs. In particular, weaving can invalidate crucial safety and liveness properties of the base program.

We have identified categories of aspects that preserve some classes of properties . Our categories of aspects comprise, among others, observers, aborters, and confiners. For example, observers do not modify the base program's state and control flow (*e.g.*, persistence, profiling, and debugging aspects). These categories are defined formally based on a language-independent abstract semantic framework. The classes of properties are defined as subsets of LTL for deterministic programs and of CTL* for non-deterministic ones. We have formally proved that, for any program, the weaving of any aspect in a category preserves any property in the related class.
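For instance, a profiling observer can be sketched as a Python decorator standing in for weaving (an illustration of the observer category only, not of our language-independent framework; the names are invented):

```python
import functools

def observer(fn):
    """Weaving of an 'observer' aspect: it may read events of the base
    program and keep aspect-local state, but it never modifies the base
    program's state or control flow -- the category for which weaving
    preserves the associated class of properties."""
    @functools.wraps(fn)
    def woven(*args, **kwargs):
        observer.calls += 1            # profiling: aspect-local state only
        return fn(*args, **kwargs)     # base behavior returned untouched
    return woven

observer.calls = 0

@observer
def step(x):                           # the base program
    return x + 1
```

An aborter, by contrast, would be allowed to cut the base computation short, and would therefore preserve only safety-style properties, not liveness.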

In a second step, we have designed for each aspect category a specialized aspect language which ensures that any aspect written in that language belongs to the corresponding category. These languages preserve the corresponding classes of properties by construction.

This work was conducted in collaboration with Rémi Douence from the Ascola Inria team at École des Mines de Nantes.

We have proposed a domain-specific aspect language aimed at preventing denial of service caused by resource management (*e.g.*, starvation, deadlocks, etc.) . The aspects specify time or frequency limits in the allocation of resources. They can be seen as formal temporal properties on execution traces that specify availability policies. The semantics of base programs and aspects are expressed as *timed automata*. The main advantage of such a formal approach is twofold:

aspects are expressed at a higher-level and the semantic impact of weaving is kept under control;

model checking tools can be used to optimize weaving and verify the enforcement of general availability properties.

Here, our objective is to design a domain-specific language for specifying fault tolerance aspects as well as efficient techniques based on static analysis, program transformation, and/or instrumentation to weave them into real-time programs.

We have studied the implementation of specific fault tolerance techniques in real-time embedded systems using program transformation . We are now investigating the use of fault-tolerance aspects in hardware description languages (HDLs, for instance Verilog or VHDL). Our goal is to design an aspect language allowing users to specify and tune a wide range of fault tolerance techniques, while ensuring that the woven program remains synthesizable. The objective is to produce fault-tolerant circuits by specifying fault-tolerance strategies separately from the functional specifications.

This line of research is followed by Henri-Charles Blondeel in his PhD thesis, co-advised by Alain Girault and Pascal Fradet.

Chemical programming describes computation in terms of a *chemical solution* in which molecules (representing data) interact freely according to *reaction rules* (representing the program). Solutions are represented by multisets of elements, and reactions by rewrite rules which consume elements and produce new ones according to conditions. This paradigm makes it possible to express programs without artificial sequentiality in a very abstract way. It bridges the gap between specification and implementation languages.

A drawback of chemical languages is that their very high-level nature usually leads to very inefficient programs. We have proposed an approach where the basic functionality is expressed as a chemical program whereas efficiency is achieved separately by:

structuring the multiset with a data type defining neighborhood relations;

describing the selection of elements according to their neighborhood;

specifying the evaluation strategy (*i.e.*, the application of rules and termination).

Using these three implementation aspects (data structure, selection, and strategy), the chemical program can then be refined automatically into an efficient low-level program. The crucial methodological advantage is that logical issues are decoupled from efficiency issues.
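As a minimal illustration of the paradigm (a Python stand-in for an actual chemical language; the multiset is a list and rule application is deliberately naive), the classical “maximum” program consists of a single reaction consuming two molecules and producing their maximum:

```python
import random

def chemical_run(solution, reaction):
    """Apply the reaction to nondeterministically chosen pairs of
    molecules until no pair reacts (the solution is inert)."""
    solution = list(solution)
    while True:
        random.shuffle(solution)       # chemical nondeterminism
        for i in range(len(solution)):
            for j in range(i + 1, len(solution)):
                fires, products = reaction(solution[i], solution[j])
                if fires:
                    rest = [m for k, m in enumerate(solution)
                            if k not in (i, j)]
                    solution = rest + products
                    break
            else:
                continue               # no partner for molecule i
            break                      # a reaction fired: restart
        else:
            return solution            # inert: no reaction applies

# The Gamma-style "maximum" program:  x, y  ->  max(x, y)
max_rule = lambda x, y: (True, [max(x, y)])
print(chemical_run([3, 1, 4, 1, 5], max_rule))  # -> [5]
```

Whatever the order in which pairs react, the solution converges to the singleton containing the maximum, which is exactly the sequentiality-free style described above; the refinement step would replace the naive pair search by a structured traversal.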

This research, which takes place within the AutoChem project (see Section ), is the subject of Marnes Hoff's PhD thesis.

A central problem in the analysis of biological regulatory networks concerns the relation between their structure and dynamics. This problem can be narrowed down to the following two questions: (a) Is a hypothesized structure of the network consistent with the observed behavior? (b) Can a proposed structure generate a desired behavior?

Qualitative models of regulatory networks, such as (synchronous or asynchronous) Boolean models and piecewise-affine differential equation (PADE) models, have been proven useful for addressing the above questions. The models are coarse-grained, in the sense that they do not explicitly specify the biochemical mechanisms. However, they include the logic of gene regulation and allow different expression levels of the genes to be distinguished.

Qualitative models bring specific advantages when studying the relation between structure and dynamics. In order to answer questions (a) and (b), one has to search the parameter space to check if for some parameter values the network is consistent with the data or can attain a desired control objective. In qualitative models, the number of different parametrizations is finite and the number of possible values for each parameter is usually rather low. This makes parameter search easier to handle than in quantitative models, where exhaustive search of the continuous parameter space is in general not feasible. Moreover, qualitative models are concerned with trends rather than with precise quantitative values, which corresponds to the nature of much of the available biological data.

Nevertheless, the parametrization of qualitative models remains a complex problem. For most models of networks of biological interest the state and parameter spaces are too large to exhaustively test all combinations of parameter values. The aim of this work was to address this search problem for PADE models by treating it in the context of formal verification and symbolic model checking.
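Question (a) above can be illustrated on a tiny synchronous Boolean model (an invented two-gene network; the work itself targets PADE models and symbolic model checking, not this naive enumeration):

```python
from itertools import product

def ident(v):  return v        # activation: a high regulator turns the gene on
def negate(v): return 1 - v    # inhibition

def fixed_points(f, g):
    """Steady states of the synchronous Boolean network a' = f(b), b' = g(a)."""
    return [(a, b) for a, b in product((0, 1), repeat=2)
            if f(b) == a and g(a) == b]

# Which parametrizations of the two edges (activation vs inhibition)
# are consistent with an observed steady state where a is ON and b is OFF?
options = {"activation": ident, "inhibition": negate}
consistent = [(na, nb)
              for (na, f), (nb, g) in product(options.items(), repeat=2)
              if (1, 0) in fixed_points(f, g)]
# -> [('inhibition', 'inhibition')]: only mutual inhibition
#    (the classical bistable switch) explains the observation
```

Here the parameter space has only four points and can be enumerated; the point of the formal-verification approach is to perform the same consistency check symbolically when enumeration is infeasible.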

On the application side, we show that the method performs well on real problems, by means of the IRMA synthetic network and benchmark experimental datasets. More precisely, we are able to find parameter values for which the network satisfies temporal-logic properties describing observed expression profiles. Analysis of these parameter values reveals that biologically relevant constraints have been identified. Moreover, we make suggestions to improve the robustness of the external control of the IRMA behavior by proposing a rewiring of the network.

In the context of the Pôle de Compétitivité Minalogic, we participate in the four-year project OpenTLM on the analysis of systems-on-chip modeled at the transaction level in SystemC . We intend to develop methods for the abstraction and the interprocedural and compositional analysis of SystemC models. Interesting results have been obtained regarding the precision of abstract-interpretation-based analysis of concurrent systems (see Section ). One PhD student and one engineer have been hired on this topic, in April and May 2008 respectively.

We participate in the regional cluster ISLE (“Informatique, Systèmes et Logiciels Embarqués”) of the Région Rhône-Alpes, which funds the PhD of Mouaiad Alras (see Section ).

The AutoChem project aims at investigating and exploring the use of chemical languages (see Section ) to program complex computing infrastructures such as grids and real-time deeply-embedded systems. The consortium includes Inria Rennes – Bretagne Atlantique (Paris team, Rennes), Inria Grenoble – Rhône-Alpes (Pop Art team, Montbonnot), IBISC (CNRS/Université d'Evry), and CEA List (Saclay). The project started at the end of 2007 and will terminate at the end of 2011.

The Asopt (Analyse Statique et OPTimisation) project [end of 2008-2011] brings together static analysis experts (INRIA-Pop Art, VERIMAG, CEA LMeASI) and optimisation and control/game theory experts (CEA LMeASI, INRIA-MAXPLUS) around program verification problems. Pop Art is the project coordinator.

Many abstract interpretations attempt to find “good” geometric shapes verifying certain constraints; this applies not only to purely numerical abstractions (for numerical program variables), but also to abstractions of data structures (arrays and more complex shapes). This problem can often be addressed by optimisation techniques, opening the possibility of exploiting advanced techniques from mathematical programming.

The purpose of Asopt is to develop new abstract domains and new resolution techniques for embedded control programs and, in the longer run, for numerical simulation programs.

The Vedecy project aims at pursuing fundamental research towards the development of algorithmic approaches to the verification and design of cyber-physical systems. Cyber-physical systems result from the integration of computations with physical processes: embedded computers control physical processes, which in return affect computations through feedback loops. They are ubiquitous in current technology, and their impact on the lives of citizens is expected to grow in the future (autonomous vehicles, robotic surgery, energy-efficient buildings, ...).

Cyber-physical systems applications are often safety-critical, and reliability is therefore a major requirement. To provide assurance of reliability, model-based approaches and formal methods are appealing. Models of cyber-physical systems are heterogeneous by nature: discrete dynamic systems for the computations and continuous differential equations for the physical processes. The theory of hybrid systems offers a sound modeling framework for cyber-physical systems. The purpose of Vedecy is to develop hybrid systems techniques for the verification and the design of cyber-physical systems.
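To make the discrete/continuous heterogeneity concrete, the textbook example is a thermostat modelled as a two-mode hybrid automaton: an ODE per mode for the continuous temperature, and guarded discrete transitions between modes. The sketch below (illustrative only, not a Vedecy artefact; the mode names, rates, and thresholds are invented) simulates it with explicit Euler integration.

```python
def simulate(t_end=10.0, dt=0.01):
    """Simulate a two-mode thermostat hybrid automaton."""
    mode, temp = "heat", 18.0        # initial discrete and continuous state
    trace = []
    t = 0.0
    while t < t_end:
        # continuous flow: dT/dt given by the ODE of the current mode
        dT = 2.0 if mode == "heat" else -1.5
        temp += dT * dt              # one explicit Euler step
        # discrete transitions: guards over the continuous state
        if mode == "heat" and temp >= 22.0:
            mode = "cool"
        elif mode == "cool" and temp <= 18.0:
            mode = "heat"
        trace.append((t, mode, temp))
        t += dt
    return trace

temps = [temp for _, _, temp in simulate()]
# the safety property a verification tool would establish symbolically:
# the temperature stays (up to one integration step) within [18, 22]
print(round(min(temps), 3), round(max(temps), 3))
```

A verification tool would not simulate single runs but compute the reachable set of the hybrid automaton, covering all behaviours at once; simulation merely illustrates the mixed discrete/continuous dynamics.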

The Synchronics (Language Platform for Embedded System Design) project [beginning of 2008-2011] gathers 9 permanent researchers on the topic of embedded systems design: B. Caillaud (Inria Rennes – Bretagne Atlantique), A. Cohen, L. Mandel, and M. Pouzet (Inria Saclay and ENS Ulm), G. Delaval, A. Girault, and B. Jeannet (Inria Grenoble – Rhône-Alpes), E. Jahier and P. Raymond (VERIMAG).

Synchronics capitalizes on recent extensions of data-flow synchronous languages, as well as on relaxed forms of synchronous composition and compilation techniques for various platforms, to address two main challenges with a language-centered approach: (i) the co-simulation of mixed discrete-continuous specifications, and more generally the co-simulation of programs and properties (either discrete or continuous); (ii) the ability, inside the programming model, to account for architecture constraints (execution time, memory footprint, energy, power, reliability, etc.).

Aoste at Inria Paris – Rocquencourt is working with us on fault-tolerance heuristics for their software SynDEx.

Vertecs at Inria Rennes – Bretagne Atlantique is working with us on applications of discrete controller synthesis, and in particular on the Sigali tool.

P. Fradet cooperates with J.-P. Banâtre and T. Priol (Paris, Inria Rennes – Bretagne Atlantique) and with R. Douence (Ascola, École des Mines de Nantes).

A. Girault cooperates with D. Trystram (Moais, Inria Grenoble – Rhône-Alpes) on scheduling and dependability, with E. Rutten (Sardes, Inria Grenoble – Rhône-Alpes) and H. Marchand (Vertecs, Inria Rennes – Bretagne Atlantique) on optimal discrete controller synthesis, with A. Benoit, F. Dufossé, and Y. Robert (Graal, Inria Grenoble – Rhône-Alpes) on multi-criteria scheduling, and with P. Raymond (Verimag, CNRS) on model-based design and a compilation tool chain from Simulink to distributed platforms.

G. Gössler cooperates with D. Le Métayer (Licit, Inria Grenoble – Rhône-Alpes), H. de Jong and M. Page (Ibis, Inria Grenoble – Rhône-Alpes), G. Batt (Contraintes, Inria Paris – Rocquencourt), G. Salaün (Vasy project, Inria Grenoble – Rhône-Alpes), and D.N. Xu (Gallium, Inria Paris – Rocquencourt).

B. Jeannet cooperates with T. Le Gall (Vertecs, Inria Rennes – Bretagne Atlantique) on the analysis of communicating systems, and with C. Constant, T. Jéron, and F. Ployette (Vertecs, Inria Rennes – Bretagne Atlantique) on test generation.

G. Delaval cooperates with H. Marchand (Vertecs, Inria Rennes – Bretagne Atlantique) and É. Rutten (Sardes, Inria Grenoble – Rhône-Alpes) on modular controller synthesis and its applications.

G. Delaval, A. Girault, and B. Jeannet collaborate with the Parkas team of ENS Ulm (Inria Paris – Rocquencourt) on the distribution of higher-order synchronous data-flow programs and on the static analysis of hybrid programs.

P. Fradet cooperates with J.-L. Giavitto (CNRS/Université d'Evry).

A. Girault cooperates with P. Raymond (Verimag), P. Roop, Z. Salcic, and S. Andalam (University of Auckland, New Zealand), H. Kalla (University of Batna, Algeria), and I. Assayad (University of Casablanca, Morocco).

P. Fradet and A. Girault collaborate with T. Ayav (University of Izmir, Turkey).

G. Gössler cooperates with A. Girard (LJK, Grenoble), M. Bozga, T. Dang, and J. Sifakis (Verimag), J.-B. Raclet (IRIT, Toulouse), and B. Bonakdarpour (U. of Waterloo, Canada).

A. Girault and G. Gössler collaborate with P. Roop and R. Sinha (University of Auckland, New Zealand).

B. Jeannet cooperates with N. Halbwachs and M. Péron (Verimag) on static analysis and abstract interpretation.

J.-B. Raclet cooperates with R. Passerone (University of Trento) on interface theories.

ArtistDesign is a European Network of Excellence on embedded system design, the successor of Artist II in FP7. The objective of ArtistDesign is to build on the existing structures and links forged in Artist II, to become a virtual Center of Excellence in Embedded Systems Design. This will be achieved mainly through tight integration between the central players of the European research community. The long-term vision for embedded systems in Europe, established in Artist II, will advance the emergence of embedded systems as a mature discipline. G. Gössler is the administrator of ArtistDesign for Inria.

Combest is a European STREP project on the formal component-based design of complex embedded systems.

Cesar is a European Artemisia project on cost-efficient methods and processes for safety-relevant embedded systems.

We are particularly involved in the following sub-programs:

Task Force Safety 1.5.1 (state-of-the-art survey on safety and diagnosability for cost-efficient safety-critical embedded systems) and 1.5.2 (identification of requirements for common cross-domain core safety and diagnosability techniques and methods).

Requirements Engineering, along with two other Inria teams (S4 and Triskell, from Inria Rennes). We shall work on contract-based design for traceability.

This collaboration involves two teams, the Pop Art team and the ACEI team from the University of Auckland, New Zealand (led by Zoran Salcic, professor at the University of Auckland). It is funded by the Direction des Relations Internationales of Inria and started in January 2009.

We work on some of the most important challenges for the design of embedded systems. Embedded systems are characterized by several constraints, such as high complexity and heterogeneity, and the need for determinism and bounded reaction times. Accordingly, design methods for embedded systems should, wherever possible, be automated and guarantee these properties by construction, thereby shifting the burden of checking these constraints from the programmer/system designer to the design method. To achieve this, our goal is to improve existing design methods in several key directions: (i) incremental converter synthesis (see Section ); (ii) programming languages for adaptive computing – SystemJ and beyond (see Section ); (iii) time-predictable programming languages and execution architectures (see Section ). Together, these advanced formal methods will provide foundations for automated design and a higher level of safety of the designed embedded systems.

Pascal Fradet served on the program committee of AOSD'10 (*International Conference on Aspect-Oriented Software Development*). He is on the external review committee of PLDI'11 (*ACM SIGPLAN Conference on Programming Language Design and Implementation*). He served on the selection committee for an assistant professor position at Institut Polytechnique de Bordeaux.

Alain Girault served on the program committee of DATE'10 (*Design, Automation and Test in Europe*).

Gregor Gössler served on the program committee of the FOCLASA'10 workshop on foundations of coordination languages and software architectures.

Marnes Hoff, co-advised by Pascal Fradet and Jean-Louis Giavitto (Université d'Evry), since 04/2008, PhD in computer science, Grenoble INP.

Henri-Charles Blondeel, co-advised by P. Fradet and A. Girault, since 10/2010, PhD in computer science, Grenoble INP.

Mouaiad Alras, co-advised by Alain Girault and Pascal Raymond (Verimag/CNRS), since 10/2006, PhD in computer science, UJF, Grenoble.

Lies Lakhdar-Chaouch, co-advised by Alain Girault and Bertrand Jeannet, since 05/2008, PhD in computer science, Grenoble INP.

Peter Schrammel, co-advised by Alain Girault and Bertrand Jeannet, since 07/2009, PhD in computer science, Grenoble INP.

Gideon Smeding, co-advised by Gregor Gössler and Joseph Sifakis (Verimag/CNRS), since 12/2009, PhD in computer science, UJF, Grenoble.

Gwenaël Delaval is teaching algorithmics and programming at Université Joseph Fourier (96h in 2010–2011).