Formes

FORMES stands for FORmal Methods for Embedded Systems. FORMES aims at making research advances toward the development of safe and reliable embedded systems by exploiting synergies between two different approaches: (real-time) hardware simulation and formal proof development.

Embedded systems have become ubiquitous in our everyday life, ranging from simple sensors to complex systems such as mobile phones, network routers, airplanes, and aerospace and defense equipment. As embedded devices include increasingly sophisticated hardware and software, the combined development of hardware and software has become a key to economic success.

The development of embedded systems uses hardware with ever-increasing capacities. As embedded devices include increasingly sophisticated hardware running complex functions, the development of software for embedded systems is becoming a critical issue for the industry. Embedded systems manufacturers often face stringent time-to-market and quality requirements. Safety and security requirements are met by using strong validation tools and some form of formal methods, accompanied by certification processes such as DO-178 or Common Criteria certification. These requirements for quality of service, safety and security require that the relevant properties of the system be formally proved before it is deployed.

Within the context described above, the FORMES project aims at addressing the challenges of embedded systems design with a new approach, combining fast hardware simulation techniques with advanced formal methods, in order to formally prove qualitative and quantitative properties of the final system. This approach requires the construction of a simulation environment and of tools for analyzing simulation outputs and proving properties of the simulated system. We therefore need to connect simulation tools with code analyzers and easy-to-use theorem provers to achieve the following tasks:

Enhance the hardware simulation technologies with new techniques to improve simulation speed, and produce program representations that are adequate for formal analysis and proofs about the simulated programs;

Connect validation tools to the simulation outputs so that these outputs can be exploited using formal methods;

Extend and improve theorem-proving technologies and tools to support their application to embedded software simulation.

A main novelty of the project, besides improving existing technologies and tools, lies in the application itself: combining simulation technologies with formal methods in order to cut down the development time of embedded software and scale up its reliability. Apart from being a novelty, this combination is also a necessity: proving very large code is unrealistic and will remain so for quite some time, and relying only on simulation for assessing critical properties of embedded systems is unrealistic as well.

We assume that these properties can be localized in critical, but small, parts of the code, or in dedicated hardware models. This nevertheless requires scaling up the proof activity by an order of magnitude with respect to the size of the code and the proof development time. We believe it is realistic to rely on both techniques combined. We plan to rely on formal proofs for assessing properties of small, critical components of the embedded system that can be analyzed independently of their environment, as well as for assessing the correctness of the elaboration of program representation abstractions from object code. We plan to rely on simulation for testing the whole embedded system, and on formal proofs to verify the completeness of the test sets. We finally plan to rely on formal proofs again for verifying the correct functioning of our tools. Proving properties of these various abstractions requires a certified, interactive theorem prover.

CoqMT, a new certified extension of Coq developed in the team, is now available at
http://

SimSoC, our simulator for embedded systems, will soon be released, see
http://

Rainbow, developed in the team, was the best certification back-end used by the termination tools that participated in the 2007 and 2008 editions of the international competition of certified termination provers.

The development of complex embedded systems platforms requires putting together many hardware components, processor cores, application specific co-processors, bus architectures, peripherals, etc. The hardware platform of a project is seldom entirely new. In fact, in most cases, 80 percent of the hardware components are re-used from previous projects or simply are COTS (Commercial Off-The-Shelf) components. There is no need to simulate in great detail these already proven components, whereas there is a need to run fast simulation of the software using these components.

These requirements call for an integrated, modular simulation environment where already proven components can be simulated quickly, (possibly including real hardware in the loop), new components under design can be tested more thoroughly, and the software can be tested on the complete platform with reasonable speed.

Modularity and fast prototyping also have become important aspects of simulation frameworks, for investigating alternative designs with easier re-use and integration of third party components.

The project aims at developing such a rapid-prototyping, modular simulation platform, combining new hardware-component modeling and verification techniques with fast software simulation of proven components, capable of running the real embedded software application without any change.

To fully simulate a complete hardware platform, one must simulate the processors and co-processors together with the peripherals, such as network controllers, graphics controllers, USB controllers, etc. A commonly used solution is the combination of an ISS (Instruction Set Simulator) connected to a Hardware Description Language (HDL) simulator, which can be implemented in software or by using an FPGA. These solutions tend to have slow design-iteration cycles (implementing on the FPGA means the hardware has already been designed at a low level) and become very costly when using large FPGA platforms. Others have implemented co-simulation environments using two separate technologies, typically one based on an HDL and the other on an ISS. Some communication and synchronization must then be designed and maintained between the two, using some inter-process communication (IPC) mechanism, which slows down the simulation.

The idea we pursue is to combine hardware modeling and fast simulation into a fully integrated, software-based (not using FPGA) simulation environment named SimSoC, which uses a single simulation loop thanks to Transaction Level Modeling (TLM), combined with a new ISS technology designed specifically to fit within the TLM environment.

The most challenging part of enhancing simulation speed is the simulation of the processors, which is achieved with an Instruction Set Simulator (ISS). There are several ways to implement such a simulator. In *interpretive simulation*, each instruction of the target program is fetched from memory, decoded, and executed. This method is flexible and easy to implement, but simulation is slow because much time is wasted in decoding; it is used, for instance, in SimpleScalar. Another technique to implement a fast ISS is *dynamic translation*, which has been favored by many in the past decade. With dynamic translation, the binary target instructions are fetched from memory at run time, as in interpretive simulation. They are decoded on their first execution, and the simulator translates them into another representation, which is stored in a cache. On subsequent executions of the same instructions, the translated cached version is used. If the code is modified at run time, the simulator invalidates the cached representation. Dynamic translation provides much faster simulation while keeping the advantages of interpretive simulation, as it supports the simulation of programs that use dynamic loading or self-modifying code.

There are typically two variants of dynamic translation: the target code is translated either directly into machine code for the simulation host, or into an intermediate representation that can be executed at high speed. Dynamic translation introduces a compilation phase as part of the overall simulation time, but as the resulting cached code is re-used, this compilation time is amortized over the run.
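A minimal sketch of the caching idea (our toy single-instruction ISA and class names, not SimSoC's code): each instruction word is decoded once into an executable host representation, cached by address, reused on later executions, and invalidated when the code is overwritten.

```cpp
#include <cassert>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

// Toy target state: a program counter and four registers.
struct Cpu { uint32_t pc = 0; int32_t reg[4] = {0, 0, 0, 0}; };

// Invented encoding: 0xAD000000 | (dst<<16) | (s1<<8) | s2  means  reg[dst] = reg[s1] + reg[s2].
using Decoded = std::function<void(Cpu&)>;

Decoded decode(uint32_t insn) {
    int dst = (insn >> 16) & 0xFF, s1 = (insn >> 8) & 0xFF, s2 = insn & 0xFF;
    return [=](Cpu& c) { c.reg[dst] = c.reg[s1] + c.reg[s2]; c.pc += 4; };
}

// Dynamic-translation flavor: decode once, cache by address, invalidate on write.
struct Simulator {
    std::vector<uint32_t> mem;                     // target instruction memory (word-addressed)
    std::unordered_map<uint32_t, Decoded> cache;   // address -> translated instruction

    void step(Cpu& c) {
        auto it = cache.find(c.pc);
        if (it == cache.end())                     // first execution: decode and cache
            it = cache.emplace(c.pc, decode(mem[c.pc / 4])).first;
        it->second(c);                             // later executions reuse the cached version
    }
    void write_code(uint32_t addr, uint32_t insn) {
        mem[addr / 4] = insn;                      // self-modifying code:
        cache.erase(addr);                         // invalidate the stale translation
    }
};
```

An interpretive simulator would instead call `decode(mem[c.pc / 4])(c)` on every step; the cache is what amortizes the decoding cost over repeated executions.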

Processor simulation is also performed by virtual machines such as QEMU and GXEMUL, which emulate to a large extent the behavior of a particular hardware platform. The technique used in QEMU is a form of dynamic translation: the target code is translated directly into machine code using pre-determined code patterns that have been pre-compiled with the C compiler. Both QEMU and GXEMUL include many open-source device models written in C, but this code is hard to reuse: the functions that emulate device accesses do not share a common profile, and the scheduling of the parallel hardware entities is not specified well enough to guarantee compatibility between emulators, or the re-usability of third-party models built on the standards of the electronics industry (e.g. IEEE 1666).

A challenge in the development of simulators is to maintain simultaneously fast speed and simulation accuracy. In the FORMES project, we expect to develop a dynamic translation technology satisfying the following additional objectives:

to provide different levels of translation with different degrees of accuracy, so that users can choose between accurate but slow simulation (for debugging) and less accurate but fast simulation;

to take advantage of multi-processor simulation hosts to parallelize the simulation;

to define intermediate representations of programs that optimize the simulation speed and possibly provide a more convenient format for studying properties of the simulated programs.

The SimSoC simulator is based on the TLM standard from OSCI. The hardware components are modeled as TLM models, and since TLM is itself based on SystemC, the simulation is driven by the SystemC kernel. We use standard, unmodified SystemC (version 2.2), hence the simulator has a single simulation loop. The interconnection between components is an abstract bus similar to the TLM TAC abstract bus open-sourced by ST Microelectronics. Each processor simulated in the platform is abstracted as a particular TLM class. This class is both an initiator (it can initiate transactions) and a target (it can process transactions): it acts as an initiator to initiate I/Os, and it behaves as a target essentially to receive boot or halt signals and interrupt notifications from the interrupt controller. Memory and I/O controllers are also modeled as TLM classes. The simulated platform can include multiple heterogeneous processors, for example a general-purpose CPU and a DSP; each processor is then abstracted by a TLM class, and they communicate among themselves and with the I/O controllers via TLM transactions. Related research on TLM models has been reported in the literature.
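As a hedged, SystemC-free sketch of this organization (class and method names are ours, not SimSoC's or the TLM standard's): a target exposes a transaction interface, and an abstract bus routes each transaction to the component mapped at the requested address.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <stdexcept>

// Minimal stand-in for a TLM target: anything that can process a bus transaction.
struct TlmTarget {
    virtual void read(uint32_t addr, uint32_t& data) = 0;
    virtual void write(uint32_t addr, uint32_t data) = 0;
    virtual ~TlmTarget() = default;
};

// Abstract bus: routes a transaction to the target mapped at the address,
// translating the global address into a target-local offset.
struct AbstractBus {
    std::map<uint32_t, std::pair<uint32_t, TlmTarget*>> ranges;  // base -> (size, target)
    void bind(uint32_t base, uint32_t size, TlmTarget* t) { ranges[base] = {size, t}; }
    TlmTarget* route(uint32_t addr, uint32_t& local) {
        for (auto& kv : ranges)
            if (addr >= kv.first && addr < kv.first + kv.second.first) {
                local = addr - kv.first;
                return kv.second.second;
            }
        throw std::runtime_error("bus error: unmapped address");
    }
    void read(uint32_t a, uint32_t& d)  { uint32_t l; route(a, l)->read(l, d); }
    void write(uint32_t a, uint32_t d)  { uint32_t l; route(a, l)->write(l, d); }
};

// A RAM model is a pure target; a processor model would additionally be an
// initiator, issuing bus.read/bus.write transactions while simulating instructions.
struct Ram : TlmTarget {
    std::map<uint32_t, uint32_t> words;
    void read(uint32_t a, uint32_t& d) override { d = words[a]; }
    void write(uint32_t a, uint32_t d) override { words[a] = d; }
};
```

In the real simulator these roles are played by SystemC/TLM classes and the single simulation loop is the SystemC kernel; the sketch only shows the initiator/target/bus decomposition.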

Coq is one of the most popular proof assistants, both in academia and in industry. Based on the Calculus of Inductive Constructions, Coq has three kinds of basic entities: objects are used for computations (data, programs and proofs are objects); types express properties of objects; kinds categorize types by their logical structure. Coq's type checker can decide whether a given object satisfies a given type, and whether a given type has the logical structure expressed by a given kind. Because it is possible to (uniformly) define inductive types such as lists, dependent types such as lists-of-length-n, parametric types such as lists-of-something, inductive properties such as (even n) for some natural number n, etc., writing small specifications in Coq is an easy task. Writing proofs is a harder (non-automatable) task that must be done by the user with the help of tactics. Automating proofs when possible is a necessary step for the dissemination of these techniques, as is scaling up. These are the problems we are interested in.

Modeling in Coq is not always as easy as argued: Coq identifies expressions up to computation. Identifying two lists of identical content but respective lengths m+n and n+m is no problem if m and n are given integers, but does not work if m and n are unknowns, since n+m = m+n is a valid theorem of arithmetic which cannot be proved by mere computation. It follows that the statement reverse(l :: l') = reverse(l') :: reverse(l), where :: stands for the appending of two lists, is not typable. The fact that such seemingly innocent statements cannot be written in Coq because they do not type-check has been considered a major open problem for years. Blanqui, Jouannaud and Strub have recently developed *Coq modulo Theories*, in which computations operate not only on closed terms (such as 1+2 and 2+1) but also on open expressions of a decidable theory (such as n+m = m+n in Presburger arithmetic). This preliminary work addresses three problems at once: decidable goals are solved automatically by an off-the-shelf program; writing specifications and proofs becomes easier and closer to mathematical practice; and, assuming that each call to a decision procedure returns a *proof certificate* in case of success, the correctness of a Coq proof now results from type-checking the proof as well as the various certificates generated along the proof. Trusting Coq becomes incremental, resulting from trusting each certificate checker as it is added in turn to Coq's kernel. The development of this new paradigm is our first research challenge here.
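Schematically, the change can be stated on Coq's conversion rule: standard Coq identifies types up to βδι-computation only, while CoqMT also identifies them modulo a decidable first-order theory T (the notation below is our informal rendering, not the papers' exact formulation):

```latex
% Standard Coq converts types up to computation:  A \equiv_{\beta\delta\iota} B.
% CoqMT enlarges this relation with a decidable first-order theory T:
\frac{\Gamma \vdash t : A \qquad A \equiv_{\beta\delta\iota T} B}
     {\Gamma \vdash t : B}
\qquad \text{so that, e.g., } \mathsf{list}\,(n+m) \equiv_{T} \mathsf{list}\,(m+n)
```

Under this rule, a term of type list (n+m) is directly accepted where a list (m+n) is expected, which is exactly what makes the reverse statement above typable.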

Scaling up is yet another challenge. Modeling a large, complex piece of software is a hard task, which has been addressed within the Coq community in two different ways. The first is a module system for Coq in the OCaml style, which makes it possible to modularize proof developments and hence to build modular libraries. The second is a methodology for modeling real programs and proving their properties with Coq. This methodology makes it possible to translate a JavaCard (tool Caduceus) or C (tool FRAMA-C) program into an ML-like program. The correctness of this first step is ensured by proving in Coq verification conditions generated along the translation. The correctness of the ML-like program annotated by the user is then checked by Coq via another tool called Why. This methodology and the associated tools are developed by the INRIA project PROVAL in association with CEA. Part of our second challenge is to reuse these tools to prove properties at the source-code level of programs used in an embedded application. As part of this effort, we are interested in the development of termination tools and automatic provers, in particular an SMT prover, which is complementary to our first challenge. The second part of the challenge is to ensure that these properties are still satisfied by the machine code executed on the embedded CPU. Here, we are going to rely on a different technology, certified compilers, and reuse the certified compilers from CLight to ARM or PowerPC developed in the COMPCERT INRIA project. We will then be left with developing certified compilers into CLight from the source languages frequently used for developing embedded applications. These languages are either variants of C, or languages for the description of automata with timers in the case of Programmable Logic Controllers.

Our last challenge is to rely on certified tools only. In particular, we decided to certify in Coq all extensions of Coq developed in the project: the core logic of CoqMT has been certified with Coq. The most critical parts of the simulator will also be certified. As for compilers, there are two ways to certify tools: either the code is proved correct, or it outputs a certificate that can be checked. The second approach demands less man-power, and has the additional advantage of being compatible with the use of off-the-shelf tools, provided these tools are open-source, since they must be equipped with a mechanism for generating certificates. This is the approach we will favor for the theories to be used in CoqMT, as well as for the SMT prover to be developed. For the simulator SimSoC itself, we shall probably combine both approaches.

Simulation is relevant to most areas where complex embedded systems are used, not only to the semiconductor industry for System-on-Chip modeling, but also to any application where a complex hardware platform must be assembled to run the application software. It has applications for example in industry automation, digital TV, telecommunications and transportation.

The simulation software made by the FORMES team is called SimSoC. It is based on the SystemC kernel and uses Transaction Level Modeling for the interactions between hardware models. The software includes:

Instruction Set Simulators. An ARM Version 5 ISS has been implemented. Other architectures are under development.

A dynamic translator from binary programs to an internal representation. For the ARM architecture, a compiler has been developed that generates the translated C++ code, using parameterized specialization options.

Some peripheral models such as a serial line controller, a flash memory controller, an interrupt controller.

Utility software, such as a utility to generate permanent storage for flash memory simulation, or a compiler tool to generate instruction binary decoders.

The software is intended to be distributed under an open-source license. See
http://

CoqMT is a modification of the Coq proof assistant that allows decision procedures for first-order theories to be dynamically loaded into the conversion checker of the Coq kernel. Users decide which Coq symbols are handled by the decision procedures through the use of mapping primitives. Having dynamic loading and mapping facilities allows users to write their own decision procedures, or to take existing ones off the shelf and use them in Coq without any modification of the Coq source code.

For the moment, CoqMT comes with a predefined decision procedure for integer linear arithmetic which generates small certificates (unlike previously existing procedures).

CoqMT (along with the decision procedure for integer linear arithmetic and the theory of dependent lists) is accessible via its GIT repository at
http://

aCiNO is a C++ implementation of the Nelson-Oppen architecture. It is intended to be a new and efficient SMT (Satisfiability Modulo Theories) solver. SMT is the problem of determining the satisfiability of a first-order logic formula in one or more decidable theories; SMT solvers have been considered the next generation of verification engines. We are developing this solver in an incremental way. We first target two popular theories, LRA (Linear Real Arithmetic) and UF (Uninterpreted Functions). We use the simplex method to solve LRA and the congruence closure algorithm to solve UF. Both theories are combined under the Nelson-Oppen architecture: newly discovered equalities between variables are propagated to the other theory, so that the SMT solver is both sound and complete. We will integrate more theories into our solver on an as-needed basis.
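The core of the UF procedure can be sketched as congruence closure over a union-find structure (a deliberately naive version with our own toy term representation, not aCiNO's code): merging two terms forces the merge of applications whose arguments have become equal.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Toy ground terms: either a constant, or a unary application f(t).
struct CC {
    std::vector<int> parent;                          // union-find forest over term ids
    std::vector<std::pair<std::string,int>> app;      // id -> (function, argument); "" for constants
    std::map<std::pair<std::string,int>, int> table;  // (f, repr(arg)) -> a representative f-application

    int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }

    int constant() { parent.push_back((int)parent.size()); app.push_back({"", -1}); return (int)parent.size() - 1; }

    int apply(const std::string& f, int arg) {
        int t = constant(); app[t] = {f, arg};
        merge_congruent(t);                           // f(a) joins an existing f(a') when a ~ a'
        return t;
    }

    void merge_congruent(int t) {
        auto key = std::make_pair(app[t].first, find(app[t].second));
        auto it = table.find(key);
        if (it == table.end()) table[key] = t;
        else merge(t, it->second);                    // congruence: same f, equal arguments
    }

    void merge(int a, int b) {
        a = find(a); b = find(b);
        if (a == b) return;
        parent[a] = b;
        // Naively re-propagate: after a union, applications may gain new congruences.
        for (int t = 0; t < (int)app.size(); ++t)
            if (app[t].second != -1) merge_congruent(t);
    }

    bool equal(int a, int b) { return find(a) == find(b); }
};
```

Efficient implementations use a use-list and a pending queue instead of the quadratic re-propagation loop, but the invariant checked is the same.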

CoLoR is a Coq library on rewriting theory and termination. It is intended to serve as a basis for certifying the output of automated termination provers like AProVE, MatchBox, TTT2, Torpa, TPA, etc. It contains libraries on:

Mathematical structures: relations, (ordered) semi-rings.

Data structures: lists, vectors, integer polynomials with multiple variables, finite multisets, matrices.

Term structures: strings, algebraic terms with symbols of fixed arity, algebraic terms with variadic symbols, simply-typed lambda-terms.

Transformation techniques: conversion from strings to algebraic terms, conversion from algebraic to variadic terms, argument filtering, rule elimination, dependency pairs, dependency graph decomposition.

Termination criteria: polynomial interpretations, multiset ordering, lexicographic ordering, first and higher order recursive path ordering, matrix interpretations.

Rainbow is a tool for automatically certifying termination proofs expressed in a given XML format. Termination proofs are translated and checked in Coq by using the CoLoR library.

Rainbow was the best certification back-end in the 2007 and 2008 editions of the international competition of certified termination provers.

Since then, we improved the efficiency of proof checking and extended Rainbow and CoLoR with syntactic first-order matching, the verification of loops in term and string rewrite systems (to certify non-termination), and semantic labelling.

We also started to formalize the termination of Haskell programs (internship of Julien Bureaux, ENS Paris, from June 1st to July 31), and to formalize Rainbow itself in order to certify it and improve the efficiency of proof checking by using the extraction mechanism of Coq to OCaml.

CoLoR and Rainbow are distributed under the CeCILL license on
http://

Moca is developed by Pierre Weis (INRIA Rocquencourt) and Frédéric Blanqui.

It is a generator of construction functions for OCaml data types with invariants.

Moca allows the high-level definition and automatic management of complex invariants for data types. In addition, Moca provides the automatic generation of maximally shared values, independently or in conjunction with the declared invariants.

A relational data type is a concrete data type that declares invariants or relations that are verified by its constructors. For each relational data type definition, Moca compiles a set of construction functions that implements the declared relations.

Moca supports two kinds of relations:

algebraic relations (such as associativity or commutativity of a binary constructor),

general rewrite rules that map some pattern of constructors and variables to an arbitrary user-defined expression.

Algebraic relations are primitive, so Moca ensures the correctness of their treatment. By contrast, the general rewrite rules are under the programmer's responsibility: the desired properties (including completeness, termination, and confluence of the resulting term rewriting system) must be established by the programmer before compilation.

Algebraic invariants are specified using keywords denoting equational theories such as commutativity and associativity. Moca generates construction functions that allow each equivalence class to be uniquely represented by its canonical value.
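As an illustration of the idea (written in C++ rather than OCaml, and not Moca's generated code), a construction function for an associative-commutative binary constructor can normalize every value at construction time, so that terms equal modulo the equations get a unique representation:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Toy expression type: variable leaves and one AC (associative-commutative) Plus node.
struct Expr {
    std::string var;                  // non-empty for a variable leaf
    std::vector<Expr> args;           // children of a Plus node (empty for leaves)
    bool operator==(const Expr& o) const { return var == o.var && args == o.args; }
    bool operator<(const Expr& o) const {
        return var != o.var ? var < o.var : args < o.args;
    }
};

Expr var(const std::string& name) { return Expr{name, {}}; }

// Construction function for Plus: flattens nested Plus nodes (associativity)
// and sorts the arguments (commutativity), yielding one canonical value per class.
Expr plus(const Expr& a, const Expr& b) {
    Expr e{"", {}};
    for (const Expr* x : {&a, &b}) {
        if (x->var.empty())           // nested Plus: splice its children in
            e.args.insert(e.args.end(), x->args.begin(), x->args.end());
        else
            e.args.push_back(*x);
    }
    std::sort(e.args.begin(), e.args.end());
    return e;
}
```

Because the invariant is enforced by the construction function, structural equality then coincides with equality modulo associativity and commutativity; Moca generates such functions automatically from the declared relations.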

Moca is distributed under QPL on
http://

The SimSoC simulator software has been developed. The ARM simulator has been completed to include the MMU and to simulate an ARM9 subsystem, including a simulation model for the PrimeCell interrupt controller.

A UART controller and a flash memory controller simulation model have also been implemented, the latter simulating some flash memory models from ST Microelectronics. Gathering the simulation models for the processor, the interrupt controller and other peripherals, a full system simulator has been developed for a specific System-on-Chip from ST Microelectronics. Embedded Linux, as distributed on the ST web site for this SoC, can now be run on that simulator. In this simulator, the simulated UART is connected to a window of the graphical user interface, so that users can log in on Linux using the simulated serial line.

An Ethernet controller simulator has been developed, allowing the connection of two simulated systems running a TCP/IP stack. A small network simulation layer was developed so that N simulated systems using the simulated Ethernet controller can connect together as if they were connected to a real network. It is therefore possible for several simulated systems, running on the same host machine or on multiple networked machines, to communicate with Ethernet frames over a simulated network. In addition, this simulated network can be connected to the real world, for example to ping 'inria.fr' from a simulated system.

A framework was added to the simulator so that simulated programs can be debugged from any debugger compliant with the GDB remote debugging protocol. This framework is mostly architecture-independent, the architecture-dependent parts being isolated in plug-ins. A complete implementation was developed for debugging ARM platforms.

An experiment has started to explore parallel simulation in the case of multi-core Systems-on-Chip. The idea explored is to parallelize the simulation of the processors while maintaining a serialized simulation of the devices in SystemC.

The simulation of the PowerPC and MIPS architectures has started. A complete ISS in interpreted mode has been developed for PowerPC, including the Memory Management Unit (MMU). A subset of the dual-core Freescale 82641D SoC has been simulated. This simulator can run U-Boot and Embedded Linux with a limited set of peripheral devices.

We recently started to work on the certification of our simulator SimSoC by formalizing in Coq the ARMv6 instruction set, its binary encoding/decoding, and its semantics, reusing Xavier Leroy's work on logical and arithmetic operations on 32-bit words for CompCert.

We described a modification of the Calculus of Inductive Constructions allowing the use of decision procedures in the computation mechanism. We then gave a new definition of the calculus, without most of the earlier restrictions, and proved its core logic in Coq. This development has been the basis of CoqMT, our new version of Coq. As a paradigmatic example, we developed the basic theory of dependent lists with CoqMT. Compared with the same development for non-dependent lists, very few modifications were necessary to carry out the proofs.

We have also started several generalisations of the previous work. Two are especially important: the ability to handle polymorphic first-order theories, and the extraction of equations from pattern matching.

We started to work on the certification of unsatisfiability proofs given as a set of regular input resolution proofs, as provided by the PicoSAT solver.

Boolean satisfiability (SAT) is the problem of determining whether a Boolean formula has a satisfying interpretation. Many real-world problems can be transformed into SAT problems, and many of these problem instances, arising for example in testing, formal verification, synthesis and various routing problems, can be effectively solved via satisfiability. We presented a novel, efficient SAT algorithm based on maxterm covering. The satisfiability of a clause set is determined by the number of relative maxterms of the empty clause with respect to the clause set: if the number of relative maxterms is zero, the clause set is unsatisfiable, otherwise it is satisfiable. A set of synergistic heuristic strategies is presented and elaborated. We conducted a number of experiments on 3-SAT problems at the phase-transition region of density 4.3, which have been cited as the hardest group of SAT problems. Our experimental results on public benchmarks attest that, by incorporating our proposed heuristic strategies, our enhanced algorithm can handle 3-SAT problems with 400 variables, running 3 to 40 times faster than zChaff on both satisfiable and unsatisfiable problems.
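The underlying satisfiability question can be illustrated with a deliberately naive sketch (ours, not the maxterm-covering algorithm, which is precisely designed to avoid this exponential enumeration): a clause set over n variables is unsatisfiable exactly when no total assignment satisfies all clauses.

```cpp
#include <cassert>
#include <cstdlib>
#include <vector>

// A literal is +v or -v (1-based variable v); a clause is a disjunction of literals.
using Clause = std::vector<int>;

// Count the total assignments over variables 1..n satisfying every clause.
// The clause set is unsatisfiable iff this count is zero.
long count_models(const std::vector<Clause>& clauses, int n) {
    long models = 0;
    for (long mask = 0; mask < (1L << n); ++mask) {   // each bit = truth value of one variable
        bool ok = true;
        for (const Clause& c : clauses) {
            bool sat = false;
            for (int lit : c) {
                bool val = (mask >> (std::abs(lit) - 1)) & 1;
                if ((lit > 0) == val) { sat = true; break; }
            }
            if (!sat) { ok = false; break; }          // this assignment falsifies clause c
        }
        if (ok) ++models;
    }
    return models;
}
```

A practical solver must decide the same question without touching the 2^n assignments; maxterm covering does so by reasoning on relative maxterms of the empty clause instead.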

Array theory is very useful in program verification. The most popular technique for deciding array theory is to reduce it to the theory of uninterpreted functions. Although this technique is widely used in SMT solvers, it has several drawbacks: in program verification, one often needs array theory with quantified indices, and enriching the theory of uninterpreted functions with quantifiers easily leads to undecidability. Recently, a new technique based on counter automata has allowed more general quantification in array theory. But its algorithm requires two reductions: one from array theory to the reachability problem of counter automata, the other from counter automata to Presburger arithmetic. In this project, we reduce array theory to weak monadic second-order logic with one successor (WS1S). We have developed an equi-satisfiable reduction and conducted preliminary experiments with Mona.

For the use of CoqMT (or SMT), we need decision procedures that are either certified or generate certificates. We studied the certification of decision procedures for the case of (integer or rational) linear arithmetic. For that purpose, we developed in Coq the theory related to the simplex method, a well-known linear optimization algorithm over linear constraints. This development [?] includes:

i) the definition and proof of basic properties of ordered rings and fields,

ii) the definition and proof of basic properties of polytopes (the weak Krein-Milman theorem),

iii) the correctness of the simplex algorithm steps, and

iv) the correctness and completeness of the halting conditions.

This work is based on the SSReflect tactic language and libraries developed in the Mathematical Components team of the Microsoft Research-INRIA Joint Centre. It is done in cooperation with Assia Mahboubi, from the TypiCal group at INRIA Saclay - Ile de France.

For solving large problems, decision procedures using complex optimizations and heuristics are necessary. Proving the correctness of such programs can be very tedious; moreover, the proofs have to be updated each time the decision procedure algorithm is modified. A workaround is to write a non-verified algorithm that generates a certificate at each run. These certificates must be small and easily verifiable.

This implementation is currently used in the CoqMT kernel to check conversion goals involving arithmetic constraints. We have also written a new Coq tactic for the resolution of arithmetic goals, from which we expect far better results than from the current Coq tactics.

This work is done in cooperation with Assia Mahboubi, from the TypiCal group at INRIA Saclay - Ile de France.

The Computability Path Ordering (CPO) of Blanqui, Jouannaud and Rubio is a well-founded order on algebraic lambda-terms aimed at proving the strong normalization of higher-order rewrite rules. CPO accepts weakly polymorphic algebraic signatures only. We are currently generalizing the well-foundedness proof of CPO to the more general case of fully polymorphic signatures, before considering dependently typed disciplines.

Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. Together with Y. Isogai, K. Kusakari and M. Sakai (Nagoya University), we proposed an extremely powerful method, the static dependency pair method, based on the notion of strong computability, to prove the termination of STRSs. In this work, we extend the method to HRSs. Since HRSs include lambda-abstraction but STRSs do not, we restructure the static dependency pair method to handle lambda-abstraction, and show that it also works well on HRSs without new restrictions.

First, we introduce a simplified version of SBT for the simply-typed lambda-calculus. Then, we give new proofs of the correctness of SBT using semantic labelling, both in the first-order and in the higher-order case. As a consequence, we show that SBT can be extended to systems using matching on defined symbols (e.g. associative functions).

In addition, we started to study how we could use this size information in order to check the correctness of upper bounds on the complexity of functions (internship of Antoine Taveneaux, ENS Lyon, from May 15 to July 31).

Together with Benjamin Monate (CEA), we are interested in proofs of properties of infinite families of specifications, like the family of dihedral groups of order n for some natural number n, or the family of multicore hardware with 2^n cores for some natural number n. So far, we have shown the decidability of confluence when these families can be presented by parameterized words over a finite alphabet of parameterized size, as in the example of dihedral groups.
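For concreteness, a standard parameterized presentation of the dihedral family can be written as a string rewrite system over the alphabet {r, s} with parameter n (the exact rules used in our work may differ):

```latex
\[
D_n \;=\; \langle\, r, s \mid r^{n} = 1,\; s^{2} = 1,\; (rs)^{2} = 1 \,\rangle,
\qquad
\text{oriented as}\quad
r^{n} \to \varepsilon,\quad
s^{2} \to \varepsilon,\quad
s\,r \to r^{\,n-1}\,s .
\]
```

Here the lefthand side r^n is a parameterized word, and confluence must be established uniformly in n rather than for each instance separately.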

Together with Femke van Raamsdonk from the Free University of Amsterdam, we have started a program to investigate decidable sufficient conditions for the confluence of higher-order rewrite systems whose lefthand sides are patterns à la Miller, which can be fired by using higher-order pattern matching. The difficulty lies in abstracting from a particular syntax for lambda-terms, such as de Bruijn numbers, locally nameless variables, or freshness conditions. We have not yet been able to prove our results in an axiomatic setting capturing all these syntaxes for binders.

Decreasing diagrams are a technique due to van Oostrom for proving confluence of abstract relations, which captures both styles of proofs, based respectively on strong and local confluence. Last year, van Oostrom and Jouannaud developed a refinement of this technique to handle relations defined by rewrite systems. We are continuing this work in order to remove some linearity restrictions, and plan to develop a Coq library to search for and certify confluence proofs.

To generate finite automata as contextual assumptions, the exact learning algorithm L* for finite automata is used. In the simplest setting, the system under verification is decomposed into two components, and an instance of the L* algorithm is deployed to find a proper contextual assumption to verify the system. In more realistic settings, the optimal decomposition may consist of several components. One could deploy several independent instances of the L* algorithm to find assumptions for these components; the naïve deployment, however, would disregard semantic information among components. In this project, we would like to incorporate such information into the instances of the L* algorithm. We have discussed ideas to coordinate the construction of contextual assumptions in each L* instance.
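The underlying two-component rule, in its standard form (our setting generalizes it to several components), is:

```latex
\[
\frac{M_1 \parallel A \models P \qquad\quad M_2 \models A}
     {M_1 \parallel M_2 \models P}
\]
```

where the contextual assumption A is the finite automaton that the L* instance learns through membership and equivalence queries: membership queries test candidate traces against M2, and a failed equivalence query yields a counterexample used to refine A.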

Contextual assumptions are required to apply assume-guarantee reasoning. Previously, assumptions were computed explicitly. It has been reported that explicit assume-guarantee reasoning is less efficient than explicit monolithic algorithms. To address this problem, we apply an exact learning algorithm for Boolean formulae to generate assumptions implicitly. For the invariant checking problem, our new algorithm derives initial predicates and transition relations represented implicitly by Boolean formulae. We have implemented a prototype. Preliminary experiments show that our algorithm is comparable to monolithic SAT-based algorithms on small cases.

Automated compositional reasoning using assume-guarantee rules plays a key role in the verification of large systems. A vexing problem is to discover a fine decomposition of the system that leads to appropriate assumptions. Together with William N. N. Hung and Xiaoyu Song, we present an automatic decomposition approach for compositional reasoning. The method is based on data mining algorithms. An association rule algorithm is harnessed to discover the hidden rules among system variables. A hypergraph partitioning algorithm incorporates these rules as weight constraints for clustering system variables. The experiments demonstrate that our strategy leads to an order-of-magnitude speedup over previous approaches.

The population protocol model has emerged as a new computation paradigm for describing mobile *ad hoc* networks that consist of a number of mobile nodes interacting with each other to carry out a computation. Correctness proofs of such protocols involve intricate arguments on infinite sequences of events. We formalize such proofs in the constructive framework given by Coq. Preliminary results on the leader election problem show that we gain interesting insights on the behaviour of such algorithms.
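As a hypothetical illustration of the model (a simulation sketch, not our Coq formalization), the classic leader election protocol lets a scheduler pick random pairs of agents; when two leaders interact, one is demoted to follower, so every fair execution stabilizes to a single leader:

```python
import random

def leader_election(n, seed=0):
    """Simulate the classic population protocol for leader election:
    every agent starts as a leader (True); whenever two leaders
    interact, the second is demoted to follower (False).  Under a
    fair scheduler the population converges to exactly one leader."""
    agents = [True] * n
    rng = random.Random(seed)
    while sum(agents) > 1:
        i, j = rng.sample(range(n), 2)  # scheduler picks an interacting pair
        if agents[i] and agents[j]:
            agents[j] = False           # transition (L, L) -> (L, F)
    return sum(agents)
```

The correctness arguments formalized in Coq concern exactly such infinite interaction sequences: one must show that under every fair scheduling, the count of leaders eventually reaches and stays at one.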

This work is done in cooperation with Deng Yuxin, from the BASICS group at Shanghai Jiaotong University.

The goal of this project contracted with Schneider Electric China is to develop a full system simulator for a System-On-Chip used by Schneider Electric in their automation product line.

The goal of this project is to complete the PowerPC simulator and compare its performance with another simulator used internally by Orange IT Labs.

SIVES is a French-Chinese ANR project for 2009-2011 between INRIA FORMES, Verimag, ST Microelectronics, Tsinghua University and Beihang University on the development of a “SImulation and Verification based platform for Embedded Systems” (coordinated by Frédéric Blanqui).

FORMES is part of the Sino-French Laboratory for Computer Science, Automation and Applied Mathematics (LIAMA)

FORMES co-organized the 1st Workshop on Simulation Based Development of Certified Embedded Systems

FORMES organized the 1st Asian-Pacific Summer School on Formal Methods

Ming Gu and Vania Joloboff served on the scientific committee and provided local organization for the 2009 ARTIST Summer School in China.

Last but not least, FORMES organizes a weekly seminar which has proved to be a major local forum in the area of formal methods, with a steady participation of colleagues who come from the other nearby research institutions, CASIA, ISCAS and Peking University, to attend the presentations. All seminars are announced on our website, as well as the other relevant local seminars or events, in particular those taking place at ISCAS.

FORMES is part of the working group LTP on Languages, Types and Proofs of the GDR GPL

FORMES is part of the working group LAC on Logic, Algebra and Calculus of the GDR IM

Jean-François Monin visited Yuxi Fu (Jiaotong University, Shanghai) from October 11 to October 16, and from December 14 to December 18.

Vania Joloboff was invited to give presentations at the Shanghai Jiaotong University, the Hunan University and the National University of Defense Technology (Changsha).

Jean-Pierre Jouannaud gave a seminar presentation at École Polytechnique, in June 2009, on his work on confluence with van Oostrom.

Jean-Pierre Jouannaud gave an invited presentation at the 10th Anniversary of the France-Taiwan Scientific Prize, September 2009, NTU, Taipei, Taiwan.

Jean-Pierre Jouannaud was guest speaker in the Distinguished lecture series of Academia Sinica, October 2009, Taipei, Taiwan.

Jean-Pierre Jouannaud was keynote speaker at the IEEE Open Source Software Conference, October 2009, Guiyang, China.

Vania Joloboff and Jean-Pierre Jouannaud gave an invited presentation at the Microsoft Research Asia Verified Software Workshop, October 2009, Beijing, China.

Frédéric Blanqui, Jean-Pierre Jouannaud and Vania Joloboff gave an invited presentation at the workshop SBDCES, October 2009, Osaka, Japan.

Jean-Pierre Jouannaud and Pierre-Yves Strub gave a seminar presentation at USTC, November 2009, Suzhou, China.

Jean-Pierre Jouannaud is a member of the editorial board of IJSI.

Frédéric Blanqui was PC member of the 4th International Workshop on Logical Frameworks and Meta-languages: Theory and Practice (LFMTP'09), the 10th International Workshop on Termination (WST'09) and the 1st Coq Workshop (Coq'09).

Jean-Pierre Jouannaud was PC member of Developments in Computational Models 2009: Computational Models From Nature, ICALP workshop, July 11, Rhodes, Greece.

Jean-Pierre Jouannaud is PC Chair of LICS 2010 in Edinburgh, UK.

Frédéric Blanqui supervised the 2-months internship of Julien Bureaux (ENS Paris) on the certification in Coq of the termination of Haskell programs.

Frédéric Blanqui supervised the 2-months internship of Antoine Taveneaux (ENS Lyon) on the automated verification of complexity bounds using sized types.

Frédéric Blanqui supervised the internships of Lianyi Zhang and Qian Wang on the certification of loops in rewrite systems.

Vania Joloboff supervised the 4-month internships of Pascal Combier, Patrice Seng and Ren Ligang, as well as the one-year internships of Bin Liu, Yuning Pang, Ming Liu and Bing Zhou.

Jean-Pierre Jouannaud participated in January in the selection of Chinese candidates for the postdoc program of the "Fondation Franco-Chinoise pour la Science et ses Applications".

Frédéric Blanqui, Jean-François Monin and Pierre-Yves Strub taught classes at the 1st Asian-Pacific Summer School on Formal Methods.

Frédéric Blanqui gave lectures on “System F *à la* Curry” at Tsinghua University.

Jean-Pierre Jouannaud gave lectures at Tsinghua University, at both the graduate and undergraduate levels.

Frédéric Blanqui and Pierre-Yves Strub gave lectures on Coq at Tsinghua University (graduate class).

Jean-François Monin participated in the oral examinations and selection of Chinese student applicants to the Polytech'Network.

Cody Roux, PhD student at INRIA Nancy in the Pareo team, supervised by Claude Kirchner (INRIA Bordeaux), Gilles Dowek (INRIA Saclay, École Polytechnique) and Frédéric Blanqui, visited FORMES from June 15 to July 5, and from November 18 to December 16, to work on his thesis with Frédéric Blanqui.

Pierre-Louis Curien (CNRS), leader of the INRIA πr² project-team, visited FORMES from August 18 to October 11 and gave lectures on “Proof theory: sequent calculus and focalisation”.

Jean-Jacques Lévy (INRIA), director of the MSR-INRIA Joint Center, visited FORMES from October 13 to December 5, gave lectures on “Caml, OCaml and Jocaml” and a talk at ISCAS on “Three years of research at the MSR-INRIA Joint Centre”. His visit was partially supported by ISCAS.

Kokichi Futatsugi (JAIST) gave a talk on April 3 on “Combining Inference and Search in Verification with CafeOBJ”.

Gilles Dowek (École Polytechnique) gave a talk on April 17 on “Polarized Resolution Modulo”.

Yoshiki Kinoshita (AIST) gave a talk on April 24 on “Introduction to Agda language and system”.

Christian Urban (TU Muenchen) gave a talk on May 22 on “Nominal Techniques in the Theorem Prover Isabelle or, How Not to be Intimidated by the Variable Convention”.

Hubert Comon-Lundh (INRIA & AIST) gave a talk on May 26 on “Models for security protocols”.

Hugo Herbelin (INRIA) gave a talk on June 16 on “Coq: current development issues”.

Gilles Barthe (IMDEA Software) gave a talk on September 1st on the “Certification of code-based cryptographic proofs”.

Laurent Fribourg (CNRS & ENS Cachan) gave a talk on October 12 on “Detecting Race Condition in Concurrent Systems with the Inverse Method”.

Joseph Sifakis (CNRS & INRIA-Schneider), Turing Award 2007, gave a talk on October 26 on “Embedded Systems Design: Scientific Challenges and Work Directions”, and on October 27 on “Component-based Construction of Heterogeneous Real-time Systems in BIP”.

T. John Koo, Director of the Center for Embedded Software Systems, Shenzhen Institute of Advanced Technology (SIAT), Chinese Academy of Sciences, gave a talk on November 13 on “Model-based tool-chain for the design and analysis of embedded hybrid systems”.