The general objective of the Toccata project is to promote formal specification and computer-assisted proof in the development of software that requires high assurance of safety and of correctness with respect to its intended behavior.

The importance of software in critical systems has increased considerably over the last decade. Critical software appears in various application domains such as transportation (e.g., aviation, railway), communication (e.g., smartphones), banking, etc. The number of tasks performed by software is growing quickly, together with the number of lines of code involved. Given the need for high assurance of safety in the functional behavior of such applications, the development of automated (i.e., computer-assisted) methods and techniques to provide guarantees of safety has become a major challenge. In the past and at present, the most widely used approach to checking the safety of software is to run extensive test campaigns. These campaigns account for a large part of the cost of software development, yet they cannot ensure that all bugs are caught.

Generally speaking, software verification approaches pursue three goals: (1) verification should be sound, in the sense that no bugs should be missed; (2) verification should produce no false alarms, or as few as possible; (3) it should be as automated as possible. Reaching all three goals at the same time is a challenge. A large class of approaches emphasizes goals (2) and (3): testing, run-time verification, symbolic execution, model checking, etc. Static analysis, such as abstract interpretation, emphasizes goals (1) and (3). Deductive verification emphasizes (1) and (2). The Toccata project is mainly interested in exploring the deductive verification approach, although we also consider the others in some cases.

In the past decade, significant progress has been made in the domain of deductive program verification. It is evidenced by several success stories of applying these techniques to industrial-scale software. For example, the *Atelier B* system was used to develop part of the embedded software of the Paris metro line 14 and other railroad-related systems; a formally proved C compiler was developed using the Coq proof assistant; Microsoft's hypervisor for highly secure virtualization was verified using VCC and the Z3 prover; the L4.verified project developed a formally verified micro-kernel with high security guarantees, using analysis tools on top of the Isabelle/HOL proof assistant. Another sign of recent progress is the emergence of deductive verification competitions (e.g., VerifyThis, VScomp).

Finally, a recent trend in the industrial practice of developing critical software is to require more and more guarantees of safety: e.g., the upcoming DO-178C standard for avionics software adds to the former DO-178B the use of formal models and formal methods. It also emphasizes the need for certification of the analysis tools involved in the process.

There are two main families of approaches to deductive verification. Methods in the first family build on top of mathematical proof assistants (e.g., Coq, Isabelle) in which both the model and the program are encoded; the proof that the program meets its specification is typically conducted in an interactive way using the underlying proof construction engine. Methods from the second family proceed by designing standalone tools that take as input a program in a particular programming language (e.g., C, Java) specified with a dedicated annotation language (e.g., ACSL, JML) and automatically produce a set of mathematical formulas (the *verification conditions*), which are typically discharged using automatic provers (e.g., Z3, Alt-Ergo, CVC3, CVC4).

The first family of approaches usually offers a higher level of assurance than the second, but also demands more work to perform the proofs (because of their interactive nature), which makes it harder to adopt in industry. Moreover, these methods do not allow one to directly analyze a program written in a mainstream programming language like Java or C. The second family of approaches has benefited in the past years from the tremendous progress made in SAT and SMT solving techniques, allowing a greater impact on industrial practices, but it suffers from a lower level of trust: in every part of the proof chain (the model of the input programming language, the VC generator, the back-end automatic prover), potential errors may appear, compromising the guarantees offered. Moreover, while these approaches are applied to mainstream languages, they usually support only a subset of their features.

In the former ProVal project, we worked on the design of methods and tools for deductive verification of programs. One of our distinctive skills was the ability to conduct proofs using automatic provers and proof assistants at the same time, depending on the difficulty of the program, and specifically on the difficulty of each particular verification condition. We thus believe that we are in a good position to propose a bridge between the two families of approaches to deductive verification presented above. Establishing this bridge is one of the goals of the Toccata project: we want to provide methods and tools for deductive program verification that offer both a high degree of proof automation and a high guarantee of validity. Toward this objective, a new axis of research was proposed: the development of *certified* analysis tools, i.e., tools that are themselves formally proved correct.

The reader should be aware that in this scientific programme the word “certified” means “verified by a formal specification and a formal proof that the program meets this specification”. This differs from the standard meaning of “certified” in an industrial context, where it denotes conformance to some rigorous process and/or norm. We believe this is the right term to use, as it was used for the *Certified Compiler* project, the conference series *Certified Programs and Proofs*, and more generally the important topic of *proof certificates*.

In industrial applications, numerical calculations are very common (e.g., control software in transportation); they typically involve floating-point numbers. Some of the members of Toccata have an internationally recognized expertise in deductive verification of programs involving floating-point computations. Our past work includes a new approach for proving behavioral properties of numerical C programs using Frama-C/Jessie, various applications of that approach, the use of the Gappa solver for proving numerical algorithms, and an approach taking architectures and compilers into account when dealing with floating-point programs. We also contributed to the Handbook of Floating-Point Arithmetic. A representative case study is the analysis and the proof of both the method error and the rounding error of a numerical analysis program solving the one-dimensional acoustic wave equation. Our experience led us to the conclusion that verification of numerical programs can benefit a lot from combining automatic and interactive theorem proving. Certification of numerical programs is the other main axis of Toccata.

Our scientific programme is structured into four objectives:

deductive program verification;

automated reasoning;

formalization and certification of languages, tools and systems;

proof of numerical programs.

We detail these objectives below.

Permanent researchers: A. Charguéraud, S. Conchon, J.-C. Filliâtre, C. Marché, G. Melquiond, A. Paskevich

This ecosystem is central in our work; it is displayed in Figure . The boxes with a red background correspond to the tools we develop in the Toccata team.

The initial design of Why3 was presented in 2012. In the past years, the main improvements concern the specification language (such as support for higher-order logic functions) and the support for provers. Several new interactive provers are now supported: PVS 6 (used at NASA), Isabelle2014 (planned to be used in the context of Ada programs via SPARK), and Mathematica. We also added support for new automated provers: CVC4, MetiTarski, Metis, Beagle, Princess, and Yices2. More technical improvements are the design of a Coq tactic to call provers via Why3 from Coq, and the design of a proof session mechanism. Why3 was presented during several invited talks.

At the level of the C front-end of Why3 (via Frama-C), we have proposed an approach to add a notion of refinement on C programs, and an approach to reason about pointer programs with a standard logic, via *separation predicates*.
The Ada front-end of Why3 has mainly been developed during the past three
years, leading to the release of SPARK2014 (http://

In collaboration with J. Almeida, M. Barbosa, J. Pinto, and B. Vieira (Universidade do Minho, Braga, Portugal), J.-C. Filliâtre has developed a method for certifying programs involving cryptographic primitives. It uses Why as an intermediate language.

With M. Pereira and S. Melo de Sousa (Universidade da Beira Interior, Covilhã, Portugal), J.-C. Filliâtre has developed an environment for proving ARM assembly code. It uses Why3 as an intermediate VC generator. It was presented at the Inforum conference (best student paper).

S. Conchon and A. Mebsout, in collaboration with F. Zaïdi (VALS team, LRI), A. Goel, and S. Krstić (Strategic CAD Labs, Intel), have proposed a new model-checking approach for verifying safety properties of array-based systems. This is a syntactically restricted class of parametrized transition systems whose states are represented as arrays indexed by an arbitrary number of processes. Cache coherence protocols and mutual exclusion algorithms are typical examples of such systems. The approach was first presented at CAV 2012 and detailed further afterwards. It was applied to the verification of programs with fences. The core algorithm has been extended with a mechanism for inferring invariants. This new algorithm, called BRAB, is able to automatically infer invariants strong enough to prove industrial cache coherence protocols. BRAB computes over-approximations of backward reachable states that are checked to be unreachable in a finite instance of the system. These approximations (candidate invariants) are then model-checked together with the original safety properties. Completeness of the approach is ensured by a mechanism for backtracking on spurious traces introduced by too-coarse approximations.
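The core loop of backward reachability analysis can be sketched on a finite, explicit-state system (Cubicle and BRAB operate symbolically on parameterized, infinite-state systems; the state space, `pre` function, and example below are invented purely for illustration):

```python
# Toy backward reachability: starting from the unsafe states, repeatedly
# apply predecessors until a fixpoint; the system is safe iff no initial
# state is ever reached. BRAB additionally over-approximates this set and
# backtracks on spurious traces, which this sketch omits.

def backward_reachable(unsafe, pre, initial):
    """pre(s) returns the set of predecessor states of s."""
    frontier, visited = set(unsafe), set(unsafe)
    while frontier:
        if frontier & initial:
            return False          # an initial state can reach unsafe: bug
        frontier = {p for s in frontier for p in pre(s)} - visited
        visited |= frontier
    return True                   # fixpoint reached without hitting init

# A 4-state example: 0 -> 1 -> 2; state 3 is unreachable.
succ = {0: {1}, 1: {2}, 2: set(), 3: set()}
pre = lambda s: {p for p, ss in succ.items() if s in ss}
assert backward_reachable(unsafe={3}, pre=pre, initial={0})      # safe
assert not backward_reachable(unsafe={2}, pre=pre, initial={0})  # bug found
```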

In the context of the ERC DeepSea project

To provide an easy access to the case studies that we develop using
Why3 and its front-ends, we have published a *gallery of verified
programs* on our web page
http://

Other case studies that led to publications are the design of a library of data structures based on AVL trees, and the verification of a two-line C program (solving the

A. Charguéraud, with F. Pottier (Inria Paris), extended their formalization of the correctness and asymptotic complexity of the classic union-find data structure, which features a bound expressed in terms of the inverse Ackermann function. The proof, conducted using CFML extended with time credits, was refined using a slightly more complex potential function, making it possible to derive a simpler and richer interface for the data structure.
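For reference, the data structure itself is the textbook union-find with union by rank and path compression (sketched below in Python; the verified development is carried out in CFML on OCaml code):

```python
# Union-find with union by rank and path compression; the amortized cost
# per operation is O(alpha(n)), alpha being the inverse Ackermann
# function whose formal complexity proof is discussed above.

class UnionFind:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        if self.parent[x] != x:
            self.parent[x] = self.find(self.parent[x])  # path compression
        return self.parent[x]

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx            # attach lower-rank root below
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1

uf = UnionFind(5)
uf.union(0, 1); uf.union(3, 4)
assert uf.find(0) == uf.find(1)
assert uf.find(0) != uf.find(3)
```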

For other case studies, see also the sections on numerical programs and on the formalization of languages and tools.

Several research groups in the world develop their own approaches, techniques, and tools for deductive verification. With respect to these related approaches and tools, our originality is our willingness to use more sophisticated specification languages (with inductive definitions, higher-order features, and the like) and the ability to use a large set of theorem provers, including interactive theorem proving to deal with complex functional properties.

The RiSE
team

The KeY project

The “software engineering” group at Augsburg, Germany,
develops the KIV
system

The VeriFast
system

The Mobius Program Verification
Environment

The Lab for Automated Reasoning and Analysis

The TLA environment

The F* project

The KeY and KIV environments mentioned above are partly based on interactive theorem provers. There are other approaches built on top of general-purpose proof assistants for proving programs that are not purely functional:

The Ynot project

Front-ends to Isabelle were developed to deal with simple sequential imperative programs or C programs. The L4.verified project is built on top of Isabelle.

Permanent researchers: S. Conchon, G. Melquiond, A. Paskevich

J. C. Blanchette and A. Paskevich have designed an extension of the TPTP TFF (Typed First-order Form) format of theorem-proving problems to support rank-1 polymorphic types (also known as ML-style parametric polymorphism). This extension, named TFF1, has been incorporated into the TPTP standard.

S. Conchon defended his *habilitation à diriger des recherches* in December 2012. The memoir provides a useful survey of the scientific work of the past 10 years around SMT solving techniques that led to the tools Alt-Ergo and Cubicle as they are today.

C. Dross, J. Kanig, S. Conchon, and A. Paskevich have proposed a generic framework for adding a decision procedure for a theory, or a combination of theories, to an SMT prover. This mechanism is based on the notion of instantiation patterns, or *triggers*, which restrict the instantiation of universal premises and can effectively prevent a combinatorial explosion. A user provides an axiomatization with triggers, along with a proof of completeness and termination in the proposed framework, and obtains in return a sound, complete, and terminating solver for this theory. A prototype implementation was realized on top of Alt-Ergo. As a case study, a feature-rich axiomatization of doubly-linked lists was proved complete and terminating. C. Dross defended her PhD thesis in April 2014. The main results of the thesis are: (1) a formal semantics of the notion of *triggers* typically used to control quantifier instantiation in SMT solvers, (2) a general setting showing how a first-order axiomatization with triggers can be proved correct, complete, and terminating, and (3) an extended DPLL(T) algorithm that integrates a first-order axiomatization with triggers as a decision procedure for the theory it defines. Significant case studies were conducted on examples coming from SPARK programs and on the benchmarks on B set theory constructed within the BWare project.
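The effect of a trigger can be sketched on a toy example (the encoding, names, and axiom below are invented for illustration and are far simpler than the DPLL(T) integration described above): a universal axiom is instantiated only when a ground term matching its trigger appears, which keeps instantiation finite.

```python
# The axiom  forall x, l. len(cons(x, l)) = 1 + len(l)  carries the
# trigger len(cons(_, _)), so it fires only on ground terms of that
# shape. A real solver would also feed produced facts back into the
# loop; this single-pass sketch omits that.

def saturate(ground_terms, axioms):
    """axioms: list of (match, instantiate); match returns a
    substitution (dict) or None; instantiate builds the instance."""
    facts = set()
    for t in ground_terms:
        for match, instantiate in axioms:
            subst = match(t)
            if subst is not None:
                facts.add(instantiate(subst))
    return facts

# Terms are nested tuples, e.g. ('len', ('cons', 'a', 'nil')).
def match_len_cons(t):
    if isinstance(t, tuple) and t[0] == 'len' \
            and isinstance(t[1], tuple) and t[1][0] == 'cons':
        return {'x': t[1][1], 'l': t[1][2]}
    return None

def inst_len_cons(s):
    return f"len(cons({s['x']}, {s['l']})) = 1 + len({s['l']})"

axioms = [(match_len_cons, inst_len_cons)]
terms = [('len', ('cons', 'a', 'nil')), ('cons', 'b', 'nil')]
assert saturate(terms, axioms) == {"len(cons(a, nil)) = 1 + len(nil)"}
```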

S. Conchon, É. Contejean, and M. Iguernelala have presented a modular extension of ground AC-completion for deciding formulas in the combination of the theory of equality with user-defined AC symbols, uninterpreted symbols, and an arbitrary signature-disjoint Shostak theory X. This work extends previous results by showing that a simple preprocessing step makes it possible to dispense with a full AC-compatible reduction ordering, and to simply use a partial multiset extension of an ordering that is *not necessarily AC-compatible*.

S. Conchon, M. Iguernelala, and A. Mebsout have designed a collaborative framework for reasoning modulo simple properties of non-linear arithmetic. This framework has been implemented in the Alt-Ergo SMT solver.

S. Conchon, G. Melquiond, and C. Roux have described a dedicated decision procedure for a theory of floating-point numbers which allows reasoning about approximation errors. This procedure is based on the approach of the Gappa tool: it saturates consequences of the axioms in order to refine bounds on expressions. In addition to the original approach, bounds are further refined by a constraint solver for linear arithmetic. This procedure has been implemented in Alt-Ergo.

In collaboration with A. Mahboubi (Inria project-team Typical) and G. Melquiond, the group involved in the development of Alt-Ergo has implemented and proved the correctness of a novel decision procedure for quantifier-free linear integer arithmetic. This algorithm tries to bridge the gap between projection and branching/cutting methods: it interleaves an exhaustive search for a model with bounds inference. These bounds are computed by an oracle capable of finding constant positive linear combinations of affine forms. An efficient oracle based on the Simplex procedure has been designed. The algorithm is proved sound, complete, and terminating, and it is implemented in Alt-Ergo.

Most of the results above are detailed in M. Iguernelala's PhD thesis.

We have been quite successful in applying Alt-Ergo to industrial development: qualification by Airbus France, and integration of Alt-Ergo into the SPARK Pro toolset.

In the context of the BWare project, which aims at using Why3 and Alt-Ergo for discharging proof obligations generated by Atelier B, we made progress in several directions. The method for translating B proof obligations into Why3 goals was first presented at ABZ'2012. New drivers were then designed for Why3, in order to use the new back-end provers Zenon Modulo and iProver Modulo. A notion of rewrite rule was introduced into Why3, and a transformation for simplifying goals before sending them to back-end provers was designed. The intermediate results obtained so far in the project were presented both at the French conference AFADL and at ABZ'2014.

On the side of Alt-Ergo, recent developments have been made to efficiently discharge the proof obligations generated by Atelier B. This includes a new plugin architecture to facilitate experiments with different SAT engines, new heuristics to handle quantified formulas, and important modifications of the internal data structures to boost the performance of the core decision procedures. Benchmarks run on more than 10,000 proof obligations generated from industrial B projects show significant improvements.

Hybrid automata interleave continuous behaviors (described by differential equations) with discrete transitions. D. Ishii and G. Melquiond have worked on an automated procedure for verifying safety properties (that is, global invariants) of such systems.

Automated Theorem Proving is a large community, but several sub-groups can be identified:

The SMT-LIB community gathers people interested in reasoning
modulo theories. In this community, only a minority of participants are
interested in supporting first-order quantifiers at the same time as
theories. SMT solvers that support quantifiers are Z3 (Microsoft
Research Redmond, USA), CVC3 and its successor CVC4

The TPTP community gathers people interested in first-order theorem proving.

Other Inria teams develop provers: veriT by team Veridis, and Psyche by team Parsifal.

Other groups develop provers dedicated to very specific cases, such as MetiTarski (*cf.* objective 4).

Note that a large number of the provers mentioned above are connected to Why3 as back-ends.

Permanent researchers: S. Boldo, A. Charguéraud, C. Marché, G. Melquiond, C. Paulin

S. Boldo, C. Lelay, and G. Melquiond have worked on the Coquelicot library, designed to be a user-friendly Coq library for real analysis. An easier way of writing formulas and theorem statements is achieved by relying on total functions in place of dependent types for limits, derivatives, integrals, power series, and so on. To help with the proof process, the library comes with a comprehensive set of theorems and some automation. We have exercised the library on several use cases: an exam at university entry level, the definitions and properties of Bessel functions, and the solution of the one-dimensional wave equation. We have also conducted a survey on the formalization of real arithmetic and real analysis in various proof systems.

Watermarking techniques are used to help identify copies of publicly released information. They consist in applying a slight and secret modification to the data before its release, in a way that should remain recognizable even in (reasonably) modified copies of the data. Using the Coq Alea library, which formalizes probability theory and probabilistic programs, D. Baelde, together with P. Courtieu, D. Gross-Amblard from Rennes, and C. Paulin, has established new results about the robustness of watermarking schemes against arbitrary attackers. The technique for proving robustness is adapted from methods commonly used for cryptographic protocols, and this work illustrates the strengths and particularities of the Alea style of reasoning about probabilistic programs.

P. Herms, together with C. Marché and B. Monate (CEA List), has developed a certified VC generator using Coq. The program for the VC calculus and its specification are both written in Coq, but the code is crafted so that it can be extracted automatically into a stand-alone executable. It is also designed in a way that allows arbitrary first-order theorem provers to be used to discharge the generated obligations. On top of this generic VC generator, P. Herms developed a certified VC generator for C source code annotated using ACSL. This work is the main result of his PhD thesis.

A. Tafat and C. Marché have developed a certified VC generator using Why3. The challenge was to formalize the operational semantics of an imperative language, and a corresponding weakest-precondition calculus, without using advanced Coq features such as dependent types or higher-order functions. The classical issues with local bindings, names, and substitutions were solved by identifying appropriate lemmas. It was shown that Why3 can offer a significantly higher degree of proof automation compared to Coq.
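The weakest-precondition transformer at the heart of such VC generators can be sketched for a toy imperative language (the encoding below is invented for illustration and uses naive textual substitution, which a real, certified calculus of course avoids):

```python
# Toy weakest-precondition calculus. Statements are tuples:
# ('skip',), ('assign', x, e), ('seq', s1, s2), ('if', c, s1, s2).
# Formulas and expressions are strings; substitution is textual, so
# variable names must not overlap (real engines work on ASTs).

def wp(stmt, post):
    kind = stmt[0]
    if kind == 'skip':                    # wp(skip, Q) = Q
        return post
    if kind == 'assign':                  # wp(x := e, Q) = Q[x <- e]
        _, x, e = stmt
        return post.replace(x, f"({e})")  # naive textual substitution!
    if kind == 'seq':                     # wp(s1; s2, Q) = wp(s1, wp(s2, Q))
        _, s1, s2 = stmt
        return wp(s1, wp(s2, post))
    if kind == 'if':                      # (c -> wp(s1,Q)) /\ (~c -> wp(s2,Q))
        _, c, s1, s2 = stmt
        return f"(({c}) -> {wp(s1, post)}) /\\ (~({c}) -> {wp(s2, post)})"
    raise ValueError(f"unknown statement {kind}")

prog = ('seq', ('assign', 'x', 'x + 1'), ('assign', 'y', 'x * 2'))
# Verification condition ensuring y > 0 after running prog:
assert wp(prog, 'y > 0') == '((x + 1) * 2) > 0'
```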

A. Charguéraud, together with Alan Schmitt (Inria Rennes) and Thomas Wood (Imperial College), has developed an interactive debugger for JavaScript. The interface, accessible as a webpage in a browser, allows one to execute a given JavaScript program, following step by step the formal specification of JavaScript developed in prior work on *JsCert*. Concretely, the tool acts as a double debugger: one can visualize both the state of the interpreted program and the state of the interpreter itself. This tool is intended for the JavaScript committee, VM developers, and other experts in JavaScript semantics.

M. Clochard, C. Marché, and A. Paskevich have developed a general setting for developing programs involving binders, using Why3. This approach was successfully validated on two case studies: a verified implementation of the untyped lambda-calculus and a verified tableaux-based theorem prover.

M. Clochard, J.-C. Filliâtre, C. Marché, and A. Paskevich have developed a case study on the formalization of the semantics of programming languages using Why3. This case study aims at illustrating recent improvements of Why3 regarding the support for higher-order logic features in its input logic, and how these are encoded into first-order logic so that goals can be discharged by automated provers. It also illustrates how reasoning by induction can be done without interactive proofs, via the use of *lemma functions*.

M. Clochard and L. Gondelman have developed a formalization of a simple compiler in Why3. It compiles a simple imperative language into assembly instructions for a stack machine. This case study was inspired by a similar example developed using Coq and interactive theorem proving; the aim is to significantly improve the degree of automation in the proofs. This is achieved by formalizing a Hoare logic and a weakest-precondition calculus on assembly programs, so that the correctness of compilation is expressed as a formal specification of the generated assembly instructions.
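The compilation scheme can be sketched on arithmetic expressions (a drastically simplified, hypothetical rendition of the case study, which is actually carried out and proved once and for all in Why3; here the correctness statement is only checked on examples):

```python
# Compile arithmetic expressions to a stack machine; correctness means
# run(compile_expr(e)) == eval_expr(e) for every expression e.

def compile_expr(e):
    if isinstance(e, int):
        return [('push', e)]
    op, l, r = e                # e.g. ('add', 1, ('mul', 2, 3))
    return compile_expr(l) + compile_expr(r) + [(op,)]

def run(code):
    stack = []
    for instr in code:
        if instr[0] == 'push':
            stack.append(instr[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b if instr[0] == 'add' else a * b)
    return stack[-1]

def eval_expr(e):
    if isinstance(e, int):
        return e
    op, l, r = e
    va, vb = eval_expr(l), eval_expr(r)
    return va + vb if op == 'add' else va * vb

e = ('add', 1, ('mul', 2, 3))
assert run(compile_expr(e)) == eval_expr(e) == 7
```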

The objective of formalizing languages and algorithms is very general, and it is pursued by several Inria teams. One common trait is the use of the Coq proof assistant for this purpose: Pi.r2 (development of Coq itself and its meta-theory), Gallium (semantics and compilers of programming languages), Marelle (formalization of mathematics), SpecFun (real arithmetic), Celtique (formalization of static analyzers).

Other environments for the formalization of languages include

ACL2
system

Isabelle
environment

The team “Trustworthy Systems” at NICTA in
Australia

The PVS system

In the Toccata team, we do not see these alternative environments as competitors, even though, for historical reasons, we mainly use Coq. Indeed, both Isabelle and PVS are available as back-ends of Why3.

Permanent researchers: S. Boldo, C. Marché, G. Melquiond

Linked with objective 1 (Deductive Program Verification), the methodology for proving numerical C programs was presented by S. Boldo in her habilitation and as an invited speaker. An application is the formal verification of a numerical analysis program: S. Boldo, J.-C. Filliâtre, and G. Melquiond, with F. Clément and P. Weis (POMDAPI team, Inria Paris - Rocquencourt) and M. Mayero (LIPN), completed the formal proof of the second-order centered finite-difference scheme for the one-dimensional acoustic wave equation.

Several challenging floating-point algorithms have been studied and proved. One is an algorithm by Kahan for computing the area of a triangle: S. Boldo proved an improved error bound and investigated the case of underflow. Another concerns quaternions: they should have norm 1, but due to round-off errors, a drift of this norm is observed over time; C. Marché determined a bound on this drift and formally proved it correct. Finally, P. Roux formally verified an algorithm for checking that a matrix is positive semi-definite. The challenge here is that testing semi-definiteness involves algebraic number computations, yet it needs to be implemented using only approximate floating-point operations.
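For reference, Kahan's algorithm itself is short; the difficulty lies entirely in the error-bound proof. A sketch (the sorting and the exact parenthesization are the essential points):

```python
import math

# Kahan's algorithm for the area of a triangle with sides a >= b >= c.
# The sorting and the parenthesization below are what keep the
# floating-point error small, even for needle-like triangles where the
# textbook Heron formula loses most of its accuracy.
def triangle_area(a, b, c):
    a, b, c = sorted((a, b, c), reverse=True)   # ensure a >= b >= c
    if c - (a - b) < 0:
        raise ValueError("not a valid triangle")
    return math.sqrt((a + (b + c)) * (c - (a - b))
                     * (c + (a - b)) * (a + (b - c))) / 4

# 3-4-5 right triangle: every intermediate value is exact here.
assert triangle_area(3.0, 4.0, 5.0) == 6.0
```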

Because of compiler optimizations (or bugs), the floating-point semantics of a program might change once compiled, thus invalidating any property proved on the source code. We have investigated two ways to circumvent this issue, depending on whether the compiler is a black box. When it is, T. Nguyen has proposed to analyze the assembly code it generates and to verify that it is correct. When it is not, S. Boldo and G. Melquiond (in collaboration with J.-H. Jourdan and X. Leroy) have added support for floating-point arithmetic to the CompCert compiler and formally proved that none of the transformations the compiler applies modify the floating-point semantics of the program.

Linked with objectives 2 (Automated Reasoning) and 3 (Formalization and Certification of Languages, Tools, and Systems), G. Melquiond has implemented an efficient Coq library for floating-point arithmetic and proved its correctness in terms of operations on real numbers. It serves as a basis for an interval arithmetic on which Taylor models have been formalized. É. Martin-Dorel and G. Melquiond have integrated these models into CoqInterval. This Coq library is dedicated to automatically proving the approximation properties that occur when formally verifying the implementation of mathematical libraries (libm).
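The principle of interval arithmetic underlying this library can be sketched in a few lines (without the directed rounding and formal guarantees that CoqInterval provides; this naive version rounds bounds to nearest and so does not always yield a sound enclosure):

```python
# Naive interval arithmetic, for illustration only: every real operation
# on values inside the operand intervals yields a value inside the
# result interval (ignoring rounding of the bounds themselves).

class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __contains__(self, x):
        return self.lo <= x <= self.hi

x, y = Interval(1.0, 2.0), Interval(-1.0, 3.0)
assert 2.5 in x + y        # x + y = [0, 5]
assert -2.0 in x * y       # x * y = [-2, 6]
```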

Double rounding occurs when the target precision of a floating-point computation is narrower than the working precision. In some situations, this phenomenon incurs a loss of accuracy. P. Roux has formally studied when it is innocuous for basic arithmetic operations. É. Martin-Dorel and G. Melquiond (in collaboration with J.-M. Muller) have formally studied how it impacts algorithms used for error-free transformations. These works were based on the Flocq formalization of floating-point arithmetic for Coq.
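A concrete instance of harmful double rounding can be exhibited with Python's IEEE-754 binary64 floats and `struct` for the binary32 rounding step (an illustrative demonstration, not taken from the formal developments above): rounding a sum first to binary64 and then to binary32 loses the last bit relative to correct single rounding.

```python
import struct
from fractions import Fraction

def to_f32(x):
    """Round a Python float (binary64) to binary32 and back."""
    return struct.unpack('f', struct.pack('f', x))[0]

a, b = 1.0, 2**-24 + 2**-53        # both exactly representable in binary64
exact = Fraction(a) + Fraction(b)  # the exact real sum

once = a + b                       # 1st rounding, to binary64: 1 + 2**-24
twice = to_f32(once)               # 2nd rounding, to binary32: ties to 1.0

# Correct single rounding of the exact sum to binary32 is 1 + 2**-23,
# since `exact` lies strictly above the midpoint between 1 and 1 + 2**-23;
# the double-rounded result is therefore off by one binary32 ulp.
midpoint = (Fraction(1) + Fraction(1 + 2**-23)) / 2
assert exact > midpoint
assert twice == 1.0
```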

By combining multi-precision arithmetic, interval arithmetic, and massively parallel computations, G. Melquiond (in collaboration with G. Nowak and P. Zimmermann) has computed enough digits of the Masser-Gramain constant to invalidate a 30-year-old conjecture about its closed form.

This objective deals with both formal verification and floating-point arithmetic, which is quite uncommon; therefore our peers are few. We can mainly cite the work of J. Duracz and M. Konečný, Aston University in Birmingham, UK.

The Inria team AriC (Grenoble - Rhône-Alpes) is closer to our research interests, but they lack manpower on the formal proof side; we have numerous collaborations with them. The Inria team Caramel (Nancy - Grand Est) also shares some research interests with us, though fewer; again, they do not work on the formal aspects of verification; we have occasional collaborations with them.

There are many formalization efforts from chip manufacturers, such as AMD (using the ACL2 proof assistant) and Intel (using the Forte proof assistant), but the algorithms they consider are quite different from the ones we study. The work on floating-point arithmetic done by J. Harrison at Intel using HOL Light is really close to our research interests, but it seems to have been discontinued.

A few deductive program verification teams are willing to extend their tools toward floating-point programs. This includes the KeY project and SPARK. We have an ongoing collaboration with the latter, in the context of the ProofInUSe project.

Deductive verification is not the only way to prove programs. Abstract interpretation is widely used, and several teams are interested in floating-point arithmetic. This includes the Inria team Antique (Paris - Rocquencourt) and a CEA List team, who have respectively developed the Astrée and Fluctuat tools. This approach targets a different class of numerical algorithms than the ones we are interested in.

Other people, especially from the SMT community (*cf.* objective 2), are also interested in automatically proving formulas about floating-point numbers, notably at Oxford University. They mainly focus on pure floating-point arithmetic, though, and do not consider floating-point numbers as approximations of real numbers.

Finally, it can be noted that numerous teams are working on the verification of numerical programs, but assuming the computations are real rather than floating-point ones. This is out of the scope of this objective.

The application domains we target involve safety-critical software, that is, software requiring a high-level guarantee of soundness of its functional execution. Currently our industrial collaborations mainly belong to the domain of transportation: aeronautics, railroad, space flight, automotive.

Transportation is the domain considered in the context of the ANR U3CAT project, led by CEA, in partnership with Airbus France, Dassault Aviation, and Sagem Défense et Sécurité. It included the proof of C programs via Frama-C/Jessie/Why, the proof of floating-point programs, and the use of the Alt-Ergo prover via the CAVEAT tool (CEA) or Frama-C/WP. Within this context, we contributed to a qualification process of Alt-Ergo with Airbus: the technical documents (functional specifications and benchmark suite) were accepted by Airbus, and these documents were submitted by Airbus to the certification authorities (DO-178B standard) in 2012. This action is continued in the new project Soprano.

Aeronautics is the main target of the Verasco project, led by Verimag, on the development of certified static analyzers, in partnership with Airbus. This is a follow-up of the transfer of the CompCert certified compiler (Inria team Gallium), to which we contributed support for floating-point computations.

The former FUI project Hi-Lite, led by the AdaCore company, introduced the use of Why3 and Alt-Ergo as back-ends to SPARK2014, an environment for the verification of Ada programs. This is applied to the domain of aerospace (Thales, EADS Astrium). At the very beginning of that project, Alt-Ergo was added to the SPARK Pro toolset (predecessor of SPARK2014), developed by Altran-Praxis: Alt-Ergo can be used by customers as an alternate prover for automatically proving verification conditions. Its usage is described in the new edition of the SPARK book *iFacts*.

In the current ANR project BWare, we investigate the use of Why3 and Alt-Ergo as an alternative back-end for checking proof obligations generated by *Atelier B*, whose main applications are railroad-related software. This work involves a collaboration with Mitsubishi Electric R&D Centre Europe (Rennes) (joint publication) and ClearSy (Aix-en-Provence).

S. Conchon (with A. Mebsout and F. Zaidi from the VALS team at LRI) has a long-term collaboration with S. Krstic and A. Goel (Intel Strategic CAD Labs in Hillsboro, OR, USA) that aims at the development of the SMT-based model checker Cubicle (http://

S. Conchon has co-organized POPL'2017 (January, Paris,
http://

C. Marché has co-organized the first joint Frama-C/SPARK day (May,
Paris, http://

S. Boldo and G. Melquiond have published a book: *Computer Arithmetic and Formal Proofs: Verifying Floating-point Algorithms with the Coq System*.

M. Pereira and R. Rieu-Helft received the "Best student team"
award, and J.-C. Filliâtre the "Best overall team" award, at the
*VerifyThis@ETAPS2017 verification competition*.

*Automated theorem prover for software verification*

Keywords: Software Verification - Automated theorem proving

Functional Description: Alt-Ergo is an automatic solver of formulas based on SMT technology. It is especially designed to prove mathematical formulas generated by program verification tools, such as Frama-C for C programs, or SPARK for Ada code. Initially developed in the Toccata research team, Alt-Ergo has been distributed and supported by OCamlPro since September 2013.

Release Functional Description:

- The "SAT solving" part can now be delegated to an external plugin; a new experimental SAT solver based on mini-SAT is provided as a plugin. This solver is, in general, more efficient on ground problems.
- Heuristic simplifications in the default SAT solver and in the matching (instantiation) module.
- Re-implementation of the internal representation of literals.
- Improvement of the theory-combination architecture.
- Rewriting of some parts of the formulas module.
- Bug fixes in the records and numbers modules.
- A new option "-no-Ematching" to perform matching without equality reasoning (i.e., without considering equivalence classes). This option is very useful for benchmarks coming from Atelier B.
- Two new experimental options, "-save-used-context" and "-replay-used-context". When the goal is proved valid, the first option saves the names of the useful axioms into a ".used" file; the second replays the proof using only the axioms listed in the corresponding ".used" file. Note that the replay may fail because of the absence of necessary ground terms generated during the initial run by axioms not included in the ".used" file.

Participants: Alain Mebsout, Évelyne Contejean, Mohamed Iguernelala, Stéphane Lescuyer and Sylvain Conchon

Partner: OCamlPro

Contact: Sylvain Conchon

*Interactive program verification using characteristic formulae*

Keywords: Coq - Software Verification - Deductive program verification - Separation Logic

Functional Description: The CFML tool supports the verification of OCaml programs through interactive Coq proofs. CFML proofs establish the full functional correctness of the code with respect to a specification. They may also be used to formally establish bounds on the asymptotic complexity of the code. The tool is made of two parts: on the one hand, a characteristic formula generator implemented as an OCaml program that parses OCaml code and produces Coq formulae, and, on the other hand, a Coq library that provides notations and tactics for manipulating characteristic formulae interactively in Coq.

Participants: Arthur Charguéraud, Armaël Guéneau and François Pottier

Contact: Arthur Charguéraud

*The Coq Proof Assistant*

Keywords: Proof - Certification - Formalisation

Scientific Description: Coq is an interactive proof assistant based on the Calculus of (Co-)Inductive Constructions, extended with universe polymorphism. This type theory features inductive and co-inductive families, an impredicative sort, and a hierarchy of predicative universes, making it a very expressive logic. The calculus makes it possible to formalize both general mathematics and computer programs, ranging from theories of finite structures to abstract algebra and categories, to programming language metatheory and compiler verification. Coq is organised as a (relatively small) kernel including efficient conversion tests, on which are built a set of higher-level layers: a powerful proof engine and unification algorithm, various tactics and decision procedures, a transactional document model and, at the very top, an IDE.

Functional Description: Coq provides both a dependently-typed functional programming language and a logical formalism which, together, support the formalisation of mathematical theories and the specification and certification of properties of programs. Coq also provides a large and extensible set of automatic or semi-automatic proof methods. Coq programs are extractable to OCaml, Haskell, Scheme, etc.

Release Functional Description: Version 8.7 features a large amount of work on cleaning and speeding up the code base, notably the work of Pierre-Marie Pédrot on making the tactic-level system insensitive to existential variable expansion, providing a safer API to plugin writers and making the code more robust.

New tactics: Variants of tactics supporting existential variables "eassert", "eenough", etc. by Hugo Herbelin. Tactics "extensionality in H" and "inversion_sigma" by Jason Gross, "specialize with" accepting partial bindings by Pierre Courtieu.

Cumulative Polymorphic Inductive Types, allowing cumulativity of universes to go through applied inductive types, by Amin Timany and Matthieu Sozeau.

The SSReflect plugin by Georges Gonthier, Assia Mahboubi and Enrico Tassi was integrated (with its documentation in the reference manual) by Maxime Dénès, Assia Mahboubi and Enrico Tassi.

The "coq_makefile" tool was completely redesigned to improve its maintainability and the extensibility of generated Makefiles, and to make "_CoqProject" files more palatable to IDEs by Enrico Tassi.

A lot of other changes are described in the CHANGES file.

News Of The Year: Development of version 8.7 started in January 2017; the version was released in October 2017, followed by version 8.7.1 in December 2017. This is the second release of Coq developed on a time-based development cycle. Its development spanned 9 months from the release of Coq 8.6 and was based on a public road-map. It attracted many external contributions. Code reviews and continuous integration testing were systematically used before the integration of new features, with an important focus on compatibility and performance issues.

The main scientific advance in this version is the integration of cumulative inductive types in the system. More practical advances in the stability, performance, usability and expressivity of tactics were also implemented, resulting in a mostly backwards-compatible but appreciably faster and more robust release. Much work on plugin extensions to Coq by the same development team has also been going on in parallel, including work on JSCoq by Emilio J. Gallego Arias and on Ltac2 by Pierre-Marie Pédrot, which required synchronized changes to the main codebase. In 2017, the construction of the Coq Consortium by Yves Bertot and Maxime Dénès greatly advanced and is now nearing completion.

Participants: Abhishek Anand, C. J. Bell, Yves Bertot, Frédéric Besson, Tej Chajed, Pierre Courtieu, Maxime Denes, Julien Forest, Emilio Jesús Gallego Arias, Gaëtan Gilbert, Benjamin Grégoire, Jason Gross, Hugo Herbelin, Ralf Jung, Matej Kosik, Sam Pablo Kuper, Xavier Leroy, Pierre Letouzey, Assia Mahboubi, Cyprien Mangin, Érik Martin-Dorel, Olivier Marty, Guillaume Melquiond, Pierre-Marie Pédrot, Benjamin C. Pierce, Lars Rasmusson, Yann Régis-Gianas, Lionel Rieg, Valentin Robert, Thomas Sibut-Pinote, Michael Soegtrop, Matthieu Sozeau, Arnaud Spiwack, Paul Steckler, George Stelle, Pierre-Yves Strub, Enrico Tassi, Hendrik Tews, Laurent Théry, Amin Timany, Vadim Zaliva and Théo Zimmermann

Partners: CNRS - Université Paris-Sud - ENS Lyon - Université Paris-Diderot

Contact: Matthieu Sozeau

Publication: The Coq Proof Assistant, version 8.7.1

URL: http://

*Interval package for Coq*

Keywords: Interval arithmetic - Coq

Functional Description: CoqInterval is a library for the proof assistant Coq.

It provides several tactics for proving theorems on enclosures of real-valued expressions. The proofs are performed by an interval kernel which relies on a computable formalization of floating-point arithmetic in Coq.

The Marelle team developed a formalization of rigorous polynomial approximation using Taylor models in Coq. In 2014, this library was included in CoqInterval.
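The enclosure principle behind these tactics can be illustrated outside Coq. The following Python sketch is purely illustrative (CoqInterval's actual kernel relies on a computable formalization of floating-point arithmetic with rigorous directed rounding): it propagates intervals through each operation of an expression.

```python
# Minimal interval arithmetic: each value is a pair (lo, hi) with lo <= hi.
# Only the enclosure principle is shown; a rigorous kernel would round the
# lower bound down and the upper bound up at every step.

def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def i_sqr(a):
    lo, hi = a
    if lo >= 0:
        return (lo * lo, hi * hi)
    if hi <= 0:
        return (hi * hi, lo * lo)
    return (0.0, max(lo * lo, hi * hi))

# Enclose x^2 + x for x in [1, 2]: the result contains all true values.
x = (1.0, 2.0)
enclosure = i_add(i_sqr(x), x)
print(enclosure)  # (2.0, 6.0): every value of x^2 + x on [1, 2] lies inside
```

The same evaluation, performed inside Coq with proved interval operators, yields a theorem stating that the expression lies in the computed enclosure.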

Participants: Assia Mahboubi, Érik Martin-Dorel, Guillaume Melquiond, Jean-Michel Muller, Laurence Rideau, Laurent Théry, Micaela Mayero, Mioara Joldes, Nicolas Brisebarre and Thomas Sibut-Pinote

Contact: Guillaume Melquiond

Publications: Proving bounds on real-valued functions with computations - Floating-point arithmetic in the Coq system - Proving Tight Bounds on Univariate Expressions with Elementary Functions in Coq - Formally Verified Approximations of Definite Integrals

*The Coquelicot library for real analysis in Coq*

Keywords: Coq - Real analysis

Functional Description: Coquelicot is a library for real analysis in the Coq proof assistant. It is designed with three principles in mind. The first is user-friendliness, achieved by implementing methods of automation, but also by avoiding dependent types in order to ease the statement and readability of theorems. The latter part was achieved by defining total functions for basic operators, such as limits or integrals. The second principle is the comprehensiveness of the library. By experimenting on several applications, we ensured that the available theorems are enough to cover most cases. We also wanted to be able to extend our library towards more generic settings, such as complex analysis or Euclidean spaces. The third principle is for the Coquelicot library to be a conservative extension of the Coq standard library, so that it can be easily combined with existing developments based on the standard library.

Participants: Catherine Lelay, Guillaume Melquiond and Sylvie Boldo

Contact: Sylvie Boldo

*The Cubicle model checker modulo theories*

Keywords: Model Checking - Software Verification

Functional Description: Cubicle is an open-source model checker for verifying safety properties of array-based systems, which correspond to a syntactically restricted class of parametrized transition systems whose states are represented as arrays indexed by an arbitrary number of processes. Cache coherence protocols and mutual exclusion algorithms are typical examples of such systems.

Participants: Alain Mebsout and Sylvain Conchon

Contact: Sylvain Conchon

*The Flocq library for formalizing floating-point arithmetic in Coq*

Keywords: Floating-point - Arithmetic code - Coq

Functional Description: The Flocq library for the Coq proof assistant is a comprehensive formalization of floating-point arithmetic: core definitions, axiomatic and computational rounding operations, high-level properties. It provides a framework for developers to formally verify numerical applications.

Flocq is currently used by the CompCert verified compiler to support floating-point computations.

Participants: Guillaume Melquiond, Pierre Roux and Sylvie Boldo

Contact: Sylvie Boldo

Publications: Flocq: A Unified Library for Proving Floating-point Algorithms in Coq - A Formally-Verified C Compiler Supporting Floating-Point Arithmetic - Verified Compilation of Floating-Point Computations - Innocuous Double Rounding of Basic Arithmetic Operations - Formal Proofs of Rounding Error Bounds - Computer Arithmetic and Formal Proofs

*The Gappa tool for automated proofs of arithmetic properties*

Keywords: Floating-point - Arithmetic code - Software Verification - Constraint solving

Functional Description: Gappa is a tool intended to help formally verify numerical programs dealing with floating-point or fixed-point arithmetic. It has been used to write robust floating-point filters for CGAL and it is used to verify elementary functions in CRlibm. While Gappa is intended to be used directly, it can also act as a backend prover for the Why3 software verification platform or as an automatic tactic for the Coq proof assistant.

Participant: Guillaume Melquiond

Contact: Guillaume Melquiond

Publications: Generating formally certified bounds on values and round-off errors - Formal certification of arithmetic filters for geometric predicates - Assisted verification of elementary functions - From interval arithmetic to program verification - Formally Certified Floating-Point Filters For Homogeneous Geometric Predicates - Combining Coq and Gappa for Certifying Floating-Point Programs - Handbook of Floating-Point Arithmetic - Certifying the floating-point implementation of an elementary function using Gappa - Automations for verifying floating-point algorithms - Automating the verification of floating-point algorithms - Computer Arithmetic and Formal Proofs

*The Why3 environment for deductive verification*

Keywords: Formal methods - Trusted software - Software Verification - Deductive program verification

Functional Description: Why3 is an environment for deductive program verification. It provides a rich language for specification and programming, called WhyML, and relies on external theorem provers, both automated and interactive, to discharge verification conditions. Why3 comes with a standard library of logical theories (integer and real arithmetic, Boolean operations, sets and maps, etc.) and basic programming data structures (arrays, queues, hash tables, etc.). A user can write WhyML programs directly and get correct-by-construction OCaml programs through an automated extraction mechanism. WhyML is also used as an intermediate language for the verification of C, Java, or Ada programs.

Participants: Andriy Paskevych, Claude Marché, François Bobot, Guillaume Melquiond, Jean-Christophe Filliâtre, Levs Gondelmans and Martin Clochard

Partners: CNRS - Université Paris-Sud

Contact: Claude Marché

URL: http://

F. Faissole and B. Spitters have developed a mathematical formalism based on synthetic topology and homotopy type theory to interpret probabilistic algorithms, and they suggest using proof assistants to prove such programs. They have also formalized synthetic topology in the Coq proof assistant using the HoTT library. It consists of a theory of lower reals, valuations, and lower integrals. All the results are constructive. They apply their results to interpret probabilistic programs using a monadic approach.

J.-C. Filliâtre and M. Pereira proposed a new approach to the verification of higher-order programs, using the technique of defunctionalization, that is, the translation of first-class functions into first-order values. This is an early experimental work, conducted on examples only within the Why3 system. This work was published at JFLA 2017 .

R. Rieu-Helft, C. Marché, and G. Melquiond devised a simple memory model for representing C-like pointers in the Why3 system. This makes it possible to translate a small fragment of Why3 verified programs into idiomatic C code. This extraction mechanism was used to turn a verified Why3 library of arbitrary-precision integer arithmetic into a C library that can be substituted for part of the GNU Multi-Precision (GMP) library.

J.-C. Filliâtre, M. Pereira and S. Melo de Sousa proposed a new methodology for proving highly imperative OCaml programs with Why3. For a given OCaml program, a specific memory model is built and one checks a Why3 program that operates on it. Once the proof is complete, they use Why3's extraction mechanism to translate its programs to OCaml, while replacing the operations on the memory model with the corresponding operations on mutable types of OCaml. This method is evaluated on several examples that manipulate linked lists and mutable graphs .

The SMT-LIB standard defines a formal semantics for a theory of floating-point (FP) arithmetic (FPA). This formalization reduces FP operations to reals by means of a rounding operator, as done in the IEEE-754 standard. Closely following this description, S. Conchon, M. Iguernelala, K. Ji, G. Melquiond and C. Fumex propose a three-tier strategy to reason about FPA in SMT solvers. The first layer is a purely axiomatic implementation of the automatable semantics of the SMT-LIB standard. It reasons with exceptional cases (e.g., overflows, division by zero, undefined operations) and reduces finite representable FP expressions to reals using the rounding operator. At the core of the strategy, a second layer handles a set of lemmas about the properties of rounding. For these lemmas to be used effectively, the instantiation mechanism of SMT solvers is extended to cooperate tightly with the third layer, the non-linear real arithmetic (NRA) engine of SMT solvers, which provides interval information. The strategy is implemented in the Alt-Ergo SMT solver and validated on a set of benchmarks coming from the SMT-LIB competition, as well as from the deductive verification of C and Ada programs. The results show that the approach is promising and competes with existing techniques implemented in state-of-the-art SMT solvers. This work was presented at the CAV conference.
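The reduction at the heart of the first layer, where each FP operation returns the rounding of the exact real result, can be observed concretely. The Python check below is only an illustration of this IEEE-754/SMT-LIB semantics; it uses exact rational arithmetic to compute the real result before rounding, and relies on the fact that CPython's conversion from `Fraction` to `float` is correctly rounded to nearest binary64.

```python
from fractions import Fraction

# IEEE-754 semantics: fl(a op b) = round(a op b), where the operation on the
# right is exact over the reals. We compute the exact result with rationals,
# round it to binary64, and compare with the hardware floating-point result.

def rounded_exact_sum(a, b):
    return float(Fraction(a) + Fraction(b))

def rounded_exact_mul(a, b):
    return float(Fraction(a) * Fraction(b))

pairs = [(0.1, 0.2), (1.0, 2**-53), (1e16, 1.0), (3.0, 1 / 3)]
for a, b in pairs:
    assert a + b == rounded_exact_sum(a, b)
    assert a * b == rounded_exact_mul(a, b)
print("floating-point results match round(exact real result)")
```

This identity is exactly what lets the first layer reduce finite FP expressions to real arithmetic with a rounding operator.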

M. Clochard designed an extension of first-order logic for describing the reasoning steps needed to discharge a proof obligation. The extension takes the form of two new connectives, called proof indications, that allow the user to encode reasoning steps inside a logical formula. This extension makes it possible to use the syntax of formulas as a proof language. The approach was presented at the JFLA conference and implemented in Why3. It brings a lightweight mechanism for declarative proofs to an environment like Why3 where provers are used as black boxes. Moreover, this mechanism restricts the scope of auxiliary lemmas, reducing the size of the proof obligations sent to external provers.

F. Faissole formalized a theory of finite dimensional subspaces of Hilbert spaces in order to apply the Lax-Milgram Theorem on such subspaces. He had to prove, in the Coq proof assistant, that finite dimensional subspaces of Hilbert spaces are closed in the context of general topology using filters . He also formalized both finite dimensional modules and finite dimensional subspaces of modules. He compared the two formalizations and showed a complementarity between them. He proved that the product of two finite dimensional modules is a finite dimensional module .

The CoqInterval library provides some tactics for computing and formally verifying numerical approximations of real-valued expressions inside the Coq system. In particular, it is able to compute reliable bounds on proper definite integrals . A. Mahboubi, G. Melquiond, and T. Sibut-Pinote extended these algorithms to also cover some improper integrals, e.g., those with an unbounded integration domain . This makes CoqInterval one of the very few tools able to produce reliable results for improper integrals, be they formally verified or not.

S. Boldo, F. Clément, F. Faissole, V. Martin, and M. Mayero worked on a Coq formal proof of the Lax–Milgram theorem, one of the theoretical cornerstones of the correctness of the Finite Element Method. It required many results from linear algebra, geometry, functional analysis, and the theory of Hilbert spaces.

S. Boldo, D. Gallois-Wong, and T. Hilaire developed a formalization of numerical filters in the Coq proof assistant. It includes equivalences between several expressions of filters and the formal proof of the Worst-Case Peak Gain theorem, which bounds the magnitude of the outputs (and of every internal variable) of stable filters.
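The content of the Worst-Case Peak Gain theorem, namely that the output magnitude of a stable linear filter is bounded by the l1-norm of its impulse response times the input magnitude bound, can be checked numerically on a toy example. The Python sketch below is an illustration on a simple first-order filter, unrelated to the Coq development.

```python
def filter_output(c, xs):
    # First-order filter y[n] = c * y[n-1] + x[n], with y[-1] = 0.
    y, ys = 0.0, []
    for x in xs:
        y = c * y + x
        ys.append(y)
    return ys

c = 0.5
# Impulse response is h[k] = c^k, so the Worst-Case Peak Gain, i.e. the
# l1-norm sum_k |h[k]|, is 1 / (1 - |c|) here.
wcpg = 1.0 / (1.0 - abs(c))

# Feed the worst-case input for this filter (all ones, since every h[k] >= 0):
# the output magnitude stays below the bound and approaches it from below.
ys = filter_output(c, [1.0] * 50)
peak = max(abs(y) for y in ys)
assert peak <= wcpg
assert wcpg - peak < 1e-9  # the bound is essentially reached for this input
```

The formal development proves such bounds once and for all, for general stable filters and including every internal variable of the filter realization.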

Libraries are the basic building blocks of any realistic programming project. It is thus of utmost interest for a programmer to build her software on top of bug-free libraries. At the ML family workshop, A. Charguéraud, J.-C. Filliâtre, M. Pereira and F. Pottier presented the ongoing VOCAL project, which aims at building a mechanically verified library of general-purpose data structures and algorithms, written in the OCaml language. A key ingredient of VOCAL is the design of a specification language for OCaml, independent of any verification tool.

The shell language is widely used for various system administration tasks on UNIX machines. The CoLiS project aims at applying formal methods to the verification of scripts used for the installation of packages of software distributions. The syntax and semantics of shell are particularly treacherous. The project proposed a new language, also called CoLiS, which, on the one hand, has a well-defined static semantics and avoids some of the pitfalls of the shell, and, on the other hand, is close enough to the shell to be the target of an automated translation of the scripts in our corpus. In collaboration with N. Jeannerod and R. Treinen, C. Marché formalized the syntax and semantics of CoLiS in Why3, defined an interpreter for the language in the WhyML programming language, and presented an automated proof, in the Why3 proof environment, of the soundness and completeness of this interpreter with respect to the formal semantics.
The development is available in Toccata's gallery
http://

R. Rieu-Helft used the Why3 system to implement, specify, and verify a library of arbitrary-precision integer arithmetic: comparison, addition, multiplication, shifts, and division. A lot of effort was put into replicating and verifying the numerous implementation tricks the GMP library uses to achieve state-of-the-art performance, especially for the division algorithm. While the resulting library is nowhere near as fast as the hand-written assembly code GMP uses, it is competitive with the generic C code of GMP (i.e., mini-GMP) for small integers. The development is available in
Toccata's gallery
http://
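The flavor of such limb-based routines can be sketched executably. The Python fragment below is a hypothetical illustration, not the verified WhyML code: it implements schoolbook addition and multiplication on little-endian arrays of 64-bit limbs, the number representation used by GMP-style libraries.

```python
BASE = 1 << 64  # each limb holds 64 bits; numbers are little-endian limb lists

def add_limbs(x, y):
    # Schoolbook addition with carry propagation.
    result, carry = [], 0
    for i in range(max(len(x), len(y))):
        s = (x[i] if i < len(x) else 0) + (y[i] if i < len(y) else 0) + carry
        result.append(s % BASE)
        carry = s // BASE
    if carry:
        result.append(carry)
    return result

def mul_limbs(x, y):
    # Schoolbook multiplication: O(n*m) limb products with carry propagation.
    result = [0] * (len(x) + len(y))
    for i, xi in enumerate(x):
        carry = 0
        for j, yj in enumerate(y):
            t = result[i + j] + xi * yj + carry
            result[i + j] = t % BASE
            carry = t // BASE
        result[i + len(y)] += carry
    while len(result) > 1 and result[-1] == 0:
        result.pop()
    return result

def to_int(limbs):
    # Reference semantics: the mathematical integer a limb list denotes.
    return sum(l << (64 * k) for k, l in enumerate(limbs))

x, y = [BASE - 1, 1], [BASE - 1]
assert to_int(add_limbs(x, y)) == to_int(x) + to_int(y)
assert to_int(mul_limbs(x, y)) == to_int(x) * to_int(y)
```

In the verified Why3 development, the role played here by `to_int` is taken by a logical function relating limb arrays to mathematical integers, against which each routine is specified and proved.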

M. Clochard, L. Gondelman and M. Pereira worked on a case study
about matrix multiplication. Two variants of matrix multiplication
are proved: a naive version using three nested loops, and
Strassen's algorithm. To formally specify the two
multiplication algorithms, they developed a new Why3 theory of
matrices, and they applied a reflection methodology to conduct
some of the proofs. A first version of this work was presented at
the VSTTE Conference in 2016 . An
extended version that considers arbitrary rectangular matrices
instead of square ones is published in the Journal of Automated
Reasoning . The development is
available in Toccata's gallery
http://
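Both verified variants can be mirrored by a short executable sketch. The Python code below is illustrative only, not the Why3 development: it implements the three-nested-loop product and Strassen's seven-multiplication recursion for square matrices whose size is a power of two.

```python
def naive_mul(a, b):
    # Three nested loops: c[i][j] = sum over k of a[i][k] * b[k][j].
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def _split(m):
    h = len(m) // 2
    return ([r[:h] for r in m[:h]], [r[h:] for r in m[:h]],
            [r[:h] for r in m[h:]], [r[h:] for r in m[h:]])

def _add(x, y):
    return [[a + b for a, b in zip(rx, ry)] for rx, ry in zip(x, y)]

def _sub(x, y):
    return [[a - b for a, b in zip(rx, ry)] for rx, ry in zip(x, y)]

def strassen(a, b):
    # Seven recursive products instead of eight; n must be a power of two.
    n = len(a)
    if n == 1:
        return [[a[0][0] * b[0][0]]]
    a11, a12, a21, a22 = _split(a)
    b11, b12, b21, b22 = _split(b)
    m1 = strassen(_add(a11, a22), _add(b11, b22))
    m2 = strassen(_add(a21, a22), b11)
    m3 = strassen(a11, _sub(b12, b22))
    m4 = strassen(a22, _sub(b21, b11))
    m5 = strassen(_add(a11, a12), b22)
    m6 = strassen(_sub(a21, a11), _add(b11, b12))
    m7 = strassen(_sub(a12, a22), _add(b21, b22))
    c11 = _add(_sub(_add(m1, m4), m5), m7)
    c12 = _add(m3, m5)
    c21 = _add(m2, m4)
    c22 = _add(_sub(_add(m1, m3), m2), m6)
    top = [r1 + r2 for r1, r2 in zip(c11, c12)]
    bottom = [r1 + r2 for r1, r2 in zip(c21, c22)]
    return top + bottom

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
assert strassen(a, b) == naive_mul(a, b)  # both give [[19, 22], [43, 50]]
```

The formal development goes well beyond this sketch: it handles arbitrary rectangular matrices and proves both algorithms against a common specification from the Why3 theory of matrices.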

As part of a larger set of case studies on
algorithms on graphs http://

In the context of file systems like those of Unix, path resolution is the operation that, given a character string denoting an access path, determines the target object (a file, a directory, etc.) designated by this path. This operation is not trivial because of the presence of symbolic links: such links may induce infinite loops in the resolution process. R. Chen, M. Clochard and C. Marché consider a path resolution algorithm that always terminates, detecting whether it enters an infinite loop and reporting a resolution failure in such a case. They propose a formal specification of path resolution and formally prove that their algorithm terminates on any input and is correct and complete with respect to this formal specification. The development is available in
Toccata's gallery
http://
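The termination issue can be illustrated with a toy resolver. The Python sketch below is a hypothetical model, not the verified algorithm: it resolves slash-separated paths in an in-memory tree containing symbolic links, and bounds the number of link expansions, so that cyclic links yield a resolution failure instead of divergence.

```python
# Toy file system: directories are dicts, symbolic links are ('link', target)
# pairs, files are plain strings. Bounding the number of link expansions
# (similar in spirit to the ELOOP limit of Unix kernels) guarantees
# termination even in the presence of cyclic links.

FS = {
    "etc": {"passwd": "root:x:0:0"},
    "conf": ("link", "/etc"),
    "loop": ("link", "/loop"),
}

MAX_LINKS = 40

def resolve(path, root=FS, links_left=MAX_LINKS):
    node = root
    parts = [p for p in path.split("/") if p]
    for i, name in enumerate(parts):
        if not isinstance(node, dict) or name not in node:
            return None                      # no such entry
        node = node[name]
        if isinstance(node, tuple) and node[0] == "link":
            if links_left == 0:
                return None                  # too many links: likely a cycle
            rest = "/".join(parts[i + 1:])
            return resolve(node[1] + "/" + rest, root, links_left - 1)
    return node

assert resolve("/etc/passwd") == "root:x:0:0"
assert resolve("/conf/passwd") == "root:x:0:0"  # resolved through the link
assert resolve("/loop/anything") is None        # cycle detected, no loop
```

The verified algorithm is more precise than this crude bound: it actually detects that it has entered an infinite loop, and its termination, correctness, and completeness are formally proved.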

S. Boldo and G. Melquiond published a book that provides a comprehensive view of how to formally specify and verify tricky floating-point algorithms with the Coq proof assistant. It describes the Flocq formalization of floating-point arithmetic and some methods to automate theorem proofs. It then presents the specification and verification of various algorithms, from error-free transformations to a numerical scheme for a partial differential equation. The examples cover not only mathematical algorithms but also C programs as well as issues related to compilation .

The level of proof success and proof automation depends heavily on the way floating-point operations are interpreted in the logic supported by back-end provers. C. Fumex, C. Marché and Y. Moy addressed this challenge by combining multiple techniques to separately prove different parts of the desired properties. They use abstract interpretation to compute numerical bounds of expressions, and they use multiple automated provers, relying on different strategies for representing floating-point computations. One of these strategies is based on the native support for floating-point arithmetic recently added to the SMT-LIB standard. The approach is implemented in the Why3 environment and its front-end SPARK 2014, and it is validated experimentally on several examples originating from industrial use of SPARK 2014.

S. Boldo, A. Chapoutot, and F. Faissole provided bounds on the round-off errors of explicit one-step numerical integration methods, such as Runge-Kutta methods. They developed a fine-grained analysis that takes advantage of the linear stability of the scheme, a mathematical property that guarantees the scheme is well-behaved.

S. Boldo, S. Graillat, and J.-M. Muller worked on the 2Sum and Fast2Sum algorithms, which are important building blocks in numerical computing. They are used (implicitly or explicitly) in many compensated algorithms and for manipulating floating-point expansions. They showed that these algorithms are much more robust than is usually believed: the returned result makes sense even when the rounding function is not round-to-nearest, and they are almost immune to overflow.
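These error-free transformations are easy to state executably. The Python version below shows the standard algorithms in the usual round-to-nearest binary64 setting: 2Sum uses six operations and makes no assumption on its inputs, while Fast2Sum uses three operations but assumes |a| >= |b|; both return the rounded sum together with its exact rounding error.

```python
def two_sum(a, b):
    # Knuth's 2Sum: s + t == a + b exactly, with s == fl(a + b).
    s = a + b
    ap = s - b
    bp = s - ap
    da = a - ap
    db = b - bp
    t = da + db
    return s, t

def fast_two_sum(a, b):
    # Dekker's Fast2Sum: same guarantee, but requires |a| >= |b|.
    s = a + b
    t = b - (s - a)
    return s, t

a, b = 1.0, 2.0 ** -60
s, t = two_sum(a, b)
assert (s, t) == (1.0, 2.0 ** -60)   # the error term recovers the lost bits
assert fast_two_sum(a, b) == (s, t)
```

The result cited above shows that, beyond this textbook setting, both algorithms still return meaningful results under other rounding functions and are almost immune to overflow.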

Many numerical problems require a higher computing precision than the one offered by standard floating-point formats. A common way of extending the precision is to use floating-point expansions. S. Boldo, M. Joldes, J.-M. Muller, and V. Popescu proved one of the algorithms used as a basic building block when computing with floating-point expansions: the renormalization algorithm that “compresses” an expansion.

ProofInUse is a joint project between the Toccata team and the
SME AdaCore. It was selected and funded by the ANR programme
“Laboratoires communs”, starting from April 2014, for 3 years
http://

The SME AdaCore is a software publisher specializing in providing software development tools for critical systems. A previous successful collaboration between Toccata and AdaCore put the *Why3* technology at the heart of the AdaCore-developed SPARK technology.

The goal is now to promote and transfer the use of deduction-based verification tools to industry users, who develop critical software using the programming language Ada. The proof tools are aimed at replacing or complementing the existing test activities, whilst reducing costs.

ELEFFAN is a Digicosme project funding the PhD of
F. Faissole. S. Boldo is the principal investigator. It began in 2016
for three years. https://

The ELEFFAN project aims at formally proving rounding error bounds of numerical schemes.

Partners: ENSTA Paristech (A. Chapoutot)

The CoLiS research project is funded by the programme “Société de
l'information et de la communication” of the ANR, for a period of
60 months, starting on October 1st,
2015. http://

The project aims at developing formal analysis and verification techniques and tools for scripts. These scripts are written in the POSIX or bash shell language. Our objective is to produce, at the end of the project, formal methods and tools for analyzing, testing, and validating scripts. For this, the project will develop techniques and tools based on deductive verification and on tree transducers stemming from the domain of XML documents.

Partners: Université Paris-Diderot, IRIF laboratory (formerly PPS & LIAFA), coordinator; Inria Lille, team LINKS

The Vocal research project is funded by the programme “Société de
l'information et de la communication” of the ANR, for a period of 60
months, starting on October 1st, 2015. https://

The goal of the Vocal project is to develop the first formally verified library of efficient general-purpose data structures and algorithms. It targets the OCaml programming language, which allows for fairly efficient code and offers a simple programming model that eases reasoning about programs. The library will be readily available to implementers of safety-critical OCaml programs, such as Coq, Astrée, or Frama-C. It will provide the essential building blocks needed to significantly decrease the cost of developing safe software. The project intends to combine the strengths of three verification tools, namely Coq, Why3, and CFML. It will use Coq to obtain a common mathematical foundation for program specifications, as well as to verify purely functional components. It will use Why3 to verify a broad range of imperative programs with a high degree of proof automation. Finally, it will use CFML for formal reasoning about effectful higher-order functions and data structures making use of pointers and sharing.

Partners: team Gallium (Inria Paris-Rocquencourt), team DCS (Verimag), TrustInSoft, and OCamlPro.

This is a research project funded by the programme “Ingénierie
Numérique & Sécurité” of the ANR. It is funded for a period of
48 months and it has started on October 1st,
2014. http://

Our aim is to develop computer-aided proofs of numerical values, with certified and reasonably tight error bounds, without sacrificing efficiency. Applications to zero-finding, numerical quadrature or global optimization can all benefit from using our results as building blocks. We expect our work to initiate a "fast and reliable" trend in the symbolic-numeric community. This will be achieved by developing interactions between our fields, designing and implementing prototype libraries and applying our results to concrete problems originating in optimal control theory.

Partners: team ARIC (Inria Grenoble Rhône-Alpes), team MARELLE (Inria Sophia Antipolis - Méditerranée), team SPECFUN (Inria Saclay - Île-de-France), Université Paris 6, and LAAS (Toulouse).

The Soprano research project is funded by the programme “Sciences et
technologies logicielles” of the ANR, for a period of
42 months, starting on October 1st, 2014. http://

The SOPRANO project aims at preparing the next generation of verification-oriented solvers by gathering experts from academia and industry. We will design a new framework for the cooperation of solvers, focused on model generation and borrowing principles from SMT (the current standard) and CP (well known in optimization). Our main scientific and technical objectives are the following. The first objective is to design a new collaboration framework for solvers, centered around synthesis rather than satisfiability, allowing cooperation beyond that of Nelson-Oppen while still providing minimal interfaces with theoretical guarantees. The second objective is to design new decision procedures for industry-relevant and hard-to-solve theories. The third objective is to implement these results in a new open-source platform. The fourth objective is to ensure the industrial adequacy of the techniques and tools developed, through periodic evaluations by the industrial partners.

Partners: team DIVERSE (Inria Rennes - Bretagne Atlantique), Adacore, CEA List, Université Paris-Sud, and OCamlPro.

LCHIP (Low Cost High Integrity Platform) aims at easing the development of safety-critical applications (up to SIL4) by providing: (i) a complete IDE able to automatically generate and prove software of bounded complexity; (ii) a low-cost, safe execution platform. The full support of DSLs and third-party code generators will enable a seamless deployment into existing development cycles. LCHIP gathers scientific results obtained during the last 20 years in formal methods, proof, refinement, code generation, etc., as well as unique feedback from experience in safety-critical systems design.
http://

Partners: 2 technology providers (ClearSy, OcamlPro), in charge of building the architecture of the platform; 3 labs (IFSTTAR, LIP6, LRI), to improve the LCHIP IDE features; 2 large companies (SNCF, RATP), representing public ordering parties, to check compliance with standards and industrial railway use cases.

The project, led by ClearSy, started in April 2016 and lasts 3 years. It is funded by BpiFrance as well as French regions.

Verification of PARameterized DIstributed systems. A parameterized
system specification is a specification for a whole class of
systems, parameterized by the number of entities and the properties
of the interaction, such as the communication model
(synchronous/asynchronous, order of message delivery, application
ordering) or the fault model (crash failure, message loss). To
assist and automate verification without parameter instantiation,
PARDI uses two complementary approaches. First, a fully automatic
model checker modulo theories is considered. Then, to go beyond the
intrinsic limits of parameterized model checking, the project
advocates a collaborative approach between proof assistants and
model checkers. http://

The project, led by Toulouse INP/IRIT, started in 2016 and lasts for 4 years. Partners: Université Pierre et Marie Curie (LIP6), Université Paris-Sud (LRI), Inria Nancy (team VERIDIS).

Program: COST (European Cooperation in Science and Technology).

Project acronym: EUTypes https://

Project title: The European research network on types for programming and verification

Duration: 2015-2019

Coordinator: Herman Geuvers, Radboud University Nijmegen, The Netherlands

Other partners: 36 member countries, see http://

Abstract: Types are pervasive in programming and information technology. A type defines a formal interface between software components, allowing the automatic verification of their connections, and greatly enhancing the robustness and reliability of computations and communications. In rich dependent type theories, the full functional specification of a program can be expressed as a type. Type systems have rapidly evolved over the past years, becoming more sophisticated, capturing new aspects of the behaviour of programs and the dynamics of their execution.

This COST Action will give a strong impetus to research on type theory and its many applications in computer science, by promoting (1) the synergy between theoretical computer scientists, logicians and mathematicians to develop new foundations for type theory, for example based on the recent development of “homotopy type theory”, (2) the joint development of type-theoretic tools such as proof assistants and integrated programming environments, (3) the study of dependent types for programming and their deployment in software development, (4) the study of dependent types for verification and their deployment in software analysis and verification. The Action will also tie together these different areas and promote cross-fertilisation.

Ran Chen is a PhD student from the Institute of Software (Chinese Academy of Sciences, Beijing, China) visiting the team for 10 months under the supervision of C. Marché and J.-J. Lévy (PiR2 team, Inria Paris). She worked on the formal verification of graph algorithms and, in the context of the CoLiS project, on the verification of some aspects of the Unix file system and shell scripts.

S. Boldo, vice-president of the 28th “Journées Francophones des Langages Applicatifs” (JFLA 2017)

S. Boldo, president of the 29th “Journées Francophones des Langages Applicatifs” (JFLA 2018)

J.-C. Filliâtre, scientific chair and co-organizer of EJCP
(École Jeunes Chercheurs en Programmation du GDR GPL), held in
Toulouse on June 26–30, 2017.
http://

S. Conchon, local chair for the 44th ACM SIGPLAN-SIGACT
Symposium on Principles of Programming Languages (POPL 2017),
held in Paris, France in January 2017.
http://

C. Marché, co-organizer of the first joint Frama-C/SPARK day
(May, Paris). http://

A. Paskevich, program chair of the 9th Working Conference on Verified Software: Theories, Tools, and Experiments (VSTTE 2017), in collaboration with Thomas Wies (NYU).

S. Boldo, program chair of the 10th International Workshop on Numerical Software Verification (NSV 2017), in collaboration with Alessandro Abate (Oxford).

S. Boldo, program vice-chair of the 28th “Journées Francophones des Langages Applicatifs” (JFLA 2017).

S. Boldo, program chair of the 29th “Journées Francophones des Langages Applicatifs” (JFLA 2018).

S. Boldo, PC member of the 24th IEEE Symposium on Computer Arithmetic (ARITH 2017).

S. Boldo, PC member of the 25th IEEE Symposium on Computer Arithmetic (ARITH 2018).

S. Boldo, PC member of the 6th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP 2017).

S. Boldo, PC member of the 7th ACM SIGPLAN Conference on Certified Programs and Proofs (CPP 2018).

S. Boldo, PC member of the 8th International Conference on Interactive Theorem Proving (ITP 2017).

S. Boldo, PC member of the Tenth NASA Formal Methods Symposium (NFM 2018).

G. Melquiond, PC member of the 3rd International Workshop on Coq for Programming Languages (CoqPL 2017).

G. Melquiond, PC member of the 1st ACM SIGPLAN Workshop on Machine Learning and Programming Languages (MAPL 2017).

G. Melquiond, PC member of the 10th International Workshop on Numerical Software Verification (NSV 2017).

The members of the Toccata team have reviewed papers for numerous international conferences.

G. Melquiond, member of the editorial board of *Reliable
Computing*.

S. Boldo, member of the editorial board of Binaire
http://

The members of the Toccata team have reviewed papers for numerous international journals.

S. Boldo gave a talk at EDF in Palaiseau on April 20th.

S. Boldo gave a talk at the ModeliScale IPL in Paris on July 4th.

S. Boldo gave a talk to teachers in Luminy on May 4th.

S. Boldo gave a talk at the Université de La Réunion in Saint-Denis on December 8th.

S. Boldo, elected chair of the ARITH working group of the GDR-IM (a CNRS subgroup of computer science) with J. Detrey (Inria Nancy).

C. Marché, member of the scientific commission of Inria-Saclay, in charge of selecting candidates for PhD grants, Post-doc grants, temporary leaves from universities (“délégations”).

C. Marché, member of the “Bureau du Comité des Projets” of Inria-Saclay, in charge of examining proposals for creation of new Inria project-teams.

S. Boldo, member of the program committee for selecting postdocs of the maths/computer science program of the Labex mathématique Hadamard.

S. Boldo, member of a hiring committee for an associate professor position in computer science at Université Joseph Fourier, Grenoble, France.

S. Boldo, member of the 2017 committee for the Gilles Kahn PhD award of the French Computer Science Society.

G. Melquiond, member of the committee for the monitoring of PhD
students (*“commission de suivi doctoral”*).

Master Parisien de Recherche en Informatique (MPRI)
https://

Master: Fondements de l'informatique et ingénierie du
logiciel (FIIL)
https://

DUT (Diplôme Universitaire de Technologie): M1101 “Introduction aux systèmes informatiques”, A. Paskevich (36h), M3101 “Principes des systèmes d'exploitation”, A. Paskevich (58.5h), IUT d'Orsay, Université Paris-Sud, France.

Licence: “Langages de programmation et compilation” (L3), J.-C. Filliâtre (26h), École Normale Supérieure, France.

Licence: “INF411: Les bases de l'algorithmique et de la programmation” (L3), J.-C. Filliâtre (16h), École Polytechnique, France.

Master: “INF564: Compilation” (M1), J.-C. Filliâtre (18h), École Polytechnique, France.

Licence: “Programmation fonctionnelle avancée” (L3), S. Conchon (45h), Université Paris-Sud, France.

Licence: “Introduction à la programmation fonctionnelle” (L2), S. Conchon (25h), Université Paris-Sud, France.

R. Rieu-Helft (ENS, Paris) was a pre-PhD student doing an internship under the supervision of C. Marché and G. Melquiond. He worked on the design and formal verification of a library for unbounded integer arithmetic. He implemented in Why3 a mechanism for extracting code to the C language, in order to obtain certified code that runs very efficiently.

D. Gallois-Wong was a Master-2 intern for 4 months under the supervision of S. Boldo. She began a formalization of numerical filters in Coq.

V. Tourneur was a Master-1 intern for 4 months under the supervision of S. Boldo. He developed and proved a new algorithm for computing the average of two floating-point numbers when the radix is 10.
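A toy illustration of the pitfall such an algorithm must address (this sketch uses Python's `decimal` module with invented values and is not V. Tourneur's verified algorithm): in radix 10, the naive average `(a + b) / 2` can be off, because the addition `a + b` may already round away information before the exact division by 2.

```python
# Toy radix-10 illustration: the naive decimal average (a + b) / 2 can be
# inaccurate because the sum a + b rounds before the division.
# The values and precision below are illustrative only.
from decimal import Decimal, getcontext, localcontext

getcontext().prec = 4          # a toy decimal format with 4-digit significands

a, b = Decimal("9.999"), Decimal("0.004")

naive = (a + b) / 2            # a + b rounds 10.003 to 10.00; division is then exact

with localcontext() as ctx:    # reference: compute the correctly rounded average
    ctx.prec = 10              # evaluate the exact value at higher precision...
    exact = (a + b) / 2        # ... = 5.0015
correct = +exact               # ...then round once to 4 digits (half-even): 5.002

print(naive, correct)          # naive equals 5, one ulp below the correct 5.002
```

The reference computation works because 10 digits are enough to hold the sum and quotient exactly; the single unary `+` then performs the one final rounding that a correctly rounded average is allowed.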

PhD in progress: M. Clochard, “Méthodes et outils pour la spécification et la preuve de propriétés difficiles de programmes séquentiels”, since Oct. 2013, supervised by C. Marché and A. Paskevich.

PhD in progress: D. Declerck, “Vérification par des techniques de test et model checking de programmes C11”, since Sep. 2014, supervised by F. Zaïdi (LRI) and S. Conchon.

PhD in progress: M. Roux, “Model Checking de systèmes paramétrés et temporisés”, since Sep. 2015, supervised by S. Conchon.

PhD in progress: M. Pereira, “A Verified Graph Library. Tools and techniques for the verification of modular higher-order programs, with extraction”, since May 2015, supervised by J.-C. Filliâtre.

PhD in progress: A. Coquereau, “[ErgoFast] Amélioration de performances pour le solveur SMT Alt-Ergo : conception d'outils d'analyse, optimisations et structures de données efficaces pour OCaml”, since Sep. 2015, supervised by S. Conchon, F. Le Fessant and M. Mauny.

PhD in progress: F. Faissole, “Stabilité(s): liens entre l'arithmétique flottante et l'analyse numérique”, since Oct. 2016, supervised by S. Boldo and A. Chapoutot.

PhD in progress: R. Rieu-Helft, “Développement et vérification de bibliothèques d'arithmétique entière en précision arbitraire”, since Oct. 2017, supervised by G. Melquiond and P. Cuoq (TrustInSoft).

PhD in progress: D. Gallois-Wong, “Vérification formelle et filtres numériques”, since Oct. 2017, supervised by S. Boldo and T. Hilaire.

C. Marché: reviewer of the habilitation thesis of R. Bubel, “Deductive Verification: From Theory to Practice”, Technische Universität Darmstadt, Germany, November 2017.

S. Boldo: reviewer and member of the PhD defense of A. Plet, École Normale Supérieure de Lyon, Lyon, France, July 2017.

S. Boldo: reviewer and member of the PhD defense of F. Maurica, Université de la Réunion, Saint-Denis, France, December 2017.

S. Boldo: president of the PhD defense of T. Sibut-Pinote, Université Paris-Saclay, Palaiseau, France, December 2017.

S. Boldo, scientific head for Saclay of the MECSI group, a network for computer science popularization inside Inria.

S. Boldo gave a talk at Inria Saclay about how to popularize programming.

During the “Fête de la science” on October 13th, S. Boldo demonstrated unplugged computer science to teenagers and F. Faissole ran a stand introducing programming with robots. S. Boldo also ran this activity for kids aged 7 to 17 at the Massy opera house on November 17th.

S. Boldo gave a talk at a *Girls can code* weekend on
August 23rd in Paris.

S. Boldo visited the Arpajon high school on December 19th to present Women in Science.

S. Boldo gave a popularization talk to the administrative staff of Inria at Rocquencourt for Inria's anniversary on November 16th.