## New Results

### Proved development of algorithms and systems

#### Incremental development of distributed algorithms


Participants : Dominique Méry, Manamiary Andriamiarina.

Keywords: distributed algorithms, refinement, verification, distributed protocols

The development of distributed algorithms and, more generally, of distributed systems is a complex, delicate, and challenging process. The refinement-based approach adds formality by using a proof assistant, applying a design methodology that starts from the most abstract model and leads, incrementally, to the most concrete model that yields a distributed solution. Our work helps to formalize pre-existing algorithms, to develop new algorithms, and to build models of distributed systems.

Our research was initially (until 2010) carried out within the ANR project
RIMEL, in joint work with Mohammed Mosbah and Mohammed Tounsi from the LABRI
laboratory, and we are maintaining a joint project B2VISIDIA with LABRI on
these topics. More concretely, we aim at an integration of the
correct-by-construction refinement-based approach into the *local
computation* programming model. The team of LABRI develops an environment
called VISIDIA that provides a toolset for developing distributed algorithms
expressed as a set of rewriting rules of graph structures. The simulation of
rewriting rules is based on synchronization algorithms and we have developed
these algorithms by refinement.

More precisely, we show how state-based models can be developed for specific
problems and how they can be simply reused by controlling the composition of
state-based models through the refinement relationship. Consequently, we
obtain a redevelopment of existing distributed algorithms in the
*correct-by-construction* approach, and a framework for deriving new
distributed algorithms (by integrating models) whose correctness is ensured by
construction. Traditionally, distributed algorithms are supposed to run on a
fixed network, whereas we consider a network with a changing topology. We have
illustrated our methodology with a case study of the anycast RP protocol.
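
The refinement relationship can be illustrated with a small executable sketch (a hypothetical leader-election example, not one of our Event-B models): an abstract model elects the leader in one atomic event, a refined model eliminates candidates step by step, and a gluing invariant links the refined states to the abstract result.

```python
# A minimal sketch of a "gluing invariant" between an abstract and a
# refined state-based model of leader election (hypothetical example,
# not the team's actual Event-B developments).

import random

NODES = [3, 1, 4, 5, 2]            # node identifiers of a toy network

# Abstract model: a single atomic event elects the node with maximum id.
abstract_leader = max(NODES)

# Refined model: candidates are eliminated by pairwise comparison,
# one nondeterministically chosen comparison per step.
candidates = set(NODES)
trace = [set(candidates)]
while len(candidates) > 1:
    a, b = random.sample(sorted(candidates), 2)
    candidates.discard(min(a, b))  # the smaller id withdraws
    trace.append(set(candidates))

# Gluing invariant: at every refined state, the abstract leader is still
# among the candidates, so the refinement cannot diverge from the
# abstraction, whatever scheduling is chosen.
assert all(abstract_leader in state for state in trace)
leader = candidates.pop()
assert leader == abstract_leader
print("refined model elects", leader)
```

Because the invariant holds for every possible interleaving of the pairwise comparisons, any concrete execution refines the abstract specification; this is the property that the refinement proofs establish once and for all.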

The contribution is related to the development of proof-based patterns that provide effective help to developers of formal models of applications, such as dynamic routing or the snapshot problem [13] . We have developed patterns that simplify the refinement-based development of distributed systems. The routing pattern has also been applied to the development of a network on chip [12] with our partners of the French-Algerian cooperation described in section 8.3 .

#### Modeling and verifying the Pastry routing protocol

Participants : Tianxiang Lu, Stephan Merz, Christoph Weidenbach.

Keywords: distributed hash table, peer-to-peer protocol, Pastry, model checking, theorem proving

As a significant case study for the techniques that we are developing within
VeriDis, we are modeling and verifying the routing protocol of the Pastry
algorithm [36] for maintaining a distributed hash
table in a peer-to-peer network. As part of his PhD work, Tianxiang Lu has
developed a TLA+ model of the Pastry routing protocol, which has
uncovered several issues in the existing presentations of the protocol in the
literature, and in particular a loophole in the join protocol that had been
fixed by the algorithm designers in a technical report that appeared after the
publication of the original protocol.

As a first step towards proving correctness of the Pastry routing protocol, we identified in 2011 a number of candidate invariants and formally proved in TLAPS (see section 5.2 ) that they imply the high-level correctness property. In 2012, we consolidated these invariants and proved them correct for our model under the strong assumption that no node ever leaves the network, and the minor assumption that any active node allows at most one new node at a time to join the network. It is currently unclear to what extent nodes can be allowed to leave the network without breaking the virtual ring maintained by Pastry. The invariant proofs contain almost 15,000 interactions and constitute the largest case study carried out so far using TLAPS. More recently, we have obtained better automation using the new SMT backend (see section 6.1 ). The proof was presented at the TLA workshop of FM 2012 [23] .
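
To give an intuition of the routing logic being verified, here is a toy sketch (an assumed 8-bit id space and hand-picked node ids, not the actual TLA+ model): Pastry forwards a lookup to a known node whose id shares a longer prefix with the key, falling back to a node that is numerically closer on the ring.

```python
# Toy sketch of Pastry-style routing (assumed 8-bit ids; the real
# protocol uses 128-bit ids, a routing table, and a leaf set).

RING = 2 ** 8

def ring_dist(a, b):
    """Numeric distance on the circular id space."""
    d = abs(a - b)
    return min(d, RING - d)

def shared_prefix(a, b, bits=8):
    """Length of the common bit prefix of two ids."""
    for i in range(bits):
        if (a >> (bits - 1 - i)) & 1 != (b >> (bits - 1 - i)) & 1:
            return i
    return bits

def next_hop(self_id, key, known):
    """One routing step: prefer a strictly longer shared prefix,
    otherwise any node numerically closer to the key."""
    p = shared_prefix(self_id, key)
    longer = [n for n in known if shared_prefix(n, key) > p]
    if longer:
        return min(longer, key=lambda n: ring_dist(n, key))
    closer = [n for n in known if ring_dist(n, key) < ring_dist(self_id, key)]
    return min(closer, key=lambda n: ring_dist(n, key)) if closer else self_id

known = [0b10100000, 0b10110000, 0b10111100, 0b01000000]
key = 0b10111111
hops = [0b00010000]
while True:
    nxt = next_hop(hops[-1], key, known)
    if nxt == hops[-1]:
        break                      # this node is responsible for the key
    hops.append(nxt)
print("route:", [bin(h) for h in hops])
```

The invariants proved in TLAPS concern exactly this kind of step: that repeated forwarding converges to the live node numerically closest to the key, even while nodes are joining.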

#### Verification of distributed algorithms in the Heard-Of model

Participants : Henri Debrat, Stephan Merz.

Keywords: theorem proving, distributed algorithms, round-based computation, Byzantine failures

Distributed algorithms are often quite subtle, both in the way they operate and in the assumptions required for their correctness. Formal models are important for unambiguously understanding the hypotheses and the properties of a distributed algorithm. We focus on the verification of round-based algorithms for fault-tolerant distributed systems expressed in the Heard-Of model of Charron-Bost and Schiper [37] , and have previously established a reduction theorem that allows one to pretend that nodes operate synchronously.

In 2012, we consolidated our formal proofs in Isabelle/HOL. In particular, we finished the formal proof of the reduction theorem within Isabelle, produced a generic encoding of the Heard-Of model as a locale in Isabelle/HOL, and used this representation for verifying six different Consensus algorithms: three algorithms tolerating benign failures and three others designed for malicious failures, such as corrupted values. Our Isabelle theories have been published in the Archive of Formal Proofs [27] . The proof of the reduction theorem required formalizing the notion of stuttering invariance, which is of independent interest and has also been accepted into the Archive of Formal Proofs [28] .
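
To convey the flavour of round-based computation in the Heard-Of model, the following sketch simulates one round in the style of the OneThirdRule algorithm of Charron-Bost and Schiper (the heard-of sets below are hypothetical): each process sees only the messages from its heard-of set HO(p, r), updates its estimate if it heard from more than two thirds of the processes, and decides on a value received more than 2n/3 times.

```python
# One round of a OneThirdRule-style Consensus algorithm in the Heard-Of
# model (toy simulation; the heard-of sets are hypothetical).

from collections import Counter

N = 4
x = [1, 2, 2, 2]                   # current estimates of the N processes
decided = [None] * N
HO = [{0, 1, 2, 3}, {1, 2, 3}, {0, 1, 2, 3}, {1, 2, 3}]

x0 = list(x)                       # messages carry the round-start values
for p in range(N):
    heard = [x0[q] for q in HO[p]]
    if len(heard) > 2 * N / 3:     # heard from more than two thirds
        freq = Counter(heard)
        best = max(freq.values())
        # adopt the smallest among the most frequent received values
        x[p] = min(v for v, c in freq.items() if c == best)
    # decide on a value received more than 2n/3 times
    for v, c in Counter(heard).items():
        if c > 2 * N / 3:
            decided[p] = v

print("estimates:", x, "decisions:", decided)
```

The point of the model is that faults are abstracted into the heard-of sets: the verification in Isabelle/HOL quantifies over all HO collections satisfying the algorithm's communication predicate, rather than over failure scenarios.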

As a significant extension of this work, we have studied the formal verification of probabilistic Consensus algorithms in the Heard-Of model, in particular the Ben-Or algorithm.

#### Model checking within SimGrid

Participants : Marie Duflot-Kremer, Stephan Merz.

Keywords: model checking, distributed algorithms, message passing, communication primitives, partial-order reduction

For several years we have cooperated with Martin Quinson from the AlGorille project team on adding model checking capabilities to the simulation platform SimGrid for message-passing distributed C programs. The expected benefit of such an integration is that programmers can complement simulation runs by exhaustive state space exploration in order to detect errors such as race conditions that would be hard to reproduce by testing. As part of the thesis work of Cristián Rosa (defended in 2011), a stateless model checker was implemented within the SimGrid platform that can be used to verify safety properties of distributed C programs that communicate by message passing. The ongoing thesis of Marion Guthmuller builds upon this work and aims to extend it for verifying certain liveness properties. This requires rethinking the stateless design, as well as adapting the dynamic partial-order reduction algorithm that is essential to limiting the part of the state space that must actually be explored.
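
The principle of stateless model checking can be conveyed in a few lines (a toy two-message program, not SimGrid's C implementation): instead of storing visited states, the checker re-executes the program from scratch under every message-delivery schedule and checks the safety property in each run.

```python
# Minimal sketch of stateless model checking: replay the program for
# every delivery order and check a property (toy example, not SimGrid).

from itertools import permutations

def run(schedule):
    """Re-execute the toy program under a fixed delivery order."""
    state = {"log": []}
    handlers = {
        "set_a": lambda s: s["log"].append("a"),
        "set_b": lambda s: s["log"].append("b"),
    }
    for msg in schedule:
        handlers[msg](state)
    return state

pending = ["set_a", "set_b"]       # messages in flight, order nondeterministic
schedules = list(permutations(pending))
violations = [sched for sched in schedules
              if run(sched)["log"][0] != "a"]  # property: 'a' handled first
print(f"explored {len(schedules)} schedules; violating: {violations}")

# Dynamic partial-order reduction would skip one of the two orders only
# if the handlers were independent; here both write to the same log, so
# both interleavings must be explored.
```

Real distributed programs have exponentially many schedules, which is why the dynamic partial-order reduction mentioned above is essential, and why supporting liveness properties requires revisiting the stateless design.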

#### Modeling Medical Devices

Participant : Dominique Méry.

Formal modelling techniques and tools have attained sufficient maturity for formalizing highly critical systems in view of improving their quality and reliability, and the development of such methods has attracted the interest of industrial partners and academic research institutions. Building high-quality and zero-defect medical software-based devices is a particular domain where formal modelling techniques can be applied effectively. In [21] , we present a methodology for developing critical systems from requirement analysis to automatic code generation, based on a standard safety assessment approach. This methodology combines refinement, proof, model checking, and animation, and can ultimately generate source code automatically. The approach is intended to further the use of formal techniques for developing critical systems with high integrity and for verifying complex properties. An assessment of the proposed methodology is given through the development of a standard case study: the cardiac pacemaker.

Medical devices are prone to exhibiting unexpected behaviour in operation when traditional methods are used for system testing. Device-related problems have been responsible for a large number of serious injuries. Officials of the US Food and Drug Administration (FDA) found that many deaths and injuries related to these devices are caused by flaws in product design and engineering. Cardiac pacemakers and implantable cardioverter-defibrillators (ICDs) are among the most critical medical devices and require closed-loop modelling (integrated modelling of the system and its environment) for verification before a certificate can be obtained from the certification bodies. In [24] we present a methodology for modelling a biological system, such as the heart. The heart model is based mainly on electrocardiography analysis, which provides a model at the cellular level. Combining this environment model with a formal model of the pacemaker, we obtain a closed-loop model over which overall correctness can be verified.
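
To illustrate what closed-loop modelling means here, the following is a deliberately simplified sketch (hypothetical timings, nothing like the cellular-level heart model of [24]): a heart model that occasionally skips beats, composed with a VVI-style pacemaker that delivers a pulse when no intrinsic beat is sensed within the lower rate interval, together with the safety property checked over the composition.

```python
# Simplified closed-loop model: toy heart + VVI-style pacemaker
# (hypothetical timings; the verified models are far more detailed).

LRI = 4                            # lower rate interval, in clock ticks
intrinsic = [1, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0]  # heart: 1 = own beat

beats, since_last = [], 0
for sensed in intrinsic:
    since_last += 1
    if sensed:                     # intrinsic beat sensed: inhibit pacing
        beats.append("sense")
        since_last = 0
    elif since_last >= LRI:        # timeout elapsed: deliver a pacing pulse
        beats.append("pace")
        since_last = 0
    else:
        beats.append("-")

# Closed-loop safety property: no beat-free interval reaches LRI ticks.
gaps, gap = [], 0
for b in beats:
    gap = 0 if b != "-" else gap + 1
    gaps.append(gap)
assert max(gaps) < LRI
print(beats)
```

The property can only be stated and verified over the combined heart-plus-pacemaker model; over the pacemaker alone, "a beat occurs regularly" is not even expressible, which is why certification demands closed-loop modelling.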

Clinical guidelines systematically assist practitioners in providing appropriate health care in specific clinical circumstances. Today, a significant number of guidelines and protocols are lacking in quality: ambiguity and incompleteness are common anomalies in medical practice. In [25] we use the Event-B modeling language to represent guidelines for subsequent validation. Our main contributions are: to apply mathematical formal techniques to evaluate real-life medical protocols for quality improvement, to derive verification proofs for the protocol and its properties according to medical experts, and to publicize the potential of this approach. An assessment of the proposed approach is given through a case study of a real-life reference protocol for ECG interpretation, in which we uncovered several anomalies.

Finally, we propose a refinement-based methodology [10] for the design of complex medical systems. It combines formal verification, model validation using a model checker, and refinement charts for designing high-confidence medical devices. We show the effectiveness of this methodology on the design of a cardiac pacemaker system.

#### Fundamentals of Network Calculus in Isabelle/HOL

Participant : Stephan Merz.

Keywords: networked systems, min-plus algebra, formal proof

The design of networked and embedded systems has traditionally been
accompanied by formal methods for design and analysis. Network
Calculus [42] is a well-established theory, based on the min-plus
dioid, for computing delay and memory bounds in
networks. The theory is supported by several commercial and open-source tools
and has been used in major industrial applications, such as the design and
certification of the Airbus A380 AFDX backbone. Nevertheless, it is difficult
for certification authorities to assess the correctness of the computations
carried out by the tools supporting Network Calculus, and we propose the use
of *result certification* techniques for increasing the confidence in the
Network Calculus toolchain. In joint work with Marc Boyer from ONERA in
Toulouse, and with Loïc Fejoz and Nicolas Navet from the RealTime at Work (RTaW)
company, we have supervised the master thesis of Etienne Mabille to evaluate
the feasibility of the approach. Parts of the theory underlying Network
Calculus were formalized in the proof assistant Isabelle/HOL, and this
encoding was used to formally derive the theorems that underlie the
computation of bounds in network servers. The Network Calculus tool produced
by RTaW was instrumented to generate traces of its computation, and the
correctness of simple systems could in this way be certified by Isabelle. A
publication of this work is in preparation, and we intend to continue and
extend it in a future joint project.
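
The kind of bound that such a toolchain computes, and that the Isabelle encoding certifies, can be cross-checked numerically. The sketch below uses textbook token-bucket and rate-latency curves with arbitrary parameter values (this is an illustration of the theory, not the RTaW tool): the delay bound is the horizontal deviation between the arrival curve and the service curve.

```python
# Network Calculus delay/backlog bounds for a token-bucket flow served
# by a rate-latency server (textbook curves, arbitrary parameters).

b, r = 5.0, 2.0                    # token bucket: burst b, sustained rate r
R, T = 4.0, 1.5                    # rate-latency server: rate R, latency T
assert r <= R                      # stability condition

alpha = lambda t: b + r * t                 # arrival curve
beta = lambda t: R * max(0.0, t - T)        # service curve

# Closed-form bounds for these curve shapes:
delay_bound = T + b / R            # horizontal deviation between curves
backlog_bound = b + r * T          # vertical deviation between curves

# Cross-check the delay bound numerically: for each t, the smallest d
# with alpha(t) <= beta(t + d), maximized over t.
def horiz_dev(t, eps=1e-4):
    d = 0.0
    while beta(t + d) < alpha(t):
        d += eps
    return d

numeric = max(horiz_dev(t / 10) for t in range(200))
assert abs(numeric - delay_bound) < 1e-2
print("delay bound:", delay_bound, "backlog bound:", backlog_bound)
```

Result certification works in exactly this spirit, but symbolically: the tool emits a trace of its curve computations, and Isabelle replays each step against the formally proved theorems instead of a numeric approximation.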

#### Bounding message length in attacks against security protocols

Participant : Marie Duflot-Kremer.

Keywords: security protocols, verification

Security protocols are short programs that describe communication between two or more parties in order to achieve security goals. Despite the apparent simplicity of such protocols, their verification is a difficult problem and has been shown to be undecidable in general. This undecidability comes from the fact that the set of executions to be considered is of infinite depth (an unbounded number of protocol sessions can be run) and infinitely branching (the intruder can generate an unbounded number of distinct messages). Several attempts have been made to tackle each of these sources of undecidability. Together with Myrto Arapinis, we have shown [32] that, under a reasonable syntactic “well-formedness” condition on the protocol, the infinite branching can be eliminated. Following this conference publication, we are preparing a journal version of this result that extends the set of security properties to which it applies, in particular to authentication properties.

#### Evaluating and verifying probabilistic systems

Participant : Marie Duflot-Kremer.

Keywords: verification, probabilistic systems, performance evaluation

Since its introduction in the 1980s, model checking has become a prominent technique for the verification of complex systems: it decides whether or not a system fulfills its specification. With the rise of probabilistic systems, new techniques have been designed to verify this new class of systems, and appropriate logics have been proposed to describe more subtle properties. However, some characteristics of such systems fall outside the scope of model checking: the aim is then not to tell whether a property is satisfied, but how well the system performs with respect to a certain measure. Together with researchers from ENS de Cachan and University Paris Est Créteil, we have designed a statistical tool that tackles both performance and verification issues. Following several conference talks, a journal paper is currently being written to present the approach as well as its application to a concrete case study: flexible manufacturing systems.
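
The statistical flavour of the approach can be sketched as follows (a hypothetical three-state Markov chain, not our tool or its case study): instead of exhaustively exploring the state space, one samples many finite executions and estimates the measure of interest from the sample frequency.

```python
# Statistical evaluation of a probabilistic system: Monte Carlo
# estimation over a toy Markov chain (hypothetical model).

import random
random.seed(0)                     # reproducible sampling

P = {                              # transition probabilities
    "idle": [("busy", 0.9), ("fail", 0.1)],
    "busy": [("idle", 0.95), ("fail", 0.05)],
    "fail": [("fail", 1.0)],       # absorbing failure state
}

def sample_run(horizon=20):
    """Sample one execution; report whether it fails within the horizon."""
    state = "idle"
    for _ in range(horizon):
        u, acc = random.random(), 0.0
        for nxt, p in P[state]:
            acc += p
            if u < acc:
                state = nxt
                break
        if state == "fail":
            return True
    return False

runs = 10_000
estimate = sum(sample_run() for _ in range(runs)) / runs
print(f"P(fail within 20 steps) is approximately {estimate:.3f}")
```

Unlike numerical probabilistic model checking, this scales to models whose state space is too large to enumerate, at the cost of a statistical confidence interval rather than an exact answer; the same sampling machinery serves both verification-style queries and performance measures.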