The aim of the Parsifal team is to develop and exploit *proof
theory* and *type theory* in the specification,
verification, and analysis of computational systems.

*Expertise*: the team conducts basic research in proof
theory and type theory. In particular, the team is developing
results that help with automated deduction and with the
manipulation and communication of formal proofs.

*Design*: based on experience with computational systems
and theoretical results, the team develops new logical principles,
new proof systems, and new theorem proving environments.

*Implementation*: the team builds prototype systems to
help validate basic research results.

*Examples*: the design and implementation efforts are
guided by examples of specification and verification problems.
These examples not only test the success of the tools but also
drive investigations into new principles and new areas of proof
theory and type theory.

The foundational work of the team focuses on *structural* and
*analytic* proof theory, *i.e.*, the study of formal
proofs as algebraic and combinatorial structures and the study of
proof systems as deductive and computational formalisms. The main
focus in recent years has been the study of the *sequent
calculus* and of the *deep inference* formalisms.

An important research question is how to reason about computational
specifications that are written in a *relational* style. To
this end, the team has been developing new approaches to dealing
with induction, co-induction, and generic quantification. A second
important question is that of *canonicity* in deductive systems,
*i.e.*, when are two derivations “essentially the same”? This
question is crucial not only for proof search, where it gives insight
into the structure of, and the ability to manipulate, the proof search
space, but also for the communication of *proof objects* between
different reasoning agents such as automated theorem provers and proof
checkers.

Important application areas currently include:

Meta-theoretic reasoning on functional programs, such as terms in the
lambda-calculus.

Reasoning about behaviors in systems with concurrency and
communication, such as the pi-calculus.

Combining interactive and automated reasoning methods for induction and co-induction.

Verification of distributed, reactive, and real-time algorithms that are often specified using modal and temporal logics.

Representing proofs as documents that can be printed, communicated, and checked by a wide range of computational logic systems.

Development of cost models for the evaluation of proofs and programs.

There are two broad approaches to computational specifications. In
the *computation as model* approach, computations are encoded as
mathematical structures containing nodes, transitions, and state.
Logic is used to *describe* these structures, that is, the
computations are used as models for logical expressions. Intensional
operators, such as the modals of temporal and dynamic logics or the
triples of Hoare logic, are often employed to express propositions
about the change in state.
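A minimal OCaml sketch of the computation-as-model view (the transition system and the property checked here are invented for illustration): the computation is an explicit mathematical structure, and the logical property is checked against that structure.

```ocaml
(* A tiny "computation as model" example: states and transitions form
   an explicit structure; a reachability property (in the spirit of an
   EF modality of temporal logic) is checked against it. *)
type state = Idle | Running | Done | Crashed

let transitions = function
  | Idle -> [Running]
  | Running -> [Done; Crashed]
  | Done -> []
  | Crashed -> []

(* is some state satisfying [p] reachable from [s]? *)
let reachable p s =
  let rec go visited = function
    | [] -> false
    | s :: rest ->
        if List.mem s visited then go visited rest
        else if p s then true
        else go (s :: visited) (transitions s @ rest)
  in
  go [] [s]

let () =
  assert (reachable (fun s -> s = Crashed) Idle);   (* a crash is reachable *)
  assert (not (reachable (fun s -> s = Idle) Done)) (* no way back to Idle *)
```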

The *computation as deduction* approach, in contrast, expresses
computations logically, using formulas, terms, types, and proofs as
computational elements. Unlike the model approach, this allows general
logical apparatus such as cut-elimination and automated deduction to be
applied directly as tools for defining, analyzing, and animating
computations. Indeed, we can identify two main aspects of logical
specifications that have been very fruitful:

*Proof normalization*, which treats the state of a
computation as a proof term and computation as the normalization of
that proof term. General reduction principles such as beta-reduction
and cut-elimination then describe the dynamics of the computation.

*Proof search*, which views the state of a computation as a
structured collection of formulas, known as a *sequent*, and
proof search in a suitable sequent calculus as encoding the dynamics
of the computation. Logic programming is based on proof
search, and different proof search
strategies can be used to justify the design of new and different
logic programming languages.

While the distinction between these two aspects is somewhat informal, it helps to identify and classify different concerns that arise in computational semantics. For instance, confluence and termination of reductions are crucial considerations for normalization, while unification and strategies are important for search. A key challenge of computational logic is to find means of uniting or reorganizing these apparently disjoint concerns.
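The two aspects can be illustrated with a toy OCaml sketch; the miniature languages and all names below are invented for illustration. Normalization evaluates a lambda-term by rewriting it to normal form, while search establishes an atom by looking for a derivation from Horn clauses.

```ocaml
(* -- Proof normalization: computation as beta-reduction of lambda-terms.
      Substitution is naive (no capture avoidance), which is fine for
      the closed examples below. *)
type term = Var of string | Lam of string * term | App of term * term

let rec subst x s = function
  | Var y -> if x = y then s else Var y
  | Lam (y, t) -> if x = y then Lam (y, t) else Lam (y, subst x s t)
  | App (t, u) -> App (subst x s t, subst x s u)

let rec normalize = function
  | App (t, u) ->
      (match normalize t with
       | Lam (x, body) -> normalize (subst x u body)
       | t' -> App (t', normalize u))
  | Lam (x, t) -> Lam (x, normalize t)
  | t -> t

(* -- Proof search: computation as the search for a derivation from
      Horn clauses, here the two clauses defining addition:
      plus(z,N,N).   plus(s(M),N,s(P)) :- plus(M,N,P). *)
type nat = Z | S of nat

let rec plus_holds m n p =
  match m, p with
  | Z, _ -> n = p                     (* first clause *)
  | S m', S p' -> plus_holds m' n p'  (* second clause *)
  | _ -> false                        (* no clause applies: search fails *)

let () =
  assert (normalize (App (Lam ("x", Var "x"), Var "y")) = Var "y");
  assert (plus_holds (S Z) (S Z) (S (S Z)))
```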

An important organizational principle is structural proof theory,
that is, the study of proofs as syntactic, algebraic and
combinatorial objects. Formal proofs often have equivalences in
their syntactic representations, leading to an important research
question about *canonicity* in proofs – when are two proofs
“essentially the same?” The syntactic equivalences can be used to
derive normal forms for proofs that illuminate not only the proofs
of a given formula, but also its entire proof search space. The
celebrated *focusing* theorem of
Andreoli identifies one such normal form
for derivations in the sequent calculus that has many important
consequences both for search and for computation. The combinatorial
structure of proofs can be further explored with the use of
*deep inference*; in particular, deep inference allows access
to simple and manifestly correct cut-elimination procedures with
precise complexity bounds.

Type theory is another important organizational principle, but most
popular type systems are designed for either search or normalization.
To give some examples, the Coq system, which implements the Calculus
of Inductive Constructions (CIC), is designed to facilitate the
expression of computational features of proofs directly as executable
functional programs, but general proof search techniques for Coq are
rather primitive. In contrast, the Twelf system, which implements the
LF type theory (a subsystem of the CIC), is built around relational
specifications in canonical form (*i.e.*, without redexes), for which
there are sophisticated automated reasoning systems such as
meta-theoretic analysis tools, logic programming engines, and
inductive theorem provers. In recent years,
there has been a push towards combining search and normalization in
the same type-theoretic framework. The Beluga
system, for example, is an extension of
the LF type theory with a purely computational meta-framework where
operations on inductively defined LF objects can be expressed as
functional programs.

The Parsifal team investigates both the search and the normalization aspects of computational specifications using the concepts, results, and insights from proof theory and type theory.

The team has spent a number of years designing a strong new logic that can be used to reason (inductively and co-inductively) on syntactic expressions containing bindings. This work is based on earlier work by McDowell, Miller, and Tiu, and on more recent work by Gacek, Miller, and Nadathur. The Parsifal team, along with our colleagues in Minneapolis, Canberra, Singapore, and Cachan, has been building two tools that exploit the novel features of this logic. These two systems are the following.

Abella, which is an interactive theorem prover for the full logic.

Bedwyr, which is a model checker for the “finite” part of the logic.

We have used these systems to formalize the reasoning about a number
of complex formal systems, ranging from programming languages to the
pi-calculus.

Since 2014, the Abella system has been extended with a number of new features. A number of significant new examples have been implemented in Abella, and an extensive tutorial for it has been written.

The team is developing a framework for defining the semantics of proof evidence. With this framework, implementers of theorem provers can output proof evidence in a format of their choice: they will only need to be able to formally define that evidence's semantics. With such semantics provided, proof checkers can then check alleged proofs for correctness. Thus, anyone who needs to trust proofs from various provers can put their energies into designing trustworthy checkers that can execute the semantic specification.

In order to provide our framework with the flexibility that this
ambitious plan requires, we have based our design on the most recent
advances within the theory of proofs. For a number of years, various
team members have been contributing to the design and theory of
*focused proof systems*, and we have adopted such proof systems as the
cornerstone of our framework.

We have also been working for a number of years on the implementation of computational logic systems, involving, for example, both unification and backtracking search. As a result, we are also building an early and reference implementation of our semantic definitions.

Deep inference is a novel methodology for presenting deductive systems. Unlike traditional formalisms like the sequent calculus, it allows rewriting of formulas deep inside arbitrary contexts. This new freedom in designing inference rules creates a richer proof theory. For example, systems using deep inference enjoy a greater variety of normal forms for proofs than the sequent calculus or natural deduction. Another advantage of deep inference systems is their close relationship to categorical proof theory: thanks to the deep inference design, one can read off morphisms directly from derivations, with no need for a counter-intuitive translation.
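To illustrate the idea, consider the *switch* rule found in deep inference systems for classical logic, which may be applied inside an arbitrary context $S\{\ \}$ (premise on top, in calculus-of-structures notation):

\[
\frac{S\{(A \vee B) \wedge C\}}{S\{A \vee (B \wedge C)\}}\;\mathsf{s}
\]

Reading downwards, it rewrites the subformula $(A \vee B) \wedge C$ to $A \vee (B \wedge C)$ wherever it occurs in a formula, something no shallow sequent rule can do directly.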

The following research problems are investigated by members of the Parsifal team:

Finding deep inference systems for richer logics. This is necessary for making the proof-theoretic results of deep inference accessible to the applications described in the previous sections of this report.

Investigating the possibility of focusing proofs in deep inference. As described before, focusing is a way to reduce the non-determinism in proof search. However, it is well understood only for the sequent calculus. In order to apply deep inference in proof search, we need to develop a theory of focusing for deep inference.

Proof nets and atomic flows are abstract (graph-like) presentations of proofs in which all “trivial rule permutations” are quotiented away. Ideally the notion of proof net should be independent of any syntactic formalism, but most notions of proof nets proposed in the past were formulated in terms of their relation to the sequent calculus. Consequently, one could observe features like “boxes” and explicit “contraction links”. The latter appeared not only in Girard's proof nets for linear logic but also in Robinson's proof nets for classical logic. In these kinds of proof nets, every link in the net corresponds to a rule application in the sequent calculus.

Only recently, due to the rise of deep inference, new kinds of proof nets have been introduced that take the formula trees of the conclusions and add additional “flow-graph” information. On the one hand, this gives new insights into the essence of proofs and their normalization. On the other hand, all the known correctness criteria are no longer available.

This directly leads to the following research questions investigated by members of the Parsifal team:

Finding (for classical logic) a notion of proof nets that is deductive, *i.e.*, that can effectively be used for proof search. An important property of deductive proof nets is that correctness can be checked in linear time; for the classical logic proof nets of Lamarche and Straßburger, this check takes exponential time in the size of the net.

Studying the normalization of proofs in classical logic using atomic flows. Although atomic flows come with no correctness criterion, they allow one to simplify the normalization procedure for proofs in deep inference, and additionally give new insights into the complexity of normalization.

In the *proof normalization* approach, computation is usually reformulated as the evaluation of functional programs, expressed as terms in a variant of the lambda-calculus.

Models like Turing machines or RAM rely on atomic computational steps and thus admit quite obvious cost models for time and space. The lambda-calculus, in contrast, has no obvious notion of atomic step, so defining reasonable cost models for it is a subtle problem.

Nonetheless, it turns out that the number of beta-steps in *weak evaluation* (i.e., reducing only outside of lambda-abstractions) provides a reasonable time cost model.

With the recent recruitment of Accattoli, the team's research has expanded in this direction. The topics under investigation are:

*Complexity of Abstract Machines*. Bounding and comparing the overhead of different abstract machines for different evaluation schemas (weak/strong, call-by-name/value/need).

*Reasonable Space Cost Models*. Essentially nothing is known about reasonable space cost models. It is known, however, that environment-based execution models (the mainstream technology for functional programs) do not provide an answer. We are exploring the use of the non-standard implementation models provided by Girard's Geometry of Interaction to address this question.
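As a concrete illustration of counting machine transitions versus beta-steps, here is a minimal Krivine abstract machine for weak call-by-name evaluation, instrumented with both counters. This is a toy sketch invented for illustration, not one of the machines studied by the team.

```ocaml
(* Krivine machine over de Bruijn lambda-terms: environments are lists
   of closures, the stack holds pending arguments. *)
type term = Var of int | Lam of term | App of term * term

type closure = Clo of term * env
and env = closure list

(* run t = (final closure, number of beta-steps, total machine steps) *)
let run t =
  let rec go (Clo (t, e)) stack beta total =
    match t, stack with
    | App (u, v), _ ->                       (* push the argument *)
        go (Clo (u, e)) (Clo (v, e) :: stack) beta (total + 1)
    | Lam b, c :: stack' ->                  (* beta: pop into the env *)
        go (Clo (b, c :: e)) stack' (beta + 1) (total + 1)
    | Var n, _ ->                            (* look the variable up *)
        (match List.nth_opt e n with
         | Some c -> go c stack beta (total + 1)
         | None -> (Clo (t, e), beta, total))
    | Lam _, [] -> (Clo (t, e), beta, total) (* weak head normal form *)
  in
  go (Clo (t, [])) [] 0 0

let () =
  let id = Lam (Var 0) in                    (* the identity *)
  let (_, beta, total) = run (App (id, id)) in
  assert (beta = 1 && total = 3)  (* 1 beta-step, but 3 machine steps *)
```

The gap between `total` and `beta` is exactly the machine overhead whose asymptotics this line of research bounds.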

The goal of combining model checking with inductive and co-inductive theorem proving is appealing, since the strengths of the two approaches are strikingly different. A model checker is capable of exploring a finite space automatically: such a tool can repeatedly explore all possible cases of a given computational space. A theorem prover, on the other hand, might be able to prove abstract properties about a search space. For example, a model checker could attempt to discover whether or not there exists a winning strategy for, say, tic-tac-toe, while an inductive theorem prover might be able to prove that if there is a winning strategy for one board then there is a winning strategy for any symmetric version of that board. The ability to combine proofs from these two kinds of systems could drastically reduce the amount of state exploration and proof-certificate checking needed to prove the existence of winning strategies.
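The model-checking half of this picture can be illustrated with a toy exhaustive search for winning strategies. Single-pile Nim is used here instead of tic-tac-toe only because its game tree fits in a few lines; the game and code are invented for illustration. An inductive prover could then establish the period-3 pattern below for *all* positions, which exhaustive exploration alone cannot.

```ocaml
(* Single-pile Nim: a move removes 1 or 2 sticks; taking the last
   stick wins. The player to move wins iff some legal move leads to a
   position that is losing for the opponent. *)
let rec winning n =
  n > 0 && (not (winning (n - 1)) || (n >= 2 && not (winning (n - 2))))

let () =
  (* exhaustive exploration finds: multiples of 3 are losing positions *)
  assert (winning 1 && winning 2 && winning 4 && winning 5);
  assert (not (winning 0) && not (winning 3) && not (winning 6))
```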

Our first step toward integrating model checking and
(inductive) theorem proving was the development of the strong logic
described earlier in this report.

Bedwyr's tabling mechanism has been extended so that it can make use of previously proved lemmas. For instance, when trying to prove that some board position has a winning strategy, an available stored lemma can now be used to obtain the result if some symmetric board position is already in the table.

Heath and Miller have shown how model checking can be seen as
constructing proofs in (linear) logic.
For more about recent progress on providing checkable proof
certificates for model checking, see the Bedwyr web site.

Traditionally, theorem provers, whether interactive or automatic, have been monolithic: if any part of a formal development was to be done in a particular theorem prover, then the whole of it needed to be done in that prover. Increasingly, however, formal systems are being developed to integrate the results returned by several independent, high-performance, specialized provers: see, for example, the integration of Isabelle with an SMT solver, as well as the Why3 and ESC/Java systems.

Within the Parsifal team, we have been working on foundational aspects
of this multi-prover integration problem. As we have
described above, we have been developing a formal framework for
defining the semantics of proof evidence. We have also been working
on prototype checkers of proof evidence which are capable of
executing such formal definitions. The proof definition language
described in our papers is currently being implemented in λProlog.


Instead of integrating different provers by exchanging proof
evidence and relying on a backend proof-checker, another approach
to integration consists in re-implementing the theorem proving
techniques as proof-search strategies, on an architecture that
guarantees correctness. Focused systems can serve as the basis of
such an architecture, identifying points for choice and backtracking, and
providing primitives for the exploration of the search space. These
form a trusted *Application Programming Interface* that can be
used to program and experiment with various proof-search heuristics
without worrying about correctness. No proof-checking is needed if
one trusts the implementation of the API.

This approach has led to the development of the Psyche engine.

Two major research directions are currently being explored, based on the above:

The first one is about understanding how to deal with
quantifiers in the presence of one or more theories: on the one hand,
traditional techniques for quantified problems, such as
*unification* or *quantifier
elimination*, are usually designed for either the empty theory or
very specific theories. On the other hand, the industrial
techniques for combining theories (Nelson-Oppen, Shostak, MCSAT) are
designed for quantifier-free problems, and quantifiers there are
dealt with by incomplete *clause instantiation* methods or
*trigger*-based techniques. We are
working on making the two approaches compatible.

The above architecture's modular approach raises the
question of how its different modules can safely cooperate (in
terms of guaranteed correctness), while some of them are trusted
and others are not. The issue is particularly acute if some of the
techniques are run concurrently and exchange data at unpredictable
times. For this we explore new solutions based on Milner's *LCF*
architecture. We have argued that our solutions in particular provide
a way to fulfil the “Strategy Challenge for SMT-solving” set by De
Moura and Passmore.

D. Miller gave invited talks at the following two regularly held international meetings.

TYPES 2016: 22nd International Conference on Types for Proofs and Programs (Novi Sad, Serbia, 23-26 May 2016) and

Linearity 2016: 4th International Workshop on Linearity (Porto, 25 June 2016).

D. Miller gave invited talks at the following research-oriented meetings.

Workshop on linear logic, mathematics and computer science as part of “LL2016-Linear Logic: interaction, proofs and computation”, 7-10 November 2016, Lyon, France.

Research seminar titled “Interactions between logic, computer science and linguistics: history and philosophy”, Université de Lille 3, 15 June 2016.

CIPPMI (Current issues in the philosophy of practice of mathematics and informatics) Workshop on Proofs, justifications and certificates. 3-4 June 2016, Toulouse, France.

A seminar in honor of the 60th birthday of Professor Miller was held on 15-16 December at Université Paris Diderot-Paris 7 in Paris, France. Several members of the team contributed talks and original research papers.

Tomer Libal and Marco Volpe, *A general proof certification
framework for modal logic*.

Roberto Blanco and Zakaria Chihani, *An interactive assistant for
the definition of proof certificates*.

Lutz Straßburger, *Combinatorial flows as proof certificates
with built-in proof compression*.

Taus Brock-Nannestad, *Substructural cut elimination*.

B. Accattoli gave an invited talk at the following regularly held international meeting.

WPTE 2016: 3rd International Workshop on Rewriting Techniques for Program Transformations and Evaluation (Porto, 23 June 2016).

S. Graham-Lengrand gave an invited talk at the following international conference.

CLAM 2016: 5th Latin American Congress of Mathematicians, thematic session on Logic and Computability (Barranquilla, Colombia, 15th July 2016).

Functional Description

Abella is an interactive theorem prover for reasoning about computations given as relational specifications. Abella is particularly well suited for reasoning about binding constructs.

Participants: Dale Miller, Olivier Savary-Bélanger, Mary Southern, Yuting Wang, Kaustuv Chaudhuri, Matteo Cimini and Gopalan Nadathur

Partner: Department of Computer Science and Engineering, University of Minnesota

Contact: Kaustuv Chaudhuri

Bedwyr - A proof search approach to model checking

Functional Description

Bedwyr is a generalization of logic programming that allows model checking directly on syntactic expressions that may contain bindings. This system, written in OCaml, is a direct implementation of two recent advances in the theory of proof search.

It is possible to capture both finite success and finite failure in a sequent calculus. Proof search in such a proof system can capture both may and must behavior in operational semantics. Higher-order abstract syntax is directly supported using term-level lambda-binders, the nabla quantifier, higher-order pattern unification, and explicit substitutions. These features allow reasoning directly on expressions containing bound variables.

The distributed system comes with several example applications, including the finite pi-calculus (operational semantics, bisimulation, trace analyses, and modal logics), the spi-calculus (operational semantics), value-passing CCS, the lambda-calculus, winning strategies for games, and various other model checking problems.

Participants: Quentin Heath, Roberto Blanco, and Dale Miller

Contact: Quentin Heath

Checkers - A proof verifier

Keywords: Proof - Certification - Verification

Functional Description

Checkers is a λProlog tool for the certification of proofs. It consists of a kernel that is based on the focused sequent calculus LKF and on the notion of ProofCert.

Participants: Tomer Libal, Giselle Machado Nogueira Reis and Marco Volpe

Contact: Tomer Libal

Proof-Search factorY for Collaborative HEuristics

Functional Description

Psyche is a modular platform for automated or interactive theorem proving, programmed in OCaml and built on an architecture (similar to LCF) where a trusted kernel interacts with plugins. The kernel offers an API of proof-search primitives, and plugins are programmed on top of the API to implement search strategies. This architecture is set up for pure logical reasoning as well as for theory-specific reasoning, for various theories.
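The LCF-style discipline underlying this architecture can be sketched in a few lines of OCaml: an abstract theorem type whose values can only be produced by the kernel's trusted primitives. The toy logic (an implicational fragment) and all names below are invented; this is a sketch of the architectural idea, not Psyche's actual API.

```ocaml
type form = Atom of string | Imp of form * form

module Kernel : sig
  type thm                      (* abstract: plugins cannot forge theorems *)
  val concl : thm -> form
  val axiom : form -> thm       (* postulate: the trusted entry point *)
  val mp : thm -> thm -> thm    (* from |- A -> B and |- A, conclude |- B *)
end = struct
  type thm = form
  let concl t = t
  let axiom a = a
  let mp ab a =
    match concl ab with
    | Imp (x, b) when x = concl a -> b
    | _ -> failwith "mp: conclusion mismatch"
end

(* A "plugin" may combine these primitives freely: whatever search
   strategy it implements, anything of type [thm] it produces is a
   genuine theorem of the kernel's logic. *)
let () =
  let a = Atom "a" and b = Atom "b" in
  let th = Kernel.mp (Kernel.axiom (Imp (a, b))) (Kernel.axiom a) in
  assert (Kernel.concl th = b)
```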

The major effort in 2016 was the release of version 2.1, which
allows the combination of theories,
integrating and subsuming both the Nelson-Oppen methodology
and the *model constructing satisfiability* (MCSAT) methodology recently proposed by De Moura and Jovanovic.

Participants: Assia Mahboubi, Jean-Marc Notin and Stéphane Graham-Lengrand

Contact: Stéphane Graham-Lengrand

Last year's result on the nonexistence of a complete linear term rewriting system for propositional logic has been generalized, and some applications to proof theory have been investigated. For example, we have found that the medial rule, which plays a central role in deep inference systems, is canonical in a strong sense: it is minimal, and every rule that reduces contraction to an atomic form is derivable via medial. This is joint work with Anupam Das.
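For reference, here is the medial rule in calculus-of-structures notation (with $S\{\ \}$ an arbitrary context), together with the atomic contraction rule whose reduction from general contraction it enables:

\[
\frac{S\{(A \wedge B) \vee (C \wedge D)\}}{S\{(A \vee C) \wedge (B \vee D)\}}\;\mathsf{m}
\qquad\qquad
\frac{S\{a \vee a\}}{S\{a\}}\;\mathsf{ac}
\]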

We investigated the enumeration of non-crossing tree realizations of integer sequences, considering a special case in four parameters that can be seen as a four-dimensional tetrahedron generalizing Pascal's triangle and the Catalan numbers. This work is motivated by the study of ambiguities arising in the parsing of natural-language sentences using categorial grammars. It is joint work with Laurent Méhats.

Focusing is a general technique for transforming a sequent proof system into one with a syntactic separation of non-deterministic choices without sacrificing completeness. This not only improves proof search, but also has the representational benefit of distilling sequent proofs into synthetic normal forms. We have shown how to apply the focusing technique to nested sequent calculi, a generalization of ordinary sequent calculi to tree-like instead of list-like structures. We thus extend the reach of focusing to the most commonly studied modal logics, the logics of the modal S5 cube. Among our key contributions is a cut-elimination theorem for focused nested sequents. This work has been published.

We then further extended our results to intuitionistic nested sequents, which can capture all the logics of the intuitionistic S5 cube in a modular fashion. We obtained an internal cut-elimination procedure for the focused system, which in turn is used to show its completeness. This work has also been published.

Nelson-Oppen and Model-Constructing Satisfiability (MCSAT)
are two methodologies that allow the reasoning mechanisms of different theories to collaborate, in order to tackle hybrid problems.
While these methodologies are often used and implemented for the practical applications of Automated Reasoning,
their rather sophisticated foundations are traditionally explained in terms of model theory.
SRI International pioneered work
providing such methodologies with new and more general foundations in terms of *inference systems*,
closer to proof theory and to Parsifal's research.
The more recent MCSAT methodology was not captured by this work;
more generally, the field lacked any theorem about the generic combination of arbitrary theories,
and MCSAT was thought to be incompatible with the Nelson-Oppen approach,
so that SMT solvers work with one methodology or the other, unable to get the best of both worlds.

In 2016 we designed a combination methodology, based on *inference systems*, that supersedes both Nelson-Oppen and MCSAT.
We showed its soundness and completeness, and identified the properties that the theories to be combined are required to satisfy.
This generalizes MCSAT with the generic combination mechanism that it lacked,
and shows that it is perfectly compatible with the Nelson-Oppen methodology; the two can now cohabit within the same solver.

Recent studies of the combinatorics of the linear lambda calculus have uncovered some unexpected connections to the old and well-developed theory of graphs embedded on surfaces (also known as “maps”).
We aimed to give a simple and conceptual account of one of these connections, namely the correspondence (originally described by Bodini, Gardy, and Jacquot) between *bridgeless* maps (in the graph-theoretic sense of having no disconnecting edge) and linear lambda terms with no closed proper subterms.
In turn, this led to a surprising but natural reformulation of the
Four Color Theorem as a statement about typing in lambda calculus.

In joint work with Paul-André Melliès, we have been investigating the categorical semantics of type refinement systems, which are type systems built “on top of” a typed programming language to specify and verify more precise properties of programs.
The fibrational view of type refinement we have been developing is closely related to the categorical perspective on first-order logic introduced by Lawvere, but with some important conceptual and technical differences that provide an opportunity for reflection.
For example, Lawvere's axiomatization of first-order logic (his theory of so-called “hyperdoctrines”) was based on the idea that existential and universal quantification can be described respectively as left and right adjoints to the operation of substitution, giving rise to a family of *adjoint triples*. In the refinement setting one instead works with *adjoint pairs*, which leads naturally to *directed* equality predicates (which can be modelled as “hom” presheaves, realizing an early vision of Lawvere), as well as a simple calculus of string diagrams that is highly reminiscent of C. S. Peirce's “existential graphs” for predicate logic.
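Lawvere's observation can be stated compactly: writing $\pi^{*}$ for substitution (weakening) along a projection $\pi \colon \Gamma \times X \to \Gamma$, quantification forms the adjoint triple

\[
\exists_{\pi} \;\dashv\; \pi^{*} \;\dashv\; \forall_{\pi},
\]

witnessed by the two correspondences

\[
\frac{\exists_{\pi} P \;\vdash\; Q}{P \;\vdash\; \pi^{*} Q}
\qquad\qquad
\frac{\pi^{*} Q \;\vdash\; P}{Q \;\vdash\; \forall_{\pi} P}.
\]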

Continuation-passing style translations make a functional program more explicit by sequentializing its computations and reifying its control. They have been used as an intermediate language in many compilers. They are also understood as classical-to-intuitionistic proof embeddings (so-called double-negation translations). Matthias Puech studied a novel correspondence between CPS and focusing: to each CPS transform corresponds a focused proof system that is identifiable as a particular polarization of classical statements. Since, after the work of Miller and others, we know the full design space of focused sequent calculi, we expect to understand the full design space of CPS translations.

The first step toward this goal is to study the syntax and typing of variants of the CPS translation. Puech designed and implemented in OCaml a compacting, optimizing CPS translation, using OCaml's type system to verify that it maps well-typed terms to well-typed terms in a tightly restricted syntactic form (the “typeful” approach to formalization). The resulting type system is in Curry-Howard correspondence with a weakly focused proof system: LJQ.
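The effect of a CPS translation, sequentializing computation and reifying control into an explicit continuation, can already be seen at the meta level in OCaml. This is a generic textbook example, not Puech's translation:

```ocaml
(* direct style: control is implicit in the call stack *)
let rec fact n = if n = 0 then 1 else n * fact (n - 1)

(* CPS: the continuation [k] reifies "the rest of the computation",
   and every intermediate step is named and sequentialized *)
let rec fact_k n k =
  if n = 0 then k 1
  else fact_k (n - 1) (fun r -> k (n * r))

let () =
  assert (fact 5 = 120);
  assert (fact_k 5 (fun r -> r) = 120)  (* run with the identity continuation *)
```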

In a world where trusting software systems is increasingly important,
formal methods and formal proofs can help provide some basis for trust.
Proof checking can help to reduce the size of the *trusted base*
since we do not need to trust an entire theorem prover: instead, we
only need to trust a (smaller and simpler) proof checker.
Many approaches to building proof checkers require embedding within them
a full programming language.
In most modern proof checkers and theorem provers, that
programming language is a functional programming language, often a
variant of ML.
In fact, aspects of ML (e.g., strong typing, abstract data types, and
higher-order programming) were designed to make ML a trustworthy
“meta-language” for checking proofs.
While there is considerable overlap between logic programming and
proof checking (e.g., both benefit from unification, backtracking
search, efficient term structures, etc.), the discipline of logic
programming has, in fact, played a minor role in the history of proof
checking.
Miller has been pushing the argument that logic programming can have a
major role in the future of this important topic.
Many aspects of the ProofCert project are based on this perspective
that logic programming techniques and methods can have significant
utility within proof checking.
This perspective stands in contrast to the work on the Dedukti proof
checking framework, where functional programming
principles are employed for proof checking.

The kinds of inference rules and decision procedures that one writes
for proofs involving equality and rewriting are rather different from
proofs that one might write in first-order logic using, say, sequent
calculus or natural deduction. For example, equational logic proofs
are often chains of replacements or applications of oriented rewriting
and normal forms. In contrast, proofs involving logical connectives
are trees of introduction and elimination rules. Chihani and Miller
have shown how it is possible to check
various equality-based proof systems with a programmable proof checker
(the *kernel* checker) for first-order logic. That proof
checker's design is based on the implementation of *focused proof
search* and on making calls to (user-supplied) *clerks and
experts* predicates that are tied to the two phases found in focused
proofs. This particular design is based on the work of Chihani,
Miller, and Renaud.

The specification of these clerks and experts provides a formal definition of the structure of proof evidence, and it works just as well in the equational setting as in the logical setting where this scheme for proof checking was originally developed. Additionally, executing such a formal definition on top of a kernel yields an actual proof checker that can also perform a degree of proof reconstruction. A number of rewriting-based proofs have been defined and checked in this manner.

Unification is a central operation in the construction of a range of
computational logic systems based on first-order and higher-order
logics.
First-order unification has a number of properties that dominate the
way it is incorporated within such systems.
In particular, first-order unification is decidable, unary, and can
be performed on untyped term structures.
None of these three properties hold for full higher-order unification:
unification is undecidable, unifiers can be incomparable, and
term-level typing can dominate the search for unifiers.
The so-called *pattern* subset of higher-order unification was
designed to be a small extension to first-order unification that
respects the basic laws governing the first-order case: in
particular, pattern unification is decidable and most general
unifiers exist.
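
The convenient properties of first-order unification, decidability and the existence of most general unifiers, can be seen in a few lines of code. The following is a minimal illustrative sketch (not any of the team's implementations): terms are tuples headed by a function symbol, variables are strings, and the term structures are untyped, as the text notes is possible in the first-order case.

```python
def walk(t, s):
    """Chase variable bindings in the substitution s."""
    while isinstance(t, str) and t in s:
        t = s[t]
    return t

def occurs(v, t, s):
    """Occurs check: does variable v appear in term t under s?"""
    t = walk(t, s)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, s) for a in t[1:])

def unify(a, b, s=None):
    """Return a most general unifier as a dict, or None on failure."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, str):                  # variable against anything
        return None if occurs(a, b, s) else {**s, a: b}
    if isinstance(b, str):
        return unify(b, a, s)
    if a[0] != b[0] or len(a) != len(b):    # clash of heads or arities
        return None
    for x, y in zip(a[1:], b[1:]):
        s = unify(x, y, s)
        if s is None:
            return None
    return s

# f(X, g(Y)) ~ f(c, g(X)): the most general unifier maps X and Y to c.
assert unify(('f', 'X', ('g', 'Y')),
             ('f', ('c',), ('g', 'X'))) == {'X': ('c',), 'Y': ('c',)}
# X ~ f(X) is rejected by the occurs check.
assert unify('X', ('f', 'X')) is None
```

By contrast, no such terminating procedure exists for full higher-order unification, where a set of incomparable unifiers may replace the single most general one.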

Several deductive formalisms (e.g., sequent, nested sequent, labeled sequent, and hypersequent calculi) have been used in the literature for the treatment of modal logics, and some connections between these formalisms are already known. Marin, Miller, and Volpe have proposed a general framework, based on a focused version of the labeled sequent calculus of Negri, augmented with some parametric devices that allow one to restrict the set of proofs. By properly defining such restrictions and by choosing an appropriate polarization of formulas, one can obtain different concrete proof systems for the modal logic K and for its extensions by means of geometric axioms. The expressiveness of the labeled approach and the control mechanisms of focusing allow a clean emulation of a range of existing formalisms and proof systems for modal logic. These results make it possible to write Foundational Proof Certificate definitions of common modal logic proof systems.

(Joint work with Ivan Gazeau and Catuscia Palamidessi.) The approximation introduced by the finite-precision representation of continuous data can induce arbitrarily large information leaks even when the computation using exact semantics is secure. Such leakage can thus undermine design efforts aimed at protecting sensitive information. Gazeau, Miller, and Palamidessi have applied differential privacy, an approach to privacy that emerged from the area of statistical databases, to this problem. In this approach, privacy is protected by the addition of noise to a true (private) value. To date, this approach had been proved correct only in the ideal case in which computations are made using an idealized, infinite-precision semantics. They analyzed the implementation level, where the semantics is necessarily finite-precision, i.e., where the representation of real numbers and the operations on them are rounded according to some level of precision. In general there are violations of the differential privacy property, but a limited (and, arguably, totally acceptable) variant of the property can be used instead, at the cost of only a minor degradation of the privacy level. Two cases of noise-generating distributions were considered: the standard Laplacian mechanism commonly used in differential privacy, and a bivariate version of the Laplacian recently introduced in the setting of privacy-aware geolocation.
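
The standard Laplacian mechanism mentioned above can be sketched in a few lines. The code below is the idealized, real-valued version (function names are ours; Python floats are of course finite-precision, which is exactly the gap this work analyzes):

```python
import math
import random

def laplace_noise(scale):
    """Sample a centered Laplace variate via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Idealized Laplace mechanism: adding noise with scale
    sensitivity/epsilon yields epsilon-differential privacy under
    infinite-precision semantics."""
    return true_value + laplace_noise(sensitivity / epsilon)
```

Under floating-point rounding, the set of representable outputs is discrete rather than continuous, and it is precisely this discrepancy that can break the idealized guarantee.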

This work describes the theory and implementation of a proof checker for tableau theorem provers for modal logics.
The tool supports proofs in both the traditional tableau format and the free-variable variant. The implementation can be found
at https://

First-order resolution theorem provers depend on efficient data structures for redundancy elimination. Such data structures do not yet exist for higher-order resolution theorem provers. We discuss a new approach to this problem. (Joint work with Alexander Steen.)

Functional programming languages are often based on the call-by-value
evaluation strategy.

We provided a new proof that the strong

Acyclicity constraints can be used to encode a large variety of useful constraints on graphs. The basic constraint itself can be encoded in terms of simpler constraints (e.g., integer linear constraints) in a straightforward and intuitive way, associating with each vertex of the (fixed) input graph a variable whose domain is linear in the size of the graph. For large graphs, this quickly becomes inefficient.
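
The straightforward encoding alluded to above can be sketched as follows (a simplified illustration, with names of our own choosing): give every vertex an integer level variable with domain 1..n and require the level to increase along every edge; such an assignment exists exactly when the edges form an acyclic graph.

```python
def acyclicity_constraints(n, edges):
    """Emit the integer linear constraints, here just as strings:
    one 'lvl[u] + 1 <= lvl[v]' constraint per edge (u, v)."""
    return [f"lvl[{u}] + 1 <= lvl[{v}]" for (u, v) in edges]

def levels_exist(n, edges):
    """Decide satisfiability of those constraints directly, by a
    Kahn-style topological ordering of the vertices 0..n-1."""
    indeg = {v: 0 for v in range(n)}
    for _, v in edges:
        indeg[v] += 1
    queue = [v for v in range(n) if indeg[v] == 0]
    seen = 0
    while queue:
        u = queue.pop()
        seen += 1
        for (a, b) in edges:
            if a == u:
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
    return seen == n   # every vertex ordered <=> no cycle

assert levels_exist(3, [(0, 1), (1, 2)])
assert not levels_exist(3, [(0, 1), (1, 2), (2, 0)])
```

Note the cost: n vertices require n variables, each with a domain of size n; it is this blow-up that makes the naive encoding inefficient for large graphs.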

In the presence of sum types, the

In the work discussed here, we first used the exp-log decomposition of
the arrow type, inspired by the analytic transformation, to obtain a
*simplification* of the so-far
standard axioms for
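
For reference, the analytic transformation in question is, to our understanding, the classical exponential-logarithm identity, read at the level of types (with exponentiation as the arrow, product as conjunction, and sum as disjunction):

```latex
% Exponentiation decomposed through exp and log:
b^{a} \;=\; \exp\!\bigl(a \cdot \log b\bigr)
% with the type-level reading:
%   b^{a}  corresponds to  a \to b,
%   a \cdot b  to  a \times b,  and  a + b  to the sum type  a + b.
```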

Moreover, we provided a Coq implementation of a heuristic decision procedure for this equality. Although only a heuristic, this implementation manages to handle examples of equal terms that require a complex program analysis in the only previously implemented heuristic, due to Vincent Balat.

This work is described in a paper accepted for presentation at POPL 2017.

In sequent calculi, proof rules can be divided into two groups: invertible (asynchronous) rules and non-invertible (synchronous) rules. Even in focused sequent calculi both groups are present, albeit grouped together into synthetic rules (one then speaks of the synchronous and the asynchronous phase).

In this work, we used the exp-log decomposition (described above) in the context of logic in order to obtain a version of the sequent calculus that contains synchronous rules only, the first such formalism for intuitionistic logic.

We extended the picture from the propositional setting to that of first-order intuitionistic logic, where the exp-log decomposition provided us with an intuitionistic hierarchy of formulas analogous to the classical arithmetical hierarchy; although the classical arithmetical hierarchy has existed since the 1920s, a correspondingly versatile notion for intuitionistic logic had remained elusive up to this day.

This work is described in a manuscript submitted to an academic journal.

Title: ProofCert: Broad Spectrum Proof Certificates

Program: FP7

Type: ERC

Duration: January 2012 - December 2016

Coordinator: Inria

Inria contact: Dale Miller

There is little hope that the world will know secure software if we cannot make greater strides in the practice of formal methods: hardware and software devices with errors are routinely turned against their users. The ProofCert proposal aims at building a foundation that will allow a broad spectrum of formal methods, ranging from automatic model checkers to interactive theorem provers, to work together to establish formal properties of computer systems. This project starts with a wonderful gift to us from decades of work by logicians and proof theorists: their efforts on logic and proof have given us a universally accepted means of communicating proofs between people and computer systems. Logic can be used to state desirable security and correctness properties of software and hardware systems, and proofs are uncontroversial evidence that statements are, in fact, true. The current state of the art of formal methods used in academia and industry shows, however, that the notion of logic and proof is severely fractured: there is little or no communication between any two such systems. Thus any effort on computer system correctness is needlessly repeated many times across the many different systems; sometimes this work is even redone when a given prover is upgraded. In ProofCert, we will build, on the bedrock of decades of research into logic and proof theory, the notion of proof certificates. Such certificates will allow for a complete reshaping of the way that formal methods are employed. Given the infrastructure and tools envisioned in this proposal, the world of formal methods will become as dynamic and responsive as the world of computer viruses and hackers has become.

Title: The Fine Structure of Formal Proof Systems and their Computational Interpretations

Duration: 01/01/2016 – 31/12/2018

Partners:

University Paris VII, PPS (PI: Michel Parigot)

Inria Saclay–IdF, EPI Parsifal (PI: Lutz Straßburger)

University of Innsbruck, Computational Logic Group (PI: Georg Moser)

Vienna University of Technology, Theory and Logic Group (PI: Matthias Baaz)

Total funding by the ANR: 316 805 EUR

The FISP project is part of a long-term, ambitious project whose objective is to apply the powerful and promising techniques from structural proof theory to central problems in computer science for which they have not been used before, especially the understanding of the computational content of proofs, the extraction of programs from proofs and the logical control of refined computational operations. So far, the work done in the area of computational interpretations of logical systems is mainly based on the seminal work of Gentzen, who in the mid-thirties introduced the sequent calculus and natural deduction, along with the cut-elimination procedure. But that approach shows its limits when it comes to computational interpretations of classical logic or the modelling of parallel computing. The aim of our project, based on the complementary skills of the teams, is to overcome these limits. For instance, deep inference provides new properties, namely full symmetry and atomicity, which were not available until recently and opened new possibilities at the computing level, in the era of parallel and distributed computing.

*Title*: COst model for Complexity Analyses of Higher-Order programming LAnguages.

*Collaborators*: Ugo Dal Lago (University of Bologna & Inria), Delia Kesner (Paris Diderot University), Damiano Mazza (CNRS & Paris 13 University), Claudio Sacerdoti Coen (University of Bologna).

*Duration*: 01/10/2016 – 31/09/2019

*Total funding by the ANR*: 155 280 EUR

The COCA HOLA project aims at developing complexity analyses of higher-order computations, i.e., the approach to computation where the inputs and outputs of a program are not simply numbers, strings, or compound data types, but programs themselves. The focus is not on analysing fixed programs, but whole programming languages. The aim is the identification of adequate units of measurement for time and space, i.e., what are called reasonable cost models. The problem is non-trivial because the evaluation of higher-order languages is defined abstractly, via high-level operations, leaving the implementation unspecified. Concretely, the project will analyse different implementation schemes, measuring precisely their computational complexity with respect to the number of high-level operations, and eventually develop new, more efficient ones. The goal is to obtain a complexity-aware theory of implementations of higher-order languages with both theoretical and practical consequences.

The project stems from recent advances in the theory of time cost models for the lambda-calculus, the computational model behind the higher-order approach, obtained by the principal investigator and his collaborators (who are included in the project).

COCA HOLA will span three years and is organised around three work packages, essentially:

extending the current results to encompass realistic languages;

exploring the gap between positive and negative results in the literature;

using ideas from linear logic to explore space cost models, about which almost nothing is known.

Title: Analytic Calculi for Modal Logics

Duration: 01/01/2016 – 31/12/2017

Austrian Partner: TU Wien, Institute for Computer Science (Department III)

Modal logics are obtained from propositional logics by adding
modalities, typically □ ("necessarily") and ◇ ("possibly").

The purpose of this project is to develop a proof theory for variants of modal logic that have applications in modern computer science but that have been neglected by traditional proof theory so far.

Professor Chuck Liang (from Hofstra University, NY, USA) visited the team from 5 June to 25 June 2016 in order to continue his collaborations with team members on basic questions of proof theory. In particular, he worked with Miller on identifying possible means to allow classical and intuitionistic logic to be mixed in a common proof system. Miller is exploring how the resulting ideas might be able to reorganize the notion of kernel logic used within the ProofCert project.

Ameni Chtourou was an intern funded by ProofCert during May, June, and
July 2016. She was advised by Accattoli and worked on using the
Abella theorem prover to formalize various connections
between

Stéphane Graham-Lengrand spent 8 months, from January 2016 to August 2016, at SRI International's Computer Science Laboratory. This visit developed a collaboration with N. Shankar, M. P. Bonacina, D. Jovanovic, and Martin Schaeff on new algorithms and new architectures for automated and interactive theorem proving, as well as on new program verification techniques.

D. Miller was on the Steering Committee for the FSCD series of International Conference on Formal Structures for Computation and Deduction.

D. Miller was a member of the jury for selecting the 2016 Ackermann Award (the EACSL award for outstanding doctoral dissertation in the field of Logic in Computer Science).

D. Miller was an ex officio member of the Executive Committee of the ACM Special Interest Group on Logic and Computation (SIGLOG), from April 2014 to June 2016. He was also a member of the SIGLOG advisory board, starting November 2015.

D. Miller was on the Program Committee of the following meetings.

FSCD’16: First International Conference on Formal Structures for Computation and Deduction, Porto, Portugal, 22-26 June.

IJCAR 2016: International Joint Conference on Automated Reasoning, Coimbra, Portugal, 27 June - 2 July.

CPP 2016: Fifth International Conference on Certified Programs and Proofs, 18-19 January, Saint Petersburg, Florida.

B. Accattoli was one of the two Program Committee chairs of the 5th International Workshop on Confluence (IWC 2016).

N. Zeilberger served on the program committee for workshops Computational Logic and Applications (CLA 2016) and Off the Beaten Track (OBT 2016).

N. Zeilberger served on the external review committee for POPL 2017.

L. Straßburger was on the Program Committee for LICS 2016.

D. Miller was a reviewer for CONCUR 2016: the International Conference on Concurrency Theory.

B. Accattoli was a reviewer for the international conferences ICTAC 2016, FSCD 2016, LICS 2016 (twice), FOSSACS 2017.

S. Graham-Lengrand was a reviewer for the international conferences FSCD 2016 (twice), LICS 2016 (three times), Concur 2016, VSTTE 2017, HATT 2017, FSTTCS 2017.

L. Straßburger was a reviewer for the international conferences LICS 2016 (8 times), FLOPS 2016.

M. Volpe was a reviewer for the international conferences IJCAR 2016 and CSL 2016.

H. Steele was a reviewer for the international conference LICS 2016.

D. Miller is on the editorial board of the following
journals: *ACM Transactions on Computational Logic*,
*Journal of Automated Reasoning* (Springer),
*Theory and Practice of Logic Programming* (Cambridge University
Press), and *Journal of Applied Logic* (Elsevier).

S. Graham-Lengrand has been a reviewer for the journals
*Fundamenta Informaticae*,
*Transactions on Computational Logic*,
*Journal of Logic and Computation*,
*Logical Methods in Computer Science*,
*Journal of Automated Reasoning*.

Danko Ilik was a reviewer for Mathematical Reviews and Zentralblatt MATH.

F. Lamarche was a reviewer for *Mathematical
Structures in Computer Science*.

Lutz Straßburger was a reviewer for the journals
*Theoretical Computer Science* and *Logical Methods
in Computer Science*.

Marco Volpe was a reviewer for the journal *Annals of Mathematics and Artificial Intelligence*.

Beniamino Accattoli was a reviewer for the journals *Theoretical Computer Science* and *Logical Methods in Computer Science*.

D. Miller was an invited speaker at the following conferences and workshops.

Workshop on linear logic, mathematics and computer science as part of “LL2016-Linear Logic: interaction, proofs and computation”, 7-10 November 2016, Lyon, France.

Linearity 2016. Porto, 25 June 2016.

CIPPMI (Current issues in the philosophy of practice of mathematics and informatics) Workshop on Proofs, justifications and certificates. 3-4 June 2016, Toulouse, France.

TYPES 2016: 22nd International Conference on Types for Proofs and Programs. Novi Sad, Serbia, 23-26 May 2016.

D. Miller was an invited speaker at the research seminar titled “Interactions between logic, computer science and linguistics: history and philosophy”, Université de Lille 3, 15 June 2016.

D. Miller was an invited speaker at the ACADIA research centre, Ca’ Foscari University, Venice, 27 April 2016.

B. Accattoli was invited speaker at WPTE 2016: 3rd International Workshop on Rewriting Techniques for Program Transformations and Evaluation (Porto, 23 June 2016).

S. Graham-Lengrand gave an invited talk at CLAM 2016: 5th Latin American Congress of Mathematicians, thematic session on Logic and Computability (Barranquilla, Colombia, 15th July 2016).

N. Zeilberger was an invited lecturer at OPLSS 2016: Oregon Programming Languages Summer School on Types, Logic, Semantics, and Verification.

D. Miller was a member of the ACM SIGLOG Advisory Board, the LICS Organizing Board, the CPP Steering Committee, and the ACM SIGLOG Executive Committee Nominating Committee.

S. Graham-Lengrand is the head of the National Workgroup on “Logic, Algebra, and Computation”, within the Informatique Mathématique section of CNRS.

L. Straßburger has served on the “commission développement technologique (CDT)” for Inria Saclay–Île-de-France since June 2012.

Master: D. Miller, “*MPRI 2-1: Logique linéaire et
paradigmes logiques du calcul*”, 12 hours, M2, Master Parisien de
Recherche en Informatique, France.

Licence: S. Graham-Lengrand, “*INF412: Fondements de l'Informatique: Logique, Modèles, Calcul*”, 32 hours eq. TD, L3, École Polytechnique,
France.

Master: S. Graham-Lengrand, “*INF551:
Computational Logic*”, 45 hours eq. TD, M1, École
Polytechnique, France.

Master: S. Graham-Lengrand, “*MPRI 2-1: Logique linéaire et
paradigmes logiques du calcul*”, 6 hours, M2, Master Parisien de
Recherche en Informatique, France.

Undergraduate: K. Chaudhuri, R. Blanco, M. Volpe, G. Reis, and T. Libal all taught or tutored exercises for first- and second-year undergraduate courses, mostly at École Polytechnique.

PhD in progress: Sonia Marin, 1 Nov 2014, supervised by L. Straßburger and D. Miller

PhD in progress: Roberto Blanco, Ulysse Gérard, and Quentin Heath, supervised by D. Miller

PhD in progress: François Thiré (since 1st October 2016), supervised by S. Graham-Lengrand (joint with G. Dowek)

Miller was a reporter for the PhD juries of Raphaël Cauderlier (CNAM, 10 October 2016) and Gabriel Scherer (Université Paris-Diderot, 30 March 2016).

Graham-Lengrand was a reporter for the PhD jury of Pierre Halmagrand (CNAM, 10 December 2016).