The aim of the Parsifal team is to develop and exploit *proof
theory* and *type theory* in the specification,
verification, and analysis of computational systems.

*Expertise*: the team conducts basic research in proof
theory and type theory. In particular, the team is developing
results that help with automated deduction and with the
manipulation and communication of formal proofs.

*Design*: based on experience with computational systems
and theoretical results, the team develops new logical principles,
new proof systems, and new theorem proving environments.

*Implementation*: the team builds prototype systems to
help validate basic research results.

*Examples*: the design and implementation efforts are
guided by examples of specification and verification problems.
These examples not only test the success of the tools but also
drive investigations into new principles and new areas of proof
theory and type theory.

The foundational work of the team focuses on *structural* and
*analytic* proof theory, *i.e.*, the study of formal
proofs as algebraic and combinatorial structures and the study of
proof systems as deductive and computational formalisms. The main
focus in recent years has been the study of the *sequent
calculus* and of the *deep inference* formalisms.

An important research question is how to reason about computational
specifications that are written in a *relational* style. To
this end, the team has been developing new approaches to dealing
with induction, co-induction, and generic quantification. A second
important question is of *canonicity* in deductive systems,
*i.e.*, when are two derivations “essentially the same”? This
question is important not only for proof search, where it yields
insight into the structure of the proof search space and the ability
to manipulate it, but also for the communication of *proof
objects* between different reasoning agents such as automated
theorem provers and proof checkers.

Important application areas currently include:

Meta-theoretic reasoning on functional programs, such as terms in the
λ-calculus.

Reasoning about behaviors in systems with concurrency and
communication, such as the π-calculus.

Combining interactive and automated reasoning methods for induction and co-induction.

Verification of distributed, reactive, and real-time algorithms that are often specified using modal and temporal logics.

Representing proofs as documents that can be printed, communicated, and checked by a wide range of computational logic systems.

Development of cost models for the evaluation of proofs and programs.

There are two broad approaches for computational specifications. In
the *computation as model* approach, computations are encoded as
mathematical structures containing nodes, transitions, and state.
Logic is used to *describe* these structures, that is, the
computations are used as models for logical expressions. Intensional
operators, such as the modalities of temporal and dynamic logics or
the triples of Hoare logic, are often employed to express
propositions about changes in state.
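For instance (our illustration, using a generic assignment command), a Hoare triple packages a precondition and a postcondition around a state-changing program:

```latex
\{\, x = n \,\} \quad x := x + 1 \quad \{\, x = n + 1 \,\}
```

The triple asserts that running the assignment in any state satisfying the precondition yields a state satisfying the postcondition; the modalities of temporal logic play an analogous role for transition systems.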

The *computation as deduction* approach, in contrast, expresses
computations logically, using formulas, terms, types, and proofs as
computational elements. Unlike in the model approach, the general
logical apparatus, such as cut-elimination and automated deduction,
becomes directly applicable as a tool for defining, analyzing, and
animating computations. Indeed, we can identify two main aspects of
logical specifications that have been very fruitful:

*Proof normalization*, which treats the state of a
computation as a proof term and computation as the normalization of
proof terms. General reduction principles, such as β-reduction and
cut-elimination, describe the dynamics of such computations.

*Proof search*, which views the state of a computation as a
structured collection of formulas, known as a *sequent*, and
proof search in a suitable sequent calculus as encoding the dynamics
of the computation. Logic programming is based on proof search, and
different proof search strategies can be used to justify the design
of new and different logic programming languages.

While the distinction between these two aspects is somewhat informal, it helps to identify and classify different concerns that arise in computational semantics. For instance, confluence and termination of reductions are crucial considerations for normalization, while unification and strategies are important for search. A key challenge of computational logic is to find means of uniting or reorganizing these apparently disjoint concerns.
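To make the proof-search reading concrete, here is a minimal sketch in OCaml (our illustration, not one of the team's systems) of backward chaining over propositional Horn clauses, with a depth bound standing in for a real termination strategy:

```ocaml
(* A propositional Horn clause: head :- body. Facts have an empty body. *)
type clause = { head : string; body : string list }

(* Backward chaining: a goal atom is provable if some clause with that
   head has a provable body. The [depth] bound keeps search finite. *)
let rec prove program depth goal =
  depth > 0
  && List.exists
       (fun c ->
         c.head = goal
         && List.for_all (prove program (depth - 1)) c.body)
       program

(* Hypothetical example program: reachability over two edges. *)
let example_program =
  [ { head = "path_ab"; body = [] };
    { head = "path_bc"; body = [] };
    { head = "path_ac"; body = [ "path_ab"; "path_bc" ] } ]
```

Here the sequent is degenerate (a single atom to prove against a fixed program); real logic programming languages refine exactly this search, and different search strategies yield different languages.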

An important organizational principle is structural proof theory,
that is, the study of proofs as syntactic, algebraic and
combinatorial objects. Formal proofs often have equivalences in
their syntactic representations, leading to an important research
question about *canonicity* in proofs – when are two proofs
“essentially the same?” The syntactic equivalences can be used to
derive normal forms for proofs that illuminate not only the proofs
of a given formula, but also its entire proof search space. The
celebrated *focusing* theorem of
Andreoli identifies one such normal form
for derivations in the sequent calculus that has many important
consequences both for search and for computation. The combinatorial
structure of proofs can be further explored with the use of
*deep inference*; in particular, deep inference allows access
to simple and manifestly correct cut-elimination procedures with
precise complexity bounds.

Type theory is another important organizational principle, but most
popular type systems are designed either for search or for
normalization. To give some examples, the Coq system, which
implements the Calculus of Inductive Constructions (CIC), is designed
to facilitate the expression of computational features of proofs
directly as executable functional programs, but general proof search
techniques for Coq are rather primitive. In contrast, the Twelf
system, which is based on the LF type theory (a subsystem of the
CIC), works with relational specifications in canonical form
(*i.e.*, without redexes), for which there are sophisticated
automated reasoning systems such as meta-theoretic analysis tools,
logic programming engines, and inductive theorem provers. In recent years,
there has been a push towards combining search and normalization in
the same type-theoretic framework. The Beluga
system, for example, is an extension of
the LF type theory with a purely computational meta-framework where
operations on inductively defined LF objects can be expressed as
functional programs.

The Parsifal team investigates both the search and the normalization aspects of computational specifications using the concepts, results, and insights from proof theory and type theory.

The team has spent a number of years designing a strong new logic that can be used to reason (inductively and co-inductively) about syntactic expressions containing bindings. This work is based on earlier work by McDowell, Miller, and Tiu, and on more recent work by Gacek, Miller, and Nadathur. The Parsifal team, along with colleagues in Minneapolis, Canberra, Singapore, and Cachan, has been building two tools that exploit the novel features of this logic. These two systems are the following.

Abella, which is an interactive theorem prover for the full logic.

Bedwyr, which is a model checker for the “finite” part of the logic.

We have used these systems to formalize reasoning about a number of
complex formal systems, ranging from programming languages to the
π-calculus.

Since 2014, the Abella system has been extended with a number of new features. Several significant new examples have been implemented in Abella, and an extensive tutorial for it has been written.

The team is developing a framework for defining the semantics of proof evidence. With this framework, implementers of theorem provers can output proof evidence in a format of their choice: they will only need to be able to formally define that evidence's semantics. With such semantics provided, proof checkers can then check alleged proofs for correctness. Thus, anyone who needs to trust proofs from various provers can put their energies into designing trustworthy checkers that can execute the semantic specification.

In order to provide our framework with the flexibility that this
ambitious plan requires, we have based our design on the most recent
advances within the theory of proofs. For a number of years, various
team members have been contributing to the design and theory of
*focused proof systems*, and we have adopted such proof systems as
the cornerstone of our framework.

We have also been working for a number of years on the implementation of computational logic systems, involving, for example, both unification and backtracking search. As a result, we are also building an early reference implementation of our semantic definitions.

Deep inference is a novel methodology for presenting deductive systems. Unlike traditional formalisms such as the sequent calculus, it allows rewriting of formulas deep inside arbitrary contexts. This new freedom in designing inference rules creates a richer proof theory. For example, systems using deep inference enjoy a greater variety of normal forms for proofs than sequent calculus or natural deduction systems. Another advantage of deep inference systems is their close relationship to category-theoretic proof theory: due to the deep inference design, one can directly read off the morphisms from the derivations, with no need for a counter-intuitive translation.
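For illustration (this instance is ours, quoted from the standard deep-inference literature rather than from a result above), the *switch* rule of the deep-inference system SKS for classical logic rewrites a formula inside an arbitrary context S{ }:

```latex
\mathsf{s}\;\frac{S\{(A \vee B) \wedge C\}}{S\{A \vee (B \wedge C)\}}
```

Because S{ } may be any context, the rule applies arbitrarily deep inside a formula, which is precisely the freedom unavailable in the sequent calculus.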

The following research problems are investigated by members of the Parsifal team:

Find deep inference systems for richer logics. This is necessary for making the proof-theoretic results of deep inference accessible to the applications described in the previous sections of this report.

Investigate the possibility of focusing proofs in deep inference. As described before, focusing is a way to reduce the non-determinism in proof search. However, it is well investigated only for the sequent calculus. In order to apply deep inference in proof search, we need to develop a theory of focusing for deep inference.

*Proof nets* are graph-like presentations of sequent calculus proofs
in which all “trivial rule permutations” are quotiented away.
Ideally, the notion of proof net should be independent of any
syntactic formalism, but most notions of proof nets proposed in the
past were formulated in terms of their relation to the sequent
calculus. Consequently, they retained features like “boxes” and
explicit “contraction links”. The latter appeared not only in
Girard's proof nets for linear logic but also in Robinson's proof
nets for classical logic. In these kinds of proof nets, every link in
the net corresponds to a rule application in the sequent calculus.

Only recently, due to the rise of deep inference, new kinds of proof
nets have been introduced that take the formula trees of the
conclusions and add additional “flow-graph” information, leading to
the notion of *atomic flow*. On one side, this gives new insights
into the essence of proofs and their normalization; on the other
side, all the known correctness criteria are no longer available.

*Combinatorial proofs* are another form of syntax-independent proof
presentation, which separates the multiplicative from the additive
behaviour of classical connectives.

The following research questions are investigated by members of the Parsifal team:

Finding (for classical and intuitionistic logic) a notion of canonical proof presentation that is deductive, i.e., can effectively be used for doing proof search.

Studying the normalization of proofs using atomic flows and combinatorial proofs, as they simplify the normalization procedure for proofs in deep inference and additionally provide new insights into the complexity of normalization.

Studying the size of proofs in the combinatorial proof formalism.

In the *proof normalization* approach, computation is usually reformulated as the evaluation of functional programs, expressed as terms in a variation over the λ-calculus.

Models like Turing machines or RAM rely on atomic computational steps and thus admit quite obvious cost models for time and space. The λ-calculus, in contrast, has no comparable notion of atomic step, which makes the definition of reasonable cost models a subtle problem.

Nonetheless, it turns out that the number of β-steps of *weak evaluation* (i.e., reducing only outside of abstractions) is a reasonable time cost model.

With the recent recruitment of Accattoli, the team's research has expanded in this direction. The topics under investigation are:

*Complexity of Abstract Machines*. Bounding and comparing the overhead of different abstract machines for different evaluation schemas (weak/strong, call-by-name/value/need).

*Reasonable Space Cost Models*. Essentially nothing is known about reasonable space cost models. It is known, however, that environment-based execution models, which are the mainstream technology for functional programs, do not provide an answer. We are exploring the use of the non-standard implementation models provided by Girard's Geometry of Interaction to address this question.
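As a concrete (textbook) instance of the environment-based machines mentioned above, here is a minimal Krivine abstract machine for weak call-by-name evaluation in OCaml; it is our illustrative sketch, not one of the machines analysed by the team:

```ocaml
(* Terms with de Bruijn indices. *)
type term = Var of int | Lam of term | App of term * term

(* A closure pairs a term with its environment; the environment is a
   list of closures, indexed by de Bruijn variables. *)
type closure = Clo of term * closure list

(* Krivine machine: the state is a closure and a stack of argument
   closures; it stops at a weak head normal form. *)
let rec run (Clo (t, env)) stack =
  match t, stack with
  | Var n, _ -> run (List.nth env n) stack          (* look up the variable *)
  | App (u, v), _ -> run (Clo (u, env)) (Clo (v, env) :: stack)
  | Lam b, arg :: rest -> run (Clo (b, arg :: env)) rest
  | Lam _, [] -> Clo (t, env)                       (* weak head normal form *)

(* (\x. x) (\y. y) evaluates to \y. y *)
let identity = Lam (Var 0)
```

Each machine transition is an atomic step, which is what makes such machines the natural setting for comparing overheads against the number of β-steps of the calculus.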

The production of real-world verified software has made it necessary to integrate results coming from different theorem provers in a single certification package. One approach to this integration task is by exchanging proof evidence and relying on a backend proof-checker.

Another approach to integration consists in re-implementing the theorem proving techniques as proof-search strategies, on an architecture that guarantees correctness.

Inference systems in general, and focused sequent calculi in
particular, can serve as the basis of such an architecture, providing
primitives for the exploration of the search space. These form a
trusted *Application Programming Interface* that can be used to
program and experiment with various proof-search heuristics without
worrying about correctness. No proof-checking is needed if one trusts
the implementation of the API.
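The LCF-style discipline behind such a trusted API can be sketched in OCaml (a toy illustration of the architecture, not the actual Psyche kernel): the theorem type is kept abstract, so untrusted plugins can only produce theorems by calling the kernel's primitives:

```ocaml
(* An LCF-style kernel: the type [thm] is abstract, so clients can only
   build theorems through the kernel's inference primitives. *)
module type KERNEL = sig
  type thm
  val axiom : string -> thm        (* named axioms are trusted by assumption *)
  val conj : thm -> thm -> thm     (* conjunction introduction *)
  val concl : thm -> string        (* read back the conclusion *)
end

module Kernel : KERNEL = struct
  type thm = string
  let axiom a = a
  let conj t1 t2 = "(" ^ t1 ^ " /\\ " ^ t2 ^ ")"
  let concl t = t
end

(* Untrusted search strategies are layered on top: whatever they do,
   any [thm] they produce was built by the primitives above. *)
```

The type abstraction enforced by the module system is what replaces proof checking: soundness reduces to trusting the small kernel module.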

This approach has led to the development of the Psyche engine, and to its latest branch CDSAT.

Three major research directions are currently being explored, based on the above:

The first one is about formulating automated reasoning techniques
in terms of inference systems,
so that they fit the approach described above.
While this is rather standard for techniques used in first-order Automated Theorem Provers (ATPs),
such as resolution, superposition, etc,
this is much less standard in SMT-solving,
the branch of automated reasoning that can natively handle reasoning
in a combination of mathematical theories:
the traditional techniques developed there usually organise the collaborations
between different reasoning black boxes,
whose opaque mechanisms less clearly connect to proof-theoretical inference systems.
We are therefore investigating new foundations
for reasoning in combinations of theories,
expressed as fine-grained inference systems,
and have developed the *Conflict-Driven Satisfiability framework*
for these foundations.

The second one is about understanding how to deal with
quantifiers in the presence of one or more theories: on the one hand,
traditional techniques for quantified problems, such as
*unification* or *quantifier elimination*, are usually designed
for either the empty theory or very specific theories. On the other
hand, the industrial techniques for combining theories
(Nelson-Oppen, Shostak, MCSAT) are designed for quantifier-free
problems, and quantifiers there are dealt with using incomplete
*clause instantiation* methods or *trigger*-based techniques. We are
working on making the two approaches compatible.

The above architecture's modular approach raises the
question of how its different modules can safely cooperate (in
terms of guaranteed correctness), while some of them are trusted
and others are not. The issue is particularly acute if some of the
techniques are run concurrently and exchange data at unpredictable
times. For this we explore new solutions based on Milner's *LCF*
architecture. We have argued that our solutions in particular provide
a way to fulfil the “Strategy Challenge for SMT-solving” set by De
Moura and Passmore.

The application domain of the *cost models and abstract machines for functional programs* line of work—when *application* is intended in concrete terms—is the implementation of proof assistants.

Both functional languages and proof assistants rely on the evaluation of λ-terms: functional languages implement *weak* evaluation, which does not reduce under abstractions, while proof assistants also require *strong* evaluation, which does.

The study of reasonable cost models naturally leads to a refined theory of implementations, where different techniques and optimisations are classified depending on their complexity (with respect to the cost model). This direction is particularly relevant for the strong case, which so far has been treated only in an *ad-hoc* way.

The theoretical study has in particular pointed out that all available proof assistants are implemented following unreasonable implementation schemas, where *unreasonable* here means with a potentially exponential overhead with respect to the number of steps in the calculus.

Beniamino Accattoli collaborates with Bruno Barras—one of the implementors of *Coq*, the most used proof assistant—and Claudio Sacerdoti Coen—one of the implementors of *Matita*—in order to develop a fine-grained theory of implementation for proof assistants.

If *applications* are intended also at a more theoretical level, the study of reasonable cost models is also applicable to the development of quantitative denotational semantics, to higher-order approaches to complexity theory, and to implicit computational complexity.

D. Miller has been made General Chair of the LICS Conference Series for three years, starting July 2018.

Functional Description: Abella is an interactive theorem prover for reasoning about computations given as relational specifications. Abella is particularly well suited for reasoning about binding constructs.

Participants: Dale Miller, Gopalan Nadathur, Kaustuv Chaudhuri, Mary Southern, Matteo Cimini, Olivier Savary-Bélanger and Yuting Wang

Partner: Department of Computer Science and Engineering, University of Minnesota

Contact: Kaustuv Chaudhuri

*Bedwyr - A proof search approach to model checking*

Keyword: Model Checker

Functional Description: Bedwyr is a generalization of logic programming that allows model checking directly on syntactic expressions that possibly contain bindings. This system, written in OCaml, is a direct implementation of two recent advances in the theory of proof search.

It is possible to capture both finite success and finite failure in a sequent calculus. Proof search in such a proof system can capture both may and must behavior in operational semantics. Higher-order abstract syntax is directly supported using term-level lambda-binders, the nabla quantifier, higher-order pattern unification, and explicit substitutions. These features allow reasoning directly on expressions containing bound variables.

The distributed system comes with several example applications, including the finite pi-calculus (operational semantics, bisimulation, trace analyses, and modal logics), the spi-calculus (operational semantics), value-passing CCS, the lambda-calculus, winning strategies for games, and various other model checking problems.

Participants: Dale Miller, Quentin Heath and Roberto Blanco Martinez

Contact: Dale Miller

*Checkers - A proof verifier*

Keywords: Proof - Certification - Verification

Functional Description: Checkers is a tool written in λProlog for the certification of proofs. It consists of a kernel based on the focused sequent calculus LKF and builds on the notion of ProofCert.

Participants: Giselle Machado Nogueira Reis, Marco Volpe and Tomer Libal

Contact: Tomer Libal

*Proof-Search factorY for Collaborative HEuristics*

Functional Description: Psyche is a modular platform for automated or interactive theorem proving, programmed in OCaml and built on an architecture (similar to LCF) where a trusted kernel interacts with plugins. The kernel offers an API of proof-search primitives, and plugins are programmed on top of the API to implement search strategies. This architecture is set up for pure logical reasoning as well as for theory-specific reasoning, for various theories.

Release Functional Description: It is now equipped with the machinery to handle quantifiers and quantifier-handling techniques. Concretely, it uses meta-variables to delay the instantiation of existential variables, and constraints on meta-variables are propagated through the various branches of the search-space, in a way that allows local backtracking. The kernel, of about 800 l.o.c., is purely functional.

Participants: Assia Mahboubi, Jean-Marc Notin and Stéphane Graham-Lengrand

Contact: Stéphane Graham-Lengrand

Functional Description: Mætning is an automated theorem prover for intuitionistic predicate logic that is designed to disprove non-theorems.

Contact: Kaustuv Chaudhuri

Keywords: Functional programming - Static typing - Compilation

Functional Description: The OCaml language is a functional programming language that combines safety with expressiveness through the use of a precise and flexible type system with automatic type inference. The OCaml system is a comprehensive implementation of this language, featuring two compilers (a bytecode compiler, for fast prototyping and interactive use, and a native-code compiler producing efficient machine code for x86, ARM, PowerPC and System Z), a debugger, a documentation generator, a compilation manager, a package manager, and many libraries contributed by the user community.
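A small example (ours, not part of the OCaml distribution) of the interplay of inference and safety described above: the most general polymorphic type is inferred without any annotation, and ill-typed uses are rejected at compile time:

```ocaml
(* Types are inferred: [compose] gets the polymorphic type
   ('a -> 'b) -> ('c -> 'a) -> 'c -> 'b without any annotation. *)
let compose f g x = f (g x)

(* Safe uses need no runtime checks; unsafe ones do not compile.
   Here the composition increments an integer, then prints it. *)
let add_one = compose string_of_int (fun n -> n + 1)
```

Applying `add_one` to `41` yields the string `"42"`; applying it to a string instead would be a compile-time type error rather than a runtime failure.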

Participants: Damien Doligez, Xavier Leroy, Fabrice Le Fessant, Luc Maranget, Gabriel Scherer, Alain Frisch, Jacques Garrigue, Marc Shinwell, Jeremy Yallop and Leo White

Contact: Damien Doligez

URL: https://

We have been designing a new functional programming language, MLTS,
that uses the *λ-tree syntax* approach to encoding bindings that
appear within data structures. In this setting, bindings never become
free nor escape their scope: instead, binders in data structures are
permitted to move into binders within programs.

The operational semantics of MLTS is given using natural semantics
for evaluation. We view such natural semantics as a logical theory
within a rich logic that includes both nominal abstraction and the
∇ (nabla) quantifier.
We have developed a number of examples of how this new programming
language can be used. Some of the most convincing of these examples
are programs that manipulate untyped λ-terms.

While model checking has often been considered as a practical
alternative to building formal proofs, we have argued that the theory
of sequent calculus proofs can be used to provide an appealing
foundation for model checking. Given that
the emphasis of model checking is on establishing the truth of a
property in a model, our framework concentrates on *additive*
inference rules, since these provide a natural description of truth
values. Unfortunately, using these rules alone
can force the use of inference rules with an infinite number of
premises. In order to accommodate more expressive and finitary
inference rules, *multiplicative* rules must be used, but limited
to the construction of *additive synthetic inference rules*: such
synthetic rules are described using the proof-theoretic notions of
polarization and focused proof systems. This framework provides a
natural, proof-theoretic treatment of reachability and
non-reachability problems, as well as tabled deduction, bisimulation,
and winning strategies. (Q. Heath collaborated on several parts of
this research effort.)

We continued our research on combinatorial proofs as a notion of proof identity for classical logic. We managed to extend our results from last year: we show, for various syntactic formalisms including the sequent calculus, analytic tableaux, and resolution, how proofs can be translated into combinatorial proofs and which notion of identity each formalism enforces. This allows the comparison of proofs that are given in different formalisms.

These results have been presented at the MLA workshop in Kanazawa and at the IJCAR conference in Oxford, where they were published.

In joint work with Willem Heijltjes (University of Bath) and Dominic Hughes (UC Berkeley), we present canonical proof nets for first-order additive linear logic, the fragment of linear logic with sum, product, and first-order universal and existential quantification. We present two versions of our proof nets. One, witness nets, retains explicit witness information for existential quantification. For the other, unification nets, this information is absent but can be reconstructed through unification. Unification nets embody a central contribution of the paper: first-order witness information can be left implicit and reconstructed as needed. Witness nets are canonical for the first-order additive sequent calculus; unification nets in addition factor out any inessential choice of existential witnesses. Both notions of proof net are defined through coalescence, an additive counterpart to multiplicative contractibility, and for witness nets an additional geometric correctness criterion is provided. Both capture sequent calculus cut-elimination as a one-step global composition operation.

These results have been published and were presented at the First Workshop of the Proof Society in Ghent and at the 3rd FISP workshop in Vienna.

The decision problem for multiplicative exponential linear logic (MELL) is one of the most important open problems in the area of linear logic. In 2015 there was an attempt by Bimbó to prove the decidability of MELL. However, we have found several mistakes in that work; the main mistake is so serious that there is no obvious fix, and therefore the decidability of MELL remains open. As a side effect, our work contains a complete (syntactic) proof of the decidability of the relevant version of MELL, that is, the logic obtained from MELL by replacing the linear logic contraction rule by a general unrestricted version of the contraction rule. These results are presented in a separate publication.

We worked on the evolution of advanced features of the OCaml programming language, designing static analyses to ensure their safety through a scientific study of their metatheory. Specifically, we worked on unboxed type declarations (during an internship by Simon Colin, M1 from École Polytechnique) and recursive value definitions (during an internship by Alban Reynaud, L3 from ENS Lyon). The two internships and follow-up work each resulted in both a change proposal to the OCaml implementation and a submission to an academic conference.

Thomas Réfis (Jane Street) and Frédéric Bour maintain the Merlin language server for OCaml, a tool that provides language-aware features to text editors. We collaborated with them on dissecting the tool and explaining its design and evolution; the similarities and differences with usual compiler frontends may inform future language implementation work, and our language-agnostic presentation may be of use to tool designers for other languages and proof assistants.

In a programming system where programs are created in one programming language, we consider the addition of another programming language that interoperates with the first, and the reimplementation of some library or system functions in this new language. This can increase expressivity, but it can also break assumptions made by programmers. Typically, adding a bridge to C or assembly code can introduce memory-unsafe code into a previously safe system. We formalize a notion of “graceful” interoperability between two languages in this setting, determined by full abstraction, that is, preservation of equational reasoning. We instantiate this general idea by extending ML with an advanced expert language with linear types and linear mutable cells.

The *two-level logic approach* that underlies the Abella prover
is excellent at reasoning about the inductive structure of terms with
binding constructs, such as λ-terms.

*Hybrid Linear Logic* (HyLL) was proposed by Chaudhuri and
Despeyroux in 2010 as a meta-logic for reasoning about constrained
transition systems, with applications to a number of domains including
formal molecular biology. This logic is an
extension of (intuitionistic) linear logic with hybrid connectives
that can reason about monoidal constraint domains such as instants of
time or rate functions. *Linear logic with subexponentials* is a
different extension of linear logic that has been proposed as a
mechanism for capturing certain well known constrained settings such
as bigraphs or concurrent constraint programming. In a paper accepted
to MSCS, we show how to relate these two extensions of linear logic
by giving an embedding of HyLL into linear logic with
subexponentials. Furthermore, we show that subexponentials are able
to give an adequate encoding of CTL.

The *Linear Substitution Calculus* (LSC) is a refinement of the λ-calculus in which the substitution process is decomposed into micro steps.

In this work we show that the LSC is isomorphic to the *proof nets* presentation of the corresponding fragment of linear logic. Proof nets are a graphical formalism which, like most graphical formalisms, is handy for intuitions but not well suited to formal reasoning. The result is relevant because it allows one to manipulate a graphical formalism (proof nets) formally, by means of an ordinary term syntax (the LSC).

This joint work with Delia Kesner (Paris Diderot University) belongs to the line of work *Cost Models and Abstract Machines for Functional Programs*, supported by the ANR project COCA HOLA, and it has been published in the proceedings of the international conference ICFP 2018.

Intersection types are a classic tool in the study of the λ-calculus, where they characterise termination properties of terms.

It is also well known that *multi types*, a variant of intersection types strongly related to linear logic, also characterise termination properties. Typing derivations of multi types, moreover, provide quantitative information such as the number of evaluation steps and the size of the results, as first shown by de Carvalho.

In this work we provide some new results in this line of work: notably, we provide the first quantitative study via multi types of the leftmost and linear head evaluation strategies. Moreover, we show that our approach also covers the other cases in the literature.

This joint work with Giulio Guerrieri (Bologna University) belongs to the line of work *Cost Models and Abstract Machines for Functional Programs*, supported by the ANR project COCA HOLA, and it has been published in the proceedings of the international conference APLAS 2018.

The theory of the call-by-value λ-calculus has mostly been developed for *closed* programs, that is, programs without free variables. In the last few years, the authors dedicated considerable efforts to extending it to open terms, that is, the case relevant for the implementation of proof assistants. The simplest presentation of open call-by-value evaluation is the *fireball calculus*.

In this work we extend the quantitative study via multi types mentioned in *Tight Typings and Split Bounds* to the fireball calculus.

Provability in intuitionistic propositional logic is decidable and, as revealed by the works of, e.g., Vorobev, Hudelmaier, and Dyckhoff, proof theory can provide natural decision procedures, which have been implemented in various software systems. More precisely, a decision procedure is obtained by performing direct root-first proof search in (different variants of) a sequent calculus system called LJT (aka G4ip); termination is ensured by a property of the sequent calculus called depth-boundedness.

Independently from this, Claessen and Rosén recently proposed a decision procedure for the same logic, based on a methodology used in the field of Satisfiability Modulo Theories (SMT). Their implementation clearly outperforms the sequent-calculus-based implementations.

In 2018 we managed to establish a formal connection between the G4ip sequent calculus and the algorithm of Claessen and Rosén, revealing the features that they share and those that distinguish them. This connection is interesting because it sheds proof-theoretical light on SMT-solving techniques, and it opens the door to the design of an intuitionistic version of the CDCL algorithm used in SAT solvers, which decides provability in classical logic.

In this work we study the computational meaning of the inference rules that are admissible, but not derivable, in intuitionistic logic.

An inference rule is admissible for a logic if, whenever its premises are derivable, its conclusion is already derivable without the rule. In classical logic, whenever this is the case, the implication between premises and conclusion is also derivable: the notion of an admissible rule is therefore internalized in the logic.

This is not the case for intuitionistic logic: some admissible rules are not derivable, and reducing them to purely intuitionistic derivations requires reasoning outside the usual intuitionistic logic.
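A standard example of such a rule, included here for illustration, is Harrop's rule, which is admissible but not derivable in intuitionistic propositional logic:

```latex
% Harrop's rule: admissible but not derivable in intuitionistic logic
\frac{\neg A \to (B \vee C)}{(\neg A \to B) \vee (\neg A \to C)}
```

Whenever the premise is an intuitionistic theorem the conclusion is one too, yet the implication from premise to conclusion is not intuitionistically valid.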

In this work we propose a proof system with term annotations, together with reduction rules, in order to give a computational meaning to these admissible rules.

COCA HOLA: Cost Models for Complexity Analyses of Higher-Order Languages, coordinated by B. Accattoli, 2016–2019.

FISP: The Fine Structure of Formal Proof Systems and their Computational Interpretations, coordinated by Lutz Straßburger in collaboration with Université Paris 7, Universität Innsbruck and TU Wien, 2016–2019.

UPScale: Universality of Proofs in SaCLay, a Working Group of LabEx DigiCosme, organized by Chantal Keller (LRI) with regular participation from Parsifal members and a post-doc co-supervision.

Simon Colin did an M1 internship supervised by G. Scherer, conducting a static analysis to check the safety, in OCaml, of unboxing annotations on type declarations.
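For illustration only (the example is ours, not taken from the internship), the annotation in question is OCaml's `[@@unboxed]` attribute, which erases the constructor of a single-constructor type at runtime; a safety analysis is needed because, among other things, OCaml represents float arrays specially, so the compiler must be able to tell whether unboxed contents can be floats:

```ocaml
(* [@@unboxed]: the runtime representation of [meters] is that of a
   bare float, with no constructor block around it. *)
type meters = Meters of float [@@unboxed]

(* The wrapper costs nothing at runtime but still gives a distinct
   type, preventing unit-confusion bugs. *)
let distance (Meters x) (Meters y) = Meters (abs_float (x -. y))

let () =
  let (Meters d) = distance (Meters 10.0) (Meters 4.0) in
  assert (d = 6.0)
```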

Alban Reynaud did an L3 internship supervised by G. Scherer, conducting a static analysis to check the safety, in OCaml, of recursive value declarations.
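Again for illustration with our own example: OCaml accepts some recursive value definitions and rejects others, and the analysis must decide when the recursive occurrence is used safely:

```ocaml
(* Accepted: the recursive occurrence sits under a constructor, so
   the compiler can allocate the cons cell first and backpatch the
   cycle, yielding a cyclic list of 1s. A definition such as
   [let rec x = x + 1] is rejected instead, since it would have to
   inspect [x] before it is defined. *)
let rec ones = 1 :: ones

let () =
  match ones with
  | x :: y :: _ -> assert (x = 1 && y = 1)
  | _ -> assert false
```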

S. Graham-Lengrand was an International Fellow at SRI International, for 25 months over a period of three years between 2015 and 2018.

D. Miller is the General Chair of LICS (Logic In Computer Science), starting July 2018.

D. Miller is on the Steering Committee for the FSCD conference series and the CPP conference series.

D. Miller is a member of the SIGLOG advisory board, starting November 2015.

B. Accattoli co-chaired LSFA 2018: 13th Workshop on Logical and Semantic Frameworks with Applications, Fortaleza, Brazil, September 26-28, 2018.

G. Scherer chaired ML2018: the ML Family Workshop 2018 in Saint Louis, US, on Friday September 28th 2018.

L. Straßburger chaired TYDI 2018: Workshop on “Twenty Years of Deep Inference” in Oxford July 7, 2018.

B. Accattoli was on the PPDP 2018 Program Committee: 20th International Symposium on Principles and Practice of Declarative Programming, Frankfurt, Germany, 3–5 September 2018.

S. Graham-Lengrand was on the LFMTP 2018 Program Committee: Workshop on Logical Frameworks and Meta-Languages: Theory and Practice, Oxford, UK, 7 July 2018.

L. Straßburger was on the Program Committee for LACompLing 2018: Symposium on Logic and Algorithms in Computational Linguistics, Stockholm, 28–31 August 2018.

D. Miller was on the program committee for IJCAR-2018: 9th International Joint Conference on Automated Reasoning, Oxford, 14-17 July 2018.

D. Miller was a member of the jury for selecting the 2018 Ackermann Award (the EACSL award for outstanding doctoral dissertation in the field of Logic in Computer Science).

D. Miller has been a member of the EATCS Distinguished Dissertation Award Committee since March 2013.

G. Scherer was on the POPL 2019 Program Committee: Principles Of Programming Languages, 13–19 January 2019, Cascais/Lisbon, Portugal.

G. Scherer reviewed for Computer Science Logic (CSL).

L. Straßburger was reviewer for the following conferences:

LICS 2018

IJCAR 2018

FSCD 2018

AiML 2018

ARQNL 2018

B. Accattoli reviewed for LICS 2018, FSCD 2018, PPDP 2018, LSFA 2018.

D. Miller is on the editorial board of the following journals:

Journal of Automated Reasoning

Journal of Applied Logics

G. Scherer reviewed for Mathematical Structures in Computer Science (MSCS).

L. Straßburger was reviewer for the following journals:

Transactions on Computational Logic, ToCL (2x)

Logical Methods in Computer Science, LMCS

Mathematical Structures in Computer Science, MSCS

Journal of Logic, Language and Information, JLLI

Journal of Automated Reasoning, JAR

Notre Dame Journal of Formal Logic, NDJFL

B. Accattoli reviewed for Logical Methods in Computer Science (LMCS) and Theoretical Computer Science (TCS).

S. Graham-Lengrand gave an invited talk at the JFLA 2018 (January), and an invited lecture series at the 8th Summer School on Formal Techniques (May).

B. Accattoli gave an invited talk at the *IFIP Working Group 1.6: Rewriting* meeting on 8 July 2018 in Oxford, UK.

D. Miller was an invited speaker and panelist at the Workshop on Proof Theory and its Applications, 6–7 September 2018, in Ghent, Belgium.

D. Miller gave a colloquium talk at the Technical University of Vienna on 31 October 2018 and at the Cyber Security Lab, NTU, Singapore, on 21 March 2018.

G. Scherer took part in a scientific assessment of the implementation of the Tezos blockchain, which is written in OCaml.

L. Straßburger was reviewer for the NWO (Netherlands Organisation for Scientific Research).

Licence: G. Scherer, Programmation Fonctionnelle, 50 hours eq. TD, L1, Paris 8 (Vincennes / Saint Denis), France

Licence: K. Chaudhuri, Programmation avancée en OCaml, 40 hours eq. TD, L3, École polytechnique, France

Bachelor: K. Chaudhuri, Computer programming, principal instructor, École polytechnique, France. (This program has no direct equivalent in the traditional French university system; the closest would be L1.)

Licence: S. Graham-Lengrand, “*INF412: Fondements de l'Informatique: Logique, Modèles, Calcul*”, 32 hours eq. TD, L3, École Polytechnique,
France.

Master: S. Graham-Lengrand, “*INF551:
Computational Logic*”, 45 hours eq. TD, M1, École
Polytechnique, France.

Master: B. Accattoli, “*Logique linéaire et paradigmes logiques du calcul*”, 18 hours eq. TD, M2, Master Parisien de Recherche en Informatique (MPRI), France.

Master: D. Miller, “*Logique linéaire et paradigmes logiques du calcul*”, 18 hours eq. TD, M2, Master Parisien de Recherche en Informatique (MPRI), France.

Summer School: B. Accattoli, “The Complexity of Beta-reduction”, 4.5h, International School on Rewriting (ISR) 2018, Cali, Colombia.

PhD: Sonia Marin, Modal Proof Theory through a Focused Telescope, Université Paris-Saclay, 30 January 2018, supervisors: Lutz Straßburger, Dale Miller.

PhD in progress: Ulysse Gérard and Matteo Manighetti, supervised by Dale Miller.

PhD in progress: François Thiré (since 1st October 2016), supervised by S. Graham-Lengrand (joint with G. Dowek).

PhD in progress: Maico Leberle, supervised by Dale Miller and Beniamino Accattoli.

D. Miller was a reporter for the PhD jury of Michael Lettmann (TU Vienna, 30 October 2018).

L. Straßburger serves as member of the “commission développement technologique (CDT)” for Inria Saclay–Île-de-France (since June 2012).

F. Lamarche was site co-ordinator for the Activity Report for Inria Saclay–Île-de-France.

G. Scherer and M. Manighetti participated in the “Fête de la Science” exhibit at Inria Saclay for the whole day of October 11th, 2018, running an activity on sorting algorithms with colored plastic pieces.

G. Scherer spoke at “Unithé ou café”, a Saclay-internal popularization series, on February 1st, 2018.