The EPI Gallinette aims at developing a new generation of proof assistants, with the belief that practical experiments must go hand in hand with foundational investigations:

The goal is to advance proof assistants both as certified programming languages and mechanised logical systems. Advanced programming and mathematical paradigms must be integrated, notably dependent types and effects. The distinctive approach is to implement new programming and logical paradigms on top of Coq by considering the latter as a target language for compilation.

The aim of foundational investigations is to extend the boundaries of the Curry-Howard correspondence. It is seen both as providing foundations for programming languages and logic, and as a purveyor of techniques essential to the development of proof assistants. From this perspective, the development of proof assistants is seen as a total experiment using the correspondence in every aspect: programming languages, type theory, proof theory, rewriting and algebra.

Software quality is a requirement that is becoming more and more prevalent,
by now far exceeding the traditional scope of embedded systems. The
development of tools to construct software that respects a given specification
is a major challenge facing computer science. *Proof assistants*
such as Coq provide a formal method whose central
innovation is to produce *certified programs* by transforming
the very activity of programming. Programming and proving are merged
into a single development activity, informed by an elegant but rigid
mathematical theory inspired by the correspondence between programming,
logic and algebra: the *Curry-Howard correspondence*. For the
certification of programs, this approach has shown its efficiency
in the development of important pieces of certified software such
as the C compiler of the CompCert project.
The extracted CompCert compiler is reliable and efficient, running
only 15% slower than GCC 4 at optimisation level 2 (`gcc -O2`),
a level of optimisation previously considered highly unreliable.

Proof assistants can also be used to *formalise mathematical
theories*: they not only provide a means of representing mathematical
theories in a form amenable to computer processing, but their internal
logic provides a language for reasoning about such theories. In the
last decade, proof assistants have been used to verify extremely large
and complicated proofs of recent mathematical results, sometimes requiring
either intensive computations,
or intricate combinations of a multitude of mathematical theories.
But formalised mathematics is more than just proof
checking: proof assistants can also help with the organisation of mathematical
knowledge or even with the discovery of new constructions and proofs.

Unfortunately, the rigidity of the theory behind proof assistants impedes their expressiveness both as programming languages and as logical systems. For instance, a program extracted from Coq only uses a purely functional subset of OCaml, leaving behind important means of expression such as side-effects and objects. Limitations also appear in the formalisation of advanced mathematics: proof assistants do not cope well with classical axioms such as excluded middle and choice, which are sometimes used crucially. The fact of the matter is that the development of proof assistants cannot be dissociated from a reflection on the nature of programs and proofs coming from the Curry-Howard correspondence. In the EPC Gallinette, we propose to address several drawbacks of proof assistants by pushing the boundaries of this correspondence.

In the 1970's, the Curry-Howard correspondence was seen as a perfect match between functional programs, intuitionistic logic, and Cartesian closed categories. Over the decades it has received several generalisations, and it is now more widely understood as a fertile correspondence between computation, logic, and algebra: no longer a perfect match, but a collection of theories meant to explain similar structures at work in logic and computation, underpinned by mathematical abstractions. By relaxing the requirement of a perfect match between programs and proofs, and instead emphasising the common foundations of both, the insights of the Curry-Howard correspondence may be extended to domains in which the requirements of programming and mathematics may in fact be quite different.

Consider the following two major theories of the past decades, which were until recently thought to be irreconcilable:

**(Martin-Löf) Type theory:** introduced by Martin-Löf in 1971,
this formalism is both a programming language and
a logical system. The central ingredient is the use of *dependent
types* to allow fine-grained invariants to be expressed in program
types. In 1985, Coquand and Huet developed a similar system called
the *calculus of constructions*, which served as logical foundation
of the first implementation of Coq. This kind of system is still
under active development, especially with the recent advent of homotopy
type theory (HoTT) which gives a new point of view
on types and the notion of equality in type theory.
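As a concrete illustration of dependent types (a standard textbook example, not taken from the project text), here is a Coq definition of length-indexed vectors, where concatenation carries its length invariant in its type:

```coq
(* Vectors: lists whose length is recorded in their type. *)
Inductive vec (A : Type) : nat -> Type :=
| vnil : vec A 0
| vcons : forall n, A -> vec A n -> vec A (S n).

(* Concatenation: the type alone guarantees the length of the result,
   with no separate proof needed. *)
Fixpoint vapp (A : Type) (n m : nat) (v : vec A n) (w : vec A m)
  : vec A (n + m) :=
  match v in vec _ n' return vec A (n' + m) with
  | vnil => w
  | vcons k a v' => vcons A (k + m) a (vapp A k m v' w)
  end.
```

A length mismatch thus becomes a type error at definition time rather than a runtime failure.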

**The theory of effects:** starting in the 1980's, Moggi
and Girard put forward monads and co-monads as describing
various compositional notions of computation. In this theory, programs
can have side-effects (state, exceptions, input-output), logics can
be non-intuitionistic (linear, classical), and different computational
universes can interact (modal logics). Recently, the safe and automatic
management of resources has also come of age (Rust, Modern
C++), confirming the importance of linear logic for various programming
concepts. It is now understood that the characteristic feature of
the theory of effects is sensitivity to *evaluation order*, in
contrast with type theory which is built around the assumption that
evaluation order is irrelevant.
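This sensitivity to evaluation order can be made concrete with Moggi-style monads. The following sketch (our own minimal encoding, not from the text) defines a state monad inside Coq, where `bind` fixes the order in which effects happen:

```coq
(* State monad: a computation is a function threading a state S. *)
Definition State (S A : Type) : Type := S -> A * S.

Definition ret {S A : Type} (a : A) : State S A :=
  fun s => (a, s).

(* bind runs m first, then f on its result: evaluation order is explicit. *)
Definition bind {S A B : Type} (m : State S A) (f : A -> State S B)
  : State S B :=
  fun s => let (a, s') := m s in f a s'.

Definition get {S : Type} : State S S := fun s => (s, s).
Definition put {S : Type} (s : S) : State S unit := fun _ => (tt, s).

(* Returns the old value of a counter while incrementing the state. *)
Definition incr : State nat nat :=
  bind get (fun n => bind (put (S n)) (fun _ => ret n)).
```

Swapping the order of the two `bind`s changes the observable behaviour, which is exactly the order-sensitivity that pure type theory is built to ignore.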

We now outline a series of scientific challenges aimed at understanding type theory, effects, and their combination.

More precisely, three key axes of improvement have been identified:

Making the notion of equality closer to what is usually assumed when doing proofs on the blackboard, with a balance between irrelevant equality for simple structures and equality up to equivalence for more complex ones (Section ). Such a notion of equality should allow one to implement traditional model transformations that enhance the logical power of the proof assistant using distinct compilation phases.

Advancing the foundations of effects within the Curry-Howard approach. The objective is to pave the way for the integration of effects in proof assistants and to prototype the corresponding implementation. This integration should allow for not only certified programming with effects, but also the expression of more powerful logics (Section ).

Making more programming features (notably, object polymorphism) available in proof assistants, in order to scale to practical-sized developments. The objective is to enable programming styles closer to common practices. One of the key challenges here is to bring gradual typing to dependent programming (Section ).

To validate the new paradigms, we propose in Section  three particular application fields in which members of the team already have strong expertise: code refactoring, constraint programming and symbolic computation.

The democratisation of proof assistants based on type theory has likely been impeded by one central problem: the mismatch between the conception of equality in mathematics and its formalisation in type theory. Indeed, some basic principles that are used implicitly in mathematics—such as Church’s principle of propositional extensionality, which says that two propositions are equal when they are logically equivalent—are not derivable in type theory. Even more problematically, from a computer science point of view, the basic concept of two functions being equal when they are equal at every “point” of their domain is also not derivable: rather, it must be added as an additional axiom. Of course, these principles are consistent with type theory, so working under the corresponding additional assumptions is safe. But the use of these assumptions in a definition potentially clutters its computational behaviour: since axioms are computational black boxes, computation gets stuck at the points of the code where they have been used.
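The following Coq sketch (a standard illustration; the names `f0`, `g0`, `fg` are ours) shows how an axiomatised principle blocks computation:

```coq
(* Functional extensionality, added as an axiom: a computational black box. *)
Axiom funext : forall (A B : Type) (f g : A -> B),
  (forall x, f x = g x) -> f = g.

Definition f0 := fun n : nat => n + 0.
Definition g0 := fun n : nat => n.

(* A proof of f0 = g0 built from the axiom. *)
Definition fg : f0 = g0 :=
  funext nat nat f0 g0 (fun n => eq_sym (plus_n_O n)).

Definition use (e : f0 = g0) : nat :=
  match e with eq_refl => 0 end.

(* `Eval compute in use fg.` does not reduce to 0: the match is stuck on
   `funext`, since the axiom provides no computational content. *)
```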

We propose to investigate how expressive logical transformations such as forcing and sheaf construction might be used to enhance the computational and logical power of proof assistants—with a particular emphasis on their implementation in the Coq proof assistant by means of effective translations (or compilation phases). One of the main topics of this task, in connection to the ERC project CoqHoTT, is the integration in Coq of new concepts inspired by homotopy type theory such as the univalence principle, and higher inductive types.

In the Coq proof assistant, the *raison d'être* of the sort Prop is *proof
irrelevance*, which can be expressed formally as: forall (P : Prop) (p q : P), p = q.

The univalence principle is becoming widely accepted as a very promising
avenue to provide new foundations for mathematics and type theory.
However, this principle has not yet been incorporated into a proof
assistant in a computationally meaningful way. Indeed, the very mathematical
structures underlying its interpretation (known as ∞-groupoids) are difficult to internalise in intensional type theory.
Hence a major objective is to achieve a complete internalisation of univalence in intensional type theory, including its integration into a new version of Coq. We will strive to keep compatibility with previous versions, in particular from a performance point of view. Indeed, the additional complexity of homotopy type theory should not induce an overhead in the type-checking procedure used by the software if we want our new framework to be rapidly adopted by the community. Concretely, we will make sure that the compilation time of Coq’s Standard Library remains of the same order of magnitude.

Extending the power of a logic using model transformations (*e.g.,* the
forcing transformation, or the sheaf
construction) is a classic topic of mathematical
logic. However, these ideas have
not been much investigated in the setting of type theory, even though
they may provide a useful framework for extending the logical power
of proof assistants in a modular way. There is a good reason for this:
with a syntactic notion of equality, the underlying structure of type
theory does not conform to the structure of topos used in mathematical
logic. A direct incorporation of the standard techniques is therefore
not possible. However, a univalent notion of equality brings type
theory closer to the required algebraic structure, as it corresponds
to the notion of equality at work in (higher) toposes.

The Gallinette project advocates the use of distinct compilation phases as a methodology for the design of a new generation of proof assistants featuring modular extensions of a core logic. The essence of a compiler is the separation of the complexity of a translation process into modular stages, and the organisation of their re-composition. This idea finds a natural application in the design of complex proof assistants (Figure ). For instance, the definition of type classes in Coq follows this pattern, and is morally given by means of a translation into a type-class-free kernel. More recently, a similar approach by compilation stages, using the forcing transformation, was used to relax the strict positivity condition guarding inductive types. We believe that this flavour of compilation-based strategies offers a promising direction of investigation for the purpose of defining a decidable type-checking algorithm for HoTT.
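The type-class case can be made concrete (our own minimal example): a class declaration is morally compiled to a record type, and class constraints to extra dictionary arguments, so the kernel never needs to know about classes:

```coq
(* Source level: a class and an instance. *)
Class Monoid (A : Type) := {
  mzero : A;
  mplus : A -> A -> A
}.

Instance nat_Monoid : Monoid nat := {
  mzero := 0;
  mplus := Nat.add
}.

(* Generic code written against the class... *)
Definition double {A : Type} `{Monoid A} (x : A) : A := mplus x x.

(* ...elaborates, in the class-free kernel, to plain dictionary passing:
   `double` becomes a function taking the record `nat_Monoid` explicitly,
   and `mplus` a mere record projection. *)
```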

We propose the incorporation of effects in the theory of proof assistants at a foundational level. Not only would this allow for certified programming with effects, but it would moreover have implications for both semantics and logic.

We mean *effects* in a broad sense that encompasses both Moggi's
monads and Girard's linear
logic. These two seminal works have given rise to respective
theories of effects (monads) and resources (co-monads). Recent advances,
however, have unified these two lines of thought: it is now clear that
the defining feature of effects, in the broad sense, is sensitivity
to evaluation order.

In contrast, the type theory that forms the foundations of proof assistants
is based on pure, effect-free computation; its combination with effects
has long remained *“the next
difficult step [...] currently under investigation”*.

Any realistic program contains effects: state, exceptions, input-output.
More generally, evaluation order may simply be important for complexity
reasons. With this in mind, many works have focused on certified programming
with effects, notably Ynot and, more recently, its successors.

We propose to develop the foundations of a type theory with effects
taking into account the logical and semantic aspects, and to study
their practical and theoretical consequences. A type theory that integrates
effects would have logical, algebraic and computational implications
when viewed through the Curry-Howard correspondence. For instance,
effects such as control operators establish a link with classical
proof theory. Indeed, control
operators provide computational interpretations of type isomorphisms
such as the one relating a type to its double negation.
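Through Curry-Howard, such a control operator can be sketched in Coq (our own encoding) as an axiom with the type of Peirce's law, from which classical principles like double-negation elimination become derivable:

```coq
(* call/cc typed by Peirce's law (an axiom: Coq itself has no control). *)
Axiom callcc : forall A B : Prop, ((A -> B) -> A) -> A.

(* Double-negation elimination, programmed with callcc: the continuation k
   plays the role of the refutation expected by nna. *)
Definition dne (A : Prop) (nna : ~ ~ A) : A :=
  callcc A False (fun k : A -> False => match nna k return A with end).
```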

The goal is to develop a type theory with effects that accounts both for practical experiments in certified programming, and for clues from denotational semantics and logical phenomena, in a unified setting.

A crucial step is the integration of dependent types with effects,
a topic which has remained *“currently under investigation”*
ever since the beginning. The difficulty resides
in expressing the dependency of types on terms that can perform side-effects
during the computation. On the side of denotational semantics, several
extensions of categorical models for effects with dependent types
have been proposed, using axioms that should
correspond to restrictions in terms of expressivity, but whose practical
implications are not immediately transparent. On the side
of logical approaches,
one first considers a drastic restriction to terms that do not compute,
which is then relaxed by semantic means. On the side of systems for
certified programming, dependency is typically restricted to effect-free terms.

Thus, the recurring idea is to introduce restrictions on the dependency
in order to establish an encapsulation of effects. In our approach,
we seek a principled description of this idea by developing the concept
of *semantic value* (thunkables, linears) which arose from foundational
considerations, and
whose relevance was highlighted in recent works.
The novel aspect of our approach is to seek a proper extension of
type theory which would provide foundations for a classical type theory
with the axiom of choice in the style of Herbelin,
but which moreover could be generalised to effects other than just
control by exploiting an abstract and adaptable notion of semantic
value.

In our view, the common idea that evaluation order does not matter
for pure and terminating computations should serve as a bridge between
our proposals for dependent types in the presence of effects and traditional
type theory. Building on the previous goal, we aim to study the relationship
between semantic values, purity, and parametricity theorems.
Our goal is to characterise parametricity as a form of *intuitionistic
depolarisation*, following the method by which the first game model
of full linear logic was given (Melliès).
We have two expected outcomes in mind: enriching type theory with
intensional content without losing its properties, and giving an explanation
of dependent types in the style of Idris.

An integrated type theory with effects requires an understanding of
evaluation order from the point of view of rewriting. For instance,
rewriting properties can entail the decidability of some conversions,
allowing the automation of equational reasoning in types.
They can also provide proofs of computational consistency (that terms
are not all equivalent) by showing that extending calculi with new
constructs is conservative.

One goal is to prove computational consistency or decidability of conversions purely using advanced rewriting techniques, following a recently introduced technique. Another goal is the characterisation of weak reductions: extensions of the operational semantics to terms with free variables that preserve termination, and whose iteration is equivalent to strong reduction. We aim to show that such properties derive from generic theorems of higher-order rewriting, so that weak reduction can easily be generalised to richer systems with effects.

Proof theory and rewriting are a source of *coherence theorems*
in category theory, which show how calculations in a category can
be simplified with an embedding into a structure with stronger properties.
We aim to explore such results
for categorical models of effects. Our key
insight is to consider the reflection between *indirect and direct
models* as a coherence theorem:
it allows us to embed the traditional models of effects into structures
for which the rewriting and proof-theoretic techniques from the previous
section are effective.

Building on this, we are further interested in connecting operational
semantics to 2-category theory, in which a second dimension is traditionally
considered for modelling conversions of programs rather than equivalences.
This idea has already been applied successfully in related settings.

The unified theory of effects and resources prompts an investigation into the semantics of safe and automatic resource management, in the style of Modern C++ and Rust. Our goal is to show how advanced semantics of effects, resources, and their combination arise by assembling elementary blocks, pursuing the methodology applied by Melliès and Tabareau in the context of continuations. For instance, combining control flow (exceptions, return) with linearity allows us to describe in a precise way the “Resource Acquisition Is Initialisation” idiom, in which resource safety is ensured with scope-based destructors. A further step would be to reconstruct uniqueness types and borrowing using similar ideas.

The development of tools to construct software systems that respect
a given specification is a major challenge of current and future research
in computer science. Certified programming with dependent types has
recently attracted a lot of interest, and Coq is the *de facto*
standard for such endeavours, with an increasing number of users,
pedagogical resources, and large-scale projects. Nevertheless, significant
work remains to be done to make Coq more usable from a software engineering
point of view. The Gallinette team proposes to make progress on three
lines of work: (i) the development of gradual certified programming,
(ii) the integration of imperative features and object polymorphism
in Coq, and (iii) the development of robust tactics for proof engineering
for the scaling of formalised libraries.

One of the main issues faced by a programmer starting to internalise, in a proof assistant, code written in a more permissive world is that type theory is constrained by a strict type discipline which lacks flexibility. Concretely, as soon as you give a more precise type or specification to a function, the rest of the code interacting with this function needs to be more precise too. To address this issue, the Gallinette team will put strong effort into the development of gradual typing in type theory, to allow progressive integration of code that comes from a more permissive world.

Indeed, on the way to full verification, programmers can take advantage of a gradual approach in which some properties are simply asserted instead of proven, subject to dynamic verification. Tabareau and Tanter have made preliminary progress in this direction. This work, however, suffers from a number of limitations, the most important being the lack of a mechanism for handling the possibility of runtime errors within Coq. Instead of relying on axioms, this project will explore the application of Section  to embed effects in Coq. This way, instead of postulating axioms for parts of the development that are too hard or marginal to be dealt with, the system adds dynamic checks. Then, after extraction, we get a program that corresponds to the initial program but with dynamic checks for the parts that have not been proven, ensuring that the program will raise an error instead of going outside its specification.
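Lacking native exceptions, such dynamic checks can only be sketched in today's Coq with an `option` encoding (our own hypothetical illustration, not the proposed system): `None` marks a failed check, which extraction would map to a runtime error:

```coq
(* A dynamic check: decide a boolean condition at run time and return
   evidence when it holds, instead of demanding a static proof. *)
Definition check (b : bool) : option (b = true) :=
  match b return option (b = true) with
  | true => Some eq_refl
  | false => None  (* check failed: signal an error after extraction *)
  end.

(* A gradually specified head function: non-emptiness is tested dynamically
   rather than imposed on every caller as a proof obligation. *)
Definition checked_head {A : Type} (l : list A) : option A :=
  match l with
  | nil => None
  | cons a _ => Some a
  end.
```

With native effects, the `option` plumbing would disappear and the failure case would raise an actual exception.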

This will yield new foundations of gradual certified programming, both more expressive and practical. We will also study how to integrate previous techniques with the extraction mechanism of Coq programs to OCaml, in order to exploit the exception mechanism of OCaml.

Abstract data types (ADTs) become useful as the size of programs grows
since they provide for a modular approach, allowing abstractions about
data to be expressed and then instantiated. Moreover, ADTs are natural
concepts in the calculus of inductive constructions. But while it
is easy to declare an ADT, it is often difficult to implement an efficient
one. Compare this situation with, for example, Okasaki's purely functional
data structures, which give functional implementations of ADTs, like queues,
that are straightforward in languages with imperative features. Of course, Okasaki's queues
enforce some additional properties for free, such as persistence,
but the programmer may prefer to use and to study a simpler implementation
without those additional properties. Also in certified symbolic computation
(see ), an efficient functional
implementation of ADTs is often not available, and efficiency is a
major challenge in this area. Relying on previous theoretical work,
we will equip Coq with imperative
features, and we will demonstrate how they can be used to provide efficient
implementations of ADTs. However, it is also often the case that imperative
implementations are hard to reason about, requiring for instance the use
of separation logic. In that case, we could benefit from recent
work on the integration of separation logic in the Coq proof assistant,
and in particular the Iris project.
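The ADT discussion can be illustrated with Coq's module system (a standard example, not taken from the text): a queue interface as a module type, implemented by Okasaki-style two-list queues:

```coq
Require Import List.

(* The ADT: an abstract queue interface. *)
Module Type QUEUE.
  Parameter t : Type -> Type.
  Parameter empty : forall A : Type, t A.
  Parameter push : forall A : Type, A -> t A -> t A.
  Parameter pop : forall A : Type, t A -> option (A * t A).
End QUEUE.

(* One purely functional implementation: a back list for pushing and a
   front list for popping, reversed on demand. *)
Module TwoListQueue <: QUEUE.
  Definition t (A : Type) := (list A * list A)%type.
  Definition empty (A : Type) : t A := (nil, nil).
  Definition push (A : Type) (a : A) (q : t A) : t A :=
    (a :: fst q, snd q).
  Definition pop (A : Type) (q : t A) : option (A * t A) :=
    match q with
    | (back, a :: front) => Some (a, (back, front))
    | (back, nil) =>
        match rev back with
        | nil => None
        | a :: front => Some (a, (nil, front))
        end
    end.
End TwoListQueue.
```

An imperative implementation of the same interface would mutate a buffer in place, which is exactly where the proposed effects and separation-logic reasoning come into play.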

Object-oriented programming has evolved since its foundation based on the representation of computations as an exchange of messages between objects. In modern programming languages like Scala, which aims at a synthesis between object-oriented and functional programming, object-orientation concretely results in the use of hierarchies of interfaces ordered by the subtyping relation, and in the definition of interface implementations that can interoperate. As observed by Cook and Aldrich, interoperability can be considered the essential feature of objects and is a requirement for many modern frameworks and ecosystems: it means that two different implementations of the same interface can interoperate.

Our objective is to provide a representation of object-oriented programs, by focusing on subtyping and interoperability.

For subtyping, the natural solution in type theory is coercive subtyping, as implemented in Coq, with an explicit operator for coercions. This should lead to a shallow embedding, but has limitations: indeed, while it allows subtyping to be faithfully represented, it does not provide a direct means to represent union and intersection types, which are often associated with subtyping (for instance, intersection types are present in Scala). A more ambitious solution would be to resort to subsumptive subtyping (or semantic subtyping): in its more general form, a type algebra is extended with boolean operations (union, intersection, complementation) to get a boolean algebra with operators (the original type constructors). Subtyping is then interpreted as the natural partial order of the boolean algebra.

We propose to use the type class machinery of Coq to implement semantic subtyping for dependent type theory. Using type class resolution, we can emulate the inference rules of subsumptive subtyping without modifying Coq internally. This also has another advantage: as subsumptive subtyping for dependent types should be undecidable in general, using type class resolution allows for an incomplete yet extensible decision procedure.
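A first sketch of the idea (our own hypothetical encoding, not an existing Coq feature): a `SubType` class whose instances play the role of subsumption rules, resolved compositionally by the type-class mechanism:

```coq
(* A subtyping judgement as a type class: an instance A <: B is witnessed
   by a coercion function. *)
Class SubType (A B : Type) := coerce : A -> B.

(* Reflexivity rule. *)
Instance sub_refl (A : Type) : SubType A A := fun a => a.

(* Covariance of products, derived from subtyping of the components. *)
Instance sub_prod (A A' B B' : Type)
  `{SubType A A'} `{SubType B B'} : SubType (A * B) (A' * B') :=
  fun p => (coerce (fst p), coerce (snd p)).
```

Because instance search is openly extensible, users can add new rules as instances; because search may fail, the resulting procedure is incomplete, as discussed above.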

When developing certified software, a major part of the effort is
spent not only on writing proof scripts, but on *rewriting* them,
either for the purpose of code maintenance or because of more significant
changes in the base definitions. Regrettably, proof scripts suffer
more often than not from a bad programming style, and too many proof
developers casually neglect the most elementary principles of well-behaved
programmers. As a result, many proof scripts are very brittle, user-defined
tactics are often difficult to extend, and sometimes even lack a clear
specification. Formal libraries are thus generally very fragile pieces
of software. One reason for this unfortunate situation is that proof
engineering is very badly served by the tools currently available
to the users of the Coq proof assistant, starting with its tactic
language. One objective of the Gallinette team is to develop better
tools to write proof scripts.

Completing and maintaining a large corpus of formalised mathematics
requires a well-designed tactic language. This language should both
accommodate the possible specific needs of the theories at stake,
and help with diagnostics at refactoring time. Coq's tactic language
is in fact two-leveled. First, it includes a basic tactic language,
to organise the deductive steps in a proof script and to perform the
elementary bureaucracy. Its second layer is a meta-programming language,
which allows users to define their own tactics at top level. Our
first direction of work consists in the investigation of the appropriate
features of the *basic tactic language*. For instance, the design
of the Ssreflect tactic language, and its support for the small scale
reflection methodology, has been a
key ingredient in at least two large-scale formalisation endeavours:
the Four Colour Theorem and the Odd Order
Theorem. Building on our experience with the Ssreflect
tactic language, we will contribute to the ongoing work on the basic
tactic language for Coq. The second objective of this task is to contribute
to the design of a *typed tactic language*. In particular, we
will build on the work of Ziliani and his collaborators ,
extending it with reasoning about the effects that tactics have on
the “state of a proof” (e.g. number of sub-goals, metavariables in
context). We will also develop a novel approach for incremental type
checking of proof scripts, so that programmers gain access to a richer
discovery- engineering interaction with the proof assistant.
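As a small example of the kind of specified, reusable tactic we have in mind (our own illustration in today's Ltac):

```coq
(* finish: solve the goal if it is an assumption or follows by congruence;
   otherwise fail with an explicit message rather than silently succeeding. *)
Ltac finish :=
  first [ assumption
        | congruence
        | fail "finish: goal is neither an assumption nor a congruence" ].

Goal forall n m : nat, n = m -> S n = S m.
Proof. intros n m H. finish. Qed.
```

A typed tactic language would let such a specification (at most one subgoal remains, no metavariables introduced) be stated and checked rather than merely documented.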

The first three axes of the EPC Gallinette aim at developing a new generation of proof assistants. But we strongly believe that foundational investigations must go hand in hand with practical experiments. Therefore, we expect to benefit from existing expertise and collaborations in the team to experiment with our extensions of Coq on real-world developments. It should be noted that these practical experiments are strongly guided by team members' deep history of research on software engineering.

In the context of refactoring of C programs, we intend to formalise program transformations that are written in an imperative style to test the usability of our addition of effects in the proof assistant. This subject has been chosen based on the competence of members of the team.

Build a refactoring tool that programmers can rely on and make it available in a popular platform (such as Eclipse, IntelliJ or Frama-C).

Explore large, drastic program transformations, such as replacing one design architecture with another by applying a sequence of small refactoring operations (as we have done before for Java and Haskell programs), while ensuring behaviour preservation.

Explore the use of enhancements of proof systems on large developments. For instance, refactoring tools are usually developed in the imperative/object paradigm, so the extension of Coq with side effects or with object features proposed in the team can find a direct use-case here.

We plan to make use of the internalisation of the object-oriented paradigm in the context of constraint programming. Indeed, this domain involves very complex algorithms that are often developed using object-oriented programming (as is the case, for instance, for CHOCO, which is developed in the Tasc group at IMT Atlantique, Nantes). We will in particular focus on filtering algorithms in constraint solvers, for which research publications currently propose new algorithms with manual proofs. Their formalisation in Coq is challenging. Another interesting part of constraint solving to formalise is the part that deals with program generation (as opposed to extraction). However, when there are numerous generated pieces of code, it is not realistic to prove their correctness manually, and it can be too difficult to prove the correctness of a generator. So we intend to explore a middle path that consists in generating a piece of code along with its corresponding proof (script or proof term). A target application could be interval constraints (for instance, Allen interval algebra or region connection calculus), which can generate thousands of specialised filtering algorithms for a small number of variables.

Finally, Rémi Douence has already worked with several members of the Tasc team (co-authored articles, PhD thesis supervision). Currently, he supervises, together with Nicolas Beldiceanu, the PhD thesis of Ekaterina Arafailova in the Tasc team. She studies finite transducers to model time-series constraints. This work requires proofs, done manually for now; we would like to explore whether these proofs could be mechanised.

We will investigate how the addition of effects in the Coq proof assistant can facilitate the marriage of computer algebra with formal proofs. Computer algebra systems on the one hand, and proof assistants on the other, are both designed for doing mathematics with the help of a computer, by means of symbolic computation. These two families of systems are however very different in nature: computer algebra systems allow for implementations faithful to the theoretical complexity of the algorithms, whereas proof assistants have the expressiveness to specify exactly the semantics of the data structures and computations.

Experiments have been run that link computer algebra systems with Coq. These bridges rely on the implementation of formal proof-producing core algorithms like normalisation procedures. Incidentally, they require non-trivial maintenance work to survive the evolution of both systems. Other proof assistants like the Isabelle/HOL system make use of so-called reflection schemes: the proof assistant can produce code in an external programming language like SML, but also allows the values output by these extracted programs to be imported back inside formal proofs. This feature extends the trusted code base quite significantly, but it has been used for major achievements like a certified symbolic/numeric ODE solver.

We would like to bring Coq closer to the efficiency and user-friendliness of computer algebra systems: for now, it is difficult to use the Coq programming language so that certified implementations of computer algebra algorithms have the right observable complexity when they are executed inside Coq. We see the addition of effects to the proof assistant as an opportunity to ease these implementations, for instance by making use of caching mechanisms or of profiling facilities. Such enhancements should enable the verification of computation-intensive mathematical proofs that are currently beyond reach, like the validation of Helfgott's proof of the weak Goldbach conjecture.

Gaëtan Gilbert, currently a PhD student in the Gallinette team, will be promoted to expert engineer for the Coq consortium, staying in the Gallinette team.

Matthieu Sozeau, Inria Junior Researcher and leader of the Coq development team, is joining the Gallinette team between the end of 2019 and the beginning of 2020.

Nicolas Tabareau has been a director of research (DR2) at Inria since October 2019.

Marie Kerjean has been awarded a L'Oréal-UNESCO For Women in Science grant.

L'Oréal-UNESCO For Women in Science grants are awarded to talented young female researchers.

*The Coq Proof Assistant*

Keywords: Proof - Certification - Formalisation

Scientific Description: Coq is an interactive proof assistant based on the Calculus of (Co-)Inductive Constructions, extended with universe polymorphism. This type theory features inductive and co-inductive families, an impredicative sort and a hierarchy of predicative universes, making it a very expressive logic. The calculus makes it possible to formalise both general mathematics and computer programs, ranging from theories of finite structures to abstract algebra and categories to programming language metatheory and compiler verification. Coq is organised as a (relatively small) kernel including efficient conversion tests, on which are built a set of higher-level layers: a powerful proof engine and unification algorithm, various tactics and decision procedures, a transactional document model and, at the very top, an IDE.
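As a small illustration (not taken from the report) of inductive families and the tactic layer, here is the classic evenness predicate, an inductive family indexed by a natural number, together with a proof built with the proof engine:

```coq
(* An inductive family in Prop: evenness, indexed by a natural number. *)
Inductive even : nat -> Prop :=
| even_O : even 0
| even_SS : forall n, even n -> even (S (S n)).

(* A proof constructed interactively with tactics and checked by the kernel. *)
Lemma even_4 : even 4.
Proof. apply even_SS, even_SS, even_O. Qed.
```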

Functional Description: Coq provides both a dependently-typed functional programming language and a logical formalism, which together support the formalisation of mathematical theories and the specification and certification of properties of programs. Coq also provides a large and extensible set of automatic or semi-automatic proof methods. Coq programs can be extracted to OCaml, Haskell, Scheme, and other languages.
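Extraction can be sketched in a minimal session (an illustrative example, not from the report): a Coq function is defined, then translated to OCaml source by the extraction mechanism.

```coq
(* Load the extraction plugin and select the target language. *)
From Coq Require Extraction.
Extraction Language OCaml.

Fixpoint double (n : nat) : nat :=
  match n with
  | O => O
  | S m => S (S (double m))
  end.

(* Print the extracted OCaml code for [double] and its dependencies. *)
Recursive Extraction double.
```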

Release Functional Description: Coq version 8.10 contains two major new features: support for a native fixed-precision integer type and a new sort SProp of strict propositions. It is also the result of refinements and stabilization of previous features, deprecations or removals of deprecated features, cleanups of the internals of the system and API, and many documentation improvements. This release includes many user-visible changes, including deprecations that are documented in the next subsection, and new features that are documented in the reference manual.

Version 8.10 is the fifth release of Coq developed on a time-based development cycle. Its development spanned 6 months from the release of Coq 8.9. Vincent Laporte is the release manager and maintainer of this release. This release is the result of 2500 commits and 650 PRs merged, closing 150+ issues.

See the Zenodo citation for more information on this release: https://zenodo.org/record/3476303#.Xe54f5NKjOQ

News Of The Year: Coq 8.10.0 contains:

- some quality-of-life bug fixes,
- a critical bug fix related to template polymorphism,
- native 63-bit machine integers,
- a new sort of definitionally proof-irrelevant propositions: SProp,
- private universes for opaque polymorphic constants,
- string notations and numeral notations,
- a new simplex-based proof engine for the tactics lia, nia, lra and nra,
- new introduction patterns for SSReflect,
- a tactic to rewrite under binders: under,
- easy input of non-ASCII symbols in CoqIDE, which now uses GTK3.

All details can be found in the user manual.
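For instance, the lia tactic (now backed by the new simplex-based engine) discharges goals in linear integer arithmetic automatically; a minimal example (illustrative, not from the release notes):

```coq
(* Psatz exports the lia tactic for linear integer arithmetic over Z. *)
From Coq Require Import ZArith Psatz.
Open Scope Z_scope.

(* 2x + y <= 2(x + y) - y holds since both sides equal 2x + y. *)
Goal forall x y : Z, 2 * x + y <= 2 * (x + y) - y.
Proof. intros; lia. Qed.
```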

Participants: Yves Bertot, Frédéric Besson, Maxime Denes, Emilio Jesús Gallego Arias, Gaëtan Gilbert, Jason Gross, Hugo Herbelin, Assia Mahboubi, Érik Martin-Dorel, Guillaume Melquiond, Pierre-Marie Pédrot, Michael Soegtrop, Matthieu Sozeau, Enrico Tassi, Laurent Théry, Théo Zimmermann, Theo Winterhalter, Vincent Laporte, Arthur Charguéraud, Cyril Cohen, Christian Doczkal and Chantal Keller

Partners: CNRS - Université Paris-Sud - ENS Lyon - Université Paris-Diderot

Contact: Matthieu Sozeau

URL: http://

*Mathematical Components library*

Keyword: Proof assistant

Functional Description: The Mathematical Components library is a set of Coq libraries that cover the prerequisites for the mechanisation of the proof of the Odd Order Theorem.

Release Functional Description: The library includes 16 more theory files, covering in particular field and Galois theory, advanced character theory, and a construction of algebraic numbers.

Participants: Alexey Solovyev, Andrea Asperti, Assia Mahboubi, Cyril Cohen, Enrico Tassi, François Garillot, Georges Gonthier, Ioana Pasca, Jeremy Avigad, Laurence Rideau, Laurent Théry, Russell O'Connor, Sidi Ould Biha, Stéphane Le Roux and Yves Bertot

Contact: Assia Mahboubi

Functional Description: Ssreflect is a tactic language extension to the Coq system, developed by the Mathematical Components team.

Participants: Assia Mahboubi, Cyril Cohen, Enrico Tassi, Georges Gonthier, Laurence Rideau, Laurent Théry and Yves Bertot

Contact: Yves Bertot

Keywords: Coq - Proof assistant

Functional Description: A replacement for Ltac, the tactic language of Coq.

Contact: Pierre-Marie Pédrot

The call-by-need evaluation strategy for the

In an impressive series of papers, Krivine showed at the turn of the last decade how classical realizability provides a surprising technique for building models of classical theories. In particular, he proved that classical realizability subsumes Cohen's forcing and, even more, gives rise to unexpected models of set theories. Pursuing the algebraic analysis of these models first undertaken by Streicher, Miquel recently proposed to lay the algebraic foundations of classical realizability and forcing within new structures which he called implicative algebras. These structures are a generalization of Boolean algebras based on an internal law representing implication. Notably, implicative algebras allow for the adequate interpretation of both programs (i.e. proofs) and their types (i.e. formulas) in the same structure. The very definition of implicative algebras commits to a presentation of logic through universal quantification and implication and, computationally, relies on the call-by-name λ-calculus. In , we investigate the relevance of this choice by introducing two similar structures. On the one hand, we define disjunctive algebras, which rely on internal laws for negation and disjunction and which we show to be particular cases of implicative algebras. On the other hand, we introduce conjunctive algebras, which rather put the focus on conjunctions and on the call-by-value evaluation strategy. We finally show how disjunctive and conjunctive algebras algebraically reflect the well-known duality of computation between call-by-name and call-by-value.
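As a reminder of the central notion (stated informally, following Miquel's definition), an implicative structure is a complete meet-semilattice $(\mathcal{A}, \preccurlyeq)$ equipped with an internal implication $\to$ satisfying:

```latex
% Variance: implication is antitone on the left, monotone on the right.
% Distributivity: implication commutes with arbitrary meets on the right.
\begin{align*}
  a' \preccurlyeq a \ \wedge\ b \preccurlyeq b'
    &\ \Longrightarrow\ (a \to b) \preccurlyeq (a' \to b')
    && \text{(variance)} \\
  a \to \bigwedge_{b \in B} b
    &= \bigwedge_{b \in B} (a \to b)
    && \text{(distributivity)}
\end{align*}
```

An implicative algebra is then an implicative structure together with a separator, playing the role of the set of valid formulas.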

Building on the connection between resource management in systems programming and ordered logic we established previously, we investigate a pervasive issue in the languages C++ and Rust whereby compiler-generated clean-up functions cause a stack overflow on deep structures. In , we show how to generate clean-up algorithms that run in constant time and space for a broad class of ordered algebraic datatypes, such as those found in C++ and Rust or in future extensions of functional programming languages with first-class resources.

Building on our investigations for a resource-management model for OCaml, we have proposed several preliminary improvements to the OCaml language. We contributed to the design and implementation of new resource management primitives (PRs #2118, #8962), resource-safe C APIs (PRs #8993, #8997, #9037), and core runtime capabilities (PR #8961). (#2118 has been merged into OCaml 4.08 and #8993 and #9037 have been merged into OCaml 4.10.)

We continued to interact with L. White and S. Dolan (Jane Street), on the design of resource management and exception safety in multicore OCaml.

In their work on second-order equational logic, Fiore and Hur have studied presentations of simply typed languages by generating binding constructions and equations among them. To each pair consisting of a binding signature and a set of equations, they associate a category of `models', and they give a monadicity result which implies that this category has an initial object, which is the language presented by the pair.
In , we propose, for the untyped setting, a variant of their approach where monads and modules over them are the central notions. More precisely, we study, for monads over sets, presentations by generating (`higher-order') operations and equations among them. We consider a notion of 2-signature which makes it possible to specify a monad with a family of binding operations subject to a family of equations, as is the case for the paradigmatic example of the lambda calculus, specified by its two standard constructions (application and abstraction) subject to the β- and η-equations.
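In this paradigmatic example, the equations imposed on application and abstraction are the usual β- and η-rules:

```latex
\begin{align*}
  (\lambda x.\, t)\, u &= t[u/x] && (\beta) \\
  \lambda x.\, (t\, x) &= t \quad \text{if } x \notin \mathrm{FV}(t) && (\eta)
\end{align*}
```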

Linear Logic was introduced as the computational counterpart of the algebraic notion of linearity. Differential Linear Logic refines Linear Logic with a proof-theoretical interpretation of the geometrical process of differentiation. In , we construct a polarized model of Differential Linear Logic satisfying computational constraints such as an interpretation for higher-order functions, as well as constraints inherited from physics such as a continuous interpretation for spaces. This extends what was done previously by Kerjean for first-order Differential Linear Logic without promotion. Concretely, we follow the previous idea of interpreting the exponential of Differential Linear Logic as a space of higher-order distributions with compact support, which is constructed as an inductive limit of spaces of distributions on Euclidean spaces. We prove that this exponential is endowed with a comonad-like structure, with the notable exception that it is functorial only on isomorphisms. Interestingly, as previously argued by Ehrhard, this still allows one to interpret differential linear logic without promotion.

Chiralities are categories introduced by Melliès to account for a game-semantics point of view on negation. In , , we uncover instances of this structure in the theory of topological vector spaces, thus constructing several new polarized models of Multiplicative Linear Logic. These models improve previously known smooth models of Differential Linear Logic, showing the relevance of chiralities to express topological properties of vector spaces. They are the first denotational polarized models of Multiplicative Linear Logic, based on the pre-existing theory of topological vector spaces, in which two distinct sets of formulas, two distinct negations, and two shifts appear naturally.

Distributed applications are challenging to program because they have to deal with a plethora of concerns, including synchronisation, locality, replication, security and fault tolerance. Aspect-oriented programming (AOP) is a paradigm that promotes better modularity by providing means to encapsulate cross-cutting concerns in entities called aspects. Over the last years, a number of distributed aspect-oriented programming languages and systems have been proposed, illustrating the benefits of AOP in a distributed setting. Chemical calculi are particularly well-suited to formally specify the behaviour of concurrent and distributed systems. The join calculus is a functional name-passing calculus, with both distributed and object-oriented extensions. It is used as the basis of concurrency and distribution features in several mainstream languages like C# (Polyphonic C#, now Cω).

There is a critical tension between substitution, dependent elimination and effects in type theory. In , we crystallize this tension in the form of a no-go theorem that constitutes the fire triangle of type theory. To release this tension, we propose DCBPV, an extension of call-by-push-value (CBPV), a general calculus of effects, to dependent types. Then, by extending to DCBPV the well-known decompositions of call-by-name and call-by-value into CBPV, we show why, in the presence of effects, dependent elimination must be restricted in call-by-name, and substitution must be restricted in call-by-value. To justify DCBPV and show that it is general enough to interpret many kinds of effects, we define various effectful syntactic translations from DCBPV to Martin-Löf type theory: the reader, weaning and forcing translations.

Traditional approaches to compensate for the lack of exceptions in type theories for proof assistants have severe drawbacks from both a programming and a reasoning perspective. We recently extended the Calculus of Inductive Constructions (CIC) with exceptions. The new exceptional type theory is interpreted by a translation into CIC, covering full dependent elimination, decidable type-checking and canonicity. However, the exceptional theory is inconsistent as a logical system. To recover consistency, we propose an additional translation that uses parametricity to enforce that all exceptions are caught locally. While this enforcement brings logical expressivity gains over CIC, it completely prevents reasoning about exceptional programs such as partial functions. In , we address the dilemma between exceptions and consistency in a more flexible manner, with the Reasonably Exceptional Type Theory (RETT). RETT is structured in three layers: (a) the exceptional layer, in which all terms can raise exceptions; (b) the mediation layer, in which exceptional terms must be provably parametric; (c) the pure layer, in which terms are non-exceptional, but can refer to exceptional terms. We present the general theory of RETT, where each layer is realized by a predicative hierarchy of universes, and develop an instance of RETT in Coq: the impure layer corresponds to the predicative universe hierarchy, the pure layer is realized by the impredicative universe of propositions, and the mediation layer is reified via a parametricity type class. RETT is the first full dependent type theory to support consistent reasoning about exceptional terms, and the CoqRETT plugin readily brings this ability to Coq programmers.
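A toy model of the exceptional layer can be sketched in plain Coq (names here are illustrative, not RETT's): every type is inflated with an extra "raise" inhabitant, so that partial functions become total.

```coq
(* A hypothetical, fixed type of exceptions. *)
Inductive exn : Set := TypeError.

(* Inflate a type with an exceptional inhabitant. *)
Definition Exc (A : Type) : Type := (A + exn)%type.
Definition ret {A} (a : A) : Exc A := inl a.
Definition raise {A} (e : exn) : Exc A := inr e.

(* The partial head function becomes total by raising on the empty list. *)
Definition hd {A} (l : list A) : Exc A :=
  match l with
  | nil => raise TypeError
  | cons x _ => ret x
  end.
```

The actual theory internalises this inflation at every type, including dependent ones, which is where the tension with consistency arises.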

Type theories with equality reflection, such as extensional type theory (ETT), are convenient theories in which to formalise mathematics, as they make it possible to consider provably equal terms as convertible. Although type-checking is undecidable in this context, variants of ETT have been implemented, for example in NuPRL and more recently in Andromeda. The actual objects that can be checked are not proof-terms, but derivations of proof-terms. This suggests that any derivation of ETT can be translated into a typecheckable proof term of intensional type theory (ITT). However, this result, investigated categorically by Hofmann in 1995, and 10 years later more syntactically by Oury, had never given rise to an effective translation. In , we provide the first syntactical translation from ETT to ITT with uniqueness of identity proofs and functional extensionality. This translation has been defined and proven correct in Coq and yields an executable plugin that translates a derivation in ETT into an actual Coq typing judgment. Additionally, we show how this result extends, in the context of homotopy type theory, to a two-level type theory.
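The two principles assumed on the intensional target can be stated directly as Coq axioms (standard statements, reproduced here for the reader):

```coq
(* Uniqueness of identity proofs: any two equality proofs coincide. *)
Axiom uip : forall (A : Type) (x y : A) (p q : x = y), p = q.

(* Functional extensionality: pointwise equal functions are equal. *)
Axiom funext : forall (A : Type) (B : A -> Type) (f g : forall a, B a),
  (forall a, f a = g a) -> f = g.
```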

The MetaCoq project  aims to provide a certified meta-programming environment in Coq. It builds on Template-Coq, a plugin for Coq originally implemented by Malecha (2014), which provided a reifier for Coq terms and global declarations, as represented in the Coq kernel, as well as a denotation command. Recently, it was used in the CertiCoq certified compiler project (Anand et al., 2017), as its front-end language, to derive parametricity properties (Anand and Morrisett, 2018). However, the syntax lacked semantics, be it typing semantics or operational semantics, which should reflect, as formal specifications in Coq, the semantics of Coq's type theory itself. The tool was also rather bare bones, providing only rudimentary quoting and unquoting commands. We generalize it to handle the entire Polymorphic Calculus of Cumulative Inductive Constructions (pCUIC), as implemented by Coq, including the kernel's declaration structures for definitions and inductives, and implement a monad for general manipulation of Coq's logical environment. We demonstrate how this setup allows Coq users to define many kinds of general purpose plugins, whose correctness can be readily proved in the system itself, and that can be run efficiently after extraction. We give a few examples of implemented plugins, including a parametricity translation and a certifying extraction to call-by-value λ-calculus. We also advocate the use of MetaCoq as a foundation for higher-level tools.
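A minimal quoting session might look as follows (a sketch assuming the MetaCoq opam packages are installed; the vernacular follows the MetaCoq documentation):

```coq
From MetaCoq.Template Require Import All.

(* Reify a Coq term into its first-class syntax tree (of type [term]):
   a [tLambda] node whose body applies the quoted addition. *)
MetaCoq Quote Definition reified := (fun x : nat => x + 1).
Print reified.
```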

Coq is built around a well-delimited kernel that performs typechecking for definitions in a variant of the Calculus of Inductive Constructions (CIC). Although the metatheory of CIC is very stable and reliable, the correctness of its implementation in Coq is less clear. Indeed, implementing an efficient type checker for CIC is a rather complex task, and many parts of the code rely on implicit invariants which can easily be broken by further evolution of the code. Therefore, on average, one critical bug has been found in Coq every year. In , we present the first implementation of a type checker for the kernel of Coq (without the module system and template polymorphism), which is proven correct in Coq with respect to its formal specification and an axiomatisation of part of its metatheory. Note that, because of Gödel's incompleteness theorem, there is no hope of completely proving the correctness of the specification of Coq inside Coq (in particular strong normalisation or canonicity), but it is possible to prove the correctness of the implementation assuming the correctness of the specification, thus moving from a trusted code base (TCB) to a trusted theory base (TTB) paradigm. Our work is based on the MetaCoq project, which provides metaprogramming facilities to work with terms and declarations at the level of this kernel. Our type checker is based on the specification of the typing relation of the Polymorphic, Cumulative Calculus of Inductive Constructions (pCUIC) at the basis of Coq and the verification of a relatively efficient and sound type-checker for it. In addition to the kernel implementation, an essential feature of Coq is the so-called extraction: the production of executable code in functional languages from Coq definitions. We present a verified version of this subtle type-and-proof erasure step, therefore enabling the verified extraction of a safe type-checker for Coq.

Definitional equality—or conversion—for a type theory with decidable type checking is the simplest tool to prove that two objects are the same, letting the system decide just using computation. Therefore, the more things are equal by conversion, the simpler it is to use a language based on type theory. Proof irrelevance, stating that any two proofs of the same proposition are equal, is a possible way to extend conversion to make a type theory more powerful. However, this new power comes at a price if we integrate it naively, either by making type checking undecidable or by realising new axioms—such as uniqueness of identity proofs (UIP)—that are incompatible with other extensions, such as univalence. In , taking inspiration from homotopy type theory, we propose a general way to extend a type theory with definitional proof irrelevance that keeps type checking decidable and is compatible with univalence. We provide a new criterion to decide whether a proposition can be eliminated over a type (correcting and improving the so-called singleton elimination of Coq) by using techniques coming from recent developments on dependent pattern matching without UIP. We show the generality of our approach by providing implementations for both Coq and Agda, both of which are planned to be integrated in future versions of those proof assistants.
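In Coq, this takes the concrete form of the SProp sort shipped with 8.10 (in that release it must be enabled with the -allow-sprop command-line flag). A minimal sketch of what definitional irrelevance buys:

```coq
(* A strict proposition: all its proofs are definitionally equal. *)
Inductive sUnit : SProp := stt.

(* Irrelevance in action: [p] and [stt] are convertible, so [x],
   given at type [P stt], typechecks at type [P p] by conversion alone. *)
Definition irr (P : sUnit -> Type) (x : P stt) (p : sUnit) : P p := x.
```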

Homotopy type theory is an extension of type theory that enables synthetic reasoning about spaces and homotopy theory. This has led to elegant computer formalizations of multiple classical results from homotopy theory. However, many proofs are still surprisingly complicated to formalize. One reason for this is the axiomatic treatment of univalence and higher inductive types which complicates synthetic reasoning as many intermediate steps, that could hold simply by computation, require explicit arguments. Cubical type theory offers a solution to this in the form of a new type theory with native support for both univalence and higher inductive types. In , we show how the recent cubical extension of Agda can be used to formalize some of the major results of homotopy type theory in a direct and elegant manner.

In model-driven engineering, model transformation (MT) verification is essential for reliably producing software artifacts. While recent advancements have enabled automatic Hoare-style verification for non-trivial MTs, there are certain verification tasks (e.g. induction) that are intrinsically difficult to automate. Existing tools that aim at simplifying the interactive verification of MTs typically translate the MT specification (e.g. in ATL) and the properties to prove (e.g. in OCL) into an interactive theorem prover. However, since the MT specification and proof phases happen in separate languages, the proof developer needs a detailed knowledge of the translation logic. Naturally, any error in the MT translation could cause unsound verification, i.e. the MT executed in the original environment may have different semantics from the verified MT. In , we propose an alternative solution by designing and implementing an internal domain specific language, namely CoqTL, for the specification of declarative MTs directly in the Coq interactive theorem prover. Expressions in CoqTL are written in Gallina (the specification language of Coq), increasing the possibilities of reusing native Coq libraries in the transformation definition and proof. CoqTL specifications can be directly executed by our transformation engine encoded in Coq, or a certified implementation of the transformation can be generated by the native Coq extraction mechanism. We ensure that CoqTL has the same expressive power as Gallina (i.e. if a MT can be computed in Gallina, then it can also be represented in CoqTL). In this article, we introduce CoqTL, evaluate its practical applicability on a use case, and identify its current limitations.

Finding an elementary form for an antiderivative is often a difficult task, so numerical integration has become a common tool when it comes to making sense of a definite integral. Some numerical integration methods can even be made rigorous: not only do they compute an approximation of the integral value but they also bound its inaccuracy. Yet numerical integration is still missing from the toolbox when performing formal proofs in analysis. In , we present an efficient method for automatically computing and proving bounds on some definite integrals inside the Coq formal system. Our approach is not based on traditional quadrature methods such as Newton-Cotes formulas. Instead, it relies on computing and evaluating antiderivatives of rigorous polynomial approximations, combined with an adaptive domain splitting. Our approach also handles improper integrals, provided that a factor of the integrand belongs to a catalog of identified integrable functions. This work has been integrated into the CoqInterval library.
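The flavour of CoqInterval's rigorous enclosures can be conveyed by a small session (a sketch assuming the coq-interval opam package; the module path may differ across releases):

```coq
From Coq Require Import Reals.
From Interval Require Import Tactic.
Open Scope R_scope.

(* The interval tactic proves numeric enclosures of real expressions;
   here, a rigorous bound on e = exp 1 ≈ 2.718. *)
Goal 2 <= exp 1 <= 3.
Proof. split; interval. Qed.
```

The integration work described above extends this style of automation from point values to definite integrals.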

**Vercoma** (Atlanstic 2020/Attractivity grant)

Goal: Verified computer mathematics.

Coordinator: A. Mahboubi.

Duration: 08/2018 - 08/2021.

**FastRelax** (ANR-14-CE25-0018).

Goal: Develop computer-aided proofs of numerical values, with certified and reasonably tight error bounds, without sacrificing efficiency.

Coordinator: Bruno Salvy (Inria, ENS Lyon).

Participant: A. Mahboubi.

Duration: 2014-2019.

Website: http://

Note: This project started when A. Mahboubi was still in the Specfun project at the Saclay Île-de-France CRI. The budget is still managed there, within the Toccata project, but remains available to A. Mahboubi.

Title: Coq for Homotopy Type Theory

Program: H2020

Type: ERC

Duration: June 2015 - May 2020

Coordinator: Inria

Inria contact: Nicolas TABAREAU

Every year, software bugs cost hundreds of millions of euros to companies and administrations. Hence, software quality is a prevalent notion, and interactive theorem provers based on type theory have shown their efficiency in proving the correctness of important pieces of software like the C compiler of the CompCert project. One main interest of such theorem provers is the ability to extract code directly from the proof. Unfortunately, their democratization suffers from a major drawback: the mismatch between equality in mathematics and in type theory. Thus, significant Coq developments have only been done by virtuosos playing with advanced concepts of computer science and mathematics. Recently, an extension of type theory with homotopical concepts such as univalence has been gaining traction because it makes it possible, for the first time, to marry together the expected principles of equality. But the univalence principle has been treated so far as a new axiom, which breaks one fundamental property of mechanized proofs: the ability to compute with programs that make use of this axiom. The main goal of the CoqHoTT project is to provide a new generation of proof assistants with a computational version of univalence and to use them as a base to implement effective logical model transformations, so that the power of the internal logic of the proof assistant needed to prove the correctness of a program can be decided and changed at compile time, according to a trade-off between efficiency and logical expressivity. Our approach is based on a radically new compilation-phase technique into a core type theory, which modularizes the difficulty of finding a decidable type-checking algorithm for homotopy type theory. The impact of the CoqHoTT project will be very strong. Even if Coq is already a success, this project will promote it as a major proof assistant, for both computer scientists and mathematicians. CoqHoTT will become an essential tool for program certification and the formalization of mathematics.

Program: COST

Project acronym: EUTYPES

Project title: The European research network on types for programming and verification

Duration: 21/03/2016 - 20/03/2020.

Coordinator: Herman Geuvers (Radboud University, Nijmegen, The Netherlands)

Abstract: Types are pervasive in programming and information technology. A type defines a formal interface between software components, allowing the automatic verification of their connections, and greatly enhancing the robustness and reliability of computations and communications. In rich dependent type theories, the full functional specification of a program can be expressed as a type. Type systems have rapidly evolved over the past years, becoming more sophisticated, capturing new aspects of the behaviour of programs and the dynamics of their execution.

This COST Action will give a strong impetus to research on type theory and its many applications in computer science, by promoting (1) the synergy between theoretical computer scientists, logicians and mathematicians to develop new foundations for type theory, for example based on the recent development of "homotopy type theory", (2) the joint development of type-theoretic tools such as proof assistants and integrated programming environments, (3) the study of dependent types for programming and its deployment in software development, (4) the study of dependent types for verification and its deployment in software analysis and verification. The action will also tie together these different areas and promote cross-fertilisation.

Europe has a strong type theory community, ranging from foundational research to applications in programming languages, verification and theorem proving, which is in urgent need of better networking. A COST Action that crosses the borders will support the collaboration between groups and complementary expertise, and mobilise a critical mass of existing type theory research.

**Inria Chile**

Associate Team involved in the International Lab:

Title: Gradual verification and robust proof Engineering for COq

International Partner (Institution - Laboratory - Researcher):

Universidad de Chile (Chile) - Centrum Wiskunde & Informatica - Éric Tanter

Start year: 2018

See also: http://

The development of tools to construct software systems that respect a given specification is a major challenge of current and future research in computer science. Interactive theorem provers based on type theory, such as Coq, have shown their effectiveness to prove correctness of important pieces of software like the C compiler of the CompCert project. Certified programming with dependent types is attracting a lot of attention recently, and Coq is the de facto standard for such endeavors, with an increasing amount of users, pedagogical material, and large-scale projects. Nevertheless, significant work remains to be done to make Coq more usable from a software engineering point of view.

This collaboration project gathers the expertise of researchers from Chile (Inria Chile, Universidad de Chile, Universidad Católica de Valparaíso) and France (Inria Nantes, Inria Paris), in different areas that are crucial to develop the vision of certified software engineering. The focus of this project is both theoretical and practical, covering novel foundations and methods, design of concrete languages and tools, and validation through specific case studies.

The end result will be a number of enhancements to the Coq proof assistant (frameworks, tactic language) together with guidelines and demonstrations of their applicability in realistic scenarios.

A. Mahboubi holds a part-time endowed professor position in the Department of Mathematics at the Vrije Universiteit Amsterdam (the Netherlands).

Matias Toro (U. Chile) visited for one week in January to work with G. Munch-Maccagnoni.

G. Munch-Maccagnoni visited E. Tanter and M. Toro (U. Chile) in March.

A. Mahboubi has served as co-PC chair of the 8th ACM SIGPLAN International Conference on Certified Programs and Proofs (CPP'19).

A. Mahboubi has served as co-chair of the CoqPL'20 workshop, satellite of POPL'20.

P.-M. Pédrot has been a member of the program committee of the Coq Workshop'19 and JFLA'20.

N. Tabareau has been a member of the program committee of FSCD'19 and POPL'19.

G. Munch-Maccagnoni has been a member of the external review committee for the ICFP'19 conference, and a member of the program committees for the workshops SD'19 (affiliated with FSCD) and LOLA'19 (affiliated with LICS).

A. Mahboubi has been a member of the program committee of the ITP'19, FroCoS'19 and CPP'20 international conferences, and of the TFP'19 workshop.

G. Jaber has been a member of the program committee of the Student Research Competition of POPL'20.

P.-M. Pédrot has served as an external reviewer for CPP'19, FoSSaCS'19, ICFP'19 and LICS'19.

N. Tabareau has served as an external reviewer for LICS'19 and CPP'19.

G. Munch-Maccagnoni has served as an external reviewer for FoSSaCS'20.

G. Jaber has served as an external reviewer for CONCUR'19, ITP'19, FoSSaCS'19 and POPL'20.

A. Mahboubi is a member of the editorial board of the Journal of Automated Reasoning.

A. Mahboubi is co-editing the post-proceedings of the TYPES 2019 conference.

N. Tabareau has been a reviewer for Mathematical Structures in Computer Science.

P.-M. Pédrot gave a three-hour invited class on effectful type theories at JFLA 2019.

N. Tabareau has given an invited talk at Coq Workshop 2019 in Portland, Oregon.

G. Munch-Maccagnoni gave a talk at ITU Copenhagen in July, and a talk in June at Codes Sources, the Paris seminar on the philosophy of computer science.

A. Mahboubi has given invited talks at the MPC'19, CiE'19, CADE'19 and TYPES'19 international conferences, at the PxTP'19 international workshop, and at the Codes Sources seminar.

N. Tabareau is a member of the scientific committee of the GdR of Algebraic Topology.

A. Mahboubi is a member of the core management group of the EUTypes project, and leader of the “Tools” working group.

A. Mahboubi is a member of the steering committee of the ITP conference.

A. Mahboubi is a member of the scientific committee of the GdR “Informatique Mathématique”.

Licence : Julien Cohen, Discrete Mathematics, 48h, L1 (IUT), IUT Nantes, France

Licence : Julien Cohen, Introduction to proof assistants (Coq), 8h, L2 (PEIP : IUT/Engineering school), Polytech Nantes, France

Licence : Julien Cohen, Functional Programming (Scala), 22h, L2 (IUT), IUT Nantes, France

Master : Julien Cohen, Object oriented programming (Java), 32h, M1 (Engineering school), Polytech Nantes, France

Master : Julien Cohen, Functional programming (OCaml), 18h, M1 (Engineering school), Polytech Nantes, France

Master : Julien Cohen, Tools for software engineering (proof with Frama-C, test, code management), 20h, M1 (Engineering school), Polytech Nantes, France

Licence : Rémi Douence, Object Oriented Design and Programming, 45h, L1 (engineers), IMT-Atlantique, Nantes, France

Licence : Rémi Douence, Introduction to scientific research in computer science (project: a Haskell interpreter in Java), 20h, L1 (engineers), IMT-Atlantique, Nantes, France

Licence : Rémi Douence, Object Oriented Design and Programming Project, 30h, L1 (apprenticeship), IMT-Atlantique, Nantes, France

Master : Rémi Douence, Functional Programming with Haskell, 20h, M1 (apprenticeship), IMT-Atlantique, Nantes, France

Master : Rémi Douence, Introduction to scientific research in computer science (project: a Haskell interpreter in Java), 45h, M2 (apprenticeship), IMT-Atlantique, Nantes, France

Licence : Hervé Grall, Algorithms and Discrete Mathematics, 25h, L3 (engineers), IMT-Atlantique, Nantes, France

Licence : Hervé Grall, Object Oriented Design and Programming, 25h, L3 (engineers), IMT-Atlantique, Nantes, France

Licence, Master : Hervé Grall, Modularity and Typing, 40h, L3 and M1, IMT-Atlantique, Nantes, France

Master : Hervé Grall, Service-oriented Computing, 40h, M1 and M2, IMT-Atlantique, Nantes, France

Master : Hervé Grall, Research Project - (Linear) Logic Programming in Coq, 90h (1/3 supervised), M1 and M2, IMT-Atlantique, Nantes, France

Licence : Guilhem Jaber, Computer Tools for Science, 36h, L1, Université de Nantes, France

Master : Guilhem Jaber, Verification and Formal Proofs, 18h, M1, Université de Nantes, France

Master : Nicolas Tabareau, Homotopy Type Theory, 24h, M2 LMFI, Université Paris Diderot, France

PhD : Gaetan Gilbert, A new foundation for the Coq proof assistant based on the insight of Homotopy Type Theory, IMT Atlantique, advisors: Matthieu Sozeau and Nicolas Tabareau

PhD : Ambroise Lafont, Towards an unbiased approach to specify, implement, and prove properties on programming languages, IMT Atlantique, advisors: Tom Hirschowitz and Nicolas Tabareau

PhD in progress: Xavier Montillet, Rewriting theory for effects and dependent types, Univ Nantes, advisors: Guillaume Munch-Maccagnoni and Nicolas Tabareau

PhD in progress: Théo Winterhalter, Extending the flexibility of the universe hierarchy in type theory, Univ Nantes, advisors: Matthieu Sozeau and Nicolas Tabareau

PhD in progress: Joachim Hotonnier, Deep Specification for Domain-Specific Modelling, advisors: Gerson Sunye (Naomod team), Massimo Tisi (Naomod team), Hervé Grall

PhD in progress: Igor Zhirkov, Certified Refactoring of C in the Coq proof assistant, advisors: Rémi Douence and Julien Cohen.

E. Bauer has visited the team for an L3 research internship from June to July on the subject “Categorical models of differential linear logic”, supervised by Marie Kerjean.

A. Ben Mansour has visited the team for an L3 research internship from June to August on the subject “Parametricity for languages with effects”, supervised by G. Munch-Maccagnoni.

M. Bertrand has visited the team from February to July for an internship on the subject “Effects in Type Theory”, supervised by N. Tabareau.

L. Pujet has visited the team from April to August for an internship on the subject “Interpreting Cubical Type Theory using forcing”, supervised by N. Tabareau.

G. Combette is visiting the team from October 2019 to February 2020 for an internship on the subject “Axiomatic denotational semantics for resource management in systems programming”, supervised by G. Munch-Maccagnoni.

L. Escot has visited the team from October to December for an internship on the subject “Univalent Parametricity at Scale”, supervised by N. Tabareau.

P. Geneau de Lamarlière has visited the team for an L3 research internship from June to July on the subject “Symbolic computations in algebraic number theory”, co-supervised by A. Mahboubi and S. Dahmen (VU Amsterdam).

N. Tabareau has served as external member on the PhD jury of Kenji Maillard, defended November 25th at Inria Paris - Université Paris Sciences et Lettres.

A. Mahboubi has served as external member on the PhD jury of Florian Faissole, defended December 13th at Paris Saclay University.

A. Mahboubi has served as external member on the PhD jury of Gaëtan Gilbert, defended December 20th at IMT Atlantique.

A. Mahboubi has served on the 2019 jury for recruiting Inria CRCN at Inria Rennes Bretagne Atlantique.

A. Mahboubi has served on the 2019 jury for recruiting an “Agrégé Préparateur” at ENS Rennes.

Hervé Grall has contributed to the Merite project, which aims to promote science learning in middle and high schools. He is the main contributor to the theme "Communication between machines". The project is coordinated by IMT-Atlantique in partnership with 7 other French higher-education institutions and the rectorates of the Nantes and Rennes academies, and is financed by the "Investments for the Future" programme and the FEDER Pays-de-la-Loire fund.

A. Mahboubi has worked with composer Alessandro Bossetti and students of the vocational high school Lycée Michelet on an “Art and Mathematics” project, supported by the Athenor theater.

P.-M. Pédrot was invited to give a talk about his scientific activities in Le Pleynet, during the "Semaine Sport-Études" of the first year students in computer science from the ENS Lyon.

P.-M. Pédrot gave a similar talk about his scientific activities at the LS2N, during the visit of first year students in computer science from the ENS Cachan.

A. Mahboubi participates in the Irisa/Inria mentoring program.