## Section: New Results

### Coq as a functional programming language

Participants : Stéphane Glondu, Pierre Letouzey, Matthias Puech.

#### Certified libraries

In the second semester of 2009, thanks to the beginning of an INRIA “délégation” period, Pierre Letouzey started a deep reform of some parts of the Standard Library of Coq, mainly the FSets library of finite sets and maps, and also the Numbers library of generic and efficient arithmetic. The idea is to take advantage of recent improvements of the Coq system in terms of modularity (Type Classes by M. Sozeau and improved Modules by E. Soubiran). This is meant as a first step towards a truly modular Standard Library for Coq, where properties and decision procedures would be shared amongst many structures, in particular numerical datatypes. Particular attention is also paid to speeding up computations on these structures, for instance finite sets, by keeping the proof parts fewer and better isolated. This library redesign effort is still in a preliminary form, but a good part of it should already appear in the forthcoming release of Coq 8.3.
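The functor-based organization of the existing FSets library gives an idea of the modularity at stake; a minimal sketch, using the `FSetList.Make` functor and the `Nat_as_OT` ordered-type instance from the Standard Library:

```coq
(* Build a finite-set implementation over nat by applying the FSets
   functor to an OrderedType instance. *)
Require Import FSets OrderedTypeEx.

Module NatSet := FSetList.Make (Nat_as_OT).

(* The set {3, 5}, built from the generic interface. *)
Definition s := NatSet.add 3 (NatSet.add 5 NatSet.empty).

(* Membership is computable; this evaluates to true. *)
Eval compute in NatSet.mem 3 s.
```

All implementations of the `FSetInterface` signature (sorted lists, AVL trees, ...) share this interface, so properties and decision procedures proved against the interface apply to every implementation.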

Concerning decision procedures, Pierre Letouzey supervised Lukasz Fronc (Master 1 internship) on the subject of a fully reflexive decision procedure for Presburger arithmetic in Coq. The results of this internship are promising, and some more work is needed to finish the procedure and integrate it as a fully operational tool for Coq.
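For context, Coq already ships decision procedures for quantifier-free linear integer arithmetic, such as the `omega` tactic; the goal of the internship was a fully reflexive procedure covering Presburger arithmetic. A small example of the kind of goal such procedures decide:

```coq
Require Import ZArith Omega.
Open Scope Z_scope.

(* A linear integer arithmetic goal, discharged automatically
   by the omega decision procedure. *)
Goal forall x y : Z, x < y -> 2 * x + 1 < 2 * y.
Proof.
  intros x y H. omega.
Qed.
```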

#### Certified extraction

Stéphane Glondu summarized in [17] his work on formalizing extraction within B. Barras's formalization of the Calculus of Inductive Constructions.

This year, he worked mainly on the internal extraction. The goal of this work is to provide, for the current Coq system, an extraction similar to the existing one but which also generates correctness proofs for the extracted programs. The target language is ML, a source language considered by Z. Dargaye in [35]. The semantics of the target language is formalized in Coq, while the semantics of the source language, in which the correctness proofs are stated, is left implicit.

Stéphane Glondu investigated several ways of defining the correctness of an extracted term, based on the type of its source term, the atomic type constructions being inductive types and products. He also proved manually the correctness of the extraction of some basic functions involving recursion, logical preconditions, and reuse of previously defined functions (together with their associated extracted terms and proofs). To this end, he designed a set of tactics to automate the tedious parts of the proofs (such as symbolic evaluation of ML terms). Proofs of extracted terms are not yet fully automated, and Stéphane Glondu is now investigating the fully automated generation of such proofs (for a class of terms yet to be characterized).
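The erasure that correctness proofs must account for can be seen on a function carrying a logical precondition: the proof argument disappears entirely from the extracted ML code. A sketch using Coq's standard `Extraction` command (the function `safe_pred` is ours, for illustration):

```coq
(* A predecessor function guarded by a logical precondition: the
   O case is unreachable, since the hypothesis n <> 0 is absurd. *)
Definition safe_pred (n : nat) : n <> 0 -> nat :=
  match n return n <> 0 -> nat with
  | O => fun H => match H (eq_refl 0) with end
  | S m => fun _ => m
  end.

(* Extraction erases the Prop-sorted argument: the extracted OCaml
   is roughly  let safe_pred = function O -> assert false | S m -> m *)
Extraction safe_pred.
```

A certifying extraction must justify that dropping such proof arguments preserves the computational behaviour of the source term.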

#### Proof language

Matthias Puech worked on an inference mechanism in the proof assistant
*Coq* for the recognition of mathematical structures. This involved the
specification of a tool, integrated into the type theory, that recognizes
user-definable patterns in developments, and its efficient
implementation as part of *Coq*'s distribution.

For this purpose, he developed an original data structure for term indexing, allowing fast retrieval of unifiers of a query term in a database of terms. When used in the type theory framework, it also allows the retrieval of instances or subterms of a lemma. It was devised, implemented and used as part of Coq's search facilities.
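The search facilities that benefit from this indexing can be exercised directly from the Coq toplevel; for instance, `SearchPattern` retrieves lemmas whose conclusion matches a query pattern, and `SearchRewrite` retrieves rewriting lemmas about a subterm:

```coq
Require Import Arith.

(* List all lemmas whose conclusion is an instance of the pattern,
   e.g. commutativity lemmas about addition. *)
SearchPattern (_ + _ = _ + _).

(* List equational lemmas whose left- or right-hand side
   matches the given subterm. *)
SearchRewrite (_ + 0).
```

The indexing structure makes such queries fast even over large databases of lemmas.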

He is now working on integrating this technology into the
automated theorem-proving tactics of *Coq*, to enhance both the
efficiency and the expressiveness of these tactics: context-dependent
proof search (as opposed to the current goal-directed proof search), and
shorter resulting proofs (avoiding useless η-expansions).
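The current goal-directed behaviour can be illustrated with the `auto` tactic, which selects lemmas by matching the goal against hint databases, independently of the surrounding context. A minimal sketch (the lemma and the database name `facts` are ours):

```coq
Require Import Arith.

(* A user lemma registered as a hint for auto. *)
Lemma le_succ_mono : forall n m, n <= m -> S n <= S m.
Proof. intros; apply le_n_S; assumption. Qed.

Hint Resolve le_succ_mono : facts.

(* Goal-directed search: auto repeatedly matches the current goal
   against the hints, here applying le_succ_mono twice. *)
Goal forall n m, n <= m -> S (S n) <= S (S m).
Proof. intros; auto with facts. Qed.
```

Context-dependent proof search would additionally take the hypotheses and the ambient development into account when selecting candidate lemmas.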