Team Parsifal


Section: Scientific Foundations

Keywords : higher-order abstract syntax, lambda-tree syntax, fixed points, definitions, LINC, generic judgments.

Reasoning about logic specifications

Once a computational system (e.g., a programming language, a specification language, a type system) is given a logical (relational) specification, how do we reason about the formal properties of that specification? New results in proof theory are being developed to help answer this question.

The traditional architecture for systems designed to help reason about the formal correctness of specification and programming languages can generally be characterized at a high level as follows. First: implement mathematics. This often involves choosing between a classical or a constructive (intuitionistic) foundation, as well as choosing an abstraction mechanism (e.g., sets or functions). The Coq and NuPRL systems, for example, have chosen an intuitionistic typed $\lambda$-calculus for their approach to the formalization of mathematics. Systems such as HOL [32] use classical higher-order logic, while systems such as Isabelle/ZF [49] use classical set theory. Second: reduce program-correctness problems to mathematics. Thus, data structures, states, stacks, heaps, invariants, etc., are all represented as various kinds of mathematical objects. One then reasons directly about these objects using standard mathematical techniques (induction, primitive recursion, fixed points, well-founded orders, etc.).
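The second step above can be sketched very simply. In this hypothetical Python fragment (the function names are illustrative, not from any system mentioned here), a program "stack" is reduced to an ordinary mathematical object, a finite sequence, so that standard reasoning such as induction on the length of the sequence applies directly:

```python
# A minimal sketch: a program "stack" is encoded as an ordinary
# mathematical object -- here, a Python list standing in for a finite
# sequence -- so that standard techniques (e.g., induction on length)
# can be used to reason about it.

def push(x, stack):
    return [x] + stack

def pop(stack):
    if not stack:
        return None
    return stack[0], stack[1:]

# The invariant pop(push(x, s)) == (x, s) holds for all x and s simply
# by unfolding the two definitions; we spot-check it at a few points.
assert pop(push(1, [2, 3])) == (1, [2, 3])
assert pop([]) is None
```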

Such an approach to formal methods is, of course, powerful and successful. There is, however, growing evidence that many of the proof search specifications that rely on such intensional aspects of logic as bindings and resource management (as in linear logic) are not served well by encoding them into the traditional data structures found in such systems. In particular, the resulting encoding can often be complicated enough that the essential logical character of a problem is obfuscated.

Despeyroux, Pfenning, Leleu, and Schürmann proposed two different type theories [2], [1] based on modal logic in which expressions (possibly with binding) live in the functional space $A \rightarrow B$ while general functions (for case analysis and iteration) live in the full functional space $\Box A \rightarrow B$. These works give a possible answer to the problem of extending the Edinburgh Logical Framework, which is well suited for describing expressions with binding, with recursion and induction principles internalized in the logic (as is done in the Calculus of Inductive Constructions). However, extending these systems to dependent types seems to be difficult (see [28], where an initial attempt was given).
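The separation between the two function spaces can be sketched with typing signatures for an object type $\mathsf{tm}$ of expressions (the constant names below are illustrative, not taken from the cited papers):

```latex
% Binders are encoded in the weak space tm -> tm:
\mathsf{lam} \;:\; (\mathsf{tm} \rightarrow \mathsf{tm}) \rightarrow \mathsf{tm}
% while case analysis and iteration live in the full (modal) space:
\mathsf{iter} \;:\; (\Box\,\mathsf{tm} \rightarrow B) \rightarrow \mathsf{tm} \rightarrow B
```

The modality $\Box$ keeps the two spaces apart: functions used only for binding cannot perform case analysis on their argument, which is what makes the representation of syntax with binders adequate.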

The LINC logic of [57] appears to be a good meta-logical setting for proving theorems about such logical specifications. The three key ingredients of LINC can be described as follows.

First, LINC is an intuitionistic logic whose provability is described similarly to Gentzen's LJ calculus [29]. Quantification at higher-order types (but not at predicate types) is allowed, and terms are simply typed $\lambda$-terms modulo $\beta\eta$-equivalence. This core logic provides support for $\lambda$-tree syntax, a particular approach to higher-order abstract syntax. Considering a classical-logic extension of LINC is also of some interest, as is an extension allowing quantification at predicate types.
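The idea behind $\lambda$-tree syntax can be illustrated outside of logic programming as well. The hypothetical Python sketch below (class and function names are ours, purely for illustration) represents object-level binders by host-language functions, so that substitution is just function application and capture avoidance is inherited from the host:

```python
# A sketch of higher-order abstract syntax: object-level binders are
# meta-level (Python) functions, so beta-reduction needs no explicit,
# capture-avoiding substitution machinery.

class Var:
    def __init__(self, name): self.name = name
class App:
    def __init__(self, f, a): self.f, self.a = f, a
class Lam:
    def __init__(self, body): self.body = body  # body :: term -> term

def show(t, depth=0):
    """Print a term, inventing fresh names x0, x1, ... for binders."""
    if isinstance(t, Var):
        return t.name
    if isinstance(t, App):
        return f"({show(t.f, depth)} {show(t.a, depth)})"
    x = Var(f"x{depth}")
    return f"(\\{x.name}. {show(t.body(x), depth + 1)})"

def beta(t):
    """Beta-reduce at the root: substitution is function application."""
    if isinstance(t, App) and isinstance(t.f, Lam):
        return t.f.body(t.a)
    return t

identity = Lam(lambda x: x)
print(show(App(identity, Var("z"))))        # ((\x0. x0) z)
print(show(beta(App(identity, Var("z")))))  # z
```

This is only an analogy: in LINC the binder space is a weak, logic-level function space rather than the full space of Python functions, which is exactly what makes reasoning over such encodings possible.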

Second, LINC incorporates the proof-theoretic notion of definition (also called fixed points), a simple and elegant device for extending a logic with the if-and-only-if closure of a logic specification and for supporting inductive and co-inductive reasoning over such specifications. This notion of definition was developed by Hallnäs and Schroeder-Heister [35] and, independently, by Girard [31]. Later, McDowell, Miller, and Tiu made substantial extensions to our understanding of this concept [40], [7], [45]. Tiu and Momigliano [46], [57] have also shown how to modify the notion of definition to support induction and co-induction in the sequent calculus.
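The fixed-point reading of a definition can be made concrete on a small example. Taking the Horn clauses "even 0" and "even (s (s N)) :- even N", their if-and-only-if closure is the least fixed point of a monotone operator on sets of numbers; the sketch below (our own, purely illustrative code) computes it over a finite fragment of the naturals:

```python
# The if-and-only-if closure of the clauses
#     even 0.    even (s (s N)) :- even N.
# read as the least fixed point of one unfolding step, computed over a
# finite (illustrative) fragment of the natural numbers.

DOMAIN = range(20)

def step(evens):
    """One unfolding of the definition's clauses."""
    return {0} | {n + 2 for n in evens if n + 2 in DOMAIN}

def lfp(f, start=frozenset()):
    """Iterate a monotone operator to its least fixed point."""
    cur = start
    while True:
        nxt = frozenset(f(cur))
        if nxt == cur:
            return cur
        cur = nxt

EVEN = lfp(step)
print(sorted(EVEN))  # [0, 2, 4, ..., 18]
```

Inductive reasoning over the definition corresponds to the least fixed point computed here; the co-inductive reading would instead take the greatest fixed point of the same operator.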

Third, LINC contains a new (third) logical quantifier $\nabla$ (nabla). After several attempts to reason about logic specifications without using this new quantifier [39], [41], [7], it became clear that when the object-logic supports $\lambda$-tree syntax, the generic judgment [45], [9] and its associated quantifier can play a strong and declarative role in reasoning. This new quantifier helps capture internal (intensional) reasons for a judgment to hold generically, in contrast to the universal judgment, which holds for external (extensional) reasons. Another important observation about $\nabla$ is that, given a logic specification that is essentially a collection of Horn clauses (that is, with no uses of negation in the specification), there is no distinction to be made between $\forall$ and $\nabla$ in the premises (bodies) of semantic definitions. In the presence of negations and implications, a difference between these two quantifiers does arise [9].
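A standard example from the literature on $\nabla$ separates the two quantifiers once negation is available: two distinct generically quantified variables are provably distinct, while the universally quantified analogue is not provable.

```latex
% Distinct generic variables are provably different:
\vdash\; \nabla x\, \nabla y\, \bigl( (x = y) \supset \bot \bigr)
% but the universal analogue is not provable, since a model with a
% one-element domain satisfies \forall x\,\forall y\,(x = y):
\not\vdash\; \forall x\, \forall y\, \bigl( (x = y) \supset \bot \bigr)
```

Intuitively, $\nabla x\,\nabla y$ introduces two fresh, syntactically distinct generic names, so their equality is internally refutable, whereas $\forall$ only quantifies over whatever inhabitants the domain happens to have.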

