Parsifal is an INRIA-Futurs project-team located at the CNRS laboratory LIX (Laboratoire d'Informatique de l'Ecole Polytechnique). Two additional people joined the team at the end of 2007: Stephane Lengrand, CNRS CR2 (1 January 2008), and Matteo Capelletti, INRIA postdoc (1 December 2007).

The aim of the Parsifal team is to develop and exploit the theories of proofs and types to support the specification and verification of computer systems. To achieve these goals, the team works on several levels.

The team has expertise in *proof theory* and *type theory* and conducts basic research in these fields: in particular, the team is developing results that help with the automation of deduction and with the formal manipulation and communication of proofs.

Based on experience with computational systems and theoretical results, the team *designs* new logical principles, new proof systems, and new theorem proving environments.

Some of these new designs are appropriate for *implementation*, and the team develops prototype systems to help validate basic research results.

By using the implemented tools, the team can develop examples of specification and verification to test the success of the design and to help suggest new logical and proof-theoretic principles that need to be developed in order to improve one's ability to specify and verify.

The foundational work of the team focuses on the proof theory of classical, intuitionistic, and linear logics, making use primarily of sequent calculus and deep inference formalisms. A major challenge for the team is reasoning about computational specifications that are written in a relational style: this challenge is being addressed with the introduction of new approaches to dealing with induction, co-induction, and generic judgments. Another important challenge for the team is the development of normal forms of deduction: such normal forms can be used to greatly enhance the automation of search (one only needs to search for normal forms) and the communication of proofs (and proof certificates) for validation.

The principal application areas of concern for the team currently are functional programming (e.g., the λ-calculus), concurrent computation (e.g., the π-calculus), interactive computation (e.g., games), and biological systems.

The team has released the Bedwyr model checker that can be used to work with linguistic expressions involving bound variables in a completely declarative fashion. The system comes with several examples, including a declarative and direct implementation of bisimulation checking for the finite π-calculus. The team has also produced a number of papers on extending and applying the important notion of *focusing proofs*.

In the specification of computational systems, logics are generally used in one of two approaches. In the
*computation-as-model* approach, computations are encoded as mathematical structures, containing such items as nodes, transitions, and state. Logic is used in an external sense to make statements *about* those structures. That is, computations are used as models for logical expressions. Intensional operators, such as the modals of temporal and dynamic logics or the triples of Hoare
logic, are often employed to express propositions about the change in state. This use of logic to represent and reason about computation is probably the oldest and most broadly successful use
of logic in computation.

The *computation-as-deduction* approach uses pieces of logic's syntax (such as formulas, terms, types, and proofs) directly as elements of the specified computation. In this much more rarefied setting, there are two rather different approaches to how computation is modeled.

The *proof normalization* approach views the state of a computation as a proof term and the process of computing as normalization (known variously as β-reduction or cut-elimination). Functional programming can be explained using proof normalization as its theoretical basis, and this view has been used to justify the design of new functional programming languages.
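As a concrete, entirely illustrative sketch of this view (not team code), the following OCaml fragment implements β-reduction over de Bruijn-indexed λ-terms: the "state" is a proof term, and computing is literally normalizing it.

```ocaml
(* Illustrative sketch: computation as proof normalization.
   Lambda terms use de Bruijn indices to avoid variable capture. *)
type term =
  | Var of int              (* de Bruijn index *)
  | Lam of term             (* abstraction *)
  | App of term * term      (* application *)

(* Shift free variables >= cutoff c by d. *)
let rec shift d c = function
  | Var k -> if k >= c then Var (k + d) else Var k
  | Lam t -> Lam (shift d (c + 1) t)
  | App (t, u) -> App (shift d c t, shift d c u)

(* Substitute s for variable j in the given term. *)
let rec subst j s = function
  | Var k -> if k = j then s else Var k
  | Lam t -> Lam (subst (j + 1) (shift 1 0 s) t)
  | App (t, u) -> App (subst j s t, subst j s u)

(* Normal-order reduction to normal form (may diverge on terms
   with no normal form, just as beta-reduction may). *)
let rec normalize = function
  | App (t, u) ->
      (match normalize t with
       | Lam body -> normalize (shift (-1) 0 (subst 0 (shift 1 0 u) body))
       | t' -> App (t', normalize u))
  | Lam t -> Lam (normalize t)
  | Var k -> Var k

(* (\x. \y. x) applied to (\z. z) normalizes to \y. \z. z *)
let id = Lam (Var 0)
let k_comb = Lam (Lam (Var 1))
let result = normalize (App (k_comb, id))
```

Here normalization plays the role that cut-elimination plays in proof theory: the dynamics of the computation is the rewriting of the proof term itself.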

The *proof search* approach views the state of a computation as a sequent (a structured collection of formulas) and the process of computing as the process of searching for a proof of a sequent: the changes that take place in sequents capture the dynamics of computation. Logic programming can be explained using proof search as its theoretical basis, and this view has been used to justify the design of new logic programming languages.
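Again purely as an illustration (a sketch, not team code), the proof search reading can be seen in a tiny goal-directed prover for propositional Horn clauses in the style of pure Prolog: the "state" is the list of open goals, and computing is backtracking search for a proof.

```ocaml
(* Illustrative sketch: computation as proof search over
   propositional Horn clauses.  A fact is a clause with empty body. *)
type clause = { head : string; body : string list }

let prove program goal =
  let rec solve goals depth =
    if depth = 0 then false           (* crude bound to guarantee termination *)
    else match goals with
      | [] -> true                    (* all goals proved: sequent closed *)
      | g :: rest ->
          (* try each clause whose head matches g, backtracking on failure *)
          List.exists
            (fun c -> c.head = g && solve (c.body @ rest) (depth - 1))
            program
  in
  solve [goal] 1000

(* A hypothetical reachability program: path_ac holds if both
   path_ab and path_bc do. *)
let program =
  [ { head = "path_ab"; body = [] };
    { head = "path_bc"; body = [] };
    { head = "path_ac"; body = ["path_ab"; "path_bc"] } ]

let ok = prove program "path_ac"
let no = prove program "path_ad"
```

The changes to the goal list as clauses are applied are exactly the "changes that take place in sequents" mentioned above, restricted to an atomic, propositional setting.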

The divisions proposed above are informal and suggestive: such a classification is helpful in pointing out the different sets of concerns represented by these two broad approaches (reduction, confluence, etc., versus unification, backtracking search, etc.). Of course, a real advance in computational logic might allow us to merge or reorganize this classification.

Although type theory has been designed essentially to fill the gap between these two kinds of approaches, it appears that each system implementing type theory up to now follows only one of them. For example, the Coq system, implementing the Calculus of Inductive Constructions (CIC), uses proof normalization, while the Twelf system, implementing the Edinburgh Logical Framework (LF, a sub-system of CIC), follows the proof search approach (normalization appears in LF, but it is much weaker than in, say, CIC).

The Parsifal team works on both the proof normalization and proof search approaches to the specification of computation.

Once a computational system (e.g., a programming language, a specification language, a type system) is given a logical (relational) specification, how do we reason about the formal properties of such a specification? New results in proof theory are being developed to help answer this question.

The traditional architecture for systems designed to support reasoning about the formal correctness of specification and programming languages can generally be characterized at a high level as follows:
**First: Implement mathematics.** This often involves choosing between a classical or constructive (intuitionistic) foundation, as well as choosing an abstraction mechanism (e.g., sets or functions). The Coq and NuPRL systems, for example, have chosen intuitionistically typed λ-calculus for their approach to the formalization of mathematics. Systems such as HOL use classical higher-order logic, while systems such as Isabelle/ZF use classical set theory.
**Second: Reduce programming correctness problems to mathematics.** Thus, data structures, states, stacks, heaps, invariants, etc., are all represented as various kinds of mathematical objects. One then reasons directly on these objects using standard mathematical techniques (induction, primitive recursion, fixed points, well-founded orders, etc.).

Such an approach to formal methods is, of course, powerful and successful. There is, however, growing evidence that many proof search specifications that rely on such intensional aspects of logic as bindings and resource management (as in linear logic) are not served well by encoding them into the traditional data structures found in such systems. In particular, the resulting encoding can often be complicated enough that the *essential logical character* of a problem is obfuscated.

Despeyroux, Pfenning, Leleu, and Schürmann proposed two different type theories based on modal logic in which expressions (possibly with binding) live in a restricted (parametric) function space, while general functions (for case and iteration reasoning) live in the full function space. These works give a possible answer to the problem of extending the Edinburgh Logical Framework, which is well suited for describing expressions with binding, with recursion and induction principles internalized in the logic (as is done in the Calculus of Inductive Constructions). However, extending these systems to dependent types seems to be difficult (an initial attempt has been given).

The LINC logic appears to be a good meta-logical setting for proving theorems about such logical specifications. The three key ingredients of LINC can be described as follows.

First, LINC is an intuitionistic logic for which provability is described similarly to Gentzen's LJ calculus. Quantification at higher-order types (but not predicate types) is allowed, and terms are simply typed λ-terms considered up to αβη-equivalence. This core logic provides support for *λ-tree syntax*, a particular approach to *higher-order abstract syntax*. Considering a classical-logic extension of LINC is also of some interest, as is an extension allowing for quantification at predicate type.

Second, LINC incorporates the proof-theoretical notion of *definition* (also called *fixed points*), a simple and elegant device for extending a logic with the if-and-only-if closure of a logic specification and for supporting inductive and co-inductive reasoning over such specifications. This notion of definition was developed by Hallnäs and Schroeder-Heister and, independently, by Girard. Later, McDowell, Miller, and Tiu made substantial extensions to our understanding of this concept. Tiu and Momigliano have also shown how to modify the notion of definition to support induction and co-induction in the sequent calculus.
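The inductive reading of a definition can be illustrated (again as a sketch, unrelated to the team's implementations) by computing the least fixed point of a Horn specification bottom-up, in the style of the immediate-consequence operator from logic programming semantics:

```ocaml
(* Illustrative sketch: a definition read as a least fixed point.
   We iterate a one-step consequence operator over a finite set of
   hypothetical atoms until nothing new is derivable. *)
module S = Set.Make (String)

type clause = { head : string; body : string list }

(* One application of the immediate-consequence operator. *)
let step program facts =
  List.fold_left
    (fun acc c ->
       if List.for_all (fun a -> S.mem a facts) c.body
       then S.add c.head acc
       else acc)
    facts program

(* Iterate to the least fixed point; terminates because the
   set of atoms mentioned in the program is finite. *)
let rec lfp program facts =
  let facts' = step program facts in
  if S.equal facts facts' then facts else lfp program facts'

(* A hypothetical inductive definition written as Horn clauses. *)
let program =
  [ { head = "even0"; body = [] };
    { head = "even2"; body = ["even0"] };
    { head = "even4"; body = ["even2"] } ]

let derivable = lfp program S.empty
```

The set `derivable` is exactly the if-and-only-if closure of the specification: an atom is in it precisely when the definition forces it to hold. Co-inductive reading would instead start from all atoms and iterate downward to a greatest fixed point.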

Third, LINC contains a new (third) logical quantifier ∇ (nabla). After several attempts to reason about logic specifications without using this new quantifier, it became clear that when the object-logic supports λ-tree syntax, the *generic judgment* and its associated quantifier could play a strong and declarative role in reasoning. This new quantifier helps capture internal (intensional) reasons for a judgment to hold generically, in contrast to the universal judgment, which holds for external (extensional) reasons. Another important observation about ∇ is that, given a logic specification that is essentially a collection of Horn clauses (that is, with no uses of negation), there are no distinctions to be made between ∀ and ∇ in the premise (body) of definitions. In the presence of negations and implications, a difference between these two quantifiers does arise.

There is a great deal of non-determinism present in the search for proofs (in the sense of automated deduction). The non-determinism involved with generating lemmas is one extreme: when attempting to prove one formula, it is possible to generate a potential lemma, attempt to prove it, and then use it to prove the original formula. In general, there are no clues as to what is a useful lemma to construct. The famous cut-elimination theorem says that it is possible to prove theorems without using lemmas (that is, by restricting to *cut-free* proofs). Of course, cut-free proofs are not appropriate for all domains of computational logic since they can be vastly larger than proofs containing cuts. Even when restricting to cut-free proofs makes sense (as in logic programming, model checking, and some areas of automated reasoning), the construction of cut-free proofs still contains a great deal of non-determinism.

Structuring the non-deterministic choices within the search for cut-free proofs has received increasing attention in recent years with the development of *focusing proof systems*. In such proof systems, there is a clear separation between non-deterministic choices for which no backtracking is required ("don't care" non-determinism) and choices where backtracking may be required ("don't know" non-determinism). Furthermore, when a backtrackable choice is made, that choice actually extends over a series of inference rules, representing a "focus" during the construction of a proof. One focusing-style proof system was developed within the early work on providing proof-theoretic foundations for logic programming via *uniform proofs*. The first comprehensive analysis of focusing proofs was done in linear logic by Andreoli. There it was shown that proofs are constructed in two alternating phases: a *negative phase* in which don't-care non-determinism is resolved and a *positive phase* in which a focused sequence of don't-know non-deterministic choices is applied.
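The two kinds of non-determinism can be made concrete with a deliberately tiny example (a sketch only, far from a full focused calculus): in a fragment where conjunction on the right is invertible and disjunction on the right forces a real choice, the prover applies the former eagerly and only backtracks over the latter.

```ocaml
(* Illustrative sketch of the two phases of focused search.
   And's right rule is invertible (don't-care: decompose at once);
   Or's right rule is a genuine choice (don't-know: may backtrack). *)
type formula =
  | Atom of string
  | And of formula * formula
  | Or of formula * formula

let rec prove atoms goal =
  match goal with
  | And (a, b) ->
      (* don't-care non-determinism: no backtracking needed *)
      prove atoms a && prove atoms b
  | Or (a, b) ->
      (* don't-know non-determinism: commit to the left disjunct,
         backtrack to the right on failure *)
      prove atoms a || prove atoms b
  | Atom x -> List.mem x atoms

let g = And (Or (Atom "p", Atom "q"), Or (Atom "r", Atom "q"))
let yes = prove ["q"] g   (* "q" can be chosen in both disjunctions *)
let no = prove ["p"] g    (* the second disjunction has no proof *)
```

A genuine focused system additionally groups consecutive don't-know choices into a single "focus" on one formula; this fragment only separates the two flavors of choice.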

Since a great deal of automated deduction (in the sense of logic programming, type inference, and model checking) is done in intuitionistic and classical logic, there is a strong need to have comprehensive focusing results for these logics as well. In linear logic, the assignment of inference rules to the positive and negative phases is canonical (only the treatment of atomic formulas is left as a non-canonical choice). Within intuitionistic and classical logic, a number of inference rules do not have canonical treatments. Instead, several focusing-style proof systems have been developed one-by-one for these logics. A general scheme for putting all of these choices together has recently been developed within the team and will be described below.

There has been a good deal of concern in the proof theory literature on the nature of proofs as objects. An example of such a concern is the question as to whether or not two proofs should be considered equal. Such considerations were largely of interest to philosophers and logicians. Computer scientists started to get involved with the structure of proofs more generally in essentially two ways. The first is the extraction of algorithms from constructive proofs. The second is the use of proof-like objects to help make theorem proving systems more sophisticated (proofs could be stored, edited, and replayed, for example).

It was not until the development of *proof carrying code (PCC)* that computer scientists from outside the theorem proving discipline took a particular interest in having proofs as actual data structures within computations. In the PCC setting, proofs of safety properties need to be communicated from one host to another: the existence of the proof means that one does not need to trust the correctness of a piece of software (for example, that it is not a virus). Given this need to produce, communicate, and check proofs, the actual structure and nature of proofs-as-objects becomes increasingly important.

Often the term *proof certificate* (or just *certificate*) is used to refer to a data structure that can be used to communicate a proof so that it can be checked. A number of proposals have been made for the possible structure of such certificates: for example, proof scripts in theorem provers such as Coq are frequently used. Other notions include oracles and fixed points.

The earliest papers on PCC made use of the logic programming systems Twelf and λProlog. It seems that the setting of logic programming is a natural one for exploring the structure of proofs and the trade-offs between proof size and the need for run-time proof search. For example, there is a general trade-off between the size of a proof object and the amount of search one must do to verify that an object does, in fact, describe a proof. Exploring such trade-offs should be easy and natural in the proof search setting, where such search is automated. In particular, focused proof systems should be a large component of such an analysis.

Deep inference is a novel methodology for presenting deductive systems. Unlike traditional formalisms such as the sequent calculus, it allows rewriting of formulas deep inside arbitrary contexts. This new freedom in designing inference rules creates a richer proof theory. For example, systems using deep inference admit a greater variety of normal forms for proofs than sequent calculus or natural deduction systems. Another advantage of deep inference systems is their close relationship to categorical proof theory: thanks to the deep inference design, one can read off the morphisms directly from the derivations, with no need for a counterintuitive translation.

One reason for using categories in proof theory is to give a precise algebraic meaning to the identity of proofs: two proofs are the same if and only if they give rise to the same morphism in the category. Finding the right axioms for the identity of proofs in classical propositional logic had long been thought to be impossible, due to "Joyal's paradox". For the same reasons, it was long believed that it is not possible to have proof nets for classical logic. Nonetheless, Lutz Straßburger and François Lamarche provided proof nets for classical logic and analyzed the category theory behind them. Their subsequent work gives a deeper analysis of the category-theoretical axioms for proof identification in classical logic, with particular focus on the so-called *medial rule*, which plays a central role in the deep inference deductive system for classical logic.

The following research problems are investigated by members of the Parsifal team:

Find deep inference systems for richer logics. This is necessary for making the proof-theoretic results of deep inference accessible to the applications described in the previous sections of this report.

Investigate the possibility of focusing proofs in deep inference. As described before, focusing is a way to reduce the non-determinism in proof search. However, it is well investigated only for the sequent calculus. In order to apply deep inference in proof search, we need to develop a theory of focusing for deep inference.

Use the results on deep inference to find new axiomatic descriptions of categories of proofs for various logics. So far, this is well understood only for linear and intuitionistic logics. Already for classical logic there is no commonly accepted notion of proof category. How logics like LINC can be given a categorical axiomatisation is completely open.

Proof nets are abstract (graph-like) presentations of proofs such that all "trivial rule permutations" are quotiented away. More generally, we investigate combinatoric objects and correctness criteria for studying proofs independently from syntax. Ideally the notion of proof net should be independent from any syntactic formalism. But due to the almost absolute monopoly of the sequent calculus, most notions of proof nets proposed in the past were formulated in terms of their relation to the sequent calculus. Consequently we could observe features like “boxes” and explicit “contraction links”. The latter appeared not only in Girard's proof nets for linear logic but also in Robinson's proof nets for classical logic. In this kind of proof nets every link in the net corresponds to a rule application in the sequent calculus.

The concept of deep inference allows the design of entirely new kinds of proof nets. Recent work by Lamarche and Straßburger has extended the theory of proof nets for multiplicative linear logic to multiplicative linear logic with units. This seemingly small step (just adding the units) had long been an open problem, and the solution was found only by consistently exploiting the new insights coming from deep inference. A proof net no longer just mimics the sequent calculus proof tree; rather, it is an additional graph structure put on top of the formula tree (or sequent forest) of the conclusion. The work on proof nets within the team is focused on the following two directions:

Extend the work of Lamarche and Straßburger to larger fragments of linear logic, containing the additives, the exponentials, and the quantifiers.

Find (for classical logic) a notion of proof nets that is deductive, i.e., that can effectively be used for doing proof search. An important property of deductive proof nets is that correctness can be checked in linear time. For the classical-logic proof nets of Lamarche and Straßburger, this takes exponential time (in the size of the net). We hope that eventually deductive proof nets will provide a "bureaucracy-free" formalism for proof search.

Systems in molecular biology, such as those for regulatory gene networks or protein-protein interactions, can be seen as state transition systems that have an additional notion of *rate* of change. Methods for specifying such systems are an active research area. However, to our knowledge, no logic (more powerful than boolean logic) has been proposed so far to both specify and reason about these systems.

One current and prominent method uses process calculi, such as the stochastic π-calculus, that have a built-in notion of rate. Process calculi, however, have the deficiency that reasoning about the specifications is external to the specifications themselves, usually depending on simulations and trace analysis.

Kaustuv Chaudhuri and Joëlle Despeyroux are considering the problem of giving a *logical* instead of a *process-based* treatment both to specify and to reason about biological systems in a uniform linguistic framework. The logic they have proposed, called HyLL, is an extension of (intuitionistic) linear logic with a modal situated truth that may be reified by means of the satisfaction ("at") operator from *hybrid logic*. A variety of semantic interpretations can be given to this logic, including rates and delays of formation.

The expressiveness of the logic has been demonstrated on small examples and first meta-theoretical properties of the logic have been proven. Considerable work needs to be done before this proposal succeeds as a natural logical framework for systems biology. Remaining work mainly includes the description of larger examples (requiring more specifications of usual biological notions), and automating reasoning about the specifications. It also includes further studies of the meta-theoretical properties of the logic, and of course eventual extensions of the logic (for example to get branching semantics).

When an operational semantics is presented as inference rules, it can often be encoded naturally as a logic program, which means that it is usually easy to animate such semantic specifications in direct and natural ways. Given the natural duality between finite success and finite failure (given proof-theoretic foundations in earlier papers), it is also possible to describe model checking systems from a proof-theoretic setting.
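To illustrate this style of animation on a toy example (a sketch, not part of Bedwyr), the following OCaml fragment encodes the one-step transition rules of a hypothetical CCS-like mini-calculus directly, and checks a "may" property by exhaustively exploring the finite behaviors:

```ocaml
(* Illustrative sketch: operational semantics as directly executable
   inference rules, with may-testing via exhaustive finite search. *)
type proc =
  | Stop
  | Act of string * proc        (* a.P : perform action a, become P *)
  | Choice of proc * proc       (* P + Q : nondeterministic choice *)

(* The one-step transition relation, read off the inference rules. *)
let rec steps = function
  | Stop -> []
  | Act (a, p) -> [ (a, p) ]
  | Choice (p, q) -> steps p @ steps q

(* May the process perform the given trace of actions? *)
let rec may p trace =
  match trace with
  | [] -> true
  | a :: rest ->
      List.exists (fun (b, q) -> b = a && may q rest) (steps p)

let p = Choice (Act ("a", Act ("b", Stop)), Act ("c", Stop))
let t1 = may p ["a"; "b"]
let t2 = may p ["a"; "c"]
```

Finite failure is as informative as finite success here: `may p ["a"; "c"]` fails only after the whole finite behavior of `p` has been explored, which is the "must not" reading mentioned above.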

One application area for this work is, thus, the development of model checking software that can work on linguistic expressions that may contain bound variables. Specific applications could be toward checking bisimulation of π-calculus and spi-calculus expressions.

More about a prototype model checker in this area is described below.

There has been increasing interest in the international community in the use of formal methods to provide proofs of properties of programs and entire programming languages. Proof carrying code is one such example. Two more areas in which the team's efforts should have important applications are the following two challenges.

Tony Hoare's Grand Challenge titled “Verified Software: Theories, Tools, Experiments” has as a goal the construction of “verifying compilers” to support a vision of a world where programs would only be produced with machine-verified guarantees of adherence to specified behavior. Guarantees could be given in a number of ways: proof certificates being one possibility.

When one looks at systems of biochemical reactions in molecular biology, such as gene-protein and protein-protein interaction systems, one observes two basic phenomena: state change (where, for example, two or more molecules interact to form other molecules) and delay. Each of the state changes has an associated delay before the state change is observed, or, more precisely, a probability distribution over possible delays: the rate of the change. A system of biochemical reactions can therefore be seen as a stochastic computation.

The HyLL logic proposed this year by Kaustuv Chaudhuri and Joëlle Despeyroux is a first attempt at providing a logical framework for both specifying and reasoning about such computations.

In order to provide some practical validation of the formal results mentioned above regarding the logic LINC and the ∇ quantifier, we picked a small but expressive subset of that logic for implementation. While that subset does not involve the proof rules for induction and co-induction (which are difficult to automate), it does allow for model-checking style computation. During 2006 and 2007, the Parsifal team, with contributions from our close colleagues at the University of Minnesota and the Australian National University, designed and implemented the Bedwyr system for doing proof search in that fragment of LINC. This system is organized as an open source project and is hosted on INRIA's GForge server; it has been described in two conference papers. The system, which is implemented in OCaml, has been downloaded about 200 times since it was first released.

Bedwyr is a generalization of logic programming that allows model checking directly on syntactic expressions possibly containing bindings. This system, written in OCaml, is a direct implementation of two recent advances in the theory of proof search.

It is possible to capture both finite success and finite failure in a sequent calculus. Proof search in such a proof system can capture both may and must behavior in operational semantics.

Higher-order abstract syntax is directly supported using term-level λ-binders, the ∇-quantifier, higher-order pattern unification, and explicit substitutions. These features allow reasoning directly on expressions containing bound variables.
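The core idea behind higher-order abstract syntax can be conveyed with a small OCaml sketch: object-level binders are represented by meta-level functions, so substitution is capture-avoiding by construction. (Bedwyr's actual representation is different, using de Bruijn terms and explicit substitutions; this only illustrates the general idea.)

```ocaml
(* Illustrative sketch of higher-order abstract syntax in OCaml:
   the object-level binder is a meta-level function, so applying
   it performs capture-avoiding substitution for free. *)
type tm =
  | App of tm * tm
  | Lam of (tm -> tm)    (* the binder is an OCaml function *)
  | Con of string        (* constants, for observing results *)

(* A single beta step: substitution under a binder is just
   function application at the meta level. *)
let beta = function
  | App (Lam f, arg) -> f arg
  | t -> t

let id = Lam (fun x -> x)
let fst_ = Lam (fun x -> Lam (fun _ -> x))

let r1 = beta (App (id, Con "c"))
let r2 = beta (App (beta (App (fst_, Con "a")), Con "b"))
```

With this representation one never writes a substitution function or renames variables: the meta-language's binding discipline does that work, which is the economy that λ-tree syntax exploits systematically.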

Bedwyr has served well to validate the underlying theoretical considerations while at the same time providing a useful tool for exploring some applications. The distributed system comes with several example applications, including the finite π-calculus (operational semantics, bisimulation, trace analysis, and modal logics), the spi-calculus (operational semantics), value-passing CCS, the λ-calculus, winning strategies for games, and various other model checking problems.

During the summer of 2007, Baelde (LIX PhD student) and visiting intern Zachery Snow (PhD student from the University of Minnesota) built a prototype theorem prover, called *Taci*, that we are currently using "in-house" to experiment with a number of large examples and a few different logics. We hope to make the tool available once we have settled on the exact logic that it should support.

The operational semantics of programming and specification languages is often presented via inference rules, and these can generally be mapped into logic programming-like clauses. Such logical encodings of operational semantics can be surprisingly declarative if one uses logics that directly account for term-level bindings and for resources, such as are found in linear logic. Traditional theorem proving techniques, such as unification and backtracking search, can then be applied to animate operational semantic specifications. Of course, one wishes to go a step further than animation: using logic to encode computation should facilitate formal reasoning directly with semantic specifications. Miller has outlined an approach to reasoning about logic specifications that involves viewing logic specifications as theories in an object-logic and then using a meta-logic to reason about properties of those object-logic theories.

The Bedwyr system (described in the Software section) has been used to validate at least some of these design ideas. These results are encouraging, and the team plans to develop these themes to a greater extent.

The effort to implement Bedwyr provided the team with a number of challenges, including how best to implement λ-terms to effectively support unification, backtracking search, and the "flipping" of variable status (internally, Bedwyr has two provers, and the statuses of variables in these two systems are essentially dual). Most of these implementation details are documented directly in the code and, in part, in the Bedwyr user manual.

When working with proof search and logic programming within linear logic, the completeness of *focusing proofs*, due to Andreoli, provides a critical normal form for proofs in all of linear logic. Chaudhuri and his colleagues have applied the focusing proof system of linear logic to forward and backward reasoning systems. Miller and Saurin have provided a new and modular proof of focusing for linear logic. This proof allows for direct extensions: for example, the focusing result in Baelde and Miller's paper was based on this new modular proof.

In contrast, a flexible and general definition of focusing proofs for intuitionistic and classical logic had not been provided. Although a number of focusing-style proof systems had been defined for these two logics, a general framework to relate all of them was needed. Liang and Miller provided just such a framework. In particular, the proof systems LJF and LKF were introduced; these provide a great deal of flexibility in describing how focusing can be done in these two logics. Notably, polarities can be mixed (a result that was a challenge to obtain for intuitionistic logic). Many other focusing systems for intuitionistic logic can be mapped compositionally into LJF via the simple insertion of "delay" operators (simple logical expressions that stop the focusing process).

To help validate our efforts in exploring focusing proof systems, the team has looked at various applications of such proof-theoretic results. Miller and Nigam have shown how focusing proof systems can be exploited to provide a declarative approach to the use of tables in proof search (addressing the issue of lemma *reuse* instead of *reproof* in the case of atomic lemmas). They also showed how a table of lemmas can be used to give a new form of proof certificate.

There appears to be a very close connection between focusing proof systems and certain kinds of game semantics for proof search. The team has been working on understanding a "neutral approach to proof and refutation". Miller and Saurin attempted to use game semantics to provide such a neutral approach. Their approach worked well for additive games (which are essentially Hintikka games). If the nature of the multiplicatives was greatly restricted (to simple "guards"), this game-theoretic approach again worked well. It was unclear, however, how to extend that effort to handle general multiplicative connectives. This past year, Olivier Delande has been developing a solution to this problem that involves several innovations over the earlier notion of games. In particular, since games are no longer determinate (games might end in a draw), game playing must continue even after one player has failed (in order to find out whether the other player wins or draws). A paper on this work is planned for the end of 2007.

Systems like the model checker Bedwyr establish properties of a computational system by exploring all of its finite behaviors. As a result, most such systems cannot handle infinite state spaces and, hence, cannot handle the vast majority of computer systems. Baelde and Miller have been exploring a proof theory for induction and coinduction within linear logic: given the cut-elimination and focusing results that they have obtained, it should be possible to develop effective tools to help automate proofs that require induction and coinduction, at least when the required invariants are easy to guess.

Exploring the ideas that allowed including the units in the theory of proof nets, Lutz Straßburger developed a new theory of proof nets that also includes the quantifiers, without relying on boxes as in Girard's original work. The results are not yet published, but are available from Straßburger's webpage.

Geometric or combinatoric correctness criteria are important for studying proofs independently from syntax. Lutz Straßburger gives such a criterion for the medial rule, which is central to the deep inference presentation of classical logic. This means that there are now two independent approaches toward a notion of proof identification: first, via algebraic considerations, i.e., categories, and second, via combinatorial or graph-theoretical considerations, i.e., proof nets and correctness criteria.

Hybrid languages are modal languages that can use formulas to refer to specific points in a model. A fast-growing community has formed around them because of their many applications. In , Lutz Straßburger presents a deep inference deductive system for hybrid logic. Thus, the rich proof theory associated with deep inference is made available for hybrid logics, which have so far mainly been studied via model theory. The HyLL logic proposed by Kaustuv Chaudhuri and Joëlle Despeyroux for systems biology (see below) also provides a proof-theoretic presentation of hybrid logics, in a traditional natural deduction and sequent calculus style.

Kaustuv Chaudhuri and Joëlle Despeyroux have proposed a logic, called HyLL, for reasoning in and about reactions in systems and molecular biology. Such reaction systems can be seen as state transition systems in which the transitions are equipped with a stochastic rate function. In HyLL, these transitions are linear implications (from linear logic). The rate is determined using a modal judgement that makes the rate of formation explicit. This modal judgement is then reified by means of the connective from hybrid logic. They have demonstrated the use of HyLL on the repressilator, an example of a regulatory gene network. The expressivity of HyLL has been shown by giving an embedding of the stochastic π-calculus. A paper has been submitted and a technical report is in preparation.

The λμ-calculus was proposed by M. Parigot as a term calculus inspired by classical logic (similar to the way that the λ-calculus is inspired by intuitionistic logic). Later, David and Py proved that the separation property fails for the λμ-calculus, leading to the need to repair the calculus. In , Saurin showed a way to regain the separation theorem via a natural extension of the λμ-calculus, the Λμ-calculus.

Static analysis of logic programs can provide useful information for programmers and compilers. Type systems, such as the one in λProlog , have proved valuable during the development of code: type errors often represent program errors that are caught at compile time, when they are easier to find and fix, rather than at runtime, when they are much harder to repair. Static type information also provides valuable documentation of code, since it offers a concise approximation of what the code does.

While this work focuses exclusively on the static analysis of first-order Horn clauses, it does so by computing substitution instances of such Horn clauses that carry them into linear logic. Proofs of the resulting linear logic formulas are then attempted as part of the static analysis.

Work such as this suggests that declarative languages might permit rich kinds of static analysis. This preliminary work provides partial justification for the flexible connections between programming languages and static analysis that are briefly described in .
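The benefit of static typing for logic programs described above can be illustrated with a minimal sketch. This is a hypothetical toy checker (names and signatures invented for illustration, not the analysis developed by the team): declared predicate signatures allow a compiler to reject ill-typed facts before the program ever runs.

```python
# Hypothetical declared signatures for a toy logic program:
# each predicate lists the expected type of each argument position.
signatures = {"age": ("atom", "int"), "parent": ("atom", "atom")}

def infer(term):
    """Crude type inference for ground terms."""
    return "int" if isinstance(term, int) else "atom"

def check_fact(pred, args):
    """Return a list of type errors for one fact (empty list = well typed)."""
    expected = signatures[pred]
    if len(args) != len(expected):
        return [f"{pred}: expected {len(expected)} arguments, got {len(args)}"]
    return [
        f"{pred}: argument {i} has type {infer(arg)}, expected {ty}"
        for i, (arg, ty) in enumerate(zip(args, expected))
        if infer(arg) != ty
    ]

print(check_fact("age", ("alice", 42)))       # [] -- well typed
print(check_fact("age", ("alice", "young")))  # type error caught statically
```

A real system would of course check clauses with variables and perform unification-based inference; the point here is only that the error in the second fact is detected without running any query.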

The ANR blanc project “INFER: Theory and Application of Deep Inference”, coordinated by Lutz Straßburger, was accepted in September 2006. Besides Parsifal, the teams associated with this effort are represented by François Lamarche (INRIA-Loria) and Michel Parigot (CNRS-PPS).

Slimmer stands for
*Sophisticated logic implementations for modeling and mechanical reasoning*; it is an “Équipe Associée” with seed money from INRIA. This project was initially designed to bring together the Parsifal personnel and Gopalan Nadathur's Teyjus team at the University of Minnesota (USA). Separate NSF funding for this effort has also been awarded to the University of Minnesota. We also hope to expand the scope of this project to include other French and non-French sites, in particular Marino Miculan (University of Udine, Italy) and Brigitte Pientka (McGill University, Canada).

Mobius stands for “Mobility, Ubiquity and Security” and is an Integrated Project proposed in response to the call FP6-2004-IST-FETPI. The proposal involves numerous sites in Europe and was awarded in September 2005. This large, European project is coordinated out of INRIA-Sophia.

TYPES is a European project (a coordination action from the IST program) aiming at developing the technology of formal reasoning based on type theory. The project brings together 36 universities and research centers from 8 European countries (France, Italy, Germany, the Netherlands, the United Kingdom, Sweden, Poland, and Estonia). It is the continuation of a series of European projects dating back to 1992. The funding from the present project maintains collaboration within the community by supporting an annual workshop, a few smaller thematic workshops, one summer school, and visits of researchers to one another's labs.

The PAI Amadeus program for collaboration between France and Austria approved the grant “The Realm of Cut Elimination” in November 2006. This grant allows for collaborations between the Parsifal team and the groups of Agata Ciabattoni at the Technische Universität Wien (Austria) and Michel Parigot at CNRS-PPS.

This collaboration between Paris and Bern (Switzerland) aims at exploring questions about the structure and identity of proofs. People involved in the Paris area are Lutz Straßburger, Dale Miller, Alexis Saurin, David Baelde, Michel Parigot, Stéphane Lengrand, and Séverine Maingaud. People involved in Bern are Kai Brünnler, Richard McKinley, and Phiniki Stouppa.

Joëlle Despeyroux co-organized the ACM SIGPLAN-SIGACT international conference on Principles of Programming Languages (POPL) with Martin Hofmann (LMU Munich, Germany). The conference was held in Nice, from 14 to 20 January.

Lutz Straßburger organized a workshop on "The Realm of Cut Elimination" on May 14, 2007 at LIX. This workshop was part of the PAI Amadeus with TU Wien.

Lutz Straßburger organized a workshop on "Theory and Application of Deep Inference" on June 21, 2007 at LIX. This workshop is part of (and partially financed by) the ANR project INFER on "Theory and Application of Deep Inference", and the PAI Germaine De Stael project on "Deep Inference and the Essence of Proofs". It served as first meeting of the participants of these projects.

Dale Miller has the following editorial duties.

*Theory and Practice of Logic Programming*. Member of the Advisory Board since 1999. Cambridge University Press.

*ACM Transactions on Computational Logic (ToCL)*. Area editor for
*Proof Theory* since 1999. Published by ACM.

*Journal of Functional and Logic Programming*. Permanent member of the Editorial Board since 1996. MIT Press.

*Journal of Logic and Computation*. Associate editor since 1989. Oxford University Press.

Dale Miller was a program committee member for the following conferences.

LFMTP'07: Workshop on Logical Frameworks and Meta-Languages: Theory and Practice, August, Bremen, Germany.

WoLLIC'07: Fourteenth Workshop on Logic, Language, Information and Computation, Rio de Janeiro, 2-5 July.

CADE-21: 21st Conference on Automated Deduction, 17-20 July, Bremen, Germany.

David Baelde and Dale Miller were invited speakers at the ICMS Workshop on Mathematical Theories of Abstraction, Substitution and Naming in Computer Science, Edinburgh, UK, 25-29 May 2007.

Dale Miller and Lutz Straßburger were invited participants to the meeting
*Collegium Logicum 2007: Proofs and Structures*, 24-25 October 2007, Vienna, Austria.

From December 17 to December 22, 2007, Lutz Straßburger teaches a course on "Introduction to Deep Inference and Proof Nets" (together with Paola Bruscoli, University of Bath) at the Technische Universität Dresden, as part of the International MSc Program in Computational Logic.

Dale Miller co-teaches the course “Logique Linéaire et paradigmes logiques du calcul” in the new masters program MPRI (Master Parisien de Recherche en Informatique) (2004, 2005, 2006, 2007).

Dale Miller was an external reporter for the Habilitation of Agata Ciabattoni (Technische Universität Wien), March 2007, and was an external reporter (rapporteur) for the PhD thesis of Sébastien Briais, École Polytechnique Fédérale de Lausanne, 17 Dec 2006.

From March to August 2007, Nicolas Guenot (MPRI) wrote his Master's thesis on “Proof search, multi-focussing and deep inference for linear logic” under the supervision of Lutz Straßburger.