Inria / Raweb 2004

Project-Team: Calligramme

Section: Scientific Foundations

Keywords: Complexity theory, theory of programming, types, lambda calculus, Curry-Howard isomorphism, termination orders.

Implicit Complexity of Computations

Participants: Guillaume Bonfante, Adam Cichon, Paulin Jacobé de Naurois, Jean-Yves Marion, Jean-Yves Moyen, Romain Péchoux.

The construction of software that is certified with respect to its specifications is more necessary than ever. It is crucial to ensure, while developing a certified program, the quality of the implementation in terms of efficiency and computational resources. Implicit complexity is an approach to the analysis of the resources used by a program; its tools come essentially from proof theory. The aim is to compile a program while certifying its complexity.

The meta-theory of programming traditionally answers questions with respect to a specification, such as termination. These properties all happen to be extensional, that is, described purely in terms of the relation between the input of the program and its output. However, other properties, such as the efficiency of a program and the resources used to carry out a computation, are excluded from this methodology. The reason for this is inherent in the nature of the questions posed. In the first case we are treating extensional properties, while in the second we are inquiring about the manner in which a computation is carried out. Thus we are interested in intensional properties of programs.

The complexity of a program is a measure of the resources that are necessary for its execution. The resources taken into account are usually time and space. The theory of complexity studies the problems and the functions that are computable given a certain amount of resources. One should not identify the complexity of functions with the complexity of programs, since a function can be implemented by several programs. Some are efficient, others are not.
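This distinction can be made concrete with a small illustrative pair of programs (our own example, not taken from the project's work) computing the same function, the Fibonacci numbers:

```python
def fib_naive(n):
    """Direct recursive definition: exponential running time."""
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

def fib_iter(n):
    """Iterative program for the same function: linear number of steps."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# Both programs compute the same function, yet their complexities differ
# radically: the complexity of a function must not be confused with the
# complexity of one particular program that implements it.
```

Here one program is efficient and the other is not, although extensionally they are indistinguishable.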

One achievement of complexity theory is its ability to tell the ``programming expert'' the limits of his art, whatever the number of gigabytes and megaflops available to him. Another achievement is the development of a mathematical model of algorithmic complexity. But when facing these models the programming expert is often flabbergasted. There are several reasons for this; let us illustrate the problem with two examples.

The linear acceleration theorem states that any program which can be executed in time T(n) (where n is the size of the input) can be transformed into an equivalent program that can be executed in time εT(n), where ε is ``as small as we want''. It turns out that this result has no counterpart in real life. On the other hand, a function is feasible if it can be computed by a program whose complexity is acceptable. The class of feasible functions is often identified with the class Ptime of functions computable in polynomial time. A typical kind of result is the definition of a programming language LPL together with a proof that the class of functions represented by that language is exactly Ptime. This type of result does not answer the programming expert's needs, because the language LPL does not allow the ``right algorithms'', the ones he uses daily. The gulf between the two disciplines is also explained by differences in points of view: the theory of complexity, daughter of the theory of computability, has conserved an extensional point of view in its modelling practices, while the theory of programming is intrinsically intensional.
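The flavour of such Ptime-characterization results can be conveyed by Bellantoni and Cook's ``safe recursion on notation'', a classic discipline whose definable functions are exactly Ptime. The report does not spell out the definition of LPL, so the following Python sketch is only an analogous example: arguments are split into ``normal'' ones, which may drive recursion, and ``safe'' ones, which may only receive the results of recursive calls.

```python
def s0(a):
    """Append the binary digit 0 to the notation of a."""
    return 2 * a

def shift(x, a):
    """shift(x; a) = a * 2**|x|, where |x| is the bit length of x.

    Defined by safe recursion on the notation of the normal argument x:
    the recursive result occurs only in the safe position of s0, so the
    output grows by one digit per recursion step -- polynomial growth.
    """
    if x == 0:
        return a
    return s0(shift(x // 2, a))

# The discipline forbids feeding a recursive result back into a normal,
# recursion-driving position. This rules out functions of exponential
# growth such as n -> 2**(2**n), but it also rejects many natural and
# efficient algorithms -- precisely the gulf described above.
```

The safe/normal distinction is enforced here only by comments; in an actual calculus it is part of the syntax and checked statically.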

The need to reason about programs is a central issue in software development. The certification of a program is an essential property, but it is not the only one: proving the termination of a program whose complexity is exponential makes little practical sense. Hence the need to construct tools for reasoning about algorithms. The theory of implicit computational complexity takes on a vast project: the analysis of the complexity of algorithms.