## Scientific Foundations

### Proof of program transformations for multicores

Participants : Éric Violard, Julien Narboux, Nicolas Magaud, Vincent Loechner, Alexandra Jimborean.

#### State of the art

##### Certification of low-level code.

Among the languages that allow programmers to exploit the power of multicore architectures, some supply the programmer with a library of functions corresponding more or less to the features of the target architecture: for example, CUDA (http://www.nvidia.com/object/cuda_what_is.html) for GPGPU architectures and, more recently, the OpenCL standard (http://www.khronos.org/opencl), which offers a unifying programming interface covering most existing multicore architectures as well as heterogeneous aggregates of such architectures. The main advantage of OpenCL is that it lets the programmer write code that is portable across a large set of architectures (in the same spirit as the MPI library for multi-processor architectures). However, at this low level, the programming model is very close to the execution model and the control of parallelism is explicit. Proofs of program correctness must take into account low-level mechanisms such as hardware interrupts or thread preemption, which is difficult.

In [38], Feng *et al.* propose a logic inspired by Hoare logic to certify such low-level programs with hardware interrupts and preempted threads. The authors specify this logic using the meta-logic implemented in the Coq proof assistant [24].

##### Certification of a compiler.

The problem here is to prove that transformations or optimizations preserve the operational behaviour of the compiled programs.

Xavier Leroy in [27], [50] formalizes the analyses and optimizations performed by a C compiler: a large part of this compiler is written in the specification language of Coq, and the executable (Caml) code of the compiler is obtained by automatic extraction from the specification.

Optimizing compilers are complex pieces of software, particularly in the case of multi-threaded programs, and they apply subtle code transformations. Errors in the compiler may therefore occur, causing it to produce incorrect executable code. Work remains to be done to remedy this problem. The technique of *a posteriori* validation [69], [70] is an interesting alternative to the full verification of a compiler.

##### Semantics of directives.

As mentioned in subsection 3.2.3, the use of directives is an interesting approach to adapting languages to multicore architectures. It is a syntactic means of addressing the increasing need to enrich the operational semantics of programs.

Ideally, these directives are only comments: they do not alter the correctness of programs, and they are a good means of improving their performance. They allow a separation of concerns: *correctness* and *efficiency*.

However, using directives in that sense, in the context of automatic parallelization, raises some questions: for example, assuming that directives are not mandatory, how can one ensure that a directive is really taken into account? How can one know whether one directive is better than another? What is the impact of a directive on performance?

In his thesis [40], supervised by Éric Violard, Philippe Gerner addresses similar questions and defines a formal framework in which the semantics of compilation directives can be given. In this framework, each directive is encoded as an equation which is added to an algebraic specification. The semantics of the directives can then be precisely defined via an order relation (called a *preference* relation) on the models of this specification.

##### Definition of a parallel programming model.

Classically, a sound definition of a programming model rests on a semantic domain and on the definition of a “toy” language associated with a proof system, which makes it possible to prove the correctness of programs written in that language. Examples of such “toy” languages are CSP for control parallelism and [28] for data parallelism. The proof systems associated with these two languages are extensions of Hoare logic.

We have carried out significant work on the definition of data parallelism [11]. In particular, a crucial problem for a sound definition of this programming model is the semantics of the various syntactic constructs for data locality. We proposed a semantic domain which unifies two concepts: *alignment* (in a data-parallel language like HPF) and *shape* (in the data-parallel extensions of C).

We defined a “toy” language, called PEI, made of a small number of syntactic constructs. One of them, called *change of basis*, allows the programmer to exhibit parallelism in the same way as a placement or scheduling directive [41].
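As an informal illustration (PEI's actual syntax and semantics are not reproduced here; the function names below are purely hypothetical), a change of basis can be thought of as re-indexing a data field through a bijection, which changes where and when values are placed without changing the values computed:

```python
def change_of_basis(field, sigma):
    """Re-index a data field (a mapping from index vectors to values)
    through a bijection sigma; the values themselves, and hence the
    computed result, are unchanged -- only the geometry changes."""
    return {sigma(i): v for i, v in field.items()}

# A skewing-like bijection: (i, j) -> (i, i + j).
skew = lambda p: (p[0], p[0] + p[1])

field = {(0, 0): 'a', (1, 0): 'b', (1, 1): 'c'}
skewed = change_of_basis(field, skew)
```

In the real language, exhibiting such a bijection is what lets the compiler read off a placement or a schedule, in the same way a directive would.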

##### Programming models for multicore architectures.

The emergence of multicores calls the existing parallel programming models into question.

For example, with the programming model supported by OpenMP, it is difficult to master both the correctness and the efficiency of programs. Indeed, this model does not allow programmers to take optimal advantage of the memory hierarchy, and some OpenMP directives may induce unpredictable performance or incorrect results.

Nowadays, new programming models are being explored to help design programs that are both efficient and correct on multicores. Because memory is shared by the cores while its hierarchy has some distributed parts, some works aim at defining a hybrid model between task parallelism and data parallelism. For example, languages like UPC (Unified Parallel C) (http://upc.gwu.edu) or Chapel (http://chapel.cs.washington.edu) combine the advantages of several programming paradigms.

In particular, the model of memory transactions (or transactional memory [47]) has attracted much attention, since it offers the programmer a simple operational semantics including a mutual exclusion mechanism that simplifies program design. However, much work remains to define the precise operational meaning of transactions and their interaction with the other language features [56]. Moreover, this model leaves the compiler a lot of work to reach a safe and efficient execution on the target architecture. In particular, it is necessary to control the atomicity of transactions [39] and to prove that code transformations preserve the operational semantics.
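To make this operational semantics concrete, here is a minimal sketch of an optimistic software transactional memory in Python (an illustration only, not any particular TM design: `TVar`, `Transaction` and `atomically` are names invented for this example). It also shows where atomicity control enters: a transaction must validate, at commit time, that nothing it read has changed.

```python
import threading

class TVar:
    """A transactional variable: a value plus a version number."""
    def __init__(self, value):
        self.value = value
        self.version = 0
        self.lock = threading.Lock()

class Transaction:
    """Optimistic transaction: buffer writes, validate reads at commit."""
    def __init__(self):
        self.reads = {}   # TVar -> version observed at first read
        self.writes = {}  # TVar -> buffered new value

    def read(self, tvar):
        if tvar in self.writes:          # read-your-own-writes
            return self.writes[tvar]
        self.reads.setdefault(tvar, tvar.version)
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value

    def commit(self):
        # Lock all touched variables in a fixed order to avoid deadlock.
        touched = sorted(set(self.reads) | set(self.writes), key=id)
        for v in touched:
            v.lock.acquire()
        try:
            # Validation: abort if any variable changed since we read it.
            if any(v.version != ver for v, ver in self.reads.items()):
                return False
            for v, value in self.writes.items():
                v.value = value
                v.version += 1
            return True
        finally:
            for v in touched:
                v.lock.release()

def atomically(fn):
    """Retry fn in a fresh transaction until it commits."""
    while True:
        tx = Transaction()
        result = fn(tx)
        if tx.commit():
            return result
```

A transfer between two `TVar` accounts written against this interface is atomic: either both updates commit together, or the transaction retries.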

##### Refinement of programs.

Refinement [22], [42] is a classical approach for gradually building correct programs: it consists in transforming an initial specification through successive steps, verifying that each transformation preserves the correctness of the previous specification. Its basic principle is to derive a program together with its own proof. It defines a formal framework in which rules and strategies can be elaborated to transform specifications written in the same formalism. Such a set of rules is called a *refinement calculus*.

Unity [32] and Gamma [23] are classical examples of such formalisms, but they are not especially designed for refining programs for multicore architectures. Each of these formalisms is associated with a computing model and thus each specification can be viewed as a program. Starting with an initial specification, a proof logic allows a user to derive a specification which is more suited to the target architecture.

Refinement applies to the programming of a large range of problems and architectures. It makes it possible to go beyond the limitations of the polyhedral model and of automatic parallelization. We designed a refinement calculus to build data-parallel programs [71].
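The principle can be illustrated on a deliberately tiny example (a sketch, not a refinement calculus: a real development manipulates specifications in a single formalism and justifies each step by a rule of the calculus): start from an obviously correct specification and transform it into a more efficient program, each step preserving the result.

```python
def spec_max(xs):
    """Initial specification: declarative and obviously correct
    (the maximum is the last element of the sorted sequence)."""
    return sorted(xs)[-1]

def refined_max(xs):
    """Refined program: the sort has been replaced by a single
    accumulating pass. A refinement step is valid only if it is
    proved to return the same result as the specification."""
    acc = xs[0]
    for x in xs[1:]:
        if x > acc:
            acc = x
    return acc
```

Here the proof obligation attached to the step is the equality `refined_max(xs) == spec_max(xs)` for every non-empty `xs`; the refined version is what one would actually run.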

#### Main objective: formal proof of analyses and transformations

Our main objective consists in certifying the critical modules of our optimization tools (the compiler and the virtual machine). First we will prove the main loop transformation algorithms which constitute the core of our system.

The optimization process can be separated into two stages: the transformations that optimize the sequential code and exhibit parallelism, and those that optimize the parallel code itself. The first category of optimizations can be proved within a sequential semantics. For the other optimizations, we need to work within a concurrent semantics. We expect the first stage of optimizations to produce data-race-free code; for the second stage, we will therefore first assume that the input code is data-race free. We will prove those transformations using Appel's concurrent separation logic [44]. Proving transformations involving programs which are not data-race free will constitute a longer-term research goal.

#### Proof of transformations in the polyhedral model

The main code transformations used in the compiler and the virtual machine are those carried out in the polyhedral model [49], [37]. We will use the Coq proof assistant to formalize proofs of analyses and transformations based on the polyhedral model. In [31], Cachera and Pichardie formalized nested loops in Coq and showed how to prove *properties* of those loops. Our aim is slightly different, as we plan to prove *transformations* of nested loops in the polyhedral model. We will first prove the simplest unimodular transformations, and later focus on more complex transformations which are specific to multicore architectures. We will first study scheduling optimizations and then optimizations improving data locality.
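As a minimal sketch of what such a proof must establish (illustrative Python with invented helper names; a Coq development would manipulate symbolic polyhedra rather than concrete vectors): a unimodular transformation of a loop nest is legal when every dependence distance vector remains lexicographically positive after transformation.

```python
def apply_unimodular(T, v):
    """Apply a unimodular matrix T (integer entries, |det T| = 1)
    to an iteration or dependence distance vector v."""
    return tuple(sum(T[i][j] * v[j] for j in range(len(v)))
                 for i in range(len(T)))

def preserves_dependences(T, deps):
    """Legality criterion: every dependence distance vector must stay
    lexicographically positive after transformation by T."""
    def lex_positive(v):
        for c in v:
            if c > 0:
                return True
            if c < 0:
                return False
        return False
    return all(lex_positive(apply_unimodular(T, d)) for d in deps)

# Loop interchange of a 2-deep nest is the unimodular matrix:
interchange = [[0, 1],
               [1, 0]]
```

For instance, a dependence of distance `(1, 0)` (carried by the outer loop) becomes `(0, 1)` after interchange and stays lexicographically positive, so the interchange is legal; a distance `(1, -1)` becomes `(-1, 1)` and makes it illegal.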

#### Validation under hypothesis

In order to prove the correctness of a code transformation *T*, one can either:

- prove that *T* is correct in general, *i.e.*, prove that for every program *P*, *T*(*P*) is equivalent to *P*; or
- prove *a posteriori* that the applied transformation was correct in the particular case of a given code *P*.

The second approach relies on the definition of a program called a *validator*, which verifies whether two pieces of program are equivalent. It can be modeled as a function *V* such that, given two programs *P1* and *P2*, *V*(*P1*, *P2*) = true only if *P1* has the same semantics as *P2*. This approach has been used in the field of optimization certification [59], [58]. If the validator itself contains a bug, then the certification process is broken. But if the validator is proved formally (as was achieved by Tristan and Leroy for the CompCert compiler [69], [70]), then the transformed program can be trusted in the same way as if the transformation itself had been proved formally.
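As an illustration of the idea (a toy sketch for a linear, straight-line fragment, nowhere near the scale of the CompCert validators; all names are invented), a validator can symbolically execute both programs and accept only when every observable output has the same symbolic value:

```python
def sym_eval(prog, inputs):
    """Symbolically execute straight-line linear code.  A program is a
    list of (variable, expression) assignments; each value is kept as
    a coefficient map {input_name_or_1: coefficient}."""
    env = {x: {x: 1} for x in inputs}

    def value(e):
        if isinstance(e, int):
            return {1: e}
        if isinstance(e, str):
            return dict(env[e])
        op, a, b = e
        if op == 'add':
            out = value(a)
            for k, c in value(b).items():
                out[k] = out.get(k, 0) + c
            return {k: c for k, c in out.items() if c != 0}
        if op == 'mul':  # second operand must be an integer constant
            return {k: c * b for k, c in value(a).items() if c * b != 0}
        raise ValueError(op)

    for var, expr in prog:
        env[var] = value(expr)
    return env

def validate(p1, p2, inputs, outputs):
    """Accept only if both programs compute identical symbolic values
    for every observable output (sound for this linear fragment)."""
    e1, e2 = sym_eval(p1, inputs), sym_eval(p2, inputs)
    return all(e1.get(v) == e2.get(v) for v in outputs)
```

For example, the validator accepts the strength-reduction of `y = x + x` into `y = 2 * x`, since both yield the symbolic value `2*x` for `y`, and rejects any transformed program whose outputs differ symbolically.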

This second approach can be used only for *effective parallelism*, when the static analysis provides enough information to parallelize the code. For *hypothetical parallelism*, the necessary hypotheses have to be verified at run time.

For instance, the absence of aliases in a piece of code is difficult to decide statically but can be more easily decided at run time.

In this framework, we plan to build a *validator under hypotheses*: a function *V* such that, given two programs *P1* and *P2* and a hypothesis *h*, if *V*(*P1*, *P2*, *h*) = true, then *h* implies that *P1* has the same semantics as *P2*. The validity of the hypothesis will be verified dynamically by the virtual machine. This verification process, which is part of the virtual machine, will itself have to be proved correct.
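A minimal sketch of the dynamic side of this scheme (illustrative Python; `regions_disjoint` and `guarded_run` are invented names, and a real virtual machine would check the hypothesis against the compiled code's actual memory accesses): the no-alias hypothesis is tested at run time, and the speculatively parallelized version runs only when it holds.

```python
def regions_disjoint(r1, r2):
    """Runtime check of the no-alias hypothesis: two accessed regions,
    each given as (buffer, lo, hi), must not overlap when they share
    the same underlying buffer."""
    b1, lo1, hi1 = r1
    b2, lo2, hi2 = r2
    return b1 is not b2 or hi1 <= lo2 or hi2 <= lo1

def guarded_run(parallel_version, sequential_version,
                read_region, write_region):
    """Run the parallelized version only when the hypothesis it was
    validated under holds at run time; otherwise fall back to the
    original sequential code."""
    if regions_disjoint(read_region, write_region):
        return parallel_version()
    return sequential_version()
```

For instance, copying `a[0:4]` into `a[4:8]` passes the check and takes the parallel path, while copying `a[2:6]` into `a[4:8]` overlaps and falls back to the sequential version.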

#### Rejecting incorrect parallelizations

The goal of the project is to exhibit potential parallelism. The source code may contain many sub-routines which could be parallelized under hypotheses that the static analysis fails to decide. For those optimizations, the virtual machine will have to verify the hypotheses dynamically. Dynamically handling this potential parallelism can be complex and costly (profiling, speculative execution with rollbacks). To reduce the overhead of the virtual machine, we will have to provide efficient methods to quickly rule out incorrect parallelizations. In this context, we will provide hypotheses which are easy to check dynamically and which can tell when a transformation cannot be applied, *i.e.*, hypotheses which are sufficient conditions for the non-validity of an optimization.
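A sketch of what such a sufficient condition can look like (illustrative Python with an invented function name; production compilers use families of such tests, e.g. the GCD and Banerjee tests): for affine array subscripts, a cheap arithmetic screen either proves the iterations independent or exhibits a conflict witness that rules out the parallelization at once.

```python
from math import gcd

def rejects_parallelization(write, read, n):
    """write/read are affine subscripts (a, b): the body of a loop on i
    writes A[a1*i + b1] and reads A[a2*i + b2], for 0 <= i < n.
    Returns True when two distinct iterations provably touch the same
    cell, i.e. the loop carries a dependence and parallelizing it is
    invalid -- so the costly speculative machinery can be skipped."""
    (a1, b1), (a2, b2) = write, read
    # GCD test: a1*i - a2*j = b2 - b1 has integer solutions only if
    # gcd(a1, a2) divides b2 - b1; if not, iterations are independent.
    g = gcd(a1, a2)
    if g != 0 and (b2 - b1) % g != 0:
        return False
    # Otherwise exhibit a concrete witness i != j within the bounds
    # (a naive search, kept simple for the illustration).
    return any(a1 * i + b1 == a2 * j + b2
               for i in range(n) for j in range(n) if i != j)
```

Writing `A[i]` while reading `A[i-1]` is rejected immediately (distance-1 loop-carried dependence), whereas writing even cells `A[2i]` while reading odd cells `A[2i+1]` is cleared by the GCD screen without any search.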