The goal of the Celtique project is to improve the security and reliability of software through software certificates that attest to the well-behavedness of a given piece of software. In contrast to certification techniques based on cryptographic signing, we provide certificates derived from semantic software analysis. The semantic analyses extract approximate but sound descriptions of software behaviour from which a proof of security can be constructed. The analyses of relevance include numerical data flow analysis, control flow analysis for higher-order languages, alias and points-to analysis for heap structure manipulation, and data race freedom of multi-threaded code.

Existing software certification procedures make extensive use of systematic test case generation. Semantic analysis can serve to improve these testing techniques by providing precise software models from which test suites for given test coverage criteria can be manufactured. Moreover, an emerging trend in mobile code security is to equip mobile code with proofs of well-behavedness that can then be checked by the code receiver before installation and execution. A prominent example of such proof-carrying code is the stack maps for Java byte code verification. We propose to push this technique much further by designing certifying analyses for Java byte code that can produce compact certificates of a variety of properties. Furthermore, we will develop efficient and verifiable checkers for these certificates, relying on proof assistants like Coq to develop provably correct checkers. We target two application domains: Java software for mobile devices (in particular mobile telephones) and embedded C programs.

Celtique is a joint project with the CNRS, the University of Rennes 1 and ENS Cachan.

Sandrine Blazy received the 2011 La Recherche award in Information Sciences for her contributions to the CompCert verified C compiler, together with Zaynah Dargaye, Xavier Leroy and Jean-Baptiste Tristan.

Static program analysis is concerned with obtaining information about the run-time behaviour of a program without actually running it. This information may concern the values of variables, the relations among them, dependencies between program values, the memory structure being built and manipulated, the flow of control, and, for concurrent programs, synchronisation among processes executing in parallel. Fully automated analyses usually render approximate information about the actual program behaviour. The analysis is correct if the information includes all possible behaviour of a program. Precision of an analysis is improved by reducing the amount of information describing spurious behaviour that will never occur.
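The interplay between correctness and precision can be illustrated with a toy sign abstraction. The following Python sketch is purely illustrative (the encoding and names are ours, not part of any analyser mentioned in this report): it abstracts integers by their sign and shows that the abstract addition soundly covers every concrete outcome.

```python
# Sign abstraction: each abstract value describes a set of integers.
# "+" = positives, "-" = negatives, "0" = zero, "T" (top) = all integers.
def alpha(n):
    """Abstract a concrete integer by its sign."""
    return "+" if n > 0 else "-" if n < 0 else "0"

def gamma_contains(a, n):
    """Does abstract value `a` describe concrete integer `n`?"""
    return a == "T" or a == alpha(n)

def abstract_add(a, b):
    """Sound over-approximation of + on signs."""
    if a == "0": return b
    if b == "0": return a
    if a == b:   return a        # (+)+(+) = (+), (-)+(-) = (-)
    return "T"                   # (+)+(-): sign unknown, report top

# Soundness check: the abstract result describes every concrete result.
for x in [-3, 0, 7]:
    for y in [-2, 0, 5]:
        assert gamma_contains(abstract_add(alpha(x), alpha(y)), x + y)
```

Returning "T" for mixed signs is where precision is lost: the answer is correct (it includes all behaviours) but describes spurious ones as well.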

Static analysis has traditionally found most of its applications in the area of program optimisation where information about the run-time behaviour can be used to transform a program so that it performs a calculation faster and/or makes better use of the available memory resources. The last decade has witnessed an increasing use of static analysis in software verification for proving invariants about programs. The Celtique project is mainly concerned with this latter use. Examples of static analysis include:

Data-flow analysis as it is used in optimising compilers for imperative languages. The inferred properties are typically approximations of the values that an expression or a variable may take at run time (for instance, an interval enclosing all possible values of a given variable).

Analyses of the memory structure include shape analysis, which aims at approximating the data structures created by a program. Alias analysis is another data flow analysis that finds out which variables in a program address the same memory location. Alias analysis is a fundamental analysis for all kinds of programs (imperative, object-oriented) that manipulate state, because alias information is necessary for the precise modelling of assignments.

Control flow analysis finds a safe approximation to the order in which the instructions of a program are executed. This is particularly relevant in languages where functions can be passed as arguments to other functions, making it impossible to determine the flow of control from the program syntax alone. The same phenomenon occurs in object-oriented languages, where it is the class of an object (rather than the static type of the variable containing the object) that determines which method a given method invocation will call. Control flow analysis is an example of an analysis whose information in itself does not lead to dramatic optimisations (although it might enable in-lining of code) but is necessary for subsequent analyses to give precise results.
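The flavour of such an analysis for higher-order programs can be conveyed by a minimal 0-CFA. The sketch below is our own toy encoding of lambda terms and constraint iteration, not any of the analyses discussed in this report: it computes, for each variable, the set of lambda abstractions it may be bound to.

```python
# Minimal 0-CFA sketch for an untyped lambda calculus (toy encoding).
# Terms: ('var', x) | ('lam', x, body) | ('app', fn, arg)

def cfa0(term):
    """Return a map from variable names to the lambda terms they may denote."""
    flows, lams = {}, {}

    def collect(t):                       # index all lambda abstractions
        if t[0] == 'lam':
            lams[id(t)] = t
            collect(t[2])
        elif t[0] == 'app':
            collect(t[1]); collect(t[2])
    collect(term)

    def values(t):
        """Set of lambdas (by id) an expression may evaluate to."""
        if t[0] == 'lam':
            return {id(t)}
        if t[0] == 'var':
            return flows.get(t[1], set())
        # application: whatever any possible callee's body may return
        return set().union(*(values(lams[l][2]) for l in values(t[1])))

    changed = [True]
    def visit(t):                         # propagate arguments to parameters
        if t[0] == 'app':
            for l in values(t[1]):        # every lambda the callee may be
                x = lams[l][1]
                new = values(t[2]) - flows.get(x, set())
                if new:
                    flows.setdefault(x, set()).update(new)
                    changed[0] = True
            visit(t[1]); visit(t[2])
        elif t[0] == 'lam':
            visit(t[2])

    while changed[0]:                     # iterate to a fixpoint
        changed[0] = False
        visit(term)
    return {x: {lams[l] for l in ls} for x, ls in flows.items()}

ident = ('lam', 'x', ('var', 'x'))
prog = ('app', ('lam', 'f', ('app', ('var', 'f'), ident)), ident)
flows = cfa0(prog)
assert flows['f'] == {ident}              # f is only ever bound to λx.x
```

The analysis is context-insensitive: all call sites of a function flow into the same parameter, which is precisely the imprecision that the context-sensitive analyses discussed below address.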

Static analysis possesses strong **semantic foundations**, notably abstract interpretation, that allow its correctness to be proved. The implementation of static analyses is usually based on well-understood constraint-solving techniques and iterative fixpoint algorithms. In spite of the nice mathematical theory of program analysis and the solid algorithmic techniques available, one problematic issue persists, *viz.*, the *gap* between the analysis that is proved correct on paper and the analyser that actually runs on the machine. While this gap might be small for toy languages, it becomes significant for real-life languages, for which the implementation and maintenance of program analysis tools become a software engineering task. A *certified static analysis* is an analysis that has been formally proved correct using a proof assistant.

In previous work we studied the benefits of using abstract interpretation for developing **certified static analyses**. The development of certified static analysers is an ongoing activity that will be part of the Celtique project. We use the Coq proof assistant, which allows the computational content of a constructive proof to be extracted. A Caml implementation can hence be extracted from a proof of the existence, for any program, of a correct approximation of the concrete program semantics. We have isolated a theoretical framework based on abstract interpretation that allows for the formal development of a broad range of static analyses. Several case studies for the analysis of Java byte code have been presented, notably a memory usage analysis. This work has recently found application in the context of Proof Carrying Code and has also been successfully applied to a particular form of static analysis based on term rewriting and tree automata.

Precise context-sensitive control-flow analysis is a fundamental prerequisite for precisely analysing Java programs. Bacon and Sweeney's Rapid Type Analysis (RTA) is a scalable algorithm for constructing an initial call graph of the program. Tip and Palsberg have proposed a variety of more precise but scalable call graph construction algorithms, *e.g.*, MTA, FTA and XTA, whose accuracy lies between RTA and 0-CFA. None of these analyses is context-sensitive. As early as 1991, Palsberg and Schwartzbach proposed a theoretical parametric framework for typing object-oriented programs in a context-sensitive way. In their setting, context-sensitivity is obtained by explicit code duplication, and typing amounts to analysing the expanded code in a context-insensitive manner. The framework accommodates both call-contexts and allocation-contexts.

To assess the respective merits of different instantiations, scalable implementations are needed. For Cecil and Java programs, Grove *et al.* have explored the algorithmic design space of contexts for benchmarks of significant size. Later on, Milanova *et al.* evaluated, for Java programs, a notion of context called *object-sensitivity*, which abstracts the call-context by the abstraction of the `this` pointer. More recently, Lhoták and Hendren have extended the empirical evaluation of object-sensitivity using a BDD implementation that copes with benchmarks otherwise out of scope. Besson and Jensen proposed to use Datalog in order to specify context-sensitive analyses. Whaley and Lam have implemented a context-sensitive analysis using a BDD-based Datalog implementation.
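The Datalog style of specification can be mimicked by a naive fixpoint over relations. The rules and program below are our own toy example of a flow-insensitive points-to analysis, not a rendition of the cited systems:

```python
# Naive Datalog-style flow-insensitive points-to analysis (toy example).
# Facts about a small program:
#   new(v, h)    : variable v is assigned a fresh object at allocation site h
#   assign(d, s) : d = s
new_facts = {('a', 'h1'), ('b', 'h2')}
assign    = {('c', 'a'), ('d', 'c'), ('d', 'b')}

# Rules:
#   pts(v, h) :- new(v, h).
#   pts(d, h) :- assign(d, s), pts(s, h).
pts = set(new_facts)
while True:
    derived = {(d, h) for (d, s) in assign for (v, h) in pts if v == s}
    if derived <= pts:
        break                     # fixpoint reached: no new facts derivable
    pts |= derived

assert ('c', 'h1') in pts
assert ('d', 'h1') in pts and ('d', 'h2') in pts
```

Real engines evaluate such rules with semi-naive iteration and compact relation representations (such as BDDs, as in the work cited above), but the declarative reading is the same.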

Control-flow analyses are a prerequisite for other analyses. For instance, the security analyses of Livshits and Lam and the race analysis of Naik, Aiken and Whaley both heavily rely on the precision of a control-flow analysis.

Control-flow analysis makes it possible to statically prove the absence of certain run-time errors such as "message not understood" or cast exceptions. Yet it does not tackle the problem of null pointers. Fähndrich and Leino propose a type system for checking that fields are non-null after object creation. Hubert, Jensen and Pichardie have formalised this type system and derived a type-inference algorithm that computes the most precise typing. The proposed technique has been implemented in a tool called NIT. Null pointer detection is also performed by bug-detection tools such as FindBugs. The main difference is that the FindBugs approach is neither sound nor complete, but effective in practice.
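A much simplified flavour of nullness analysis can be sketched as an abstract domain with two values; the real non-null type systems cited above are considerably richer (fields, constructors, inheritance), so the following Python sketch is only an illustration of the idea:

```python
# Toy intraprocedural nullness analysis over a straight-line program
# (a drastic simplification of non-null type inference).
# Abstract values: "NotNull" is more precise than "MayBeNull".
def analyse(stmts):
    """stmts: list of ('new', x) | ('null', x) | ('copy', x, y) | ('deref', x)."""
    state, warnings = {}, []
    for s in stmts:
        if s[0] == 'new':
            state[s[1]] = 'NotNull'          # a fresh object is never null
        elif s[0] == 'null':
            state[s[1]] = 'MayBeNull'
        elif s[0] == 'copy':
            state[s[1]] = state.get(s[2], 'MayBeNull')
        elif s[0] == 'deref':
            if state.get(s[1], 'MayBeNull') != 'NotNull':
                warnings.append(s[1])        # possible null dereference
    return warnings

assert analyse([('new', 'x'), ('deref', 'x')]) == []
assert analyse([('null', 'y'), ('copy', 'z', 'y'), ('deref', 'z')]) == ['z']
```

Like the analyses it caricatures, the sketch is sound (every real null dereference in this fragment is flagged) but incomplete: a `MayBeNull` warning does not guarantee an actual error.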

Static analyses yield qualitative results, in the sense that they compute a safe over-approximation of the concrete semantics of a program, w.r.t. an order provided by the abstract domain structure. Quantitative aspects of static analysis are two-sided: on one hand, one may want to express and verify (compute) quantitative properties of programs that are not captured by usual semantics, such as time, memory, or energy consumption; on the other hand, there is a deep interest in quantifying the precision of an analysis, in order to tune the balance between complexity of the analysis and accuracy of its result.

The term "quantitative analysis" is often related to probabilistic models for abstract computation devices such as timed automata or process algebras. In the field of programming languages, which is more specifically addressed by the Celtique project, several approaches have been proposed for quantifying resource usage: a non-exhaustive list includes memory usage analysis based on specific type systems, linear logic approaches to implicit computational complexity, cost models for Java byte code based on size relation inference, and WCET computation by abstract interpretation based on loop bound interval analysis techniques.

We have proposed an original approach for designing static analyses that compute program costs: inspired by a probabilistic approach, a quantitative operational semantics for expressing the cost of execution of a program has been defined. The semantics is seen as a linear operator over a dioid structure similar to a vector space. The notion of long-run cost is particularly interesting in the context of embedded software, since it provides an approximation of the asymptotic behaviour of a program in terms of computation cost. As for classical static analysis, an abstraction mechanism allows an over-approximation of the semantics to be computed effectively, both in terms of costs and of accessible states. An example of cache miss analysis has been developed within this framework.
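The linear-operator view can be illustrated over the (max, +) dioid, where one execution step is a matrix of transition costs and the long-run cost per step is approximated by iterated matrix powers. The two-state system below is our own toy rendition, not an example from the cited work:

```python
# Long-run cost in the (max,+) dioid: states are matrix indices, M[i][j] is
# the cost of one transition from state i to state j (-inf = no transition).
NEG_INF = float('-inf')

def maxplus_mul(A, B):
    """(max,+) matrix product: (A*B)[i][j] = max_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[max(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# 2-state system: a self-loop on state 0 costs 3; the round trip through
# state 1 costs 2 + 2, i.e. an average of 2 per step.
M = [[3, 2],
     [2, NEG_INF]]

# The maximal entry of M^k is the cost of the most expensive path of
# length k; dividing by k approximates the long-run (asymptotic) cost.
P, k = M, 1
for _ in range(63):
    P = maxplus_mul(P, M)
    k += 1
long_run = max(max(row) for row in P) / k
assert abs(long_run - 3.0) < 0.1   # the costly self-loop dominates asymptotically
```

A static analysis in this framework abstracts the (generally infinite) state space into finitely many indices and over-approximates the cost matrix, so that the computed long-run cost is a sound upper bound.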

The semantic analysis of programs can be combined with efficient constraint solving techniques in order to extract specific information about the program, *e.g.*, concerning the reachability of program points and the feasibility of execution paths. As such, it has an important use in the automatic generation of test data. Automatic test data generation has received considerable attention in recent years with the development of efficient, dedicated constraint solving procedures and compositional techniques.

We have made major contributions to the development of **constraint-based testing**, a two-stage process that first generates a constraint-based model of the program's data flow and then, from the selection of a testing objective such as a statement to reach or a property to invalidate, extracts a constraint system to be solved. Using efficient constraint solving techniques makes it possible to generate test data satisfying the testing objective, although this generation might not always terminate. In a sense, these constraint techniques can be seen as efficient decision procedures, and as such they are competitive with the best software model checkers employed to generate test data.
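The two stages can be sketched on a tiny program. Everything below is a hypothetical miniature: the path condition is written by hand and naive enumeration stands in for the dedicated constraint solvers that a real tool would use.

```python
# Toy constraint-based test generation: to cover a target branch, collect
# the path condition leading to it and search for inputs satisfying it.
def program(x, y):
    if x > y:            # branch b1
        if x + y == 10:  # branch b2  <- testing objective: reach here
            return "target"
    return "other"

# Path condition for the objective: x > y  AND  x + y == 10
constraints = [lambda x, y: x > y, lambda x, y: x + y == 10]

def generate_test(constraints, domain=range(-20, 21)):
    """Naive enumeration standing in for a constraint solver."""
    for x in domain:
        for y in domain:
            if all(c(x, y) for c in constraints):
                return (x, y)
    return None          # objective unreachable within the explored domain

tc = generate_test(constraints)
assert tc is not None and program(*tc) == "target"
```

The `None` outcome mirrors the caveat in the text: solving may fail to terminate (or, here, exhaust its domain) without deciding whether the objective is reachable.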

The term "software certification" has a number of meanings ranging from the formal proof of program correctness via industrial certification criteria to the certification of software developers themselves! We are interested in two aspects of software certification:

industrial, mainly process-oriented certification procedures

software certificates that convey semantic information about a program

Semantic analysis plays a role in both varieties.

Criteria for software certification such as the Common Criteria or the DO-178 aviation industry norms describe procedures to be followed when developing and validating a piece of software. The higher levels of the Common Criteria require a semi-formal model of the software that can be refined into executable code through traceable refinement steps. The validation of the final product is done through testing, respecting coverage criteria that must be justified with respect to the model. The use of static analysis and proofs has so far been restricted to the top level (EAL 7) of the Common Criteria and has not been integrated into the aviation norms.

The testing requirements present in existing certification procedures pose a challenge in terms of automating the test data generation process to satisfy functional and structural testing requirements. For example, the standard document which currently governs the development and verification process of software in airborne systems (DO-178B) requires coverage of all statements and all decisions of the program at its higher levels of criticality, and it is well known that DO-178B structural coverage is a primary cost driver on avionics projects. Although widely used, existing marketed testing tools are currently restricted to test coverage monitoring and measurement.

Static analysis tools are so far not part of the approved certification procedures. For this to change, the analysers themselves must be accepted by the certification bodies, in a process called "qualification of the tools" in which a tool is shown to be as robust as the software it helps certify. We believe that proof assistants have a role to play in building such certified static analyses, as we have already shown by extracting provably correct analysers for Java byte code.

The particular branch of information security called "language-based security" is concerned with the study of programming language features for ensuring the security of software. Programming languages such as Java offer a variety of language constructs for securing an application. Verifying that these constructs have been used properly to ensure a given security property is a challenge for program analysis. One such problem is the confidentiality of the private data manipulated by a program, and a large group of researchers have addressed the problem of tracking information flow in a program in order to ensure that, *e.g.*, a credit card number does not end up being accessible to all applications running on a computer. Another kind of problem concerns the way computational resources are accessed and used, in order to ensure that a given access policy is implemented correctly and that a given application does not consume more resources than it has been allocated. Members of the Celtique team have proposed a verification technique that can check the proper use of resources of Java applications running on mobile telephones.
**Semantic software certificates** have been proposed as a means of dealing with the security problems caused by mobile code that is downloaded from foreign sites of varying trustworthiness and which can cause damage to the receiving host, either deliberately or inadvertently. These certificates should contain enough information about the behaviour of the downloaded code to allow the code consumer to decide whether it adheres to a given security policy.

In the basic PCC architecture, the only components that have to be trusted are the program logic, the proof checker of the logic, and the formalization of the security property in this logic. Neither the mobile code nor the proposed proof—and even less the tool that generated the proof—need be trusted.

In practice, the *proof checker* is a complex tool which relies on a complex Verification Condition Generator (VCG). VCGs for real programming languages and security policies are large and non-trivial programs. For example, the VCG of the Touchstone verifier represents several thousand lines of C code, and the authors observed that "there were errors in that code that escaped the thorough testing of the infrastructure". Many solutions have been proposed to reduce the size of the trusted computing base. In the *foundational proof carrying code* of Appel and Felty, the code producer gives a direct proof that, in some "foundational" higher-order logic, the code respects a given security policy. Wildmoser and Nipkow prove the soundness of a *weakest precondition* calculus for a reasonable subset of the Java bytecode. Necula and Schneck extend a small trusted core VCG and describe the protocol that the untrusted verifier must follow in interactions with the trusted infrastructure.

One of the most prominent examples of software certificates and proof-carrying code is given by the Java byte code verifier based on *stack maps*. Originally proposed under the term "lightweight byte code verification" by Rose, the technique consists in providing enough typing information (the stack maps) to enable the byte code verifier to check a byte code in one linear scan, as opposed to inferring the type information by an iterative data flow analysis. The Java Specification Request 202 provides a formalization of how such a verification can be carried out.
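The check-versus-infer distinction can be sketched on a tiny stack machine. The instruction set, type system and certificate format below are our own invention, far simpler than real Java byte code verification:

```python
# Sketch of lightweight (stack-map based) verification for a tiny stack
# machine: the certificate supplies the expected stack of types at every
# merge point, so checking is one linear pass instead of an iterative
# data-flow analysis. (The 'goto' case only checks the target's map; a
# real verifier also handles the transfer of control.)
def check(code, stack_maps):
    """code: list of ('push_int',) | ('add',) | ('goto', target).
    stack_maps: {pc: expected stack of types} at merge points."""
    stack = []
    for pc, ins in enumerate(code):
        if pc in stack_maps:                    # merge point: check, don't infer
            if stack != stack_maps[pc]:
                return False
            stack = list(stack_maps[pc])
        if ins[0] == 'push_int':
            stack.append('int')
        elif ins[0] == 'add':
            if stack[-2:] != ['int', 'int']:
                return False                    # type error: add needs two ints
            stack = stack[:-2] + ['int']
        elif ins[0] == 'goto':
            if stack != stack_maps.get(ins[1]):
                return False                    # stack must match target's map
    return True

assert check([('push_int',), ('push_int',), ('add',)], {}) is True
assert check([('push_int',), ('add',)], {}) is False   # stack underflow rejected
```

Because the maps are merely checked, a forged or wrong certificate makes verification fail rather than compromise soundness, which is what makes the certificate safe to ship with untrusted code.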

In spite of the nice mathematical theory of program analysis (notably abstract interpretation) and the solid algorithmic techniques available, one problematic issue persists, *viz.*, the *gap* between the analysis that is proved correct on paper and the analyser that actually runs on the machine. While this gap might be small for toy languages, it becomes significant for real-life languages, for which the implementation and maintenance of program analysis tools become a software engineering task.

A *certified static analysis* is an analysis whose implementation has been formally proved correct using a proof assistant. Such an analysis can be developed in a proof assistant like Coq by programming the analyser inside the assistant and formally proving its correctness. The Coq extraction mechanism then allows a Caml implementation of the analyser to be extracted. The feasibility of this approach has been demonstrated in earlier work.

We also develop this technique through certified reachability analysis over term rewriting systems. Term rewriting systems are a very general, simple and convenient formal model for a large variety of computing systems. For instance, they offer a very simple way to describe deduction systems, functions, parallel processes or state transition systems, where rewriting models respectively deduction, evaluation, progression or transitions. Furthermore, rewriting can model any combination of these (for instance, two parallel processes running functional programs).

Depending on the computing system modelled using rewriting, reachability (and unreachability) analysis makes it possible to carry out various verifications on the system: respectively, prove that a deduction is feasible, prove that a function call evaluates to a particular value, show that a process configuration may occur, or show that a state is reachable from the initial state. As a consequence, reachability analysis has several applications in equational proofs used in theorem provers and proof assistants, as well as in verification, where term rewriting systems can be used to model programs.

For proving unreachability, i.e. safety properties, we already have some results based on the over-approximation of the set of reachable terms. We defined a simple and efficient algorithm for computing exactly the set of reachable terms when it is regular, and constructing an over-approximation otherwise. This algorithm consists of a *completion* of a *tree automaton*, taking advantage of the ability of tree automata to finitely represent infinite sets of reachable terms.

To certify the corresponding analysis, we have defined a checker guaranteeing that a tree automaton is a valid fixpoint of the completion algorithm. This consists in showing that, for every term recognised by the tree automaton, all of its rewrites are also recognised by the same automaton. This checker has been formally defined in Coq, and an efficient OCaml implementation has been automatically extracted. This checker is now used to certify all analysis results produced by the regular completion tool as well as by its optimised version.
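The closure condition the checker verifies can be illustrated in a drastically simplified setting: below, a finite set of ground terms stands in for the language of a tree automaton, and rewriting is restricted to ground rules applied at the root. The real checker works on tree automata and is extracted from Coq; this Python sketch only conveys the fixpoint criterion.

```python
# Simplified fixpoint check for a reachability over-approximation: verify
# that a finite language L is closed under one rewriting step, so that L
# over-approximates every term reachable from its members.
def one_step(term, rules):
    """All terms obtained by rewriting `term` once (ground rules, at the root)."""
    return {rhs for (lhs, rhs) in rules if lhs == term}

def is_closed(L, rules):
    """Is every one-step rewrite of a term of L again in L?"""
    return all(one_step(t, rules) <= L for t in L)

rules = [('f(a)', 'f(b)'), ('f(b)', 'f(a)')]
assert is_closed({'f(a)', 'f(b)'}, rules) is True    # valid fixpoint: accepted
assert is_closed({'f(a)'}, rules) is False           # f(b) escapes: rejected
```

Checking a claimed fixpoint in this way is much cheaper than re-running completion, which is exactly why the extracted checker, rather than the completion tool itself, sits in the trusted base.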

Javalib is an efficient library to parse Java .class files into OCaml data structures, thus enabling the OCaml programmer to extract information from class files, to manipulate them, and to generate valid .class files.

See also the web page
http://

Version: 2.2

Programming language: Ocaml

Sawja is a library written in OCaml, relying on Javalib to provide a high-level representation of Java bytecode programs. Its name comes from Static Analysis Workshop for JAva. Whereas Javalib is dedicated to isolated classes, Sawja handles bytecode programs with their class hierarchy and with control flow algorithms.

Moreover, Sawja provides some stackless intermediate representations of code, called JBir and A3Bir. The transformation algorithm, common to these representations, has been formalized and proved to be semantics-preserving.

See also the web page
http://

Version: 1.2

Programming language: Ocaml

Timbuk is a library of OCaml functions for manipulating tree automata. More precisely, Timbuk deals with finite bottom-up tree automata (deterministic or not). This library provides the classical operations over tree automata (intersection, union, complement, emptiness decision) as well as the computation of exact or approximated sets of terms reachable by a given term rewriting system. This last operation can be certified using a checker extracted from a Coq specification.

Version: 3.1

Programming language: Ocaml

Control-flow analysis (CFA) of functional programs is concerned with determining how the program's functions call each other. In the case of the lambda calculus, this amounts to computing the flow of lambda expressions in order to determine which functions are effectively called at an application site.

The systematic derivation of a CFA for a higher-order functional language from a well-known operational semantics provides the resulting analysis with strong mathematical foundations. Its correctness follows directly from the general theorems of abstract interpretation.

The approach is easily adapted to different variants of the source language. We demonstrate this by deriving a CFA for functional programs written in continuation-passing style.

The common framework of these analyses enables their comparison. We take advantage of this to settle a question about the equivalence between the analysis of programs in direct and continuation-passing style.

The resulting equations can be given an equivalent constraint-based presentation, providing *ipso facto* a rational reconstruction and a correctness proof of constraint-based CFA.

This work was presented at the Japanese Shonan workshop on Verification of higher-order functional programs in September 2011. A journal article is accepted to appear in Information and Computation.

Satisfiability Modulo Theory (SMT) solvers are efficient automatic provers for combinations of theories. These solvers have proved very successful in program verification because they discharge challenging verification conditions automatically and efficiently. SMT solvers are therefore *de facto* part of the Trusted Computing Base of many program verification methodologies. A consequence is that a soundness bug in an SMT solver can make the whole program verification process unsound.

To tackle this problem, we propose a new methodology for exchanging unsatisfiability proofs between an untrusted SMT solver and a sceptical proof assistant with computation capabilities like Coq. We advocate modular SMT proofs that separate boolean reasoning from theory reasoning, and that structure the communication between theories using the Nelson-Oppen combination scheme.

We present the design and implementation of a Coq reflexive verifier that is modular and allows for fine-tuned theory-specific verifiers. The current verifier is able to verify proofs for quantifier-free formulae mixing linear arithmetic and uninterpreted functions. Our proof generation scheme benefits from the efficiency of state-of-the-art SMT solvers while being independent from a specific SMT solver proof format. Our only requirement for the SMT solver is the ability to extract unsat cores and generate boolean models. In practice, unsat cores are relatively small and their proof is obtained with a modest overhead by our proof-producing prover. We present experiments assessing the feasibility of the approach for benchmarks obtained from the SMT competition.

This work has been presented at the CPP conference and at the international PxTP workshop.

Exchanging mutable data objects with untrusted code is a delicate matter because of the risk of creating a data space that is accessible by an attacker. Consequently, secure programming guidelines for Java stress the importance of using defensive copying before accepting or handing out references to an internal mutable object.

However, implementation of a copy method (like clone()) is entirely left to the programmer. It may not provide a sufficiently deep copy of an object and is subject to overriding by a malicious sub-class. Currently no language-based mechanism supports secure object cloning.

We propose a type-based annotation system for defining modular copy policies for class-based object-oriented programs. A copy policy specifies the maximally allowed sharing between an object and its clone. We provide a static enforcement mechanism that will guarantee that all classes fulfill their copy policy, even in the presence of overriding of copy methods, and establish the semantic correctness of the overall approach in Coq.
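The risk that copy policies guard against exists in any language with reference semantics. The hypothetical Python classes below (our own illustration, not the Java mechanism of the paper) show how a shallow copy leaks an internal mutable object, while a policy-respecting deep copy does not:

```python
import copy

# A class holding internal mutable state; handing out a shallow copy leaks
# a reference to that state, which is the vulnerability copy policies target.
class Account:
    def __init__(self, history):
        self._history = history          # internal mutable object

    def shallow_clone(self):
        return Account(self._history)    # BAD: the clone shares the list

    def deep_clone(self):
        # Policy "no sharing between an object and its clone", enforced here
        # by hand; the cited work checks such policies statically.
        return Account(copy.deepcopy(self._history))

a = Account([100])
leaked = a.shallow_clone()
leaked._history.append(-999)             # attacker mutates through the clone
assert a._history == [100, -999]         # internal state was corrupted

b = Account([100])
safe = b.deep_clone()
safe._history.append(-999)
assert b._history == [100]               # original state is unaffected
```

The static enforcement described above removes the need to trust that every override of the copy method performs the deep copy its policy promises.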

The mechanism has been implemented and experimentally evaluated on clone methods from several Java libraries. The work has been presented at ESOP this year and is under review for a journal special issue.

Nowadays, constraint programs are written in high-level modelling languages. Their verification is currently based on trace analysis techniques and does not integrate systematic testing techniques. In this work, we developed a testing framework that captures the peculiarities of constraint program development, through the notions of conformity relations, fault localization and correction.

Within the context of Nadjib Lazaar's PhD (defended on 5 Dec. 2011), we explored in 2011 the testing of constraint programs written in OPL and the development of trace-based fault localization and correction techniques. Lazaar's tool, called CPTEST, showed impressive experimental results on four hard problems of the CP community, leading to a publication (in progress) in the Constraints journal.

Programs including floating-point computations are known to be hard to test. Generating test inputs for such programs requires solving constraints over floating-point computations, which led us to develop specific constraint filtering techniques. In this work, we extended the Marre and Michel property, which exploits the internal floating-point representation to increase filtering capabilities, from the case of addition to that of multiplication and division. We came up with an optimized implementation of FPSE (our current FP constraint solver) that is able to deal with large C programs including (non-linear) floating-point computations. A first publication of this work has already appeared.
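Why constraints over floats cannot simply be solved over the reals can be seen from rounding alone. The Python snippet below shows a constraint system that is unsatisfiable over the reals yet satisfiable over IEEE-754 doubles, and a real-arithmetic identity that fails over doubles:

```python
# Rounding makes floating-point constraints behave unlike real arithmetic,
# which is why dedicated filtering techniques (as in FPSE) are needed.

# Over the reals, { x + 1.0 == 1.0, x != 0.0 } has no solution;
# over IEEE-754 doubles it does: a small x is absorbed by rounding.
x = 1e-17
assert x != 0.0 and x + 1.0 == 1.0

# Conversely, an identity that holds over the reals fails over doubles:
assert 0.1 + 0.2 != 0.3      # each literal is rounded to the nearest double
```

A solver that filters float constraints with real-valued reasoning would wrongly prune such solutions (or wrongly keep non-solutions), so the filtering must model the rounded operations themselves.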

The problem of automatically inferring polynomial (non-linear) invariants of programs is still a major challenge in program verification. We have proposed an abstract interpretation based method to compute polynomial invariants for imperative programs. Our analysis is a backward propagation approach that computes preconditions for polynomial equalities between program variables.
The DECERT project (2009–2011) is funded by the call Domaines Emergents 2008, a program of the Agence Nationale de la Recherche.

The objective of the DECERT project is to design an architecture for cooperating decision procedures, with a particular emphasis on fragments of arithmetic, including bounded and unbounded arithmetic over the integers and the reals, and on their combination with other theories for data structures such as lists, arrays or sets. To ensure trust in the architecture, the decision procedures will either be proved correct inside a proof assistant or produce proof witnesses allowing external checkers to verify the validity of their answers.

This is a joint project with Systerel, CEA List and INRIA teams Mosel, Cassis, Marelle, Proval and Celtique (coordinator).

The CAVERN project (ANR, 2007–2011) gathers national research teams to study the capabilities of Constraint Programming for Program Verification.
http://

The goal of the PiCoq project is to develop an environment for the formal verification of properties of distributed, component-based programs. The project's approach lies at the interface between two research areas: concurrency theory and proof assistants. Achieving this goal relies on three scientific advances, which the project intends to address:

Finding mathematical frameworks that ease modular reasoning about concurrent and distributed systems: due to their large size and complex interactions, distributed systems cannot be analysed in a global way. They have to be decomposed into modular components, whose individual behaviour can be understood.

Improving existing proof techniques for distributed/modular systems: while behavioural theories of first-order concurrent languages are well understood, this is not the case for higher-order ones. We also need to generalise well-known modular techniques that have been developed for first-order languages to facilitate formalisation in a proof assistant, where source code redundancies should be avoided.

Defining core calculi that both reflect concrete practice in distributed component programming and enjoy nice properties w.r.t. behavioural equivalences.

The project partners include INRIA, LIP, and Université de Savoie. The project runs from November 2010 to October 2014.

The ANR U3CAT project (2009–2012) builds upon the results of the RNTL CAT project, which delivered the Frama-C platform for the analysis of C programs and the ACSL assertion language. The U3CAT project focuses on providing a unified interface that allows several analyses to be performed on the same code, and on studying how these analyses can cooperate in order to prove properties that could not have been established by any single technique. The other members of the project are the CEA LIST laboratory (project leader), Proval (INRIA Futurs), Gallium (INRIA Paris-Rocquencourt), Cedric (CNAM), Atos Origin, CS, Dassault Aviation, Sagem Défense and Airbus Industries.

The ASCERT project (2009–2012) is funded by the *Fondation de Recherche pour l'Aéronautique et l'Espace*. It aims at studying the formal certification of static analysis, using and comparing various approaches such as certified programming of static analysers, checking of static analysis results, and deductive verification of analysis results. It is a joint project with the INRIA teams Abstraction, Gallium and POP-ART.

The CERTLOGS project (2009–2012) is funded by the CREATE action of the
*Région Bretagne*. The objective of this project is to develop new kinds of program certificates and innovative certificate verification techniques, using static analysis as the fundamental tool and combining it with techniques from probabilistic algorithms and cryptography.

COST Action IC0701 is a European scientific cooperation. The Action aims at developing verification technology with the power to ensure the dependability of object-oriented programs on an industrial scale. The Action involves 15 countries. It has been a forum for presenting our results on data race analysis and our proposal for an intermediate language into which Java byte code can be transformed in order to facilitate the static analysis of byte code programs.

This year, we built the VALVES (Variability Testing of Highly-Variable Systems) European proposal, gathering the University of Sevilla, the University of Namur, the University of Uppsala, Isotrol, Thales and INRIA Rennes (Arnaud Gotlieb being the coordinator of the proposal). The proposal was submitted to the FP7 programme (Call 7, challenge 3.3) and was well evaluated, but not highly enough to be funded this year. Following this, we obtained support from the Brittany Region to organise a physical meeting during Fall 2011 and prepare a new submission. This meeting was held at INRIA's Paris offices on 18 November 2011.

For the past three years, we have maintained a long-term collaboration with Yahia Lebbah, from the University of Oran, Algeria. This collaboration has been fruitful, leading to several publications, and is supported by the INRIA International programme DGRI. This funding allowed us to visit each other's groups in 2011, with a 1-month visit of N. Lazaar to the University of Oran and a 1-week visit of Y. Lebbah to INRIA Rennes in December 2011.

David Pichardie served on the program committees of JFLA 2011, BYTECODE 2011, PxTP 2011, PSATTT 2011 and FoVeOOS 2011. Arnaud Gotlieb served on the PCs of the international conferences QSIC'11 and TAP'11, and of the VAST'11 workshop. He co-organized the CSTVA'11 satellite workshop of the ICST'11 conference, held in Berlin in March. He will be the workshop chair of the ICST conference next year. A. Schmitt is a member of the steering committee of the Journées Françaises des Langages Applicatifs (JFLA). David Cachera served on the program committee of FOPARA 2011. Sandrine Blazy served on the program committee of ITP 2011. Thomas Jensen served on the program committees of FoVeOOS 2011 and FPS 2011.

Sandrine Blazy and Thomas Jensen organize a seminar devoted to security and formal methods. The seminar is funded by DGA-MI. It takes place at INRIA twice a month. It is open to the public and attended by researchers and engineers.

Licence :

Thomas Genet, Programmation fonctionnelle, 44h, L3, Rennes 1, France

David Cachera, Logique et calculabilité, 36h, L3, ENS Cachan Bretagne, France

David Cachera, Algorithmique avancée, 18h, L3, ENS Cachan Bretagne, France

David Cachera, Langages formels, 24h, L3, ENS Cachan Bretagne, France

Sandrine Blazy, Programmation fonctionnelle, 20h, L3, Rennes 1, France

Master :

Alan Schmitt, Méthodes Formelles pour le développement de logiciels sûrs, 36h, M1, Rennes 1, France

Frédéric Besson, Compilation, 30h, M1, Insa Rennes, France

Thomas Genet, Bases de la cryptographie, 18h, M2, Rennes 1, France

Thomas Genet, Analyse et conception objet, 48h, M1, Rennes 1, France

Thomas Genet, Validation et vérification formelle, 38h, M1, Rennes 1, France

David Cachera, Sémantique des langages de programmation, 36h, M1, Rennes 1, France

David Cachera, Préparation à l'agrégation, 60h, M2, ENS Cachan Bretagne, France

Sandrine Blazy, Méthodes Formelles pour le développement de logiciels sûrs, 48h, M1, Rennes 1, France

Sandrine Blazy, Conception de logiciels sûrs, 40h, M2, Rennes 1, France.

Sandrine Blazy, Évaluation des vulnérabilités des logiciels, 21h, M2, Rennes 1, France.

Sandrine Blazy, Veille technologique, 19h, M2, Rennes 1, France

Thomas Jensen, Program Analysis and Semantics, 20h, M2, Rennes 1, France

Thomas Jensen, Software Security, 20h, M2, Rennes 1, France.

PhD & HdR :

HdR : Arnaud Gotlieb, Contributions to Constraint-Based Testing, Université de Rennes, 12 December 2011

PhD : Mickael Delahaye, Généralisation de chemins infaisables pour l'exécution symbolique dynamique, Université de Rennes, 26 October 2011, Arnaud Gotlieb and Thomas Jensen

PhD : Nadjib Lazaar, Méthodologie et outil de test, de localisation de fautes et de correction automatique des programmes contraintes, Université de Rennes, 5 December 2011, Arnaud Gotlieb and Thomas Jensen

PhD in progress: Valérie Murat, Automatic verification of infinite state systems using tree automata completion, October 2010, Thomas Genet and Axel Legay

PhD in progress: Yann Salmon, Optimized rewriting proof search using approximations and tree automata, October 2011, Thomas Genet

PhD in progress: Arnaud Jobin, Dioïdes et idéaux de polynômes en analyse statique, ENS Cachan, September 2008, defence expected in January 2012, David Cachera and Thomas Jensen

PhD in progress: Andre Oliveira Maroneze, Compilation vérifiée et calcul de temps d'exécution au pire cas, September 2010, Sandrine Blazy and Isabelle Puaut

PhD in progress: Stéphanie Riaud, Transformations de programmes pertinentes pour la sécurité du logiciel, September 2011, Sandrine Blazy

PhD in progress: Pierre-Emmanuel Cornilleau, PCC certificates for static analysis, October 2009, Thomas Jensen and Frédéric Besson

PhD in progress: Zhoulai Fu, Abstract interpretation and memory analysis, October 2009, Thomas Jensen and David Pichardie

PhD in progress: Delphine Demange, Certified Intermediate Representations, October 2009, Thomas Jensen and David Pichardie.