Team Pop Art


Section: New Results

Static Analysis and Abstract Interpretation

Participants: Alain Girault, Bertrand Jeannet [contact person], Lies Lakhdar-Chaouch, Peter Schrammel, Pascal Sotin.

Combining Control and Data Abstraction for the Verification of Hybrid Systems

We have studied the verification of hybrid systems built as the composition of a discrete software controller interacting with a physical environment exhibiting a continuous behavior. Our goal is to tackle the problem of the combinatorial explosion of discrete states that may happen when a complex software controller is considered. We propose to extend an existing abstract interpretation technique, namely dynamic partitioning, to hybrid systems. Dynamic partitioning, which shares some common principles with predicate abstraction, allows us to finely tune the tradeoff between precision and efficiency in the analysis.

We have extended the NBac tool (Section 5.1) according to these principles, and demonstrated the efficiency of the approach on a case study that combines a non-trivial controller, specified in the synchronous dataflow programming language Lustre, with its physical environment [36], [9].

Extending Abstract Acceleration Methods to Data-Flow Programs with Numerical Inputs

Acceleration methods are commonly used for computing precisely the effect of loops in the reachability analysis of counter-machine models. Applying these methods to synchronous data-flow programs with Boolean and numerical variables, e.g., Lustre programs, first requires the enumeration of the Boolean states in order to obtain a control graph with numerical variables only. Second, acceleration methods have to deal with the non-determinism introduced by numerical input variables.

In [23] we addressed the latter problem by extending the concept of abstract acceleration of Gonnord et al. [61], [60] to numerical input variables. This extension raises some subtle points. We show how to accelerate loops composed of a translation with resets and inputs, provided that the guard of the loop constrains state and input variables separately, and we evaluate the gain in precision obtained with this method compared to the more traditional approach based on widening. A journal version has been submitted to a special issue of the Journal of Symbolic Computation focusing on invariant generation and advanced techniques for reasoning about loops.
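The precision gain over widening can be illustrated on a toy interval analysis. The sketch below is not the paper's algorithm or API: it hand-codes the accelerated post-fixpoint of a single translation loop `while x <= 100: x += i` with a numerical input `i` in `[1, 2]`, and compares it to a plain Kleene iteration with interval widening (and no narrowing).

```python
# Toy comparison of widening vs. abstract acceleration on intervals,
# for the loop: while x <= 100: x += i, with input i in [1, 2].
# The interval domain and both functions are illustrative only.

INF = float("inf")

def accelerate(x0, guard_max, i_min, i_max):
    """Abstract acceleration of a translation loop with an input:
    the body runs only while x <= guard_max and each iteration adds
    some i in [i_min, i_max], so x stays in [x0_min, guard_max + i_max]."""
    lo, hi = x0
    return (lo, guard_max + i_max)  # precise invariant, computed in one step

def widen(old, new):
    """Standard interval widening: unstable bounds jump to infinity."""
    lo = old[0] if old[0] <= new[0] else -INF
    hi = old[1] if old[1] >= new[1] else INF
    return (lo, hi)

def analyze_with_widening(x0, guard_max, i_min, i_max):
    """Kleene iteration with widening at the loop head."""
    inv = x0
    while True:
        # one loop iteration: meet with the guard, then translate
        body = (inv[0], min(inv[1], guard_max))
        post = (body[0] + i_min, body[1] + i_max)
        new = (min(inv[0], post[0]), max(inv[1], post[1]))  # join with entry
        if new == inv:
            return inv
        inv = widen(inv, new)

print(analyze_with_widening((0, 0), 100, 1, 2))  # (0, inf): widening overshoots
print(accelerate((0, 0), 100, 1, 2))             # (0, 102): acceleration is exact
```

Without a narrowing pass, widening loses the upper bound entirely, whereas the accelerated transfer function yields the exact reachable interval [0, 102] directly.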

More recently, we have worked on the first point. Our goal is to apply acceleration techniques to data-flow programs without resorting to an exhaustive enumeration of Boolean states. To this end, we are studying (1) methods for applying abstract acceleration to general control-flow graphs, and (2) heuristics for controlled partitioning, i.e., partially unfolding the control structure in order to gain precision on numerical variables during the analysis, while treating Boolean states symbolically as much as possible.

A Relational Approach to Interprocedural Shape Analysis

This work addresses the verification of properties of imperative programs with recursive procedure calls, heap-allocated storage, and destructive updating of pointer-valued fields, i.e., interprocedural shape analysis. It shows how some previously known approaches to interprocedural dataflow analysis, which had so far been applied only in much less rich settings, can be extended to programs that use heap-allocated storage and perform destructive updating.

Our submission to ACM TOPLAS has been published this year [12]. This work was done in collaboration with T. Reps (Univ. of Wisconsin-Madison), M. Sagiv (Univ. of Tel Aviv), and A. Loginov (GrammaTech).

Concrete Memory Models for Shape Analysis

The purpose of shape analysis is to infer properties of the runtime structure of the memory heap. Like most static analyses, shape analyses perform approximations. One must thus distinguish between the concrete memory model that a shape analysis tackles and the abstract memory model/representation used by the analysis to express properties. For instance, in [83] and in [12] the concrete memory model is an unbounded 2-valued logical structure, while the abstract memory representation is a bounded 3-valued logical structure. Other analyses instead describe concrete (and abstract) memory models with separation-logic formulas [40].

These concrete models actually abstract some properties themselves, as they do not completely model the physical memory of a computer. For instance, physical numerical addresses may be ignored, as is the case in [83], which therefore cannot define the semantics of C pointer arithmetic.
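The distinction can be made concrete with two toy memory models, both invented for this illustration: a "logical" model in which locations are opaque nodes supporting only field dereference, and a "physical" model in which locations are numerical addresses, so that C-style pointer arithmetic such as `*(p + 1)` is expressible.

```python
# Two toy concrete memory models. In the logical model, locations are
# opaque nodes and only field dereference is defined; in the physical
# model, locations are numerical addresses, so pointer arithmetic has
# a meaning. Both classes are invented for the illustration.

class LogicalHeap:
    def __init__(self):
        self.cells = {}               # opaque node -> {field: node}
    def deref(self, node, field):
        return self.cells[node][field]
    # no notion of "node + 1": nodes are not numbers in this model

class PhysicalHeap:
    def __init__(self):
        self.mem = {}                 # numerical address -> value
    def deref(self, addr):
        return self.mem[addr]
    def add(self, addr, offset):      # pointer arithmetic is addition
        return addr + offset

phys = PhysicalHeap()
phys.mem = {100: "a", 101: "b"}
p = 100
print(phys.deref(phys.add(p, 1)))     # "b": *(p + 1) is well-defined
```

A shape analysis whose concrete model is the logical one simply has no semantics to give to the last line, which is the situation described above for [83].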

In [25] we propose a classification of various concrete memory models and clarify the equivalences and differences between them. In particular, we discuss to what extent the semantics of the C language can be encoded within these models, since C-like programming languages are the most expressive ones in terms of pointer manipulation.

Relational Interprocedural Analysis of Concurrent Programs

We have studied the extension of the relational approach to interprocedural analysis of sequential programs to concurrent programs, composed of a fixed number of threads [73] .

In the relational approach, a sequential program is analyzed by computing summaries of procedures and by propagating reachability information using these summaries. We propose an extension to concurrent programs, technically based on an instrumentation of the standard operational semantics, followed by an abstraction of tuples of call-stacks into sets. This approach allows us to extend relational interprocedural analysis to concurrent programs. We have implemented it for programs with scalar variables in the ConcurInterproc online analyzer (see § 5.5.3).
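The summary-based scheme underlying the relational approach can be sketched on a finite domain. The procedure, domain, and function names below are made up for the example: a procedure is summarized once as an input/output relation, and callers propagate their reachable values through the summary instead of re-analyzing the body at each call site.

```python
# Toy illustration of the relational approach on a finite domain:
# a procedure is summarized as an input/output relation, and callers
# apply the summary to their set of reachable input values.
# The domain and the procedure are invented for the example.

DOMAIN = range(8)  # values are integers mod 8 to keep the relation finite

def analyze_procedure(body):
    """Compute the summary of `body` as a relation {(v_in, v_out)}."""
    return {(v, body(v)) for v in DOMAIN}

def apply_summary(summary, reachable):
    """Propagate a set of reachable input values through a summary."""
    return {out for (inp, out) in summary if inp in reachable}

double = analyze_procedure(lambda v: (2 * v) % 8)   # summarized once
reachable_at_call = {1, 2, 3}
print(sorted(apply_summary(double, reachable_at_call)))  # [2, 4, 6]
```

In the concurrent extension described above, the reachability information propagated this way is additionally abstracted from tuples of call-stacks (one per thread) into sets.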

We have experimented with several classical synchronization protocols, both to investigate the precision of our technique and to analyze the approximations it performs.

This year a journal version has been submitted to SoSyM (Software and Systems Modeling) and is currently under revision. The journal version improves on the conference version with better notation and a generalization to backward analysis.

We also worked on new techniques for applying the widening extrapolation operator in the context of concurrent programs. This is the topic of the PhD thesis of Lies Lakhdar-Chaouch, co-advised by Bertrand Jeannet and Alain Girault, and funded by OpenTLM. A conference paper is in preparation.

Precise Interprocedural Analysis in the Presence of Pointers to the Stack

In a language with procedure calls and pointers as parameters, an instruction can modify memory locations anywhere in the call-stack. Such side effects break most generic interprocedural analysis methods (such as those described in Sections 6.4.3 and 6.4.5), which assume that only the top of the stack may be modified.

We present a method that addresses this issue, based on the definition of an equivalent local semantics in which writing through pointers has only a local effect on the stack. The idea of this local semantics, inspired by [35], is that a procedure works on local copies (called external locations) of the locations that it can reach through its pointer parameters. When the procedure returns to its caller, the side effects performed on these copies are propagated back to the corresponding locations in the caller, which may themselves be local or external w.r.t. their own caller.
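The copy-in/copy-out discipline of this local semantics can be sketched as follows. The memory representation (a flat dictionary of locations) and all names are illustrative, not the formalization used in the work:

```python
# Minimal sketch of the local semantics: a procedure receives private
# copies (external locations) of the cells its pointer parameters
# reach; side effects on the copies are propagated back on return.
# The memory model and names are invented for the illustration.

def call_with_copy_in_out(memory, proc, pointer_params):
    # copy-in: snapshot the cells reachable from the pointer parameters
    local = {loc: memory[loc] for loc in pointer_params}
    proc(local)                 # the body only ever touches its copies
    memory.update(local)        # copy-out: propagate side effects back

def increment_cell(env):
    env["p"] += 1               # destructive update through pointer p

caller_stack = {"p": 41, "q": 7}
call_with_copy_in_out(caller_stack, increment_cell, ["p"])
print(caller_stack)             # {'p': 42, 'q': 7}: only the reached cell changed
```

During the callee's execution, writes through pointers are thus purely local, which restores the assumption, needed by generic interprocedural methods, that only the top of the stack is modified.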

Our second contribution in this context is an adequate representation of summary functions that model the effect of a procedure, not only on the values of its scalar and pointer variables, but also on the values contained in the memory locations they point to. Our implementation in the interprocedural analyzer PInterproc (see Section 5.5.3) results in a verification tool that infers relational properties on the values of Boolean, numerical, and pointer variables.

This year we submitted a paper to the ESOP'2011 conference, and it has been accepted [84].

Software Engineering of Abstract Interpretation Tools

The “right” way of writing and structuring compilers is well known. The situation is less clear for static analysis tools. It seems to us that a static analysis tool is ideally decomposed into three building blocks: (1) a front-end, which parses programs, generates semantic equations, and supervises the analysis process; (2) a fixpoint equation solver, which takes equations and solves them; and (3) an abstract domain, on which the equations are interpreted. The expected advantage of such a modular structure is the ability to share development efforts between analyzers for different languages by using common solvers and abstract domains. However, putting these ideal concepts into practice is not so easy, and some static analyzers merge, for instance, blocks (1) and (2).
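The three-block decomposition can be sketched in a few lines. The interfaces below are invented for the illustration; the real tools mentioned afterwards (Fixpoint, Apron, BddApron) differ in every detail, but the division of labor is the same: the solver knows nothing about the analyzed language or the domain beyond join and equality.

```python
# Sketch of the three building blocks: (1) a front-end producing
# semantic equations, (2) a generic fixpoint solver, (3) an abstract
# domain interpreting the equations. All interfaces are illustrative.

def solve(equations, domain, bottom):
    """(2) Generic Kleene fixpoint solver: iterates the equations
    until stabilization, using only the domain's join operator."""
    env = {v: bottom for v in equations}
    changed = True
    while changed:
        changed = False
        for var, rhs in equations.items():
            new = domain["join"](env[var], rhs(env))
            if new != env[var]:
                env[var] = new
                changed = True
    return env

# (3) A tiny abstract domain: finite sets of possible integer values.
powerset = {"join": lambda a, b: a | b}

# (1) The front-end's output for: x = 0; loop: x = (x + 2) mod 8
equations = {
    "head": lambda env: {0} | {(v + 2) % 8 for v in env["head"]},
}

print(sorted(solve(equations, powerset, set())["head"]))  # [0, 2, 4, 6]
```

Swapping the powerset domain for intervals, or the equations for those of another language, requires no change to `solve`, which is exactly the sharing argument made above.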

In [22], we describe how we instantiated these principles with three different static analyzers (addressing respectively imperative sequential programs (Interproc), imperative concurrent programs (ConcurInterproc), and synchronous dataflow programs (NBac)), a generic fixpoint solver (Fixpoint), and two different abstract domains (Apron and BddApron); see Sections 5.5.3, 5.1, and 5.4. We discuss our experience with the advantages and the limits of this approach compared to related work.

