Team VerTeCs

Section: Scientific Foundations

Automatic Test Generation

In testing, we are mainly interested in conformance testing. Conformance testing consists in checking whether a black-box implementation under test (the real system, known only through its interface) behaves correctly with respect to its specification (the reference that specifies the intended behavior of the system). In line with model-based testing, we use formal specifications and their underlying models to unambiguously define the intended behavior of the system, to formally define conformance, and to design test case generation algorithms. The difficult problems are to generate test cases that correctly identify faults (the oracle problem) and, as exhaustiveness is impossible to reach in practice, to select an adequate subset of test cases that are likely to detect faults. Hereafter we detail some elements of the models, theories, and algorithms we use.

Models: We use IOLTSs (or IOSTSs) as formal models for specifications, implementations, test purposes, and test cases. Most often, specifications are not directly given in such low-level models, but are written in higher-level specification languages (e.g. SDL, UML, Lotos). The tools associated with these languages often contain a simulation API that implements their semantics in the form of IOLTSs. On the other hand, the IOSTS model is expressive enough to allow a direct representation of most constructs of these higher-level languages.
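To fix intuitions, an IOLTS can be sketched as a small Python structure. The dictionary encoding, the `step` helper, and the coffee-machine example are illustrative assumptions for this sketch, not the actual data structures of our tools:

```python
# A minimal sketch of an IOLTS (Input/Output Labelled Transition System).
# All names here are hypothetical, chosen only for illustration.
from dataclasses import dataclass, field

@dataclass
class IOLTS:
    states: set
    init: str
    inputs: set          # actions controlled by the environment (suffix "?")
    outputs: set         # actions controlled by the system (suffix "!")
    # transitions: state -> list of (action, next_state) pairs
    trans: dict = field(default_factory=dict)

    def step(self, state, action):
        """States reachable from `state` by one `action` transition."""
        return {s2 for (a, s2) in self.trans.get(state, []) if a == action}

# A toy coffee-machine specification: accept a coin, then deliver coffee.
spec = IOLTS(
    states={"q0", "q1"},
    init="q0",
    inputs={"coin?"},
    outputs={"coffee!"},
    trans={"q0": [("coin?", "q1")], "q1": [("coffee!", "q0")]},
)
print(spec.step("q0", "coin?"))   # {'q1'}
```

An IOSTS would additionally attach guards and assignments over data variables to the transitions; the symbolic techniques discussed below operate on that richer model.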

Conformance testing theory: We adapt a well-established theory of conformance testing [45], which formally defines conformance as a relation between formal models of specifications and implementations. This conformance relation, called ioco, is defined in terms of the visible behaviors (called suspension traces) of the implementation I (denoted STraces(I)) and those of the specification S (denoted STraces(S)). Suspension traces are sequences of inputs, outputs, and quiescence (the absence of action, denoted $\delta$); they thus abstract away internal behaviors that cannot be observed by testers. The conformance relation ioco was originally defined in [45] as follows:

$I\ ioco\ S \;\triangleq\; \forall \sigma \in STraces(S),\ Out(I\ after\ \sigma) \subseteq Out(S\ after\ \sigma)$

where $M\ after\ \sigma$ is the set of states where M can stay after the observation of the suspension trace $\sigma$, and $Out(M\ after\ \sigma)$ is the set of outputs and quiescence allowed by M in this set. Intuitively, $I\ ioco\ S$ if, after a suspension trace of the specification, the implementation I can only exhibit outputs and quiescences of the specification S. We reformulated ioco as a partial inclusion of visible behaviors as follows:
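The definitions of after, Out, and the ioco condition can be illustrated on a toy deterministic example. The dictionary encoding, the "delta" convention for quiescence, and the coffee/tea machine below are assumptions made for this sketch, not code from our tools:

```python
# Toy illustration of Out(M after sigma) on dictionary-encoded LTSs.
# Convention (assumed here): outputs end with "!", inputs with "?",
# and the token "delta" stands for quiescence.
DELTA = "delta"

def after(trans, init, sigma):
    """Set of states reachable after observing the suspension trace sigma."""
    current = {init}
    for act in sigma:
        nxt = set()
        for s in current:
            if act == DELTA:
                # delta is observed only in quiescent states (no output enabled)
                if not any(a.endswith("!") for (a, _) in trans.get(s, [])):
                    nxt.add(s)
            else:
                nxt |= {s2 for (a, s2) in trans.get(s, []) if a == act}
        current = nxt
    return current

def out(trans, states):
    """Outputs (and quiescence) enabled in a set of states."""
    result = set()
    for s in states:
        enabled = [a for (a, _) in trans.get(s, []) if a.endswith("!")]
        result |= set(enabled) if enabled else {DELTA}
    return result

spec = {"q0": [("coin?", "q1")], "q1": [("coffee!", "q0")]}
# A faulty implementation that may also deliver tea after a coin:
impl = {"q0": [("coin?", "q1")], "q1": [("coffee!", "q0"), ("tea!", "q0")]}

sigma = ["coin?"]
print(sorted(out(impl, after(impl, "q0", sigma))))  # ['coffee!', 'tea!']
print(sorted(out(spec, after(spec, "q0", sigma))))  # ['coffee!']
# ioco is violated: 'tea!' is not allowed by the specification after coin?.
```

The check runs over suspension traces of the specification only; this is what makes ioco a partial, rather than full, inclusion of visible behaviors.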

$I\ ioco\ S \iff STraces(I)\ \cap\ [STraces(S) \cdot \Lambda_!^{\delta} \setminus STraces(S)] = \emptyset$

Intuitively, this says that suspension traces of I which are suspension traces of S prolonged by an output or quiescence should still be suspension traces of S. Interestingly, this characterization presents conformance with respect to S as a safety property of the suspension traces of I. In fact $STraces(S) \cdot \Lambda_!^{\delta} \setminus STraces(S)$, where $\Lambda_!^{\delta}$ denotes the set of outputs and quiescence, characterizes the finite unexpected behaviors. Thus conformance with respect to S is clearly a safety property of I, whose negation can be specified by a ``non-conformance'' observer $A_{\lnot ioco\, S}$ built from S and recognizing these unexpected behaviors. However, as I is a black box, one cannot check conformance exhaustively, but may only experiment on I using test cases, expecting the detection of some non-conformances. In fact, the non-conformance observer $A_{\lnot ioco\, S}$ can also be thought of as the canonical tester of S for ioco, i.e. the most general testing process of S for ioco. It thus also serves as a basis for test selection.

Test cases are processes executed against implementations in order to detect non-conformance. They are also formalized by IOLTSs (or IOSTSs) with special states indicating verdicts. The execution of test cases against implementations is formalized by a parallel composition with synchronization on common actions. Usually, a Fail verdict means that the IUT is rejected and should correspond to non-conformance, a Pass verdict means that the IUT exhibited a correct behavior and some specific targeted behavior has been observed, while an Inconclusive verdict is given to a correct behavior that is not targeted. Based on these models, the execution semantics, and the conformance relation, one can then define required properties of test cases and test suites (sets of test cases). Typical properties are soundness (only non-conformant implementations should be rejected by a test case) and exhaustiveness (every non-conformant implementation may be rejected by a test case). Soundness is not difficult to obtain, but exhaustiveness is not attainable in practice, and one has to select test cases.
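The execution of a test case against an implementation can be sketched as a synchronized walk ending in a verdict. This is a deliberately simplified model: input/output directions and quiescence are abstracted away, and all names are illustrative:

```python
# Sketch of test execution as a synchronized product, not TGV's engine.
# The implementation proposes actions; the test case must follow them.
import random

def execute(test_trans, verdicts, impl_trans, t, i, rng):
    """Run the test case against the implementation until a verdict state."""
    while t not in verdicts:
        # the implementation chooses one of its enabled actions
        # (who controls inputs vs outputs is abstracted away in this sketch)
        action, i = rng.choice(impl_trans[i])
        # the test case synchronizes; an unspecified action yields Fail
        moves = [t2 for (a, t2) in test_trans.get(t, []) if a == action]
        if not moves:
            return "Fail"
        t = moves[0]
    return verdicts[t]

# Test case: coin? then coffee! leads to the Pass verdict state.
test = {"t0": [("coin?", "t1")], "t1": [("coffee!", "t_pass")]}
verdicts = {"t_pass": "Pass"}
good = {"i0": [("coin?", "i1")], "i1": [("coffee!", "i0")]}
bad = {"i0": [("coin?", "i1")], "i1": [("tea!", "i0")]}

print(execute(test, verdicts, good, "t0", "i0", random.Random(0)))  # Pass
print(execute(test, verdicts, bad, "t0", "i0", random.Random(0)))   # Fail
```

An Inconclusive verdict would be modeled the same way, as an extra verdict state reached by correct but untargeted behaviors.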

Test selection: In the literature, in particular in white-box testing, test selection is often based on the coverage of some criteria (state coverage, transition coverage, etc.). But in practice, test cases are often associated with test purposes describing particular behaviors targeted by a test case. We have developed test selection algorithms based on the formalization of these test purposes. In our framework, test purposes are specified as IOLTSs (or IOSTSs) with marked states or dedicated variables, giving them the status of automata or observers accepting runs (or sequences of actions, or suspension traces). We denote by ASTraces(S, TP) the suspension traces of these accepted runs. Selection of test cases then amounts to selecting these traces ASTraces(S, TP) and completing them with unspecified outputs leading to Fail. Alternatively, this can be seen as the computation of a sub-automaton of the canonical tester $A_{\lnot ioco\, S}$ whose accepted traces are ASTraces(S, TP) and whose failed traces are a subset of $STraces(S) \cdot \Lambda_!^{\delta} \setminus STraces(S)$. The resulting test case is then both an observer of the negation of a safety property (non-conformance with respect to S) and an observer of a reachability property (acceptance by the test purpose).
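The product of a specification with a test-purpose observer can be sketched as follows. This is a simplification: the test purpose here implicitly loops on unmatched actions (standing in for explicit "otherwise" self-loops), and all names are illustrative:

```python
# Sketch of the synchronous product specification x test purpose.
# States are pairs; a pair is "accepting" when the test-purpose
# component has reached its marked state (here named "accept").
def product(spec_trans, tp_trans, s0, t0):
    trans, todo, seen = {}, [(s0, t0)], {(s0, t0)}
    while todo:
        (s, t) = todo.pop()
        edges = []
        for (a, s2) in spec_trans.get(s, []):
            # the test purpose follows matching actions, otherwise stays put
            nexts = [t2 for (b, t2) in tp_trans.get(t, []) if b == a] or [t]
            for t2 in nexts:
                edges.append((a, (s2, t2)))
                if (s2, t2) not in seen:
                    seen.add((s2, t2))
                    todo.append((s2, t2))
        trans[(s, t)] = edges
    return trans

spec = {"q0": [("coin?", "q1")], "q1": [("coffee!", "q0")]}
# Test purpose: observe at least one coffee! delivery.
tp = {"w": [("coffee!", "accept")]}

prod = product(spec, tp, "q0", "w")
print(len(prod))                      # 4 product states
print(("q0", "accept") in prod)       # True: the target is reachable
```

The accepted traces of this product play the role of ASTraces(S, TP) in the selection step described above.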

Test selection algorithms are based on the computation of the visible behaviors of the specification STraces(S), involving the identification of quiescence ($\delta$ actions) followed by determinisation, the construction of a product between the specification and the test purpose whose accepted behavior is ASTraces(S, TP), and finally the selection of these accepted behaviors. Selection reduces to a model-checking problem where one wants to identify the states (and transitions between them) that are both reachable from the initial state and co-reachable from the accepting states. We have proved that these algorithms ensure soundness. Moreover, the (infinite) set of all possibly generated test cases is also exhaustive. Apart from these theoretical results, our algorithms are designed to be as efficient as possible in order to scale up to real applications.
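The reachability / co-reachability computation underlying selection can be sketched as two fixpoint searches on an explicit graph (illustrative state names; not the on-the-fly algorithm of TGV):

```python
# Selection as reachable-and-co-reachable state identification.
def reachable(trans, init):
    """Forward fixpoint: all states reachable from the initial state."""
    seen, stack = {init}, [init]
    while stack:
        s = stack.pop()
        for (_, s2) in trans.get(s, []):
            if s2 not in seen:
                seen.add(s2)
                stack.append(s2)
    return seen

def coreachable(trans, targets):
    """Backward fixpoint: all states co-reachable from the accepting states."""
    rev = {}
    for s, edges in trans.items():
        for (a, s2) in edges:
            rev.setdefault(s2, []).append((a, s))
    seen, stack = set(targets), list(targets)
    while stack:
        s = stack.pop()
        for (_, s2) in rev.get(s, []):
            if s2 not in seen:
                seen.add(s2)
                stack.append(s2)
    return seen

# q3 is reachable but cannot lead to the accepting state q2:
trans = {"q0": [("a", "q1"), ("b", "q3")], "q1": [("c", "q2")]}
keep = reachable(trans, "q0") & coreachable(trans, {"q2"})
print(sorted(keep))   # ['q0', 'q1', 'q2']
```

States outside `keep` (here q3) are pruned from the test case, or redirected to an Inconclusive verdict, since no continuation from them can satisfy the test purpose.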

Our first test generation algorithms are based on enumerative techniques, and are thus adapted to IOLTS models and optimized to fight the state-space explosion problem. We have developed on-the-fly algorithms, which perform a lazy exploration of the set of states that are reachable in both the specification and the test purpose [4]. This technique is implemented in the TGV tool (see 5.1). However, enumerative techniques suffer from limitations when specification models contain data.

More recently, we have explored symbolic test generation techniques for IOSTS specifications [8]. This is a promising technique whose main objective is to avoid the state-space explosion induced by the enumeration of values of variables and communication parameters. The idea consists in computing a test case in the form of an IOSTS, i.e., a reactive program in which the operations on data are kept in symbolic form. Test selection is still based on test purposes (also described as IOSTSs) and involves syntactical transformations of IOSTS models that must ensure properties of their IOLTS semantics. However, most of the operations involved in test generation (determinisation, reachability, and co-reachability) become undecidable. For determinisation, we employ heuristics that solve the so-called bounded observable non-determinism (i.e., the case where the result of an internal choice can be detected after finitely many observable actions). The product is defined syntactically. Finally, test selection is performed as a syntactical transformation of transitions, based on a semantic reachability and co-reachability analysis. As both problems are undecidable for IOSTSs, the syntactical transformations are guided by over-approximations computed with abstract interpretation techniques. Nevertheless, these over-approximations still ensure the soundness of test cases [5]. These techniques are implemented in the STG tool (see 5.3), with an interface with NBAC used for abstract interpretation.
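The role of over-approximation can be illustrated with a toy interval analysis of a single guarded loop, x := x + 2 under the guard x < 10. This is only a stand-in for what an abstract-interpretation tool such as NBAC computes; the interval domain, the widening policy, and all names are assumptions of this sketch:

```python
# Toy interval analysis of the loop: while x < 10: x := x + 2, from x = 0.
import math

def post(interval, guard_hi, incr):
    """Abstract post over intervals: intersect with the guard x < guard_hi
    (x assumed integer-valued), then apply the assignment x := x + incr."""
    lo, hi = interval
    lo_g, hi_g = lo, min(hi, guard_hi - 1)
    if lo_g > hi_g:
        return None  # guard unsatisfiable: the loop cannot fire
    return (lo_g + incr, hi_g + incr)

def reach(init, guard_hi, incr, widen_after=3):
    """Interval fixpoint for the reachable values of x, with widening."""
    cur, n = init, 0
    while True:
        nxt = post(cur, guard_hi, incr)
        if nxt is None:
            return cur
        joined = (min(cur[0], nxt[0]), max(cur[1], nxt[1]))
        n += 1
        if n > widen_after:
            joined = (joined[0], math.inf)  # widening: drop the upper bound
        if joined == cur:
            return cur
        cur = joined

print(reach((0, 0), guard_hi=10, incr=2, widen_after=100))  # (0, 11)
print(reach((0, 0), guard_hi=10, incr=2))                   # (0, inf)
```

Both results are sound over-approximations of the exact reachable set {0, 2, ..., 10}; the widened one is coarser but reached in fewer iterations, which is the price paid for guaranteed termination in general. Soundness of the over-approximation is what preserves the soundness of the selected test cases.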

