Team Cassis


Section: New Results

Model-based Testing

Our research in Model-Based Testing (MBT) aims to extend the coverage of tests. Coverage here refers to two artefacts: the model and the scenario. Test generation relies on symbolic animation of models [51], performed by dedicated constraint or SMT solvers.

Automated Test Generation from Behavioral Models

Participants: Fabrice Bouquet, Pierre-Christophe Bué, Kalou Cabrera, Jérome Cantenot, Frédéric Dadeau, Stéphane Debricon, Elizabeta Fourneret, Adrien de Kermadec, Jonathan Lasalle.

We have introduced an original model-based testing approach that takes a UML behavioural view of the system under test and automatically generates test cases and executable test scripts according to model coverage criteria. We have extended this result to SysML specifications for validating embedded systems [26].

We are working on improving test generation in two directions:

The first direction is based on the preliminary computation of an abstraction of the model. We have experimented with two techniques for automatically computing a symbolic transition system that abstracts a behavioral model. First, we use a machine learning algorithm (à la Angluin) combined with model animation [35]. Second, we have experimented with a behavioral decomposition of the model operations to compute the abstract states, while the feasibility of the transitions is established using constraint solvers [34]. In both cases, the abstraction is used to produce test cases according to state/transition coverage criteria.
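
To illustrate the last step only (not the actual implementation in our tools), the following Python sketch builds abstract test sequences achieving all-transitions coverage over a toy symbolic transition system. The stack-like states and operations are invented for the example, and every listed transition is assumed to have already been proved feasible by a solver.

```python
from collections import deque

# Toy abstraction: states are abstract predicates, transitions are
# (source, operation, target) triples assumed feasible (in the actual
# approach, feasibility is established by a constraint solver).
TRANSITIONS = [
    ("Empty", "push", "NonEmpty"),
    ("NonEmpty", "push", "NonEmpty"),
    ("NonEmpty", "pop", "Empty"),
    ("NonEmpty", "pop", "NonEmpty"),
]

def all_transitions_tests(initial="Empty"):
    """Build one abstract test per transition: a shortest operation path
    from the initial state to the transition's source, then the transition."""
    # Shortest operation paths from the initial state (BFS).
    paths = {initial: []}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for src, op, tgt in TRANSITIONS:
            if src == state and tgt not in paths:
                paths[tgt] = paths[state] + [op]
                queue.append(tgt)
    # One abstract test sequence per transition to cover.
    return [paths[src] + [op] for src, op, tgt in TRANSITIONS if src in paths]

for test in all_transitions_tests():
    print(" -> ".join(test))
```

The abstract sequences would then be concretized by symbolic animation of the original model.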

The second direction exploits the evolution of requirements to classify test sequences and precisely target the parts of the system impacted by this evolution. We have proposed to define the life cycle of a test via three test classes: (i) Regression, used to validate that unimpacted parts of the system did not change, (ii) Evolution, used to validate that impacted parts of the system evolved correctly, and (iii) Stagnation, used to validate that impacted parts of the system did actually evolve. The associated algorithms are being implemented in a dedicated prototype to be used in the SecureChange European project.
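
The classification can be pictured by replaying each existing test on the two model versions. The sketch below is a minimal illustration under that assumption; `replay` and the toy "models" are hypothetical stand-ins for model animation, not parts of our prototype.

```python
def classify(test, old_model, new_model, replay):
    """Classify a test sequence with respect to a model evolution.
    `replay(model, test)` returns True if the test passes on the model."""
    passes_old = replay(old_model, test)
    passes_new = replay(new_model, test)
    if passes_old and passes_new:
        return "Regression"   # unimpacted behavior: must stay unchanged
    if passes_new:
        return "Evolution"    # adapted to the new behavior: must pass now
    if passes_old:
        return "Stagnation"   # old behavior: must now fail, proving evolution
    return "Failure"          # passes on neither version: a defect

# Toy demo: a "model" is just the set of test sequences it accepts.
replay = lambda model, test: tuple(test) in model
old = {("login", "read"), ("login", "write")}
new = {("login", "read"), ("login", "audit", "write")}
print(classify(("login", "read"), old, new, replay))            # Regression
print(classify(("login", "audit", "write"), old, new, replay))  # Evolution
print(classify(("login", "write"), old, new, replay))           # Stagnation
```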

Scenario-Based Verification and Validation

Participants: Fabrice Bouquet, Pierre-Christophe Bué, Kalou Cabrera, Frédéric Dadeau, Elizabeta Fourneret, Adrien de Kermadec.

Test scenarios are abstract test case specifications that aim at guiding the model animation in order to produce relevant test cases. Contrary to the approach of the previous section, this technique is not fully automated, since it requires the user to design the scenario in addition to the model.

In the context of the ANR TASCCC project, we are investigating the automation of test generation from Security Functional Requirements (SFR), as defined in the Common Criteria terminology. SFRs represent security functions that have to be assessed during the validation phase of security products (in the project, the Global Platform, an operating system for last-generation smart cards). To achieve this, we are working on the definition of security property description patterns to which a given set of SFRs can be related. These properties are used to automatically generate test scenarios, which in turn produce model-based test cases. Traceability, ensured all along the testing process, makes it possible to provide evidence of the coverage of the SFRs by the tests, as required by the Common Criteria to reach the highest Evaluation Assurance Levels.
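
As a rough illustration of the pattern idea (the actual pattern language of the project may differ), the sketch below unfolds a hypothetical "precedence" property into two abstract scenarios: a nominal one exercising the property, and a robustness one attempting to violate it. The smart-card command names are invented for the example.

```python
def precedence_scenarios(action, guard_op):
    """For a security property 'guard_op must occur before action',
    derive two abstract scenarios as sequences over operation names,
    where '*' stands for any operation sequence."""
    nominal = ["*", guard_op, "*", action]        # property satisfied
    robustness = [f"not({guard_op})*", action]    # property attacked
    return {"nominal": nominal, "robustness": robustness}

# Hypothetical smart-card commands used only for this illustration.
for kind, sc in precedence_scenarios("READ_RECORD", "VERIFY_PIN").items():
    print(f"{kind}: {' ; '.join(sc)}")
```

Each abstract scenario is then unfolded by model animation into concrete test cases, keeping the link back to the originating SFR for traceability.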

We have also experimented with the use of scenarios to compute an abstraction of a model [48], [33]. This abstraction can be used in two ways: to evaluate the coverage achieved by test sequences, and to compute test sequences themselves.
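
A minimal sketch of the first use, assuming the scenario-derived abstraction is given as a finite automaton over operation names (a deliberate simplification of the cited work; the session automaton below is invented):

```python
def coverage(automaton, initial, sequence):
    """Proportion of the abstraction's transitions exercised by a test
    sequence. `automaton` maps (state, operation) -> next state."""
    covered, state = set(), initial
    for op in sequence:
        if (state, op) not in automaton:
            break                       # the sequence leaves the abstraction
        covered.add((state, op))
        state = automaton[(state, op)]
    return len(covered) / len(automaton)

# Toy abstraction of a session: login, then actions, then logout.
AUT = {("S0", "login"): "S1", ("S1", "action"): "S1", ("S1", "logout"): "S0"}
print(coverage(AUT, "S0", ["login", "action", "logout"]))  # 1.0
```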

In the context of the SecureChange project, we also investigate the evolution of test scenarios. As the system evolves, the model evolves, and the associated test scenarios may have to evolve as well. We are currently extending test generation and the management of system evolutions to ensure that security is preserved.

Mutation-based Testing of Security Protocols

Participants: Frédéric Dadeau, Pierre-Cyrille Héam.

Verifying models of security protocols is an important issue. Nevertheless, such verification reasons about a model of the protocol and does not consider its concrete implementation. Even if its model is safe, a protocol may be incorrectly implemented, leading to security flaws when it is deployed. We have proposed a model-based approach for testing implementations of security protocols. This technique relies on mutations of an original protocol, proved to be correct, to inject realistic errors that may occur during implementation (e.g. re-use of existing keys, partial checking of received messages, incorrect formatting of sent messages, use of exponential/xor encryption, etc.). Mutations that lead to security flaws are used to build test cases, each defined as a sequence of messages representing the behavior of the intruder and leading to the leaking of a secret. We have applied our technique to protocols designed in HLPSL, and implemented a protocol mutation tool that performs the mutations. The mutants are then analyzed by the CL-Atse [76] back-end of the AVISPA toolset [57]. Experiments show the relevance of the proposed mutation operators and the efficiency of the CL-Atse tool in concluding on the vulnerability of a protocol and producing an attack trace that can be used as a test case for implementations.
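
To give the flavor of one such mutation operator (on a toy term representation, not on actual HLPSL syntax), the following sketch implements a "key re-use" mutation: it replaces an encryption key by one already used earlier in the protocol, mimicking an implementation that fails to renew keys. Each mutant would then be submitted to the model checker to search for an attack trace.

```python
# Toy protocol representation: each step is (sender, receiver, term),
# where an encrypted term is a tuple ("enc", key, payload). This is a
# simplified stand-in for HLPSL, not the input format of our tool.
PROTOCOL = [
    ("A", "B", ("enc", "Kab", ("nonce", "Na"))),
    ("B", "A", ("enc", "Kab_new", ("nonce", "Na"))),
]

def reuse_key_mutants(protocol):
    """Mutation operator 'key re-use': replace the key of an encrypted
    message by a key already used in an earlier step."""
    mutants = []
    for i, (snd, rcv, term) in enumerate(protocol):
        if term[0] == "enc":
            for j in range(i):
                prev = protocol[j][2]
                if prev[0] == "enc" and prev[1] != term[1]:
                    mutated = list(protocol)
                    mutated[i] = (snd, rcv, ("enc", prev[1], term[2]))
                    mutants.append(mutated)
    return mutants

for m in reuse_key_mutants(PROTOCOL):
    print(m)  # each mutant is then model-checked to look for an attack
```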

Model Validation

Participants: Pierre-Christophe Bué, Fabrice Bouquet, Frédéric Dadeau, Adrien de Kermadec.

In model-based testing, model design is a complex activity that falls to the test engineer. Model validation is mainly done by animation, to explore the model behavior and check that it corresponds to the informal requirements. We have proposed to define and assess the quality of B models in order to provide automated feedback through systematic checks of a model's content. We define and classify classes of automatic verification steps that help the modeller check whether the model is well-written. From a behavioral model, verification conditions are automatically computed and discharged using a dedicated tool. This technique has been adapted to B abstract machines, and is implemented within a tool interfaced with a constraint solver that is able to find counter-examples to invalid verification conditions [39]. In addition, we have designed an abstraction technique that makes it possible to extract, from a behavioral model, a graphical representation as a labeled transition system [34].
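
One classical check of this kind is invariant preservation by an operation. The sketch below phrases it on a deliberately ill-written toy machine and refutes it by enumeration, which stands in for the constraint solver; the machine and all names are invented for the example.

```python
# Toy B-like machine: one variable x, invariant 0 <= x <= 10, and an
# operation `inc` with precondition x < 10 and substitution x := x + 2.
# The substitution is deliberately buggy: from x = 9 it breaks the invariant.
INV = lambda x: 0 <= x <= 10
PRE = lambda x: x < 10
POST = lambda x: x + 2

def invariant_preservation_counterexamples(domain=range(-2, 13)):
    """Verification condition: INV(x) and PRE(x) must imply INV(POST(x)).
    Enumeration over a finite domain stands in for the constraint solver."""
    return [x for x in domain if INV(x) and PRE(x) and not INV(POST(x))]

print(invariant_preservation_counterexamples())  # [9]: the model is ill-written
```

A counter-example such as x = 9 is exactly the kind of automated feedback returned to the modeller.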

Combination of Static Analysis and Test Generation

Participant: Alain Giorgetti.

We participate in the design of original combinations of static analysis and structural program testing for debugging C programs. We have presented a prototype called SANTE (Static ANalysis and TEsting) [36]. It calls a static analysis tool (Frama-C), which generates alarms when it cannot ensure the absence of run-time errors. These alarms then guide a structural test generation tool (PathCrawler) that tries to confirm them by activating the corresponding bugs on test cases. Experiments on real-life software show that this combination can outperform each technique used independently.
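
The combination can be summarized as a two-phase pipeline: alarms produced by the static phase become goals for the dynamic phase. The sketch below is a toy illustration of that principle only; the two stand-in functions are invented and do not reflect the Frama-C or PathCrawler APIs.

```python
# Toy illustration of the SANTE principle: a static phase emits alarms
# (program locations where a run-time error cannot be excluded), and a
# dynamic phase tries to build inputs that activate each alarm.

def program(a, b):
    return a // b          # potential division by zero at this location

def static_phase():
    # A real analyzer would emit this alarm only when it cannot prove b != 0.
    return [{"location": "program:2", "condition": lambda a, b: b == 0}]

def dynamic_phase(alarms, input_space):
    """Try to confirm each alarm with a concrete failing test case."""
    confirmed = []
    for alarm in alarms:
        for a, b in input_space:
            if alarm["condition"](a, b):
                confirmed.append((alarm["location"], (a, b)))  # a real bug
                break
    return confirmed

print(dynamic_phase(static_phase(), [(4, 2), (1, 0)]))
# [('program:2', (1, 0))] -- the alarm is confirmed by a test case
```

Alarms confirmed by a test case are reported as bugs with a witness input, while the remaining ones stay classified as unconfirmed alarms.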

