Section: New Results
Model-based optimization and compilation techniques
Participants : Vincent Aranega, Abou El Hassan Benyamina, Pierre Boulet, Jean-Luc Dekeyser, Cédric Dumoulin, Anne Etien, Calin Glitia, Frédéric Guyomarc'h, Thomas Legrand, Emmanuel Leguy, Jean-Marie Mottu, Alexis Muller, Wendell Rodrigues, Vlad Rusu.
Our investigations aim to build a complex model transformation from smaller transformations that work jointly to produce a single output model. These transformations involve different parts of the same input metamodel (e.g. the MARTE metamodel); their application field is localized. We propose a new way to define transformations that focuses on the impacted concepts. The localization of a transformation is ensured by defining the intermediary metamodels as deltas. A delta metamodel contains only the few concepts involved in the transformation (i.e. modified or read), and the specification of the transformation uses only the concepts of these deltas. We define the Extend operator to build the complete metamodel from the delta and to transpose the corresponding transformations. The complete metamodel corresponds to the merge of the delta with the MARTE metamodel or with an intermediary metamodel. The transformation then becomes the chaining of metamodel shifts and the localized transformation.
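As a toy illustration of the Extend operator described above, the sketch below represents a metamodel as a set of concept names and merges a delta into a base metamodel; the structure and the concept names are ours, not the actual MARTE or Gaspard2 metamodels.

```c
#include <assert.h>
#include <string.h>

/* Toy view of a metamodel: just the set of its concept names. */
#define MAX_CONCEPTS 32

typedef struct {
    const char *concepts[MAX_CONCEPTS];
    int count;
} Metamodel;

static int contains(const Metamodel *mm, const char *c) {
    for (int i = 0; i < mm->count; i++)
        if (strcmp(mm->concepts[i], c) == 0)
            return 1;
    return 0;
}

/* extend: union of the base metamodel with the delta's few concepts,
   yielding the "complete" metamodel on which the chain operates. */
static Metamodel extend(const Metamodel *base, const Metamodel *delta) {
    Metamodel out = *base;
    for (int i = 0; i < delta->count; i++)
        if (!contains(&out, delta->concepts[i]))
            out.concepts[out.count++] = delta->concepts[i];
    return out;
}
```

The localized transformation is written against the small delta only; the merged result is what the rest of the chain sees.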
This new way to define model transformations has been used in the Gaspard2 environment. It improves modularity and thus reusability across the various transformation chains. Maintenance is also enhanced, since each transformation involves only a few concepts, has a well-defined and restricted purpose, and contains at most ten rules (i.e. about 250 lines of code). Finally, this approach should ease the evolution of a chain after the evolution of one of its metamodels, for example to answer new requirements or to reach a new target.
Compilation for GPU
The GPU has a particular memory hierarchy. In order to model the memory details, we propose an approach that extends the MARTE metamodel to describe low-level characteristics of the memory. The model, described in UML with the MARTE profile, is passed through several inout transformations that add and/or transform elements of the model. To add memory allocation concepts to the model, a QVT transformation based on a «Memory Allocation» metamodel provides information that facilitates and optimizes the code generation. Then a model-to-text transformation generates the C code for the GPU architecture. Pending the standard releases, Acceleo is appropriate to extract many aspects from the application and architecture models and transform them into CUDA (.cu, .cpp, .c, .h, Makefile) and OpenCL (.cl, .cpp, .c, .h, Makefile) files. The code generation has to take into account intrinsic characteristics of GPUs such as data distribution, contiguous memory allocation, kernels and host programs, blocks of threads, barriers, and atomic functions. For the moment, we work on both sides (model and generated code) to elaborate the chain of transformations.
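Two of the properties mentioned above can be sketched in plain C (the function names are illustrative, not actual Gaspard2 output): row-major linearization of a multidimensional index, and a single contiguous allocation so the whole array can be transferred to the device in one host-to-device copy.

```c
#include <assert.h>
#include <stdlib.h>

/* Row-major flattening of a 2-D index into a contiguous buffer;
   this is the layout that makes neighboring GPU threads access
   neighboring addresses (coalesced access). */
static size_t idx2d(size_t row, size_t col, size_t cols) {
    return row * cols + col;
}

/* One contiguous block for the whole 2-D array, suitable for a
   single bulk transfer (e.g. one cudaMemcpy or one
   clEnqueueWriteBuffer in the generated host code). */
static float *alloc2d(size_t rows, size_t cols) {
    return (float *)malloc(rows * cols * sizeof(float));
}
```

The generated CUDA/OpenCL host files must enforce exactly these conventions so that the kernels' index computations match the host-side layout.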
Data dependence refactoring
The paper on the formalism of Array-OL with delays ("Array-OL with delays, a domain specific specification language for multidimensional intensive signal processing") was accepted and published in the journal "Multidimensional Systems and Signal Processing".
The study on the interaction between the high-level data-parallel transformations and the inter-repetition dependences (which allow the specification of uniform dependences) was accepted and presented at the DASIP'09 conference. Because the ODT formalism behind the Array-OL transformations cannot express dependences between elements of the same multidimensional space, in order to take uniform dependences into account we proposed and proved an algorithm that, starting from the hierarchical distribution of the repetition before and after a transformation, computes new uniform dependences expressing exactly the same dependences as before the transformation. The problem comes down to solving a system of (in)equations, interpreting the solutions, and translating them into new uniform dependences.
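A minimal 1-D illustration of the idea (the published algorithm works on multidimensional repetition spaces): after a hierarchical repartition that tiles the space as i = b*T + t, a uniform dependence i → i − d is recovered by solving bs*T + ts = b*T + t − d with 0 ≤ ts < T; the case split over t (within a tile vs. crossing to the previous tile) is what "interpreting the solutions" produces.

```c
#include <assert.h>

/* Given the tiled coordinates (b, t) of an element, tile size T and
   uniform dependence distance d, compute the tiled coordinates
   (bs, ts) of its source element, using floor division so that
   dependences crossing a tile boundary are handled correctly. */
static void source_coords(long b, long t, long T, long d,
                          long *bs, long *ts) {
    long i = b * T + t - d;                          /* flat source index */
    long q = i >= 0 ? i / T : -((-i + T - 1) / T);   /* floor(i / T)     */
    *bs = q;
    *ts = i - q * T;                                 /* 0 <= ts < T      */
}
```

For d < T this yields the two expected cases: (bs, ts) = (b, t − d) when t ≥ d, and (b − 1, t + T − d) when the dependence crosses into the previous tile.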
The algorithm was implemented and integrated into the refactoring toolbox and enables the use of the transformations on models containing inter-repetition dependences.
In order to validate the theoretical work around the high-level Array-OL refactoring based on the data-parallel transformations, together with Eric Lenormand and Michel Barreteau from THALES Research & Technology we carried out a study of optimization techniques in the context of an industrial radar application, work partially carried out within the Ter@ops project. We have proposed a strategy to use the refactoring toolbox to help explore the design space, illustrated on the radar application modeled using the Modeling and Analysis of Real-time and Embedded systems (MARTE) UML profile.
Traceability
Our traceability solution relies on two metamodels, the Local Trace and the Global Trace metamodels proposed in . The former is used to capture the traces between the inputs and the outputs of a single transformation. The Global Trace metamodel is used to link Local Traces according to the transformation chain.
The major modifications of the transformation chain engine consequently required adapting the existing trace navigation and creation algorithms. Furthermore, based on our trace metamodels, we have developed new algorithms to ease the debugging of model transformations. Based on the trace, the localization of an error is eased by reducing the search field to the sequence of transformation rule calls.
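The localization principle can be sketched as follows (the structure and names are ours, a simplification of the Local/Global Trace metamodels): each trace link records which rule call produced an output element from which input element, and walking the links upstream from a faulty element yields the reduced sequence of rule calls to inspect.

```c
#include <assert.h>
#include <string.h>

/* One trace link: a rule call that read input_elem and produced
   output_elem (element ids are simplified to integers here). */
typedef struct {
    const char *rule;
    int input_elem;
    int output_elem;
} Link;

/* Fill `rules` (most recent call first) with the rule calls that led
   to element `elem`; return how many were found. This is the reduced
   search field the debugger presents to the user. */
static int blame(const Link *links, int n, int elem,
                 const char **rules, int max_rules) {
    int count = 0;
    while (count < max_rules) {
        int found = -1;
        for (int i = 0; i < n; i++)
            if (links[i].output_elem == elem) { found = i; break; }
        if (found < 0)
            break;                       /* reached a source element */
        rules[count++] = links[found].rule;
        elem = links[found].input_elem;  /* follow the chain upstream */
    }
    return count;
}
```

In the real setting the Global Trace stitches such local links across transformations, so the same upstream walk crosses transformation boundaries.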
We have started to automate the mutation analysis process dedicated to model transformations. This technique aims to qualify a set of test models. If the test model set is insufficiently qualified, new models have to be added in order to raise its quality. The local trace, coupled with a mutation matrix, helps the tester create adequate new test models and thus improve the test data set. The first results obtained are very promising, and we are currently working on the full automation of the test data set improvement.
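A sketch of how such a mutation matrix is exploited (the flat layout and names are ours): kills[t * mutants + m] is 1 when test model t detects (kills) mutant m; the mutation score qualifies the test set, and the surviving (live) mutants point at where new test models are needed.

```c
#include <assert.h>
#include <math.h>

/* Compute the mutation score of a test model set and collect the
   live mutants (those killed by no test model). */
static double mutation_score(const int *kills, int tests, int mutants,
                             int *live, int *n_live) {
    int killed = 0;
    *n_live = 0;
    for (int m = 0; m < mutants; m++) {
        int k = 0;
        for (int t = 0; t < tests; t++)
            k |= kills[t * mutants + m];
        if (k)
            killed++;
        else
            live[(*n_live)++] = m;   /* mutant survives every test */
    }
    return (double)killed / (double)mutants;
}
```

A live mutant, combined with the local trace of the transformation, tells the tester which concepts a new test model must exercise.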
Model transformation towards Pthreads
The strategy in previous versions of the Gaspard framework imposed a global synchronization mechanism between all the tasks of the application. This mechanism does not allow reaching an optimal execution. We have investigated a new strategy to overcome this problem, based on fine-grain synchronizations between the different tasks of the modeled application. For this new strategy, we use the pthread API: each task of the UML application model is transformed into a thread. The data exchanges between the tasks are ensured by a buffer-based strategy, and the best compromise between memory usage and performance can be reached by adjusting the size of each buffer. Moreover, we have designed this strategy to facilitate its use in simulation targets such as SystemC-PA.
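A minimal single-producer/single-consumer sketch of this buffer-based exchange, with each task mapped to a pthread; CAP, the buffer capacity, is the knob of the memory/performance compromise. The actual generated code handles many tasks and multidimensional arrays; this is only the synchronization skeleton.

```c
#include <assert.h>
#include <pthread.h>
#include <stddef.h>

#define CAP 4    /* buffer size: the memory/performance trade-off */
#define N   32   /* number of data items exchanged */

static int buf[CAP];
static int head = 0, tail = 0, count = 0;
static long sum = 0;
static pthread_mutex_t mu = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t not_full  = PTHREAD_COND_INITIALIZER;
static pthread_cond_t not_empty = PTHREAD_COND_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int i = 1; i <= N; i++) {
        pthread_mutex_lock(&mu);
        while (count == CAP)                 /* fine-grain sync: only */
            pthread_cond_wait(&not_full, &mu);   /* this pair waits   */
        buf[tail] = i; tail = (tail + 1) % CAP; count++;
        pthread_cond_signal(&not_empty);
        pthread_mutex_unlock(&mu);
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < N; i++) {
        pthread_mutex_lock(&mu);
        while (count == 0)
            pthread_cond_wait(&not_empty, &mu);
        sum += buf[head]; head = (head + 1) % CAP; count--;
        pthread_cond_signal(&not_full);
        pthread_mutex_unlock(&mu);
    }
    return NULL;
}

static long run_tasks(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return sum;
}
```

Unlike a global barrier across all tasks, only the two tasks sharing a buffer ever block each other, which is the point of the fine-grain strategy.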
Verifying conformance and semantics-preserving model transformations
We give formal executable semantics to the notions of conformance and of semantics-preserving model transformations in the model-driven engineering framework. Our approach consists in translating models and meta-models (possibly enriched with OCL invariants) into specifications in Membership Equational Logic, an expressive logic implemented in the Maude tool. Conformance between a model and a meta-model is represented by the validity of a certain theory interpretation of the specification representing the meta-model in the specification representing the model. Model transformations between origin and destination meta-models are mappings between the sets of models that conform to those meta-models, respectively, and can be represented by rewrite rules in Rewriting Logic, a superset of Membership Equational Logic also implemented in Maude. When the meta-models involved in a transformation are endowed with dynamic semantics, the transformations between them are also typically required to preserve those semantical aspects. We propose to represent the notion of dynamic-semantics preservation by means of algebraic simulations expressed in Membership Equational Logic. Maude can then be used for automatically verifying conformance, and for automatically verifying dynamic-semantics preservation up to a bounded number of steps of the dynamic semantics. This work will eventually be incorporated in the Gaspard2 environment and will hopefully lead to better understood meta-models and models, and to model transformations containing fewer errors.
Scheduling data-parallel tasks with inter-repetition dependences
The introduction of uniform inter-repetition dependences in the data-parallel tasks of Gaspard2 has had several consequences. Aside from the modification of the refactoring (see Section 6.2.3), we have studied the compilation of such tasks, which involves the scheduling of the repetitions on repetitive grids of processors and the code generation. This scheduling problem is NP-complete, and we have proposed a heuristic based on automatic parallelization techniques to compute a good (efficient both in time and in code size) schedule in the case where all loop bounds and processor array shapes are known.
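A toy 1-D instance of the constraint any such schedule must satisfy (the real problem is multidimensional, over processor grids): with a single uniform dependence of distance d, the repetitions split into d independent chains, so proc = r mod d, time = r / d is a valid parallel schedule; the check below verifies that every source repetition runs strictly before its sink.

```c
#include <assert.h>

/* Cyclic mapping of repetition r onto d processors (one per chain). */
static int proc_of(int r, int d) { return r % d; }

/* Time step of repetition r under the chain schedule. */
static int time_of(int r, int d) { return r / d; }

/* Validity check: for a repetition space of size R with one uniform
   dependence r -> r - d, every source must be scheduled strictly
   before its sink. */
static int schedule_valid(int R, int d) {
    for (int r = d; r < R; r++)
        if (time_of(r - d, d) >= time_of(r, d))
            return 0;   /* dependence violated */
    return 1;
}
```

The actual heuristic must additionally fit the chains onto a processor array of fixed shape and balance the generated code size, which is where the NP-completeness bites.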