Section: New Results
Companies using domain-specific languages in a model-driven development process need to measure their models. However, developing and maintaining measurement software for each domain-specific modeling language is costly. Our contribution is a model-driven measurement approach  ,  . This approach is model-driven from two viewpoints: 1) it measures the models of a model-driven development process; 2) it uses models as unique and consistent metric specifications, conforming to a metric specification metamodel. The declarative specification of metrics is then used to generate a fully fledged implementation. The benefit of using model-driven technologies has been evaluated in several full-scale case studies  ,  ,  . They indicate that this approach seems to reduce the cost of developing domain-specific measurement software.
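The idea of a declarative metric specification interpreted over models can be sketched as follows. This is an illustrative toy, not the approach's actual metamodel: the `Element`, `MetricSpec`, and `measure` names are assumptions chosen for the example.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal metamodel: a model is a tree of typed elements.
@dataclass
class Element:
    kind: str
    children: list = field(default_factory=list)

# A declarative metric specification: which element kind the metric
# applies to, and which child kind to count for each such element.
@dataclass
class MetricSpec:
    name: str
    scope: str      # element kind the metric is scoped to
    counted: str    # child kind counted per scoped element

def measure(model, spec):
    """Interpret a MetricSpec over a model: one value per scoped element."""
    results = {}
    def visit(e, path):
        if e.kind == spec.scope:
            results[path] = sum(1 for c in e.children if c.kind == spec.counted)
        for i, c in enumerate(e.children):
            visit(c, f"{path}/{i}")
    visit(model, "root")
    return results

model = Element("Package", [
    Element("Class", [Element("Attribute"), Element("Attribute")]),
    Element("Class", [Element("Attribute")]),
])
spec = MetricSpec("attributes_per_class", scope="Class", counted="Attribute")
print(measure(model, spec))  # {'root/0': 2, 'root/1': 1}
```

In the approach described above, such a specification would not be interpreted by hand-written code but used to generate the measurement implementation; the interpreter here only illustrates that the metric itself is pure data.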
Executable Software Process Modeling
One of the main objectives of the Model-Driven Engineering vision is to increase software productivity through the extensive use of models from the earliest software development phases. The challenge targeted by this initiative is to use models not only for documentation purposes but also for production purposes. In the area of software process modeling, existing process modeling languages have not yet reached the level required to specify executable models. Executable software process models can help improve coordination between development teams, automate iterative and non-interactive tasks, and manage the different tools and artifacts used during software construction. To this end, we have proposed UML4SPM, a model-driven and executable language for software process modeling, and we have shown how it can be implemented using Kermeta  .
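What "executable" means for a process model can be illustrated with a minimal sketch: activities ordered by precedence, where executing the model runs each activity as soon as its predecessors complete. This is an assumption-laden toy engine, not the UML4SPM metamodel or its Kermeta implementation.

```python
from dataclasses import dataclass, field

# Illustrative process metamodel (not UML4SPM): an activity with
# precedence constraints on other activities, identified by name.
@dataclass
class Activity:
    name: str
    predecessors: list = field(default_factory=list)

def execute(activities):
    """Run activities in precedence order, returning the execution trace."""
    done, trace = set(), []
    pending = list(activities)
    while pending:
        ready = [a for a in pending if all(p in done for p in a.predecessors)]
        if not ready:
            raise ValueError("cyclic precedence constraints")
        for a in ready:
            trace.append(a.name)  # a real engine would invoke tools here
            done.add(a.name)
            pending.remove(a)
    return trace

process = [
    Activity("design"),
    Activity("code", predecessors=["design"]),
    Activity("test", predecessors=["code"]),
    Activity("document", predecessors=["design"]),
]
print(execute(process))  # ['design', 'code', 'document', 'test']
```

The point of executability is the last comment: where this sketch merely records a trace, an executable process model can drive the actual tools and artifacts of the development process.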
Model Transformation Testing
Model transformations can automate specific tasks in software development. In  , we contribute to model transformation testing. Testing model transformations for correctness presents new challenges. First, we adapt mutation analysis to model transformations in order to qualify the fault-detecting effectiveness of a set of test models, considering faults specific to model transformations. Second, in  ,  , we propose a set of functions to express test oracles for detecting faults in a transformation, and we evaluate them with respect to the complexity and reuse of model transformations. Finally, we integrate these techniques in tools that are used to develop reliable model transformation components and to support further studies on model transformation testing. In particular, we compare different strategies for automatic test model synthesis  .