Section: New Results
Middleware systems for computational grids
Parallel CORBA objects and components
Distributed parallel objects and components appear to be a key technology for programming distributed numerical simulation systems. They extend the well-known object/component-oriented model with a parallel execution model. Previous work such as PaCO and GridCCM focused on communication between two parallel objects or components.
In 2008, we worked on hierarchical parallel component models as well as on the adaptation of hierarchical component models to NUMA machines. With respect to hierarchical component models, we developed Dhico, an implementation of the DISCOGRID API. The originality of this model lies in the hierarchical management of partitioned data, which lets the runtime optimize communications (neighborhood as well as global communications) while providing resource transparency to the user. Preliminary experiments on Grid'5000 showed that Dhico is able to outperform grid-enabled MPI implementations while easing the developer's task for real CEM and CFD applications. In order to evaluate component models on top of NUMA machines, we developed Frim, a multithreaded implementation of the Fractal component model. Experiments showed the inability of a plain implementation to fully exploit NUMA machines because of thread and memory placement issues. Hence, we have started studying how to transfer the placement and workflow information available at the assembly level to the thread and memory sub-systems of the operating system.
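To illustrate the idea of runtime-managed partitioned data, the following minimal sketch (in Python, with purely illustrative names; it is not the actual DISCOGRID API) shows a runtime that owns the partitioning of a 1-D domain and resolves both neighborhood and global communications itself, so that user code never names resources explicitly:

```python
# Hypothetical sketch of runtime-managed partitioned data; class and method
# names are illustrative, not taken from DISCOGRID or Dhico.

class PartitionedArray:
    """A 1-D domain split into contiguous blocks with one-cell ghost borders."""

    def __init__(self, data, num_parts):
        data = list(data)
        size = len(data) // num_parts
        self.parts = [data[i * size:(i + 1) * size] for i in range(num_parts)]
        self.ghosts = [[0, 0] for _ in range(num_parts)]  # [left, right] halo cells

    def exchange_neighborhood(self):
        # The runtime decides how the exchange is carried out (shared memory,
        # intra-cluster MPI, inter-cluster grid links); the user only requests it.
        for i, part in enumerate(self.parts):
            self.ghosts[i][0] = self.parts[i - 1][-1] if i > 0 else part[0]
            self.ghosts[i][1] = self.parts[i + 1][0] if i < len(self.parts) - 1 else part[-1]

    def global_sum(self):
        # A global (collective) communication, also resolved by the runtime.
        return sum(sum(p) for p in self.parts)

a = PartitionedArray(range(8), num_parts=4)
a.exchange_neighborhood()
print(a.ghosts[1])     # part 1 sees its neighbors' border cells: [1, 4]
print(a.global_sum())  # 28
```

The point of the sketch is the separation of concerns: the application only declares what to communicate, while the placement of parts on resources remains entirely the runtime's decision.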
On one hand, we plan to complete the ongoing experiments with Dhico on Grid'5000 to consolidate the results with real applications; this should lead to the actual resolution of large problem sizes. On the other hand, we will continue to study the adaptation of component models to NUMA machines. We aim to combine both efforts within a common model and implementation.
Spatio-temporal skeleton software component models
Software component models have succeeded in handling another level of software complexity by dealing with system architecture. Moreover, we showed through STCM that they can be enhanced to also support temporal composition, such as workflows or data flows.
In 2008, we started an implementation of STCM to show its feasibility and its benefits through real applications, in particular a climatology application. Moreover, we tackled the next challenge: combining the advantages of component models and of skeleton models so as to enable more abstract and generic compositions. In cooperation with the University of Pisa, we defined STKM, an enhancement of STCM with algorithmic skeleton concepts. Programmers can therefore assemble applications by specifying both temporal and spatial relations among components and by instantiating predefined skeleton composite components to implement those application parts that can be easily modeled with the available skeletons. We also explored the feasibility of such a model on top of SCA. Experimental results on kernel applications showed the need for, and the benefits of, the high level of abstraction offered by STKM.
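The combination of component assembly and skeletons can be sketched as follows (a minimal Python illustration of the general idea, not the STKM API; all names are hypothetical): ordinary components are connected explicitly, while a skeleton composite such as a pipeline captures a recurring composition pattern whose parallel structure a framework could later map onto resources.

```python
# Hypothetical sketch of skeleton-based composition; names are illustrative.

class Component:
    """An ordinary component exposing a single 'run' port."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

    def run(self, x):
        return self.fn(x)

class Pipeline(Component):
    """A skeleton composite: stages compose temporally, stage i feeds stage i+1."""
    def __init__(self, name, stages):
        super().__init__(name, None)
        self.stages = stages

    def run(self, x):
        for stage in self.stages:
            x = stage.run(x)
        return x

# Application assembly: a pre-processing component followed by a nested
# pipeline skeleton implementing the computational kernel.
normalize = Component("normalize", lambda xs: [v / max(xs) for v in xs])
square    = Component("square",    lambda xs: [v * v for v in xs])
total     = Component("total",     lambda xs: sum(xs))

app = Pipeline("app", [normalize, Pipeline("kernel", [square, total])])
print(app.run([1.0, 2.0, 4.0]))  # 1.3125
```

Because the skeleton declares its structure rather than hard-coding an execution order, an implementation is free to run the stages sequentially, in parallel, or distributed, which is precisely the abstraction-level benefit discussed above.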
STKM seems to be a model rich enough to express applications independently of the resources. Hence, the next steps are to study how to efficiently implement STKM and how to efficiently execute an STKM application on heterogeneous and dynamic resources.
Application deployment on computational Grids
The deployment of parallel, component-based applications is a critical issue in using computational Grids. It consists in selecting a set of nodes and launching the application on them. We proposed a generic deployment model that aims to automatically deploy complex, static applications on Grids, and Adage, an implementation of this model.
In 2008, we added to Adage a simple mechanism for handling dynamicity, based on the concept of re-deployment. While it is a pseudo-dynamic mechanism, it turned out to be sufficient to validate other works such as STCM/STKM and CORDAGE. Moreover, we finalized the specification of SAMURAAIE, a generic data model to abstract the (dynamic) deployment of users' applications on resources. It abstracts not only instances of applications and of resources, but also the actions and events associated with them. SAMURAAIE systematically views deployment as containers fitting contents; it therefore maintains information about containers, contents, and linkages. A running prototype shows the feasibility of the model and some of its advantages: it clearly outperforms its predecessor GADe by being more expressive and generic.
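The "containers fitting contents" view can be made concrete with a small sketch (hypothetical Python, with illustrative names only; this is not the SAMURAAIE specification): resources act as containers, application instances as contents, and linkages record which container currently hosts which content.

```python
# Hypothetical sketch of a containers/contents/linkages deployment model;
# class names and the fitting policy are illustrative, not from SAMURAAIE.

from dataclasses import dataclass, field

@dataclass
class Container:            # e.g. a node, a core, a storage volume
    name: str
    capacity: int

@dataclass
class Content:              # e.g. a process, a component instance, a file
    name: str
    demand: int

@dataclass
class Deployment:
    # linkage: content name -> (content, hosting container)
    linkages: dict = field(default_factory=dict)

    def place(self, content, containers):
        # A deliberately naive fitting policy: first container with room.
        for c in containers:
            used = sum(k.demand for k, v in self.linkages.values() if v is c)
            if used + content.demand <= c.capacity:
                self.linkages[content.name] = (content, c)
                return c.name
        raise RuntimeError("no container fits " + content.name)

nodes = [Container("node-a", 4), Container("node-b", 8)]
d = Deployment()
print(d.place(Content("solver", 6), nodes))   # node-b
print(d.place(Content("viz", 3), nodes))      # node-a
```

Keeping containers, contents, and linkages as explicit, separate entities is what makes such a model generic: re-deployment amounts to updating linkages, without touching the descriptions of applications or resources.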
In the future, we will study the integration of SAMURAAIE with SALOME, an open source integration platform for numerical simulation, as well as the benefits that SAMURAAIE can bring to the scheduling of structured applications on resources.
Adaptation for data management
The usage of context-aware data management in mobile environments has been investigated by Françoise André in collaboration with Mayté Segarra and Jean-Marie Gilliot from ENST Bretagne (Brest). A context-aware data replication and consistency system that adapts dynamically to changes in the environment has been proposed, based on the Dynaco framework. This work has been supported by a contract (ReCoDEM) between ENST Bretagne and Orange Labs (previously known as France-Télécom R&D).
In the ReCoDEM project, the distributed aspects of the adaptation system were not thoroughly investigated. Therefore, a new line of work was launched in October 2007 (with M. Zouari as PhD student) to propose a generic distributed adaptation framework. This work uses data management in Grid and mobile environments as an illustrative application. Mayté Segarra from ENST Bretagne is co-adviser of M. Zouari's PhD thesis.
Adaptation for fault tolerance
The use of an adaptive framework to build dependable applications for Grids has been studied in the context of the SafeScale project. Standard attack scenarios have been simulated and handled using the Dynaco framework and the MPICH-V communication library developed at LRI. The use of such a framework for a ubiquitous computing platform has been studied in . We have connected the Dynaco framework to the Kaapi environment developed at IMAG/LIG in order to adapt the execution of task graphs to faulty environments. With this environment, we have been able to demonstrate the control of task stealing and task cloning in response to generated challenges.