Section: New Results
Open Middleware for the CCM
Generic Framework for Large Scale Distributed Deployment
Deploying distributed applications on large and complex systems, such as grid infrastructures or ubiquitous environments, is becoming an increasingly complex activity. Deployers and users spend a lot of time preparing and installing software, libraries, or binaries on remote nodes, configuring environments and middleware, starting application servers, and eventually starting their applications. The problem is that the deployment process is composed of many heterogeneous tasks that have to be orchestrated in a specific order, since dependencies and synchronization constraints often exist between software pieces or elementary deployment tasks. As a consequence, automating the deployment process is currently very difficult to achieve.
To address this problem, we propose in  a generic deployment framework called FDF (Fractal Deployment Framework), which automates the execution of the heterogeneous tasks composing the whole deployment process. Our approach is based on reifying, as software components, all the deployment mechanisms or existing tools involved in the deployment process, such as remote access and file transfer protocols, shells, user access information, node ports and hostnames. The software to deploy is also reified as particular components called personalities, which are written once by developers for each kind of software to deploy. Bindings between components represent dependencies. These components are composed and assembled together, and the resulting composite represents the final configuration to deploy: executing the composite means executing the deployment process. FDF automatically orchestrates the deployment process, so users simply describe the configuration to deploy instead of programming or scripting how the deployment process is executed.
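To give the intuition, the orchestration step amounts to ordering the reified tasks according to their dependency bindings. The following minimal sketch (class, method, and task names are invented for illustration; this is not the actual FDF API) derives an execution order from declared dependencies:

```java
import java.util.*;

// Hypothetical sketch: deployment tasks reified as named components with
// dependency bindings; the execution order is obtained by a depth-first
// traversal of the dependency graph (cycle handling omitted for brevity).
public class DeploymentPlan {
    private final Map<String, List<String>> deps = new LinkedHashMap<>();

    // Declare a task and the tasks it depends on (which must run before it).
    public DeploymentPlan task(String name, String... dependsOn) {
        deps.put(name, Arrays.asList(dependsOn));
        return this;
    }

    // Compute an execution order that respects all dependency bindings.
    public List<String> executionOrder() {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String t : deps.keySet()) visit(t, visited, order);
        return order;
    }

    private void visit(String t, Set<String> visited, List<String> order) {
        if (visited.contains(t)) return;
        visited.add(t);
        for (String d : deps.getOrDefault(t, List.of())) visit(d, visited, order);
        order.add(t);
    }

    public static void main(String[] args) {
        List<String> order = new DeploymentPlan()
            .task("transferArchive")
            .task("installServer", "transferArchive")
            .task("startServer", "installServer")
            .task("deployApplication", "startServer")
            .executionOrder();
        System.out.println(order);
    }
}
```

In FDF itself the ordering is implicit in the component composition rather than computed by an explicit sort, but the effect is the same: each elementary task runs only after the tasks it is bound to.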
FDF is implemented using the Fractal component model and the Java programming language. It is independent of the technology and granularity of the software to deploy. Personalities have been written for many component-based platforms, such as OpenCCM, JOnAS, JBoss, OSGi, or PeTALS servers. Furthermore, FDF has been successfully used to deploy the OpenCCM middleware and CORBA component-based applications on one thousand nodes of Grid5000, the French grid infrastructure dedicated to computer science research.
Distributed Autonomous Component-based Architectures
With the emergence of open distributed environments (ODE), such as grid and ubiquitous computing, the management of systems during their execution has become a new challenge. Machines appear and disappear in an unpredictable way, and applications deployed on these environments must adapt their structures to address these changes. IBM proposes autonomic computing for building systems able to manage one concern of their execution autonomously (e.g., self-healing, self-sizing, self-optimization). This proposition relies on the core concept of a control loop, which consists of four classical phases: monitoring the system, analyzing the monitored changes, preparing a reconfiguration to handle the change, and finally executing this change. All of these phases rely on the knowledge, which encompasses all relevant information about the application. The autonomic policies, in charge of defining the way an application should self-adapt, are defined using this paradigm.
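The four-phase control loop over a shared knowledge base can be sketched as follows (all names and the scaling rule are illustrative; this is not the API of an actual autonomic platform):

```java
import java.util.*;

// Minimal sketch of the autonomic control loop: monitor, analyze,
// prepare (plan), execute, all reading and updating a shared knowledge base.
public class ControlLoop {
    // Knowledge: relevant information about the managed application.
    static Map<String, Integer> knowledge = new HashMap<>();

    // Monitor: observe the system state.
    static int monitor() { return knowledge.getOrDefault("load", 0); }

    // Analyze: decide whether the monitored change requires a reaction.
    static boolean analyze(int load) { return load > 80; }

    // Prepare: choose a reconfiguration to handle the change.
    static String plan(int load) { return "addReplica"; }

    // Execute: apply the reconfiguration and update the knowledge.
    static String execute(String action) {
        knowledge.put("load", monitor() / 2); // assumed effect of scaling out
        return action + " executed";
    }

    public static String iterate() {
        int load = monitor();
        if (analyze(load)) return execute(plan(load));
        return "no change";
    }

    public static void main(String[] args) {
        knowledge.put("load", 90);
        System.out.println(iterate());
        System.out.println(knowledge.get("load"));
    }
}
```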
Our goal is to propose a framework with adequate mechanisms to build reliable autonomic component-based architectures. The result of our research work is called Dacar for Distributed Autonomous Component-based ARchitectures.
In  , we have defined the core concepts of our approach. First, we want the description of autonomic architectures to be generic and independent from the underlying middleware. To this end, we propose to use the OMG specification called Deployment and Configuration of Distributed Component-based Applications (D&C), which provides generic concepts to express the structure of component-based applications. The reified architecture built using D&C can then be extended with autonomic policies. We found that the Event-Condition-Action (ECA) rule paradigm is well suited to specifying fine-grained autonomic policies. At runtime, the system is reified and the autonomic policies expressed as ECA rules are applied to this representation, which is causally linked to the real system by some dedicated ECA policies. In  , we illustrate the feasibility of our ECA approach on a complete ubiquitous example.
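A fine-grained ECA policy over a reified architecture can be sketched as follows (the rule representation and the map-based architecture model are invented for the example and do not reflect the Dacar implementation):

```java
import java.util.*;
import java.util.function.*;

// Illustrative Event-Condition-Action rule applied to a reified architecture,
// here reduced to a map of node -> deployed components.
public class EcaRule {
    final String event;                                     // triggering event
    final Predicate<Map<String, List<String>>> condition;   // guard on the model
    final Consumer<Map<String, List<String>>> action;       // reconfiguration

    EcaRule(String event, Predicate<Map<String, List<String>>> condition,
            Consumer<Map<String, List<String>>> action) {
        this.event = event;
        this.condition = condition;
        this.action = action;
    }

    // Fire the rule: if the event matches and the condition holds on the
    // reified architecture, execute the action on the model.
    boolean fire(String occurred, Map<String, List<String>> model) {
        if (!event.equals(occurred) || !condition.test(model)) return false;
        action.accept(model);
        return true;
    }

    public static void main(String[] args) {
        Map<String, List<String>> model = new HashMap<>();
        model.put("node1", new ArrayList<>(List.of("server", "probe")));
        model.put("node2", new ArrayList<>());

        // Self-healing policy: when node1 disappears, move its components.
        EcaRule rule = new EcaRule("nodeFailure(node1)",
            m -> m.containsKey("node1"),
            m -> { m.get("node2").addAll(m.remove("node1")); });

        rule.fire("nodeFailure(node1)", model);
        System.out.println(model);
    }
}
```

The causal link means that such an action on the model would be propagated to the running system (and vice versa) by dedicated policies, which the sketch leaves out.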
In  , we show that our reified architecture in fact fits the definition of a model, that is, an abstract representation of a system intended to facilitate the management of one concern of the application. In our case, the concern is the self-configuration (and self-deployment) of an application. We propose a modeling process to build self-configuring component-based applications based on D&C and ECA metamodels. However, these are not classical models, since they must be executed: they are runtime models that must be causally linked to the running system.
Current work aims at exploiting this modeling to go deeper into building efficient autonomic policies. Indeed, other works on autonomic applications propose to build the control loop programmatically, which is error-prone. Moreover, with a programmatic approach it is impossible to check the joint execution behaviour of a set of autonomic policies. Our model-based approach allows autonomous systems to be built in a more abstract way, without getting lost in implementation details. Furthermore, providing adequate concepts in our Dacar metamodel should allow us to statically check the resulting models and to validate the autonomic policies as well as their interactions. We are currently working on the definition of a metamodel of reliable autonomous component-based applications relying on statically validated autonomic policies.
The Dacar prototype has been tested with many examples using the OpenCCM platform as the component-based application execution support.
Component-Based Software Framework for Building Transaction Services
GoTM is a component-based software framework for building middleware transaction services.
Overview of the GoTM Activity
Transactions have been involved in a wide range of applications since they were introduced in databases. Many transaction services have been developed to address the various transaction standards and transaction models. These transaction services are increasingly difficult to build, as the complexity of the transaction standards keeps growing. Each transaction service reimplements pieces of code that have already been written in other transaction services. As a consequence, there is no code factorization between transaction services, and the added value of each one, such as extensibility or performance, is never reused in another transaction service.
In  and  , we present GoTM, a Component-Based Adaptive Middleware (CBAM) software framework. It can be used to build various transaction services that are compliant with existing transaction standards (OMG OTS, Sun JTS, etc.). GoTM provides adaptive properties to support different transaction models and standards in the same transaction service, and supports the definition of new transaction models and standards as new components of the framework. Finally, GoTM provides (re)configurability, extensibility and adaptability as added values. The implementation of the GoTM framework is based on the Fractal component model. The next sections illustrate two experiments performed this year with the GoTM framework.
Building Heterogeneous Transaction Services
The diversity of transaction services leads to compatibility problems among applications using different transaction standards, while such compatibility is required for transaction services to cooperate within a system. To deal with this issue, current trends rely on coordination protocols, which are responsible for synchronizing the execution of transaction services based on different transaction standards. Nevertheless, these protocols can be intrusive and often introduce additional complexity into the system.
In  and  , we present an approach to build an Adapted Transaction Service, called ATS, that supports several transaction standards concurrently. The objective of ATS is to make the composition of transaction standards easier. To introduce ATS, we present how the Object Transaction Service (OTS), Web Services Atomic Transaction (WS-AT) and Java Transaction Service (JTS) standards can be composed. To achieve this, the OTS, WS-AT and JTS interfaces are analyzed and the required/provided functions are identified. These functions are specialized as strategies that implement the transaction standard semantics. The resulting ATS is built by composing these strategies with adapters, which ensure compliance with the transaction standard interfaces. Finally, we introduce the ATS implementation, built with the GoTM framework and the Fractal component model.
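The strategy/adapter composition can be sketched as follows (method names are simplified stand-ins for the real OTS and JTS operations, and the strategy is reduced to transaction completion):

```java
// Sketch: one shared completion strategy, factored out of the standards,
// with adapters exposing standard-specific interfaces on top of it.
public class AtsSketch {
    // Common function shared by the standards, specialized as a strategy.
    static class CompletionStrategy {
        String complete(boolean commit) { return commit ? "committed" : "rolledback"; }
    }

    // Adapter giving the strategy an OTS-like face (CORBA-style completion).
    static class OtsAdapter {
        final CompletionStrategy s;
        OtsAdapter(CompletionStrategy s) { this.s = s; }
        String commit(boolean reportHeuristics) { return s.complete(true); }
        String rollback() { return s.complete(false); }
    }

    // Adapter giving the same strategy a JTS/JTA-like face.
    static class JtsAdapter {
        final CompletionStrategy s;
        JtsAdapter(CompletionStrategy s) { this.s = s; }
        String commit() { return s.complete(true); }
        String rollback() { return s.complete(false); }
    }

    public static void main(String[] args) {
        // Both standards are served concurrently by one shared strategy.
        CompletionStrategy shared = new CompletionStrategy();
        System.out.println(new OtsAdapter(shared).commit(false));
        System.out.println(new JtsAdapter(shared).rollback());
    }
}
```

The point of the factorization is visible even in this toy form: the semantics lives once in the strategy, and each standard costs only a thin adapter.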
We show that this approach introduces no additional overhead for legacy applications and scales well. Moreover, it can easily be extended to support additional transaction standards. Future work will investigate the definition of personalities for the Web Services Transaction and Activity Services.
Supporting Dynamic Adaptation of 2-Phase Commit Protocols
For 30 years, transactional protocols have been defined to address specific application needs. Traditionally, when implementing a transaction service, a protocol is chosen once and remains the same throughout the system execution. Nevertheless, the dynamic nature of current application contexts (e.g., mobile, ad hoc, peer-to-peer) and behavioural variations (semantic-related aspects) motivate the need for application adaptation. Next-generation applications should be adaptive, or better, self-adaptive. In  and  , we propose (1) a component-based architecture of standard 2PC-based protocols designed using UML sequence diagrams and (2) a Context-Aware Transaction Service, named CATE. Self-adaptation is obtained through behaviour awareness and component-based reconfiguration, which allows CATE to select the most appropriate protocol according to the context.
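Context-aware protocol selection can be illustrated with the following sketch (the protocol names are standard 2PC variants, but the selection rules are invented for the example; CATE's actual decision logic is richer):

```java
// Illustrative selection of a 2PC variant according to the observed context.
public class ProtocolSelector {
    public static String select(boolean mostlyReadOnly, boolean lowFailureRate) {
        if (mostlyReadOnly) return "One-Phase Commit";   // read-only optimization
        if (lowFailureRate) return "Presumed Commit";    // fewer forced log writes
        return "Presumed Abort";                         // conservative default
    }

    public static void main(String[] args) {
        // A reconfiguration would swap the protocol component when the
        // context observed by the monitoring probes changes.
        System.out.println(select(true, true));
        System.out.println(select(false, true));
        System.out.println(select(false, false));
    }
}
```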
We have shown that CATE performs better than any single commit protocol in a variable system, and that the reconfiguration cost can be negligible. Future work will investigate the design of a dedicated high-level model to describe transaction validation protocols.
Leveraging Component-Oriented Programming Using Attribute-Oriented Programming
Fraclet is an attribute-oriented programming model for developing reliable components.
Leveraging Fractal-Based Developments
Component-Based Software Engineering (CBSE) is concerned with the development of highly reusable business components that declare contractually specified interfaces to communicate with each other. CBSE facilitates the development of high-quality applications with shorter development cycles and reduced coding effort. However, in practice, the component developer's task is devoted not only to the design and implementation of the business logic of the application, but also to the integration of redundant and error-prone non-functional properties.
A convenient way to address this issue is to use Attribute-Oriented Programming (@OP) techniques. @OP proposes to mark program code with metadata to clearly separate the business logic from domain-specific logic (typically non-functional properties). @OP is gaining popularity with the introduction of annotations in Java 2 Standard Edition (J2SE) 5.0, XDoclet tags, and attributes in C#. Recently, the Enterprise JavaBeans (EJB) 3.0 specification made extensive use of annotations to ease EJB programming. Similarly, the Service Component Architecture (SCA) component implementation model provides a series of annotations that can be placed in the code to mark significant elements of the implementation used by the SCA runtime.
In  ,  ,  , we present Fraclet, an annotation-based framework using @OP to leverage Fractal component programming. Fraclet provides a set of dedicated annotations to mark the Fractal-related non-functional properties in the program code. To achieve this goal, Fraclet introduces a seamless design process for Fractal component developers: thanks to @OP, most of the component artifacts are automatically generated.
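The following sketch illustrates the idea with simplified stand-ins for Fraclet-style annotations (the actual Fraclet annotation names and attributes may differ): the developer marks Fractal-related artifacts in plain Java, and a generator collects them, here by reflection, to produce the component glue code.

```java
import java.lang.annotation.*;
import java.lang.reflect.*;
import java.util.*;

public class FracletSketch {
    // Stand-in annotations marking Fractal-related non-functional properties.
    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
    @interface Requires { String name(); }   // a client (required) interface

    @Retention(RetentionPolicy.RUNTIME) @Target(ElementType.FIELD)
    @interface Attribute { }                 // a configurable attribute

    // Business code: only the markers betray the component model.
    static class HelloWorld {
        @Requires(name = "printer") Runnable printer;
        @Attribute String header = "-> ";
    }

    // What a generator would do with the markers: collect the required
    // interfaces in order to produce the binding code.
    static List<String> requiredInterfaces(Class<?> c) {
        List<String> out = new ArrayList<>();
        for (Field f : c.getDeclaredFields())
            if (f.isAnnotationPresent(Requires.class))
                out.add(f.getAnnotation(Requires.class).name());
        return out;
    }

    public static void main(String[] args) {
        System.out.println(requiredInterfaces(HelloWorld.class));
    }
}
```

In Fraclet the processing is done at build time (XDoclet/Velocity or Spoon) rather than by runtime reflection, but the division of labor is the same: the developer writes annotated business code, the tool generates the rest.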
Fraclet is developed to deal with the Java implementations of the Fractal component model. We have experimented with two different, but functionally equivalent, implementations of the Fraclet annotation framework. The first one uses XDoclet and Velocity to define the code generators and to produce the various artifacts required by the Fractal component model. The second one uses Spoon  , a Java 5-compatible processing tool that supports the processing of Java 5 annotations; the Fraclet developer can then take advantage of Java 5 type safety and annotations. Regardless of the implementation, we show that, using Fraclet, only about 50% of the program code still needs to be handwritten, while the rest is automatically generated and continuously integrated, without losing the semantics of the application.
Supporting Various Component Models
Component-oriented programming has achieved wide acceptance in the domain of software engineering by improving productivity, reusability and composition. This success has also encouraged the emergence of a plethora of component models. Nevertheless, even if the abstract models of existing component models are quite similar, their programming models can differ a lot. The programming model applies the concepts of the abstract model to a particular programming language, while introducing some non-functional code specific to the component model. Thus, this non-functional code is tangled with the business code of the application. This drawback limits the reuse and composition of components implemented using different programming models.
In  and  , we introduce a reification of an abstract model common to several component models. This reification is presented as an annotation framework, which allows the developer to annotate the program code with the elements of the abstract component model. Then, using a generator, the annotated program code is completed according to the programming model supported by the target component runtime environment. We show that this annotation framework provides a significant simplification of the program code by removing all dependencies on the component model interfaces. These benefits are illustrated with the OpenCOM and Fractal component models.
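The generation step can be illustrated as follows (the generated snippets are rough illustrations, not the exact output of the actual generators): the same annotated artifact, a required interface, is completed differently depending on the target programming model.

```java
// Sketch: one abstract-model artifact, two programming-model completions.
public class ModelGenerator {
    enum Target { FRACTAL, OPENCOM }

    // Complete a required-interface declaration for the chosen model.
    static String generateBinding(Target t, String itfName) {
        switch (t) {
            case FRACTAL:
                // Fractal-style: binding-controller method (illustrative).
                return "public void bindFc(String itf, Object srv) { /* bind "
                        + itfName + " */ }";
            case OPENCOM:
                // OpenCOM-style: receptacle declaration (illustrative).
                return "receptacle = newReceptacle(" + itfName + ".class);";
            default:
                throw new IllegalArgumentException("unknown target: " + t);
        }
    }

    public static void main(String[] args) {
        System.out.println(generateBinding(Target.FRACTAL, "printer"));
        System.out.println(generateBinding(Target.OPENCOM, "IPrinter"));
    }
}
```

The annotated business code itself stays free of `bindFc` or receptacle plumbing, which is precisely the dependency removal measured in the paper.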