Section: New Results
Open Middleware for the CCM
Component Middleware for Ubiquitous Computing
The multiplication of mobile devices (laptops, PDAs, smartphones, etc.) and the generalized use of wireless networks are changing how distributed software applications targeting ubiquitous computing are designed and executed. Several strong requirements have to be addressed: the heterogeneity and limited resources of wireless networks and mobile devices, networked communications between distributed applications, dynamic discovery of services, and automatic deployment on mobile devices.
The OpenCCM Mosaiques framework is a component-based software infrastructure to design, discover, deploy, and execute ubiquitous contextual services, i.e. distributed applications that provide services to mobile end-users but are only available from a particular place. These ubiquitous contextual services are designed as assemblies of distributed software components.
The OpenCCM Mosaiques infrastructure allows mobile end-users to dynamically discover the mobile assemblies of ubiquitous contextual services according to the end-users' physical location as well as the hardware/software capabilities of their devices. It is based on a multicast discovery protocol that reduces power consumption and network traffic, and on a negotiation protocol that presents to end-users only the mobile assemblies adapted to their device capabilities. Next, the OpenCCM Mosaiques infrastructure allows end-users to automatically deploy the mobile assemblies of ubiquitous contextual services on their own devices, and manages the lifecycle of the services (i.e. cache management of mobile component assemblies for future reuse, uninstallation of services, etc.).
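As an illustration, the capability-negotiation step can be pictured as filtering the discovered assembly offers against a device profile. The following Java sketch is purely illustrative: the type names (`DeviceProfile`, `AssemblyOffer`) and the two selection criteria are assumptions for the example, not OpenCCM Mosaiques APIs.

```java
import java.util.*;

// Hypothetical sketch of capability negotiation: keep only the assembly
// offers that the end-user's device can actually host.
public class CapabilityNegotiation {
    record DeviceProfile(long memoryKb, Set<String> protocols) {}
    record AssemblyOffer(String name, long requiredMemoryKb, String requiredProtocol) {}

    // Filter the offers by memory budget and supported communication protocol.
    static List<AssemblyOffer> negotiate(DeviceProfile device, List<AssemblyOffer> offers) {
        List<AssemblyOffer> adapted = new ArrayList<>();
        for (AssemblyOffer offer : offers) {
            boolean fits = offer.requiredMemoryKb() <= device.memoryKb()
                    && device.protocols().contains(offer.requiredProtocol());
            if (fits) adapted.add(offer);
        }
        return adapted;
    }

    public static void main(String[] args) {
        DeviceProfile pda = new DeviceProfile(4096, Set.of("iiop"));
        List<AssemblyOffer> offers = List.of(
                new AssemblyOffer("museum-guide", 2048, "iiop"),
                new AssemblyOffer("video-wall", 65536, "iiop"));
        // Only "museum-guide" fits within the PDA's 4 MB budget.
        System.out.println(negotiate(pda, offers));
    }
}
```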
The OpenCCM Mosaiques infrastructure is implemented using the OMG CORBA Component Model (CCM), on top of the OpenCCM open-source platform.
Model-driven Approach to Build Component Middleware Deployment Infrastructure
Most component middleware now allows the deployment process of applications to be automated. The software responsible for executing the deployment process (called the deployment machine) instantiates the application from its architectural description. The latter describes a concrete configuration of a component-based application. Each application configuration contains a set of concepts related to the different component models, and can be expressed using various architecture description languages (ADL). For instance, as far as the OMG CORBA Component Model (CCM) is concerned, the ADL used is an XML-based one (whose files are called CCM descriptors), and some of the concepts are: home, component, instance, binding, business component property, placement, etc. As another example, the ObjectWeb Fractal component model uses similar concepts (e.g. factory, component, instance, etc.) plus some concepts that are not present in the CCM specification, such as composite/primitive components, and its own ADL to describe configurations. Upstream of the deployment process, the concrete application configuration is parsed using a front-end adaptor for the ADL in use. Then, through a transformation, an abstract deployment model reifying the concrete configuration is obtained. The deployment machine engine next maps the reified application configuration onto the targeted platform and instantiates the application, using back-end adaptors that wrap the platform deployment API.
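The front-end/back-end adaptor chain described above can be sketched as follows. This is an illustrative reduction in Java; all names (`AbstractInstance`, `toyFrontEnd`, the one-instance-per-line descriptor format) are assumptions for the example, not actual OpenCCM or Fractal APIs.

```java
import java.util.*;

// Hypothetical sketch of the deployment chain: a front-end adaptor parses a
// concrete ADL configuration into an abstract deployment model, and a back-end
// adaptor stands in for a platform deployment API.
public class DeploymentChain {
    record AbstractInstance(String type, String name) {} // reified configuration element

    interface FrontEndAdaptor { List<AbstractInstance> parse(String adlDocument); }
    interface BackEndAdaptor { String instantiate(AbstractInstance instance); }

    // Toy descriptor front end: each line "type name" yields one abstract instance.
    static final FrontEndAdaptor toyFrontEnd = doc -> {
        List<AbstractInstance> model = new ArrayList<>();
        for (String line : doc.strip().split("\n")) {
            String[] parts = line.trim().split("\\s+");
            model.add(new AbstractInstance(parts[0], parts[1]));
        }
        return model;
    };

    // Toy back end that merely records what a real platform API would create.
    static final BackEndAdaptor toyBackEnd =
            inst -> "created " + inst.name() + " from " + inst.type();

    // The engine: parse the concrete configuration, then map each reified
    // instance onto the target platform through the back-end adaptor.
    static List<String> deploy(String adl, FrontEndAdaptor front, BackEndAdaptor back) {
        List<String> log = new ArrayList<>();
        for (AbstractInstance inst : front.parse(adl)) log.add(back.instantiate(inst));
        return log;
    }

    public static void main(String[] args) {
        System.out.println(deploy("BankHome bank\nClientHome client", toyFrontEnd, toyBackEnd));
    }
}
```

Swapping the front-end adaptor (CCM descriptors, Fractal ADL) or the back-end adaptor (another platform API) leaves the engine untouched, which is the point of the abstraction.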
Nevertheless, these deployment APIs are specific to middleware platforms, so deployment machine implementations are bound to them. There is no reuse of deployment concepts, software code, or methodology when building these deployment infrastructures. The idea is therefore to apply the OMG MDA (Model Driven Architecture) approach and abstract the deployment process to make it independent of both 1) the concepts and languages used to describe application configurations and 2) the middleware execution platforms (and their associated deployment APIs).
This approach, presented in  , introduces a workflow metamodel which allows us to define deployment models independently of any targeted component middleware (i.e. a PIM, Platform Independent Model). Indeed, deploying a component-based application consists of executing an ordered list of basic deployment tasks, such as uploading component binaries to the execution sites, loading them in memory, instantiating components from their factories, interconnecting their ports, configuring business and technical properties, and finally activating the components. These tasks have to be scheduled, coordinated, and executed in a defined order; each workflow process activity then corresponds to an elementary deployment task. Such a model is then refined (with component model concepts) for each targeted middleware, and the resulting model is mapped (following transformation rules between PIM and PSM metamodel concepts) to a Platform Specific Model (PSM). The last step is the mapping (according to generation rules) to a technological execution platform: the software implementing the deployment machine is generated for a specific execution platform. The models and transformations of this approach are illustrated on a CORBA Components deployment machine implemented using the Fractal component model.
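The ordered list of elementary deployment tasks can be sketched as a minimal workflow in which each activity is one task. This Java fragment is an illustrative reduction of the workflow idea, assuming a trivial sequential scheduler; it is not the workflow metamodel itself.

```java
import java.util.*;

// Minimal sketch of the deployment workflow: each elementary deployment task
// is one activity, executed in the order the plan prescribes.
public class DeploymentWorkflow {
    interface DeploymentTask { String run(); }

    // Sequential scheduler: run each task in order and collect its report.
    static List<String> execute(List<DeploymentTask> plan) {
        List<String> log = new ArrayList<>();
        for (DeploymentTask task : plan) log.add(task.run());
        return log;
    }

    public static void main(String[] args) {
        // The six steps listed in the text, as illustrative no-op activities.
        List<DeploymentTask> plan = List.of(
                () -> "upload binaries",
                () -> "load into memory",
                () -> "instantiate from factories",
                () -> "connect ports",
                () -> "configure properties",
                () -> "activate components");
        System.out.println(execute(plan));
    }
}
```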
In order to validate this approach, in   we have studied related work in the deployment domain. We have shown that every component platform has its own deployment machine, mainly because it has its own deployment model. The second point worth noting is that current research on deployment is spread across the deployment life cycle: no current deployment environment/machine integrates deployment concerns such as component placement, architectural rules, or dynamic discovery of the physical architecture. The proposition presented in   emerges from these two points.
The proposal is a deployment environment based on the Deployment and Configuration specification published by the Object Management Group (OMG). This specification provides a deployment meta-model independent of any component platform; this meta-model is then used in our deployment environment to represent the deployment model. Many refinements can then be applied to this model in order to weave in any deployment concern. This process is strongly inspired by the Model Driven Engineering paradigm. The final refinement of this model is a transformation from that deployment meta-model to the deployment task meta-model provided in  . Executing the generated tasks starts the effective deployment process.
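The final refinement step can be pictured as a transformation from a deployment plan to the task list the deployment machine executes. The plan and task shapes below are drastic simplifications invented for the example; they do not reproduce the OMG D&C meta-model.

```java
import java.util.*;

// Hypothetical sketch of the final refinement: a (much simplified) D&C-style
// deployment plan is transformed into ordered deployment tasks.
public class PlanToTasks {
    record PlanInstance(String name, String node) {} // one component placement

    // Transformation rule used here (illustrative): one install task per
    // placed instance, followed by one global activation task.
    static List<String> transform(List<PlanInstance> plan) {
        List<String> tasks = new ArrayList<>();
        for (PlanInstance i : plan) tasks.add("install " + i.name() + " on " + i.node());
        tasks.add("activate all");
        return tasks;
    }

    public static void main(String[] args) {
        List<PlanInstance> plan = List.of(
                new PlanInstance("bank", "node1"),
                new PlanInstance("client", "node2"));
        System.out.println(transform(plan));
    }
}
```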
A first personality of this environment has been developed, which makes it possible to deploy OpenCCM components from deployment plans expressed as OMG D&C descriptors. In addition to being more elegant, this deployment process has shown better performance than the deployment machine currently shipped with OpenCCM, i.e. the OpenCCM Distributed Component Infrastructure (DCI).
Component-Based Software Framework for Building Transaction Services
Keywords: Component-Based Software Framework, GoTM, Middleware Transaction Services.
GoTM is a component-based software framework for building middleware transaction services.
Overview of the GoTM Activity
Transactions have been involved in a wide range of applications ever since they were introduced in databases. Many transaction services have been developed to address the various transaction standards and transaction models. Furthermore, these transaction services are more and more difficult to build, since the complexity of transaction standards is constantly increasing. Each transaction service implements pieces of code that have already been written in other transaction services. As a consequence, there is no code factorization between transaction services, and the added values of each transaction service, such as extensibility or performance, are never reused in another transaction service.
In  , we present GoTM, a Component-Based Adaptive Middleware (CBAM) software framework. It can be used to build various transaction services that are compliant with existing transaction standards (OMG OTS, Sun JTS, etc.). GoTM provides adaptive properties to support different transaction models and standards within the same transaction service. GoTM also supports the definition of new transaction models and standards as new components of the framework. Finally, GoTM provides (re)configurability, extensibility, and adaptability as added values. The implementation of the GoTM framework is based on the Fractal component model. The next sections illustrate two experiments performed this year with the GoTM framework.
Building Heterogeneous Transaction Services
The diversity of transaction services leads to compatibility problems among applications using different transaction standards. Such compatibility is what allows transaction services to cooperate within a system. To deal with this issue, current trends rely on coordination protocols, which are responsible for synchronizing the execution of transaction services based on different transaction standards. Nevertheless, these protocols can be intrusive and often introduce additional complexity into the system.
In  , we present an approach to build an Adapted Transaction Service, named ATS, which supports several transaction standards concurrently. The objective of ATS is to facilitate the composition of transaction standards. To introduce ATS, we detail how the Object Transaction Service (OTS), Web Services Atomic Transaction (WS-AT), and Java Transaction Service (JTS) standards can be composed. To do so, the OTS, WS-AT, and JTS interfaces are analyzed and the required/provided functions are identified. These functions are specialized into strategies that implement the transaction standard semantics. ATS is built by composing these strategies with adapters, which ensure compliance with the transaction standard interfaces. The ATS implementation is also introduced; it uses the GoTM framework and the Fractal component model. GoTM is a software framework that provides various transactional components.
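The strategy/adapter composition can be sketched as follows: the shared transactional behaviour is captured once in a strategy, and thin per-standard adapters expose it through each standard's own interface. Method names here are simplified placeholders and do not match the real OTS, WS-AT, or JTS APIs.

```java
// Hypothetical sketch of the ATS composition idea: one completion strategy,
// several standard-facing adapters reusing it.
public class AtsSketch {
    interface CompletionStrategy { String complete(String txId, boolean commit); }

    // Shared strategy implementing the common completion semantics once.
    static final CompletionStrategy twoPhase =
            (txId, commit) -> (commit ? "committed:" : "rolledback:") + txId;

    // OTS-style adapter: separate commit/rollback operations.
    static class OtsLikeAdapter {
        String commit(String tx)   { return twoPhase.complete(tx, true); }
        String rollback(String tx) { return twoPhase.complete(tx, false); }
    }

    // JTS-style adapter: a single completion operation with an outcome flag,
    // backed by the very same strategy component.
    static class JtsLikeAdapter {
        String completeTransaction(String tx, boolean commit) {
            return twoPhase.complete(tx, commit);
        }
    }

    public static void main(String[] args) {
        System.out.println(new OtsLikeAdapter().commit("t1"));
        System.out.println(new JtsLikeAdapter().completeTransaction("t2", false));
    }
}
```

Adding a further standard then amounts to writing one more adapter over the existing strategies rather than reimplementing the service.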
We show that this approach does not introduce additional overhead for legacy applications and that it scales well. Moreover, it can easily be extended to support additional transaction standards. Future work will investigate the definition of personalities for the Web Services Transaction and Activity Services.
Supporting dynamic adaptation of 2-Phase Commit protocols
For years, transactional protocols have been defined to address specific application needs. Traditionally, when implementing a transaction service, a protocol is chosen and remains the same throughout the system's execution. Nevertheless, the dynamic nature of today's application contexts (e.g., mobile, ad hoc, peer-to-peer) and behavioral variations (semantic-related aspects) motivate the need for application adaptation: the next generation of applications should be adaptive, or better, self-adaptive. In  , we propose (1) a component-based architecture of standard 2PC-based protocols and (2) a self-Adaptive Component-based cOmmit Management, named ACOM. Self-adaptation is obtained through behaviour awareness and component-based reconfiguration, which allows ACOM to select the most appropriate protocol according to the context. We show that ACOM performs better than any single commit protocol in a variable system, and that the reconfiguration cost can be negligible.
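The self-adaptation principle can be illustrated as follows: observe the recent behaviour of the system and reconfigure to the commit protocol that behaviour favours. The two 2PC variants, the abort-rate criterion, and the 0.5 threshold below are assumptions made for the example, not values taken from ACOM.

```java
// Illustrative sketch of behaviour-aware protocol selection. With frequent
// aborts, a presumed-abort 2PC variant avoids extra work for aborted
// transactions; otherwise presumed-commit is typically cheaper.
public class AcomSketch {
    enum CommitProtocol { PRESUMED_COMMIT, PRESUMED_ABORT }

    // Hypothetical selection rule driven by the observed abort rate.
    static CommitProtocol select(double observedAbortRate) {
        return observedAbortRate > 0.5
                ? CommitProtocol.PRESUMED_ABORT
                : CommitProtocol.PRESUMED_COMMIT;
    }

    public static void main(String[] args) {
        // A mostly-aborting workload triggers a reconfiguration.
        System.out.println(select(0.8));
        System.out.println(select(0.1));
    }
}
```

In the real service, such a decision would trigger a component-based reconfiguration that swaps the protocol components at runtime rather than a simple branch.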
Future work will investigate the design of a dedicated high-level model to describe transaction validation protocols.
Benchmarking of Java-Based Middleware Platforms
Keywords: Benchmarking, Round-Trip Latency, Java-Based Middleware, ORB, CORBA, Web Services, EJB, CCM.
This work aims at benchmarking various heterogeneous middleware platforms in order to help application designers select the right platform according to their applications' performance requirements.
Nowadays, distributed Java-based applications can be built on top of a plethora of middleware technologies, such as Object Request Brokers (ORB) like CORBA and Java RMI, Web Services, and component-oriented platforms like Enterprise Java Beans (EJB) or the CORBA Component Model (CCM). Choosing the middleware technology that fits the application requirements is a complex activity driven by various criteria, such as economic costs (e.g. commercial or open-source availability, engineer training and skills), conformance to standards, advanced proprietary features, performance, scalability, etc. Regarding performance, many basic metrics can be evaluated, such as the round-trip latency, jitter, or throughput of two-way interactions according to various parameter types and sizes.
Many projects have already evaluated these middleware performance metrics. Unfortunately, they have not compared different kinds of middleware platforms simultaneously, which would be helpful for application designers who need to select both the kind of middleware technology to apply and the best implementation to use.
In  , we present an experiment report on the design and implementation of a simple benchmark to evaluate the round-trip latency of various Java-based middleware platforms, i.e. only measuring the response time of two-way interactions without parameters. Even though it is simple, this benchmark is relevant, as it allows users to evaluate the minimal mean response time and the maximal number of interactions per second provided by a middleware platform. Empirical results and analysis are discussed for a large set of widely available implementations, including various ORBs (Java RMI, Java IDL, ORBacus, JacORB, OpenORB, and Ice), Web Services projects (Apache XML-RPC and Axis), and component-oriented platforms (JBoss, JOnAS, OpenCCM, Fractal, ProActive). This evaluation shows that our OpenCCM platform already provides better performance results than most of the other evaluated middleware platforms.
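The core of such a benchmark is a simple measurement loop. The Java sketch below shows the principle only: it times N two-way invocations of a parameterless operation and reports the mean round-trip latency. A local no-op target stands in for the remote middleware stub, and the warm-up count is an arbitrary choice for the example.

```java
// Minimal sketch of a round-trip latency benchmark: time N two-way
// invocations without parameters and report the mean latency.
public class LatencyBench {
    interface Target { void ping(); } // two-way operation, no parameters

    static double meanLatencyNanos(Target target, int warmup, int iterations) {
        for (int i = 0; i < warmup; i++) target.ping(); // let the JIT warm up
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) target.ping();
        return (System.nanoTime() - start) / (double) iterations;
    }

    public static void main(String[] args) {
        // In a real run, target would be a remote stub (RMI, CORBA, SOAP, ...).
        double mean = meanLatencyNanos(() -> {}, 10_000, 100_000);
        System.out.println("mean round-trip (ns): " + mean);
        System.out.println("max interactions/s:   " + (1e9 / mean));
    }
}
```

The inverse of the mean latency gives the maximal number of interactions per second mentioned above.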