Team oasis


Section: New Results

Grid Middleware and Applications

Optimising Distributed Object Computing

Participants : B. Amedro, D. Caromel, L. Henrio, M. Khan.

Sterile Request Differentiation

The ProActive rendez-vous is a synchronisation step that occurs each time a request is sent. This synchronisation is necessary to ensure the causal ordering of requests. However, in some cases, and for performance reasons, it can be performed in parallel with the computation.

Our work distinguishes a sub-category of ProActive requests: sterile requests. A request is said to be sterile if it has no descendant, i.e. if, during its service, it sends no new requests except to itself or to the activity that sent the request it is serving (its parent). Under this definition, the rendez-vous of a sterile request can be delegated to a concurrent thread, provided the parameters of the request are not modified after the sending. Such a request is invoked using the ForgetOnSend primitive.
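To illustrate how this primitive is meant to be used, here is a minimal sketch in which a logging request is treated as sterile; the PAActiveObject.newActive and setForgetOnSend calls and their exact signatures are assumptions inferred from the description above rather than verbatim ProActive API.

import org.objectweb.proactive.api.PAActiveObject;

public class SterileExample {

    // A worker whose 'log' request is sterile: serving it sends no new
    // requests to third-party activities, so its rendez-vous can safely be
    // delegated to a concurrent thread.
    public static class Worker {
        public Worker() { }                        // no-arg constructor for active object creation
        public void log(String message) {          // sterile request
            System.out.println("logged: " + message);
        }
        public int compute(int x) {                // ordinary (non-sterile) request
            return x * x;
        }
    }

    public static void main(String[] args) throws Exception {
        Worker w = PAActiveObject.newActive(Worker.class, null);

        // Assumption: declare 'log' as ForgetOnSend so that the rendez-vous
        // of these requests is handled by a concurrent thread; the exact
        // primitive name and signature may differ between ProActive versions.
        PAActiveObject.setForgetOnSend(w, "log");

        w.log("hello");                            // rendez-vous overlapped with local work
        System.out.println(w.compute(21));
    }
}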

A Study of Future Update Strategies

Futures enable an efficient and easy-to-use programming paradigm for distributed applications. In ProActive, an active object is analogous to a process, having its own thread and a message queue for storing incoming requests. Futures, as used in ASP and ProActive, represent the result of an asynchronous invocation and can be safely transmitted between processes. As references to futures disseminate, a strategy is necessary to propagate the computed result of each future to the processes that need it.

Our work addresses the problem of efficiently transmitting those computed results. It presents three main strategies for updating futures: two eager strategies, eager forward-based and eager message-based, and one lazy strategy, lazy message-based. The two eager strategies update futures as soon as their results are available, whereas the lazy strategy is an on-demand strategy that resolves a future only when its value is strictly needed. We focused on providing a semi-formal description, which allowed us to perform a preliminary cost analysis. To verify this analysis, we carried out experiments measuring the efficiency of each strategy under different conditions. The details of this work appear in [32].
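The following toy, single-JVM sketch illustrates the push-versus-pull distinction between the eager and lazy strategies; all names are illustrative, and the code deliberately ignores the distributed machinery of the actual ProActive implementation.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CountDownLatch;

// Illustrative only: a toy future whose result is either pushed to all
// registered holders as soon as it is computed (eager strategies) or pulled
// only when a holder actually needs the value (lazy strategy).
public class ToyFuture<V> {

    public enum UpdateStrategy { EAGER_FORWARD, EAGER_MESSAGE, LAZY_MESSAGE }

    public interface Holder<T> { void deliver(T value); }

    private final UpdateStrategy strategy;
    private final List<Holder<V>> holders = new ArrayList<>();
    private final CountDownLatch available = new CountDownLatch(1);
    private volatile V value;

    public ToyFuture(UpdateStrategy strategy) { this.strategy = strategy; }

    // A process holding a reference to this future registers here.
    public synchronized void register(Holder<V> holder) { holders.add(holder); }

    // Called by the computing activity once the result is known.
    public synchronized void resolve(V computed) {
        value = computed;
        available.countDown();
        if (strategy != UpdateStrategy.LAZY_MESSAGE) {
            // Both eager strategies push the value immediately; in this toy
            // model they differ only in how the set of holders is tracked.
            for (Holder<V> h : holders) h.deliver(computed);
        }
    }

    // Called by a holder that strictly needs the value (wait-by-necessity).
    // Under the lazy strategy this is the only point where the value flows.
    public V get() throws InterruptedException {
        available.await();
        return value;
    }
}

In such a model, a mixed-strategy extension would amount to choosing the UpdateStrategy per active object, component, or individual future rather than globally.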

We are currently working on extending our implementation to support mixed strategies: the goal is to specify, at the level of an active object or component (and later at the level of an individual future), which update strategy should be used. Another interesting and non-trivial problem is developing (and formally proving) a protocol for the cancellation of requests in an active-object environment. A sub-problem is to allow the cancellation of only specific future updates, which would improve performance, for example in workflow-based scenarios.

Peer-to-Peer Infrastructure

Participants : I. Filali, F. Huet, F. Bongiovanni, L. Pellegrino.

Research on P2P networks has focused not only on the network architecture but also on the semantics of the stored data, moving from simple keywords to more sophisticated RDF-based data models. In the context of the SOA4ALL project, we are working on the design and implementation of a distributed semantic space infrastructure ([33], [34], [18]). We have proposed a multi-layer architecture based on DHT overlays. The infrastructure aims at fully distributing data among the participating peers. In the second part of the project, the infrastructure will be used to store semantic descriptions of services, such as a monitoring service. We are exploring how to improve P2P information retrieval mechanisms in order to efficiently query the stored RDF service descriptions. We are also investigating the possibility of adding the Publish/Subscribe (Pub/Sub) paradigm on top of the semantic space. The semantic space is built on top of multiple Structured Overlay Networks (SONs), such as CAN and Chord, which differ in topology, routing scheme and maintenance, so the Pub/Sub layer would have to be generic enough to work on any SON. Most existing SONs share a DHT abstraction layer (get/put/remove), and we would like to take advantage of this commonality to build a fault-tolerant Pub/Sub abstraction on top of it.
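As an illustration of this layering, the hypothetical Java interfaces below sketch the common DHT abstraction (get/put/remove) shared by structured overlays such as CAN and Chord, together with a Pub/Sub contract written only against that abstraction; the names are ours and do not correspond to any specific overlay implementation.

import java.io.Serializable;
import java.util.Set;
import java.util.function.Consumer;

// Hypothetical sketch: the minimal key-based abstraction that most structured
// overlay networks expose, regardless of their topology or routing scheme.
public interface Dht<K extends Serializable, V extends Serializable> {
    void put(K key, V value);
    Set<V> get(K key);
    void remove(K key, V value);
}

// A topic-based Pub/Sub contract that depends only on the Dht interface, so
// that the same layer can run unchanged on top of a CAN, a Chord, or any
// other overlay providing get/put/remove.
interface PubSub<E extends Serializable> {
    void subscribe(String topic, Consumer<E> subscriber);
    void publish(String topic, E event);
}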

Grid Computing for Computational Finance

Participants : F. Baude, V. D. Doan.

Computation in financial services includes both overnight calculations and time-critical computations during the daily trading hours. Academic research and industrial technical reports have largely focused on overnight computing tasks and the application of parallel or distributed computing techniques. In this work we focus instead on the time-critical computations required during trading hours, in particular Monte Carlo simulation for the pricing of options and other derivative products. We have designed and implemented a software system called PicsouGrid which utilises the ProActive library to parallelise and distribute various option pricing algorithms. PicsouGrid has been deployed on various grid systems to evaluate its scalability and performance for European option pricing. We previously developed several European option pricing algorithms, such as standard, barrier and basket options, to experiment with PicsouGrid [50], [58], [24]. We then implemented several Bermudan-American (BA) option pricing algorithms (namely Longstaff-Schwartz and Ibanez-Zapatero). Due to the early-exercise feature of BA options, these algorithms have a much higher computational demand, and sophisticated strategies are therefore employed to improve the efficiency of the price estimate, which in turn complicates the implementation of a parallelisation strategy [14]. Our work thus focuses on finding efficient parallelisation strategies that can be used for a range of pricing algorithms. The objective is to allow algorithm designers to concentrate on an efficient serial implementation without concern for the parallelisation, and to let the model automatically or semi-automatically provide a load-balanced parallel implementation for heterogeneous computing resources.
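For concreteness, the sketch below shows a plain sequential Monte Carlo estimator for a European call under the Black-Scholes model, i.e. the kind of pricing kernel that PicsouGrid parallelises and distributes; the class name, parameters and values are illustrative and unrelated to the actual PicsouGrid code.

import java.util.Random;

// Illustrative only: Monte Carlo pricing of a European call, discounting the
// average payoff of simulated terminal spot prices under Black-Scholes.
public class EuropeanCallMonteCarlo {

    public static double price(double spot, double strike, double rate,
                               double vol, double maturity,
                               long simulations, long seed) {
        Random rng = new Random(seed);
        double drift = (rate - 0.5 * vol * vol) * maturity;
        double diffusion = vol * Math.sqrt(maturity);
        double payoffSum = 0.0;
        for (long i = 0; i < simulations; i++) {
            double z = rng.nextGaussian();                       // standard normal draw
            double terminalSpot = spot * Math.exp(drift + diffusion * z);
            payoffSum += Math.max(terminalSpot - strike, 0.0);   // call payoff
        }
        // Discount the average payoff back to today.
        return Math.exp(-rate * maturity) * payoffSum / simulations;
    }

    public static void main(String[] args) {
        System.out.println(price(100, 100, 0.05, 0.2, 1.0, 1_000_000, 42));
    }
}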

In previous years, we also investigated the parallelisation of Picazo's Classification-Monte Carlo (CMC) algorithm for pricing very high-dimensional BA options and performed experiments on the Grid'5000 multi-site test-bed. The results were published in the Workshop on High Performance Computational Finance at the Supercomputing Conference [54]. As part of the Grids@Work conference, we defined, together with Mireille Bossy and Frédéric Abergel from the MAS laboratory of Ecole Centrale de Paris, the fifth Grid Plugtest for finance: the Super Quant Monte Carlo Challenge 2008 [29]. Based on the Master/Slave API of the ProActive library, we designed and implemented an API dedicated to the parallel handling of Monte Carlo simulations, together with a financial benchmark suite for the Plugtest participants.
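To illustrate the parallelisation pattern, the sketch below splits a pricing run into independent tasks and averages the partial estimates, reusing the EuropeanCallMonteCarlo.price kernel sketched earlier; it relies on standard Java concurrency as a generic stand-in and is not the ProActive Master/Slave API itself.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Illustrative only: Monte Carlo simulations are embarrassingly parallel, so
// the run is partitioned into independent tasks whose estimates are averaged.
public class ParallelMonteCarlo {

    public static double parallelPrice(int workers, long simsPerWorker)
            throws InterruptedException, ExecutionException {
        ExecutorService pool = Executors.newFixedThreadPool(workers);
        List<Future<Double>> partials = new ArrayList<>();
        for (int w = 0; w < workers; w++) {
            // Simplistic per-task seed; production code would use a proper
            // parallel random number generator.
            final long seed = 1000L + w;
            partials.add(pool.submit(() ->
                    EuropeanCallMonteCarlo.price(100, 100, 0.05, 0.2, 1.0,
                                                 simsPerWorker, seed)));
        }
        double sum = 0.0;
        for (Future<Double> f : partials) sum += f.get();
        pool.shutdown();
        return sum / workers;   // average of the unbiased partial estimates
    }

    public static void main(String[] args) throws Exception {
        System.out.println(parallelPrice(4, 250_000));
    }
}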

In 2009, continuing the work of the fifth Grid Plugtest for finance, we studied the use of the benchmark suite as a tool for indirectly comparing the performance of grid middleware. Our publication was accepted for presentation, and an extended version of the paper is now undergoing the final proceedings review [22]. Furthermore, we investigated the numerical validation of the simulated results of the benchmark suite (i.e. the problem of dimension reduction for basket option pricing). The detailed results were reported in the INRIA technical report [29]. Regarding the use of different classification algorithms within the CMC algorithm, we ran the CMC algorithm on several test cases in order to assess, for each classification algorithm, the trade-off between accuracy and computational time.

Federating DSBs at Internet Scale Upon a Component-Based Approach

Participants : F. Baude, V. Legrand, E. Mathias, C. Ruz.

The EU-funded NESSI (Networked European Software and Services Initiative) Service Oriented Architecture for All (SOA4All) project aims at realizing a world where billions of parties expose and consume services via advanced Web technology: the main objective of the project is to provide a comprehensive framework that integrates complementary and evolutionary technical advances (i.e., SOA, context management, Web principles, Web 2.0 and semantic technologies) into a coherent and domain-independent service delivery platform. In other terms, one expected conceptual outcome of the project is a paradigm often referred to as the Service Web or Service Cloud. A Service Cloud must be able to span the whole Internet to allow end-users to use and coordinate external services, potentially executed anywhere on the globe.

In practice, the idea is to be able to host all needed technical services on a domain-independent service delivery platform. Such a complex platform may require the definition of a hybrid underlying infrastructure. To reach Internet scale, we need to connect, in a seamless manner, resources obtained from clusters, Grids and Clouds, assuming they are made available to the SOA4All platform for transparent use by all web-connected end-user devices [18]. This requirement is in line with the current trend whereby scale-out, outsourcing and Software as a Service have become popular terms, reflecting a shift in the way enterprises support and organize their IT services.

The solution we put forward to integrate resources in a seamless manner is based on GCM components and has already been successfully employed to couple independent MPI applications running in different domains (Section 6.2.4). The general architecture and concepts of the ESB federation will be published in [18].

This component-based communication layer will be responsible for routing messages among services distributed in a multi-domain environment. Current work consists in adapting the existing solution into an independent communication layer for the PEtALS Enterprise Service Bus. We are also investigating a multi-level registration and lookup strategy, integrated into the routing component infrastructure, to solve the issue of service localization. The semantic space will also be connected to the federated ESB [34], and monitoring information flowing through the ESB will be gathered by specific GCM components; it will then be delivered to the external SLA and analysis tool [38].
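As a rough illustration of the kind of contract such a routing layer could expose, the hypothetical interface below sketches a two-level (local domain, then federation) registration and lookup scheme; it is not the PEtALS or GCM API.

import java.net.URI;
import java.util.Optional;

// Hypothetical sketch: services register under a logical name, and the router
// resolves a name to a concrete endpoint, escalating the lookup from the
// local domain to the federation when needed.
public interface ServiceRegistry {
    void register(String serviceName, URI endpoint);
    Optional<URI> lookupLocal(String serviceName);
    Optional<URI> lookupFederation(String serviceName);

    default Optional<URI> resolve(String serviceName) {
        // Multi-level lookup: try the local domain first, then the federation.
        Optional<URI> local = lookupLocal(serviceName);
        return local.isPresent() ? local : lookupFederation(serviceName);
    }
}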

Standardisation of the Grid Component Model

Participants : D. Caromel, L. Henrio, E. Madelaine, B. Sauvan.

The existence of several different grid middleware platforms and job schedulers calls for a standardisation effort in the description of the application being deployed and of the grid structure it is deployed on. With the support of our GridCOMP partners, we have been working on the standardisation of various aspects of the Grid Component Model (GCM) within the GRID technical committee of ETSI. These standards come in four parts:

