
Section: New Results

Large-Scale Cloud Resource Management

Participants : Yves Caniou, Eddy Caron, Marcos Dias de Assunção, Christian Perez, Pedro de Souza Bento Da Silva.

An Efficient Communication Aware Heuristic for Multiple Cloud Application Placement

To deploy a distributed application on the cloud, cost, resource, and communication constraints must be considered to select the most suitable Virtual Machines (VMs) from private and public cloud providers. This selection becomes very complex at large scale and, as the problem is NP-hard, its automation must take scalability into account. In this work [21], we propose a heuristic that computes initial placements for distributed component-based applications on possibly multiple clouds, with the objective of minimizing VM renting costs while satisfying the applications' resource and communication constraints. We evaluate the heuristic's performance and determine its limitations by comparing it to other placement approaches, namely exact algorithms and meta-heuristics, and show that the proposed heuristic computes a good solution much faster than these alternatives.
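The flavor of such a heuristic can be sketched as a greedy pass over the application's communication edges: components that exchange the most traffic are considered first and co-located on the same VM when capacity allows, otherwise the cheapest feasible VM is rented. This is a minimal illustrative sketch with hypothetical data, not the actual algorithm of [21]:

```python
from itertools import count

def place(components, vm_types, edges):
    """Greedy communication-aware placement (illustrative sketch).

    components: {name: cpu_demand}
    vm_types:   [(cpu_capacity, hourly_cost)]
    edges:      [(a, b, traffic)], processed by descending traffic
    Returns (placement, total_renting_cost).
    """
    placement = {}   # component -> vm id
    vms = {}         # vm id -> [(capacity, cost), used_cpu]
    ids = count()

    def open_vm(demand):
        # rent the cheapest VM type that fits the demand
        cap, cost = min((t for t in vm_types if t[0] >= demand),
                        key=lambda t: t[1])
        vm = next(ids)
        vms[vm] = [(cap, cost), 0.0]
        return vm

    def assign(c, vm):
        placement[c] = vm
        vms[vm][1] += components[c]

    # heavy communicators first: co-locating them removes the most traffic
    for a, b, _ in sorted(edges, key=lambda e: -e[2]):
        for c in (a, b):
            if c in placement:
                continue
            other = b if c == a else a
            if other in placement:  # try to co-locate with the placed peer
                vm = placement[other]
                (cap, _), used = vms[vm]
                if used + components[c] <= cap:
                    assign(c, vm)
                    continue
            assign(c, open_vm(components[c]))
    for c in components:  # components with no communication edges
        if c not in placement:
            assign(c, open_vm(components[c]))
    cost = sum(t[1] for t, _ in vms.values())
    return placement, cost
```

On a toy input, the heaviest edge drives co-location: `web` and `cache` end up on one VM while `db`, too large to share it, gets its own.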

Production Deployment Tools for IaaSes: an Overall Model and Survey

Emerging applications for the Internet of Things (IoT) are complex programs composed of multiple modules (or services). For scalability, reliability, and performance, such modular applications are distributed on infrastructures that support utility computing (e.g., Cloud, Fog). To simplify the operation of such infrastructures, an Infrastructure-as-a-Service (IaaS) manager is required. OpenStack is the de facto open-source solution at the IaaS level of the Cloud paradigm. However, OpenStack is itself a large modular application, composed of more than 150 modules, which makes it hard to deploy manually. To fully understand how IaaSes are deployed today, we propose in [16] an overall model of the application deployment process that describes each step and its interactions. This model then serves as the basis for analysing five tools used to deploy OpenStack in production: Kolla, Enos, Juju, Kubernetes, and TripleO. Finally, a comparison is provided and the results are discussed with a view to extending this analysis.
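A deployment-process model of this kind can be viewed as a dependency graph of steps, where each step may only run once the steps it interacts with have completed. The step names below are hypothetical, not taken from [16] or from any specific tool; the sketch only illustrates how such a model yields a valid deployment order:

```python
from graphlib import TopologicalSorter

# Hypothetical deployment steps and their dependencies
# (each step maps to the set of steps it requires).
steps = {
    "provision_hosts": set(),
    "install_packages": {"provision_hosts"},
    "generate_config": {"provision_hosts"},
    "bootstrap_services": {"install_packages", "generate_config"},
    "validate_deployment": {"bootstrap_services"},
}

# A topological order is one valid sequential deployment plan;
# steps with no mutual dependency could also run in parallel.
order = list(TopologicalSorter(steps).static_order())
```

Modelling the process as a graph is what lets one compare tools: each deployment tool covers some subset of the steps and imposes its own ordering and parallelism on them.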

Communication Aware Task Placement for Workflow Scheduling on DaaS-based Cloud

We proposed a framework for building an autonomous workflow manager and developed the different components required for this design to work. We believe this design will help solve current issues with workflow deployment and scaling on shared IaaS Cloud platforms. In that regard, our first contribution is a model of the network topology [24], which is a key factor in predicting communication patterns and should therefore be considered by clustering algorithms. By designing a generic network model, we improved the results of static scheduling on DaaS-based Cloud platforms: the resulting clusters are more efficient both in terms of makespan (the primary objective) and in terms of deployment cost compared to previous, non-network-aware clustering algorithms.
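The intuition behind network-aware clustering can be sketched as follows: tasks whose data exchanges take the longest over the modelled network (volume divided by link bandwidth) are merged into the same cluster first, turning those transfers into local ones. This is an illustrative greedy sketch with hypothetical volumes and bandwidths, not the clustering algorithm of [24]:

```python
def cluster_tasks(transfers, bandwidth, max_size):
    """Greedy network-aware clustering (illustrative sketch).

    transfers: [(t1, t2, volume)]
    bandwidth: {(t1, t2): link bandwidth for that transfer}
    max_size:  maximum number of tasks per cluster
    Returns a list of task clusters (sets).
    """
    cluster_of = {}  # task -> representative of its cluster
    clusters = {}    # representative -> set of tasks

    def find(t):
        if t not in cluster_of:
            cluster_of[t] = t
            clusters[t] = {t}
        return cluster_of[t]

    # longest transfers first: they benefit most from co-location
    ranked = sorted(transfers,
                    key=lambda e: -(e[2] / bandwidth[(e[0], e[1])]))
    for a, b, _ in ranked:
        ca, cb = find(a), find(b)
        if ca != cb and len(clusters[ca]) + len(clusters[cb]) <= max_size:
            clusters[ca] |= clusters[cb]
            for t in clusters.pop(cb):
                cluster_of[t] = ca
    return list(clusters.values())
```

Note how the ranking uses transfer *time* rather than raw volume: a small transfer over a slow link can matter more than a large one over a fast link, which is precisely what a non-network-aware algorithm misses.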

Communication Aware Stochastic Tasks Scheduling Composing Scientific Workflows on a Cloud

In order to study the scheduling of workflows composed of stochastic tasks on a set of resources managed as a cloud, we first proposed a new execution model that accounts for data transfers, heterogeneity, and realistic billing of used resources, based to a great extent on the offers of three major cloud providers: Google Cloud, Amazon EC2, and OVH [25]. We then studied new scheduling heuristics on a set of workflows taken from the Pegasus benchmark suite [23]. During the mapping process, the budget-aware algorithms make conservative assumptions to avoid exceeding the initial budget; we further improve the results with refined versions that re-schedule some tasks onto faster virtual machines, thereby spending any budget fraction left over by the first allocation. These refined variants are much more time-consuming than the base algorithms, so there is a trade-off in terms of scalability. We report an extensive set of simulations: most of the time, our budget-aware algorithms achieve efficient makespans while enforcing the given budget, despite the uncertainty in task weights.
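The two-phase idea of a conservative mapping followed by a budget-spending refinement can be sketched on independent tasks: map everything to the most cost-efficient VM type first, then upgrade the heaviest tasks to the fastest type while the leftover budget allows. Task weights, VM speeds, and prices below are hypothetical, and the sketch ignores data transfers and stochasticity handled by the actual algorithms:

```python
def budget_aware_map(task_weights, vm_types, budget):
    """Two-phase budget-aware mapping (illustrative sketch).

    task_weights: {task: expected work}
    vm_types:     [(speed, cost_per_time_unit)]
    Cost of a task on a VM type = weight / speed * cost_per_time_unit.
    Returns (mapping, spent_budget, total_compute_time).
    """
    # Phase 1 (conservative): everything on the most cost-efficient type,
    # i.e., the lowest cost per unit of work.
    cheapest = min(vm_types, key=lambda v: v[1] / v[0])
    mapping = {t: cheapest for t in task_weights}
    spent = sum(w / cheapest[0] * cheapest[1] for w in task_weights.values())
    if spent > budget:
        raise ValueError("budget too small even for the conservative mapping")

    # Phase 2 (refinement): upgrade the heaviest tasks to the fastest type
    # while the leftover budget covers the extra cost.
    fastest = max(vm_types, key=lambda v: v[0])
    for t in sorted(task_weights, key=task_weights.get, reverse=True):
        w = task_weights[t]
        old = mapping[t]
        delta = w / fastest[0] * fastest[1] - w / old[0] * old[1]
        if delta > 0 and spent + delta <= budget:
            mapping[t] = fastest
            spent += delta

    # total compute time if all tasks ran sequentially (not a makespan)
    total_time = sum(w / mapping[t][0] for t, w in task_weights.items())
    return mapping, spent, total_time
```

Upgrading heaviest-first is a natural choice here, since the longest tasks gain the most execution time per unit of extra budget; the refined variants in the paper are more sophisticated, but this captures why they never exceed the initial budget.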