Section: Application Domains
Evolution of Scheduling Policies
Scheduling on the Grid
Recent developments in grid environments have focused on the need to efficiently schedule tasks onto distributed computational servers. The problem consists in deciding which compute resource should perform which task, and when, with a view to optimizing some quality metric.
Thus, environments based on the client-agent-server model, such as NetSolve, Ninf, or DIET, distribute client requests over a set of distributed servers. The performance of such environments greatly depends on the scheduling heuristic implemented. In these systems, a server executes each request as soon as it is received: it never delays the start of the execution.
In order for such a system to be efficient, the mapping function must choose a server that fulfills several criteria. First, the total execution time of the client application (i.e., the makespan) has to be as short as possible. Second, each request of every client must be served as fast as possible. Finally, the resource utilization must be optimized. However, these objectives are often contradictory. It is therefore necessary to design multi-criteria heuristics that strike a balance between these criteria.
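As an illustration, one common baseline mapping heuristic in such systems selects, for each incoming request, the server that would finish it earliest. The following sketch uses hypothetical server names, speeds, and request sizes; it is not the heuristic of any particular middleware:

```python
# Sketch of a minimum-completion-time style mapping heuristic.
# Server names, speeds, and the request size are hypothetical.

def pick_server(servers, request_size):
    """Choose the server minimizing the request's completion time.

    servers: dict mapping name -> (ready_time, speed), where
    ready_time is when the server becomes free and speed is in
    work units per second.
    """
    def completion_time(name):
        ready, speed = servers[name]
        return ready + request_size / speed

    return min(servers, key=completion_time)

servers = {"s1": (4.0, 2.0), "s2": (0.0, 1.0), "s3": (1.0, 4.0)}
best = pick_server(servers, request_size=8.0)
# s3 finishes at 1.0 + 8/4 = 3.0, earlier than s1 (8.0) or s2 (8.0)
```

Such a greedy rule optimizes each request's response time in isolation; balancing it against makespan and utilization is what makes the multi-criteria problem hard.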
Another characteristic of grid environments is their dynamic nature and volatility. The availability of resources can change over time, and resources may be shared with other users. Users may behave unpredictably or maliciously. Moreover, workloads submitted to a grid are subject to uncertainty in terms of duration or submission time. Coping with these sources of unpredictability requires modeling them and designing scheduling algorithms that exploit these models. Relevant metrics then include robustness (a schedule is said to be robust if it can absorb some uncertainty) and fault tolerance (producing a schedule that remains efficient in the presence of failures).
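One simple way to reason about robustness under duration uncertainty is to estimate a schedule's expected makespan by sampling perturbed task durations. The Monte Carlo sketch below is illustrative only; the two-server schedule and the uniform noise model are assumptions, not a model from the text:

```python
import random

def expected_makespan(schedule, noise=0.2, trials=10000, seed=0):
    """Estimate the expected makespan when task durations are
    uncertain (uniform +/- `noise` around each estimate).

    schedule: list of per-server task-duration lists; tasks run
    sequentially on each server, servers run in parallel.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        makespan = max(
            sum(d * rng.uniform(1 - noise, 1 + noise) for d in tasks)
            for tasks in schedule
        )
        total += makespan
    return total / trials

# Two hypothetical servers: deterministic makespan would be max(6, 5) = 6,
# but uncertainty pushes the expected makespan slightly above 6.
est = expected_makespan([[2.0, 4.0], [5.0]])
```

A schedule whose expected makespan degrades little under such perturbations is, in this sense, more robust.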
The Message Passing Interface (MPI) is the main standard for message passing and SPMD programming. The libraries implementing this standard are widely used for programming parallel scientific applications.
This standard was designed for small-scale systems and shows some limitations on today's distributed or very large-scale systems. One of the main problems is the lack of fault tolerance: if a node crashes, a standard MPI application fails. However, node failures are very common in distributed environments and happen frequently in today's supercomputers.
MPI programs must therefore be able to cope with such failures. Several approaches exist, among which checkpoint/restart and redundancy.
The P2P-MPI middleware offers transparent support for redundancy through the replication of computations. The level of replication, i.e., the number of replicas per process, can be chosen at runtime depending on the volatility of the environment. Maintaining the coherence of the system requires extra message exchanges, which adds an overhead that increases with the replication level. We have studied and modeled this overhead as a function of the replication level, and determined, based on a study of real failure traces, which trade-off between execution time and failure probability is optimal for a given failure distribution.
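The trade-off can be illustrated with a deliberately simple model (not the model from the study above): if each replica fails independently with probability p during a run, an application of n processes with r replicas each fails only when all r replicas of some process fail. The values of p, n, and r below are hypothetical:

```python
def app_failure_probability(p, n, r):
    """Probability that the application fails, i.e., that at least
    one of the n processes loses all r of its replicas, assuming
    independent replica failures with probability p each."""
    per_process_loss = p ** r  # all replicas of one process fail
    return 1.0 - (1.0 - per_process_loss) ** n

# Hypothetical values: 64 processes, 5% per-replica failure probability.
# Each extra replica sharply lowers the application failure probability,
# at the cost of the replication overhead discussed above.
for r in (1, 2, 3):
    prob = app_failure_probability(0.05, 64, r)
```

Under such a model, the optimal replication level is the one where the marginal gain in reliability no longer outweighs the added coordination overhead.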