Team Grand-Large


Section: Software

MPICH-V

Currently, MPICH-V provides six protocols: MPICH-V1, MPICH-V2, MPICH-V/CL, and three algorithms for MPICH-Vcausal.

MPICH-V1 implements an original fault-tolerant protocol developed specifically for Desktop Grids, relying on uncoordinated checkpointing and remote pessimistic message logging. It uses reliable nodes, called Channel Memories, to store all in-transit messages.

MPICH-V2 is designed for homogeneous networks such as clusters, where the number of reliable components assumed by MPICH-V1 is too high. It reduces the fault-tolerance overhead and increases the tolerance to node volatility. This is achieved by a new protocol that splits message logging into message payload logging and event logging; these two elements are stored separately, the message payload on the sender node and the message events on a reliable event logger.

The third protocol, MPICH-V/CL, is derived from the Chandy-Lamport global snapshot algorithm. It implements coordinated checkpointing without message logging, and exhibits less overhead than MPICH-V2 on clusters with low fault frequencies.

MPICH-Vcausal completes the set of message-logging protocols, implementing causal logging. It requires less synchrony than the pessimistic logging protocols, allowing messages to influence the system before the sender is sure that nondeterministic events have been logged, at the cost of appending some information to every communication. The amount of appended information may grow over time, and different causal protocols, with different cut techniques, have been studied within the MPICH-V project.
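To make the payload/event split of MPICH-V2 concrete, here is a minimal C sketch of the two logging paths. All names (log_payload, event_logger_store, log_event_and_deliver) are hypothetical, and the event-logger call is stubbed out; the point is only the structure: payloads are copied on the sender, while reception events must be acknowledged by a reliable logger before delivery (pessimistic logging).

    /* Minimal sketch of MPICH-V2-style split logging (hypothetical names). */
    #include <stdlib.h>
    #include <string.h>

    typedef struct { int src, dst, ssn, rsn; } event_t;  /* message determinant */

    typedef struct payload_log {          /* sender-side payload log */
        void *data; size_t len; int ssn;
        struct payload_log *next;
    } payload_log_t;

    static payload_log_t *plog = NULL;

    /* On send: keep a copy of the payload locally, so it can be replayed
       if the receiver crashes and restarts from its last checkpoint. */
    static void log_payload(const void *buf, size_t len, int ssn) {
        payload_log_t *e = malloc(sizeof *e);
        e->data = malloc(len); memcpy(e->data, buf, len);
        e->len = len; e->ssn = ssn;
        e->next = plog; plog = e;
    }

    /* Stand-in for the reliable event-logger service (always succeeds here). */
    static int event_logger_store(const event_t *ev) { (void)ev; return 0; }

    /* On receive: pessimistic logging -- the reception event (not the payload)
       goes to the reliable event logger, and delivery blocks until the logger
       acknowledges, so no unlogged nondeterminism can reach other processes. */
    static void log_event_and_deliver(event_t ev) {
        while (event_logger_store(&ev) != 0)
            ;  /* retry until the event is safely logged */
        /* ... deliver the message to the application here ... */
    }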

MPICH-V3, targeted at Grids, will be studied next. It will rely on a new protocol mixing causal message logging with pessimistic remote logging of message events. This hierarchical protocol is able to tolerate faults inside Grid sites (within clusters) as well as faults of whole sites (the complete crash of a cluster).
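The hierarchical behaviour can be pictured as a per-message decision. The fragment below is purely illustrative (the types and function are assumptions, not MPICH-V code): messages staying inside a site carry causal piggyback information, while messages crossing site boundaries have their events logged pessimistically on a logger outside the sender's site.

    /* Hypothetical sketch of a hierarchical logging choice. */
    typedef struct { int site_id; int rank; } proc_t;

    enum log_mode { CAUSAL_PIGGYBACK, PESSIMISTIC_REMOTE };

    enum log_mode choose_logging(proc_t sender, proc_t receiver) {
        /* Inside a cluster, faults hit single nodes: append causal
           information to the message itself, avoiding extra synchrony. */
        if (sender.site_id == receiver.site_id)
            return CAUSAL_PIGGYBACK;
        /* Across sites, a whole cluster may vanish: record the event
           on a logger outside the sender's site before delivery. */
        return PESSIMISTIC_REMOTE;
    }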

Another effort targets the performance of MPICH-V on high-bandwidth networks. This calls for zero-copy implementations and raises new problems for both the algorithms and their realization. The goal is to provide fault tolerance without losing high performance.
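One way to see the tension: sender-based logging naturally inserts a copy of every payload, which is exactly what zero-copy transports try to eliminate. A possible direction, sketched below with hypothetical names, is to log a reference to the pinned user buffer instead of copying it, deferring any copy until the buffer must be reused. This is an assumption about one workable design, not a description of MPICH-V's actual mechanism.

    /* Illustrative contrast between copy-based and reference-based logging. */
    #include <stddef.h>

    /* Copy-based logging performs an extra memcpy per send, defeating
       zero-copy transports. A reference-based log avoids that: */
    typedef struct { const void *buf; size_t len; int ssn; } log_ref_t;

    /* Record only a reference to the (pinned) user buffer; no memcpy.
       The caller must not reuse 'buf' until the entry is released,
       e.g. after the receiver acknowledges delivery or a checkpoint
       makes the payload unnecessary. */
    static int log_by_reference(log_ref_t *log, const void *buf,
                                size_t len, int ssn) {
        log->buf = buf; log->len = len; log->ssn = ssn;
        return 0;
    }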

In addition to its fault tolerance properties, MPICH-V:

  1. provides a full runtime environment that detects and re-launches MPI processes in case of faults;

  2. works on high-performance networks such as Myrinet, Infiniband, etc. (although performance on these networks is currently halved);

  3. allows the migration of a full MPI execution from one cluster to another, even if the two clusters use different high-performance networks.

The software, papers, and presentations are available at http://www.mpich-v.net/.

