Project : paris
Section: Scientific Foundations
Localization and routing
Recent research on emerging peer-to-peer (P2P) systems has focused on designing adequate localization and routing strategies for large-scale, highly-decentralized environments. The proposed algorithms have properties that address the main requirements of such environments: high scalability, fault tolerance (with respect to node or link failures), and little or no dependence on centralized entities. The very first approaches (illustrated by Napster) still use a centralized directory for data localization, then switch to direct P2P interaction for the actual data transfers. Later, fully distributed, flooding-based approaches (e.g., Gnutella) were proposed. A second generation of P2P systems (e.g., KaZaA) has combined the previous techniques by integrating the notion of super-peer: localization is flooding-based between the super-peers, which serve as local directories for groups of regular peers. However, flooding strategies have one main weakness: since they generate a lot of traffic, a limit has to be set on the number of times queries are re-propagated. As a result, a query for some data may fail even though the data are actually stored in the system.
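The weakness of TTL-limited flooding can be illustrated with a minimal sketch (the overlay, peer names, and TTL values below are hypothetical, chosen only to show the failure mode): a query flooded from one peer reaches the data only if the responsible peer lies within the hop limit.

```python
from collections import deque

def flood_query(adj, start, key, stores, ttl):
    """Flood a query through the overlay, re-propagating at most `ttl` hops.
    `adj` maps each peer to its neighbours; `stores` maps a peer to the keys
    it holds. Returns the peer holding the key, or None if the flood dies out."""
    seen = {start}
    frontier = deque([(start, ttl)])
    while frontier:
        peer, hops = frontier.popleft()
        if key in stores.get(peer, ()):
            return peer                      # data located
        if hops == 0:
            continue                         # TTL exhausted: stop re-propagating
        for nbr in adj[peer]:
            if nbr not in seen:
                seen.add(nbr)
                frontier.append((nbr, hops - 1))
    return None                              # query fails; data may still exist

# A chain overlay A-B-C-D where only D stores the key "song":
adj = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
stores = {"D": {"song"}}
print(flood_query(adj, "A", "song", stores, ttl=3))  # 'D': found within 3 hops
print(flood_query(adj, "A", "song", stores, ttl=2))  # None: TTL too small
```

The second call fails although the data are present, which is exactly the limitation that motivates DHT-based localization.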
In order to provide both high fault tolerance and the guarantee of always reaching data available in the network, recent research has focused on localization schemes based on Distributed Hash Tables (DHTs). This promising approach is illustrated by Chord (MIT), Pastry (Microsoft Research) and Tapestry (UC Berkeley), and has also been used for the latest major version (2.0) of JXTA, the generic environment for P2P services started by Sun Microsystems. Efforts are currently under way to define a common API for such DHT-based systems.
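The core idea shared by these systems can be sketched with consistent hashing (a simplified, hypothetical ring; real systems such as Chord add finger tables for O(log n) routing): peers and keys are hashed onto the same identifier ring, and each key is stored at its successor node, so a lookup deterministically finds the responsible peer whenever it is reachable.

```python
import hashlib

M = 2 ** 16  # size of the identifier ring (deliberately small for illustration)

def ident(name):
    """Map a peer address or a data key onto the identifier ring."""
    return int(hashlib.sha1(name.encode()).hexdigest(), 16) % M

def successor(node_ids, key_id):
    """The node responsible for key_id: the first node clockwise on the ring."""
    candidates = [n for n in node_ids if n >= key_id]
    return min(candidates) if candidates else min(node_ids)

# Hypothetical peer names; every key maps to exactly one responsible node:
nodes = sorted(ident(f"peer-{i}") for i in range(8))
key = ident("some-file")
print(successor(nodes, key))  # deterministic: the lookup always succeeds
```

Unlike flooding, this localization never misses data present in the system, and when a node joins or leaves, only the keys between it and its neighbour on the ring need to move.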