Section: Scientific Foundations
Reputation in Dynamic Large Scale Systems
Digital reputation has recently emerged as a promising approach to cope with the specificities of large-scale and dynamic systems. Briefly, reputation stimulates the development of relationships among trustworthy entities, while discouraging interactions with untrustworthy ones. eBay, Amazon, Slashdot, ePinions, and Yahoo! Auctions, to cite just a few, rely on a reputation mechanism to foster trust relationships among entities that do not know each other a priori and may interact only once. Specifically, as in the real world, a reputation mechanism expresses a collective opinion about some target entity by gathering and aggregating feedback about the past behavior of that entity. The derived reputation score helps entities decide whether or not an interaction with the target is advisable. By encouraging trust or distrust, reputation helps in finding new resources, using trusted entities as sources of knowledge. It is also a powerful tool to incite entities to behave correctly: a well-behaving entity maintains a good reputation score, so that other entities are interested in interacting with it. Conversely, reputation can also be used as a punishment mechanism: by lowering the reputation scores of misbehaving entities, it becomes harder for them to establish relationships with others.
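As an illustration of feedback aggregation, the following minimal sketch (our own, not drawn from any specific system; all names are hypothetical) derives a score from binary feedback using the classical beta-distribution estimator:

```python
# Minimal beta-reputation sketch: feedback is binary (satisfied or not)
# and the score is the expected value of a Beta(a, b) distribution over
# the target entity's behavior.

class BetaReputation:
    def __init__(self):
        self.positive = 0  # number of satisfied interactions reported
        self.negative = 0  # number of unsatisfied interactions reported

    def record_feedback(self, satisfied: bool) -> None:
        if satisfied:
            self.positive += 1
        else:
            self.negative += 1

    def score(self) -> float:
        # Expected value of Beta(positive + 1, negative + 1);
        # equals 0.5 when no feedback has been collected yet.
        return (self.positive + 1) / (self.positive + self.negative + 2)

rep = BetaReputation()
for outcome in [True, True, False, True]:
    rep.record_feedback(outcome)
print(rep.score())  # 0.666...: the collective opinion about the target
```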
We argue that reputation is a clear added value for tackling some security issues, and we envision using it as a building block for the deployment of security policies. These policies will be set dynamically according to the level of hostility perceived by each machine. However, to be considered a valuable tool for trust assessment, a reputation mechanism must itself be robust against adversity. In other words, it must be able to self-heal, or at least to self-protect, against undesirable behavior that may jeopardize users' security. Moreover, attacks in open systems are numerous and can be magnified through collusion. To name a few, a reputation mechanism must be able to face the following attacks (a mitigation sketch is given after the list):
- whitewashing (badly scored entities leave and rejoin the system to renew their reputation score);
- masquerading (badly scored entities pretend to be another entity to acquire its good reputation score);
- bad mouthing (collusion to discredit the reputation of a service provider so as to later benefit from it);
- ballot stuffing (collusion to overstate the quality of service of a provider, inflating its reputation so as to draw users into fraudulent transactions);
- sybil attack (generation of numerous fake entities to manipulate the reputation score);
- transaction repudiation (an entity denies the existence of a transaction).
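As a sketch of one possible line of defense against collusive rating attacks such as bad mouthing and ballot stuffing (our own illustration, under the assumptions that ratings lie in [0, 1] and that each rater carries a credibility weight; all names are hypothetical), raters whose reports deviate from the consensus progressively lose influence:

```python
# Credibility-weighted aggregation: the score is a weighted mean of the
# ratings, and raters far from the consensus lose credibility, which
# dampens bad mouthing and ballot stuffing by colluding raters.

def weighted_score(ratings: dict[str, float],
                   credibility: dict[str, float]) -> float:
    total = sum(credibility.get(r, 0.5) for r in ratings)
    if total == 0:
        return 0.5  # neutral prior when no credible feedback exists
    return sum(credibility.get(r, 0.5) * v for r, v in ratings.items()) / total

def update_credibility(ratings: dict[str, float],
                       credibility: dict[str, float],
                       consensus: float,
                       step: float = 0.1) -> None:
    # Raters close to the consensus gain credibility, outliers lose it.
    for rater, value in ratings.items():
        error = abs(value - consensus)
        cred = credibility.get(rater, 0.5) + step * (0.5 - error) * 2
        credibility[rater] = min(1.0, max(0.0, cred))

ratings = {"alice": 0.9, "bob": 0.8, "mallory": 0.0}   # mallory bad-mouths
credibility = {"alice": 0.9, "bob": 0.8, "mallory": 0.5}
consensus = weighted_score(ratings, credibility)
update_credibility(ratings, credibility, consensus)
print(round(consensus, 2), round(credibility["mallory"], 2))
```

Repeated over many rounds, the outlier's weight decays toward zero, so a colluding minority cannot durably drag the score away from honest reports.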
Increasing the robustness of reputation mechanisms requires robustness both at the level of the reputation mechanism itself, as described above, and at the underlying network level. Specifically, appropriate mechanisms should prevent message corruption, rerouting, and denial of service during the feedback collection phase.
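To illustrate how corruption and repudiation of feedback messages can be countered during collection (a sketch only, using the third-party Python `cryptography` package; the report format is a hypothetical example, and rerouting and denial of service require separate countermeasures), each rater can sign its reports:

```python
# Each rater signs its feedback report with an Ed25519 key, so that
# tampering is detected at verification time and the rater cannot later
# repudiate the feedback it emitted.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

sk = Ed25519PrivateKey.generate()  # rater's long-lived signing key
pk = sk.public_key()               # published alongside the rater's identity

report = b"target=provider42;rating=0.2;tx=981"
signature = sk.sign(report)

# Collector side: accept the feedback only if the signature checks out.
try:
    pk.verify(signature, report)
    print("feedback accepted")
except InvalidSignature:
    print("feedback rejected: corrupted or forged")
```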
In this context, we envision contributing to the different phases of the construction of a reputation mechanism. Regarding feedback aggregation, we propose to extend existing works (e.g., [22], [27]) by enlarging the behavioral assumptions made about interacting entities (e.g., the effort exerted by providing entities may vary according to the entities with which they interact, their welfare, or their level of hostility), and by minimizing the number of relevant feedback reports needed to build a sufficiently fair score estimation, so that reputation can react quickly in highly dynamic environments. An interesting approach would be to combine a credibility-based reputation function with endogenous techniques, which are well adapted to massive churn [24].

Regarding feedback availability, a classical solution amounts to replicating feedback at different entities, thereby guaranteeing that, despite disconnections and malicious behavior, feedback information remains available within the system. However, this type of solution relies on the entities' propensity to cooperate fully and honestly. Such assumptions are idealistic and cannot be enforced without relying on incentive mechanisms.

Finally, it has been shown that peer-to-peer overlay networks can survive severe (Byzantine) attacks only if malicious peers are unable to predict the topology of the network resulting from a given sequence of join and leave operations. Induced churn, by which peers are required to rejoin (leave and, immediately after, join again) the system, seems to be an appealing solution for the construction of Byzantine-resilient overlays, as sketched below.
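As a rough illustration of induced churn (a toy simulation under our own assumptions, not the actual protocol of any cited work; all names are hypothetical), the sketch below periodically forces every peer to rejoin at a fresh random position in a ring overlay, so that an adversary observing past joins and leaves cannot predict where honest peers will sit:

```python
# Toy simulation of induced churn on a ring overlay: on each round, every
# peer must leave and immediately rejoin at a fresh random identifier,
# making the resulting topology unpredictable to an adversary.
import random

ID_SPACE = 2**16

def induced_churn(peers: dict[str, int], rng: random.Random) -> None:
    for name in peers:
        peers[name] = rng.randrange(ID_SPACE)  # rejoin at a fresh position

def ring_order(peers: dict[str, int]) -> list[str]:
    # Peers sorted by identifier give the ring order: each peer's successor
    # is the next one in this list, wrapping around at the end.
    return sorted(peers, key=peers.get)

rng = random.Random()  # unpredictable seed in a real deployment
peers = {f"peer{i}": rng.randrange(ID_SPACE) for i in range(8)}
for round_number in range(3):
    induced_churn(peers, rng)
    print(round_number, ring_order(peers))
```

Each printed round shows a freshly shuffled ring, conveying why colluding peers cannot position themselves next to a chosen victim for long.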