Team flowers


Section: New Results

A Real World User Study of Different Interfaces for Teaching New Visual Objects to a Robot

Participants : Pierre Rouanet, Fabien Danieau, Pierre-Yves Oudeyer, David Filliat.

Social robotics has recently undergone important development, and such robots are predicted to arrive in homes within the next few years. An important challenge is to allow these robots to discover their environment, so that they can adapt themselves and evolve in it more robustly and efficiently. We argue that users should be able to help their robot achieve this, e.g. by teaching it names for the visual objects present in its close environment so that it can later recognize them. We previously developed an integrated framework, based on state-of-the-art algorithms such as the visual bag-of-words, to tackle this problem [56] . While our system deals with the visual perception and machine learning challenges, it focuses especially on human-robot interaction issues: we argue that the design of the interface may strongly impact the quality of the learning examples collected by users, and thus the performance of the whole system. We previously proposed different interfaces based on mediator objects such as an iPhone, a Wiimote, and a Wiimote coupled with a laser pointer [57] [55] [26] [27] . We also developed a new interface based on gestures, in which users directly guide the robot through hand or arm gestures. As gesture recognition is still a hard task, we used a Wizard-of-Oz framework in which a human (the Wizard) remotely controlled the robot according to the gestures he saw. The Wizard only sees the interaction through the robot's eyes, and is thus restricted to the robot's visual apparatus, which is much more limited than the human eye. To evaluate these different interfaces, and especially their impact on the entire system, we designed a large-scale user study in a science museum in Bordeaux. The study was carefully designed to evaluate users in real-world conditions, and it took place outside the laboratory so as to recruit non-expert users unfamiliar with robotics.
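To make the visual bag-of-words idea concrete, the following minimal sketch (not the team's actual implementation) shows the pipeline: local image descriptors are quantized against a codebook of "visual words", each labeled view becomes a normalized word histogram, and a new view is recognized by nearest-neighbor matching over histograms. The 2-D descriptors and the fixed four-word codebook are toy stand-ins for real features (e.g. SIFT) and a learned (e.g. k-means) codebook.

```python
# Illustrative sketch of a visual bag-of-words recognizer (assumptions:
# toy 2-D "descriptors" instead of real SIFT features, a fixed codebook
# instead of one learned by k-means, and 1-NN histogram matching).
import math

CODEBOOK = [(0.0, 0.0), (1.0, 1.0), (0.0, 1.0), (1.0, 0.0)]  # assumed visual words

def quantize(descriptor):
    """Index of the nearest visual word in the codebook."""
    return min(range(len(CODEBOOK)),
               key=lambda i: math.dist(descriptor, CODEBOOK[i]))

def bow_histogram(descriptors):
    """Normalized histogram of visual-word counts for one image."""
    hist = [0.0] * len(CODEBOOK)
    for d in descriptors:
        hist[quantize(d)] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def classify(descriptors, labeled_examples):
    """Label of the nearest stored histogram (labeled_examples: [(label, hist), ...])."""
    query = bow_histogram(descriptors)
    return min(labeled_examples,
               key=lambda ex: math.dist(query, ex[1]))[0]

# Teaching phase: the user shows two objects (toy descriptor sets).
examples = [
    ("ball", bow_histogram([(0.1, 0.1), (0.9, 0.9), (0.0, 0.1)])),
    ("cup",  bow_histogram([(0.9, 0.1), (1.0, 0.0), (0.8, 0.2)])),
]

# Recognition phase: a new view of the cup.
print(classify([(0.95, 0.05), (0.9, 0.1)], examples))  # → cup
```

In this framing, a "bad learning example" from the user is simply a descriptor set that does not actually come from the named object, which pollutes the stored histograms; this is why interface feedback on what the robot sees matters so much.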
The goal of this study was to ask participants to show and teach different visual objects to a social robot, so that we could collect the learning examples they gathered with the different interfaces. They also answered questionnaires so that we could evaluate their user experience. However, as teaching objects to a robot is still an unusual and artificial task, we designed the experiment as a robotic game and embedded the task in a scenario in order to justify it and immerse the participants. Designing the study as a game also allowed us to recreate a daily-life, stress-free environment. 107 users participated in our study, and we showed that with simple interfaces such as the Wiimote or Gestures interfaces, which do not provide any feedback to the users, users tend to collect only 50% good learning examples. We then showed that specifically designed interfaces, such as the Laser and iPhone interfaces, significantly improve the quality of the learning examples gathered. In particular, the visual feedback provided by the iPhone interface strongly improves the quality of the learning examples and allows users to naturally eliminate almost all the bad ones. We also showed that the Gestures interface, which a priori seems more natural than the other interfaces, was in fact judged less intuitive and harder to use. To us, this shows that because actual social robots have specific sensorimotor spaces, classical human-human interaction cannot be directly reused in human-robot interaction; new kinds of interfaces should be developed to help the human and the robot communicate. Publications on this study have been accepted and will be presented at HRI 2011.

