Team Flowers


Section: New Results

Intuitive gestural interfaces for human-robot language teaching

Participants : Pierre Rouanet, Pierre-Yves Oudeyer, David Filliat.

Social robots are drawing increasing interest in both the scientific and economic communities [35] [25] . These robots should typically be able to interact naturally and intuitively with non-engineer humans, in the context of domestic services or entertainment. Yet an important obstacle remains to be overcome: providing robots with the capacity to adapt to novel and changing environments and tasks, in particular when interacting with non-engineer humans. One of the important difficulties is related to mutual perception and joint attention [42] . For example, when one has to teach a novel word or a new command to a robot, several challenges arise:

  1. Attention drawing: when needed, the human should be able to draw the attention of the robot towards himself and towards the interaction (i.e. the robot should stop its activities and pay attention to the human);

  2. Pointing: once the robot is focused on the interaction, the human should be able to show the robot the part of the environment (typically an object) he has in mind, typically by pointing, in order to establish a form of joint attention;

  3. Naming: the human should be able to introduce a symbolic form that the robot can detect, register and recognize later on.

Given that users are not engineers, this should be achieved in a manner that is both very intuitive and very robust, in completely uncontrolled environments. This implies that relying on traditional vision techniques for detecting and interpreting natural human pointing gestures, and on traditional speech recognition techniques for spotting and recognizing (potentially new) words, will not work. One way to achieve intuitive and robust attention drawing, pointing and naming is to develop simple artefacts that serve as mediators between the human and the robot to enable natural communication, in much the same way as icon-based artefacts were developed to leverage natural linguistic communication between humans and bonobos (see Kanzi and Savage-Rumbaugh).
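As an illustration, the three-phase interaction loop above (attention drawing, pointing, naming) can be sketched as a small state machine driven by events coming from the mediating artefact. The following Python sketch is purely illustrative: the class and method names are hypothetical and do not correspond to the team's actual software.

```python
# Hypothetical sketch of the mediator-driven teaching interaction; names are
# illustrative only, not the team's actual API.
from enum import Enum, auto


class Phase(Enum):
    IDLE = auto()
    ATTENDING = auto()   # robot has interrupted its activity and attends to the human
    POINTED = auto()     # joint attention established on a designated object
    NAMED = auto()       # a symbolic label has been attached to that object


class TeachingSession:
    def __init__(self):
        self.phase = Phase.IDLE
        self.examples = []        # (object view, label) pairs collected so far
        self.current_view = None

    def draw_attention(self):
        """Step 1: the artefact signals the robot to stop and pay attention."""
        self.phase = Phase.ATTENDING

    def point(self, object_view):
        """Step 2: the artefact designates an object, e.g. a tap on the phone
        image or a laser dot detected in the camera view."""
        assert self.phase is Phase.ATTENDING
        self.current_view = object_view
        self.phase = Phase.POINTED

    def name(self, label):
        """Step 3: a symbolic form is associated with the designated object
        and stored for later recognition."""
        assert self.phase is Phase.POINTED
        self.examples.append((self.current_view, label))
        self.phase = Phase.NAMED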

This year, we have continued the development of such artefacts and associated interaction techniques, built several complete systems, and evaluated them in realistic settings with user studies. Two main human-robot interface systems are now functional: one based on the use of an iPhone, and the other based on the use of a laser pointer coupled with a Wiimote controller. Furthermore, a complete system involving image processing with SIFT/SURF-based local descriptors and visual object learning and recognition with a bag-of-words approach was built, and allowed us to conduct experiments assessing quantitatively how these interfaces enable efficient teaching of new visual objects to a robot. The papers [14] [13] [15] provide descriptions of these systems, as well as associated experiments which show that designing appropriate human-robot interfaces can improve the efficiency of a learning system significantly more than what can be expected from improving the machine learning or computer vision algorithms alone.
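To make the visual learning pipeline concrete, the sketch below shows a generic bag-of-visual-words scheme of the kind mentioned above (local descriptors, vector quantization into visual words, histogram classification). It is a minimal illustration, not the team's actual implementation: it assumes OpenCV's SIFT implementation and scikit-learn are available, and all function names are hypothetical.

```python
# Minimal bag-of-visual-words sketch (illustrative only).
# Assumes opencv-contrib-python (for SIFT) and scikit-learn.
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

sift = cv2.SIFT_create()


def extract_descriptors(image_paths):
    """Extract local SIFT descriptors from each image file."""
    all_desc = []
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(img, None)
        if desc is not None:
            all_desc.append(desc)
    return all_desc


def build_vocabulary(desc_list, k=200):
    """Cluster all descriptors into k visual words."""
    return KMeans(n_clusters=k, n_init=4).fit(np.vstack(desc_list))


def bow_histogram(desc, vocab):
    """Quantize an image's descriptors into a normalized word histogram."""
    words = vocab.predict(desc)
    hist, _ = np.histogram(words, bins=np.arange(vocab.n_clusters + 1))
    return hist / max(hist.sum(), 1)


def train(image_paths, labels):
    """Teaching: each naming episode contributes a (histogram, label) pair."""
    desc_list = extract_descriptors(image_paths)
    vocab = build_vocabulary(desc_list)
    X = np.array([bow_histogram(d, vocab) for d in desc_list])
    return vocab, LinearSVC().fit(X, labels)


def recognize(image_path, vocab, classifier):
    """Recognition: classify a new view of an object shown by the user."""
    desc = extract_descriptors([image_path])[0]
    return classifier.predict([bow_histogram(desc, vocab)])[0]
```

In such a scheme, the quality of the teaching examples collected through the interface (how well the designated object is framed and segmented) directly determines the quality of the histograms, which is one way to see why interface design can matter as much as the learning algorithm itself.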

