Section: New Results
Intuitive gestural interfaces for human-robot language teaching
Social robots are drawing increasing interest in both the scientific and economic communities. These robots should typically be able to interact naturally and intuitively with non-engineer humans, in the context of domestic services or entertainment. Yet an important obstacle must be overcome: providing robots with the capacity to adapt to novel and changing environments and tasks, in particular when interacting with non-engineer humans. One important difficulty relates to mutual perception and joint attention. For example, when one has to teach a robot a new word or a new command, several challenges arise:
Attention drawing: when needed, the human should be able to draw the robot's attention towards himself and towards the interaction (i.e. the robot should stop its activities and pay attention to the human);
Pointing: once the robot is focused on the interaction, the human should be able to indicate the part of the environment (typically an object) he has in mind, typically by pointing, in order to establish a form of joint attention;
Naming: the human should be able to introduce a symbolic form that the robot can detect, register, and recognize later on.
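The three steps above form a small interaction protocol. A minimal sketch of it as a state machine follows; all names (TeachingSession, the state labels, the lexicon dictionary) are illustrative assumptions, not the actual implementation described in this report.

```python
# Hypothetical sketch of the attention -> pointing -> naming protocol.
# The class and method names are assumptions for illustration only.

class TeachingSession:
    """Drives a word-teaching interaction through its three stages."""

    def __init__(self):
        self.state = "idle"
        self.selected_object = None
        self.lexicon = {}  # learned word -> object associations

    def draw_attention(self):
        # The human requests the robot's attention; the robot suspends
        # its current activity and focuses on the interaction.
        self.state = "attending"

    def point_at(self, obj):
        # The human indicates an object (e.g. via a pointing gesture or
        # a tap on a mediator touch screen), establishing joint attention.
        if self.state != "attending":
            raise RuntimeError("robot is not attending to the interaction")
        self.selected_object = obj
        self.state = "object_selected"

    def name(self, word):
        # The human introduces a symbolic form; the robot registers the
        # word-object association so it can recognize the word later on.
        if self.state != "object_selected":
            raise RuntimeError("no object is currently jointly attended")
        self.lexicon[word] = self.selected_object
        self.state = "named"
```

For example, a session might run draw_attention(), then point_at("red ball"), then name("ball"), after which the robot's lexicon maps "ball" to the red ball.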
Given that users are not engineers, this should be achieved in a manner that is both very intuitive and very robust, in completely uncontrolled environments. This implies that relying on traditional vision techniques for detecting and interpreting natural human pointing gestures, and on traditional speech recognition techniques for spotting and recognizing (potentially new) words, will not work. One way to achieve intuitive and robust attention drawing, pointing and naming is to develop simple artefacts that serve as mediators between the human and the robot to enable natural communication, in much the same way as icon-based artefacts were developed to leverage natural linguistic communication between humans and bonobos (see the work of Savage-Rumbaugh with the bonobo Kanzi).
This year, we began to experiment with the development of such artefacts and associated interaction techniques, and to evaluate them. These interaction techniques are based on gestural interaction through a portable touch screen that serves as a mediator between the human and the robot.
More detailed publications are under way.