Team flowers


Section: New Results

Incremental Local Online Gaussian Mixture Regression for Imitation Learning of Multiple Tasks

Participants : Thomas Cederborg, Pierre-Yves Oudeyer, Adrien Baranès.

Imitation learning in robots, also called programming by demonstration, has made important advances in recent years, allowing humans to teach context-dependent motor skills/tasks to robots. We have proposed to extend the contexts usually investigated to also include acoustic linguistic expressions that might denote a given motor skill, thus targeting the joint learning of motor skills and their potential acoustic linguistic names. In addition, we have modified a class of existing algorithms within the imitation learning framework so that they can handle unlabeled demonstrations of several tasks/motor primitives, without the imitator being told which task is being demonstrated or how many tasks there are. This is a necessity for language learning, i.e. if one wants to naturally teach an open-ended number of new motor skills together with their acoustic names. Finally, we have proposed a mechanism for detecting whether or not linguistic input is relevant to a task, and our architecture also allows the robot to find the right framing for a given identified motor primitive. With these additions it becomes possible to build an imitator that bridges the gap between imitation learning and language learning, since it can learn linguistic expressions using methods from the imitation learning community. In this sense the imitator can learn a word by guessing whether a certain speech pattern present in the context means that a specific task is to be executed. The imitator is, however, not assumed to know that speech is relevant and has to figure this out on its own by looking at the demonstrations: indeed, the architecture allows the robot to transparently learn tasks that should be triggered not by an acoustic word but, for example, by the color or position of an object or by a gesture made by a human in the environment.
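As an illustration, the regression step at the core of Gaussian Mixture Regression, on which this line of work builds, conditions a joint Gaussian mixture over (context, action) pairs on an observed context to predict an action. The following is a minimal sketch in plain NumPy, with a toy two-component mixture; the function names and the batch (non-incremental, non-local) formulation are our own illustration, not the incremental local online algorithm described above:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of a multivariate Gaussian at x."""
    d = len(mean)
    diff = x - mean
    inv = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * diff @ inv @ diff) / norm

def gmr(x, weights, means, covs, dx):
    """Gaussian Mixture Regression: predict the action part y given
    the context part x, under a joint GMM over z = (x, y).

    dx is the dimension of the context block; each mean/covariance is
    partitioned into context (first dx coordinates) and action blocks.
    """
    # Responsibility of each component for the observed context.
    h = np.array([w * gaussian_pdf(x, m[:dx], c[:dx, :dx])
                  for w, m, c in zip(weights, means, covs)])
    h /= h.sum()
    # Blend the per-component conditional expectations
    # E[y | x, k] = mu_y + Sigma_yx Sigma_xx^{-1} (x - mu_x).
    y = np.zeros(len(means[0]) - dx)
    for hk, m, c in zip(h, means, covs):
        reg = c[dx:, :dx] @ np.linalg.inv(c[:dx, :dx])
        y += hk * (m[dx:] + reg @ (x - m[:dx]))
    return y
```

Asking for the action at a context halfway between two symmetric, equally weighted components returns the midpoint of their predictions, which shows how the responsibilities smoothly blend local linear regressors.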
To demonstrate this ability to discover the relevance of speech, we ran experiments in which non-linguistic tasks are learnt alongside linguistic tasks and the imitator has to figure out when speech is relevant (in some tasks speech should be completely ignored, while in others the entire policy is determined by speech). These simulated experiments also showed that the imitator can indeed find the number of tasks that have been demonstrated to it, discover which demonstrations belong to which task, which framing is associated with which task, and for which tasks speech is relevant, and finally successfully reproduce those tasks when the corresponding context is detected. An initial description of some of the techniques associated with this work can be found in [23]. Further publications are under review.

