Section: Application Domains
There is strong evidence that visual information about the speaker, especially the jaw and lips but also tongue position, noticeably improves speech intelligibility. Hence, a realistic augmented head displaying both external and internal articulators could help language-learning technology give the student feedback on how to change articulation in order to achieve correct pronunciation. This task is complex and necessitates a multidisciplinary effort involving speech production modeling and image analysis. The long-term aim of the project is the design of a 3D+t articulatory model to be used for the realistic animation of an augmented/talking head. Within this project, we have worked in particular on the tracking of the visible articulators using stereo-vision techniques, and we intend to supplement the model with internal articulators (tongue, larynx) obtained from medical imaging (ultrasound images for tongue tracking and MRI for the global model). These activities were conducted within the European ASPI project (2005-2009) and are continued within the ANR ARTIS project (2009-2012).
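To make the stereo-vision idea concrete, the sketch below shows the standard linear (DLT) triangulation step that underlies such tracking: once a lip or jaw marker has been detected in two calibrated views, its 3D position is recovered from the two projection matrices. This is an illustrative example only, not the project's actual pipeline; the camera parameters and the marker coordinates are invented for the demonstration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices.
    x1, x2 : 2D pixel coordinates of the same point in each view.
    Returns the 3D point as a length-3 array.
    """
    # Each view contributes two linear constraints on the homogeneous point.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Toy calibrated stereo pair (hypothetical intrinsics, 60 mm baseline).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-60.0], [0.0], [0.0]])])

X_true = np.array([10.0, -5.0, 500.0])  # e.g. a lip marker in front of the rig
xh1 = P1 @ np.append(X_true, 1.0)
xh2 = P2 @ np.append(X_true, 1.0)
x1, x2 = xh1[:2] / xh1[2], xh2[:2] / xh2[2]

X_est = triangulate(P1, P2, x1, x2)
print(np.allclose(X_est, X_true))  # noise-free projections are recovered exactly
```

In practice the detected image points are noisy, so the DLT estimate is typically refined by minimizing the reprojection error, and the marker tracks are filtered over time to obtain the 3D+t trajectories of the visible articulators.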