Team WAM

Section: New Results

Multimedia Authoring

Amaya

Work on Amaya has focused this year on three main topics:

All these new developments are included in version 11.3 of Amaya, released in December 2009.

LimSee3

In collaboration with partners of the Palette project (see section 8.2.1), a study was conducted on integrating multimedia contents in a reification process for sharing teaching practice. The issue is that sharing based on raw recordings of courses or meetings brings very limited benefit, due to the very nature of video and audio: listening to a full recording takes too long when only a few specific passages are relevant for a given purpose. To solve this issue, textual annotations are associated with recordings and provide an easy way to navigate the contents, thanks to the synchronization between audio/video and annotations. With LimSee3, annotations are entered at recording time and/or afterwards, by several users. This makes it easy to prepare the recording of a course before discussing it with colleagues, and it also allows participants to add further annotations during the discussion, to record their agreements or disagreements. This study was published in [8].
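
As an illustration, a time-stamped annotation attached to a course recording could be represented along the following lines; the element and attribute names are hypothetical and only sketch the principle, they are not the actual LimSee3 schema:

    <annotations media="course-recording.ogv">
      <!-- each annotation points to an interval of the recording;
           selecting it in the player seeks to the "begin" time -->
      <annotation begin="00:12:40" end="00:15:05" author="teacher">
        Introduction of the exercise
      </annotation>
      <annotation begin="00:14:10" end="00:14:50" author="colleague-1">
        I would present the counter-example first
      </annotation>
    </annotations>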

Augmented Reality Audio (ARA) Editing

LibA2ML (see section 5.3) provides a strong basis for building editors for virtual interactive audio scenes (e.g. games) or ARA scenes (e.g. guidance applications). The concept of augmented reality audio (ARA) refers to techniques where the real sound and voice environment is extended with virtual, geolocalized sound sources; an ARA scene is best experienced through ARA bone-conduction headsets. Our main interest is the authoring of ARA scenes in the context of the Minalogic Autonomie project (2010-2012), which addresses indoor-outdoor guidance for visually impaired people.

ARA authoring is a non-static task (mobile mixing), for at least two reasons: (1) the author has to move within the rendering zone to apprehend the audio spatialization and the chronology of the audio events, both of which depend on the position of the listener; (2) the trajectories to be applied to the virtual sound sources are best determined with a tracking system that lets the author record their own movements and reuse them as trajectories.
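
For instance, a trajectory recorded while the author walks through the rendering zone could be kept as a timed sequence of geolocated waypoints and later replayed on a virtual source. The markup below is purely illustrative, not an existing A2ML or OSM construct:

    <trajectory id="walk-1">
      <!-- positions sampled by the tracking system while the author moves -->
      <waypoint t="0.0" lat="45.19254" lon="5.76842"/>
      <waypoint t="2.5" lat="45.19259" lon="5.76851"/>
      <waypoint t="5.0" lat="45.19266" lon="5.76860"/>
    </trajectory>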

For this non-static authoring task, we are considering an implementation of the see-through touch-screen interface concept to control the localization of the sound sources. The map on which these positions are recorded is described in the OpenStreetMap (OSM) XML language for outdoor authoring, and in an extension of OSM for indoor authoring.
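
In OSM XML, a sound-source position picked through the touch-screen interface could simply be a tagged node; the sound_source and level tags below stand for the kind of indoor extension mentioned above and are given as assumptions, not as an actual schema:

    <node id="-101" lat="45.19260" lon="5.76845">
      <!-- position picked by the author on the see-through touch screen -->
      <tag k="sound_source" v="beacon-entrance"/>
      <!-- indoor extension: floor number -->
      <tag k="level" v="1"/>
    </node>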

The ARA scene will be described by mixing two XML languages, namely A2ML and OSM. This mixed format will allow textual authoring of the sequencing of the sound sources and of the DSP acoustic parameters.
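
A mixed scene description could then bind an A2ML-like cue to such an OSM node, attaching sequencing and DSP parameters to the geolocalized source. The element names are again hypothetical and only sketch the intended combination of the two languages:

    <araScene>
      <!-- virtual source anchored on the OSM node declared above,
           following the recorded trajectory -->
      <source id="entrance-voice" position="osm:node/-101" trajectory="walk-1">
        <cue sound="this_way.wav" loop="true"/>
        <dsp reverb="corridor" gain="-6dB"/>
      </source>
    </araScene>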

