Team Bunraku


Section: New Results

Interactive scenario languages for virtual worlds

Collaboration between real users and virtual humans for training

Participants: Bruno Arnaldi [contact], Stéphanie Gerbaud, Valérie Gouranton.

We have proposed models that extend a virtual environment for training (GVT, see section 5.6) to training on collaborative procedures in which real users and virtual humans work together. First, an activity model for the actors allows the dynamic substitution of a real user by a virtual human. This model makes an actor perform actions depending on its capabilities, on the scenario, on the environment, and on its partners' activities. We have also extended the scenario language LORA in order to describe collaborative scenarios. Such a scenario specifies the assignment of actors to actions and integrates collaborative actions. Scenarios have been simplified by making basic actions, such as taking or putting back an object, implicit. Finally, we developed an action selection mechanism whose aim is, on the one hand, to enable a virtual human to select an action to perform and, on the other hand, to give pedagogical advice to a trainee about the best action to choose.

These models (detailed in [32]) have been integrated into a GVT prototype and validated on two application scenarios: the first consists in collaboratively assembling a piece of furniture delivered as a kit, while the second is a collaborative military procedure that consists in preparing a tank to fire.

Virtual environments for training, especially collaborative ones, must be adaptable in order to be used in various situations. In [51], we identified two levels of adaptation needed by such environments: the first concerns the application setting; the second is a dynamic adaptation, at runtime. Our proposed models satisfy both needs. These scientific contributions have been integrated into the GVT software (see section 5.6).
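
To make the action selection mechanism concrete, here is a minimal Python sketch of how a capability- and scenario-aware ranking could look. The names (Action, Actor, select_action) and the selection criteria are illustrative assumptions, not GVT's actual API; they only mirror the dependencies named above (capabilities, scenario, environment, partners' activities).

    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        required_skill: str
        scenario_priority: int            # lower value = earlier in the scenario
        preconditions: set = field(default_factory=set)

    @dataclass
    class Actor:
        name: str
        skills: set
        is_virtual: bool

    def select_action(actor, candidate_actions, world_state, reserved_by_partners):
        """Rank candidate actions for an actor (hypothetical criteria).

        An action is feasible if the actor has the required skill, the
        environment satisfies its preconditions, and no partner has
        already claimed it.  Among feasible actions, the one earliest
        in the scenario ordering is preferred.
        """
        feasible = [
            a for a in candidate_actions
            if a.required_skill in actor.skills
            and a.preconditions <= world_state
            and a.name not in reserved_by_partners
        ]
        if not feasible:
            return None
        return min(feasible, key=lambda a: a.scenario_priority)

    # A virtual human performs the selected action; for a trainee the
    # same ranking would instead be surfaced as pedagogical advice.
    actor = Actor("mechanic", {"assemble"}, is_virtual=True)
    actions = [Action("fix_panel", "assemble", 2, {"panel_available"}),
               Action("open_crate", "assemble", 1, set())]
    best = select_action(actor, actions, {"panel_available"}, set())
    print(best.name)  # open_crate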

Modelling of interactive virtual worlds

Participants: Rémi Cozot [contact], Fabrice Lamarche [contact], Christian Bouville, Noémie Esnault.

Delivering interactive virtual worlds requires modelling not only the geometry of the 3D world but also the interactive and autonomous behaviours of the objects, actors, and cameras embedded in it. Even though many works propose user-friendly GUIs for behaviour modelling, the task remains a programming one that unfortunately requires strong programming skills. Our main focus here is to explore and propose semi-automatic ways to model the world and the behaviours of objects, including virtual actors and cameras.

In order to easily build 3D interactive worlds from large data sets, we have proposed a two-step method that first designs the topology of the world and then its geometry and interactive features. The first step builds a directed acyclic graph from queries on the data; we use this graph to model both the topology of the 3D world and the navigation within it. The second step builds the 3D geometry of the world according to this topology and to the data linked to the graph's nodes [67].
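
As a rough illustration of the two steps, the Python sketch below builds an acyclic node/edge structure from query predicates and then instantiates a toy "geometry" from it. All data and names (query, build_world) are invented for the example; the actual method operates on large data sets and full 3D scene generation.

    # Step 1: build a DAG whose nodes are query results and whose
    # edges define how a visitor can navigate between them.
    records = [
        {"id": 1, "theme": "painting",  "period": "modern"},
        {"id": 2, "theme": "painting",  "period": "classic"},
        {"id": 3, "theme": "sculpture", "period": "modern"},
    ]

    def query(predicate):
        return [r for r in records if predicate(r)]

    nodes = {
        "root":      query(lambda r: True),
        "paintings": query(lambda r: r["theme"] == "painting"),
        "modern":    query(lambda r: r["period"] == "modern"),
    }
    edges = [("root", "paintings"), ("root", "modern")]  # acyclic by construction

    # Step 2: instantiate geometry from the topology, e.g. one room per
    # node and one corridor per edge, populated with the node's records.
    def build_world(nodes, edges):
        rooms = {name: {"exhibits": [r["id"] for r in recs]}
                 for name, recs in nodes.items()}
        corridors = list(edges)
        return rooms, corridors

    rooms, corridors = build_world(nodes, edges)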

Virtual Cinematography

Participants : Marc Christie [ contact ] , Fabrice Lamarche.

The domain of Virtual Cinematography explores the operationalization of rules and conventions pertaining to camera placement, light placement, and staging in virtual environments. In this context, we have proposed an offline camera reasoning system for editing [45], smart approaches to lighting design [24], and a real-time editing technique for virtual storytelling [68].

Our offline reasoning system for editing [45] makes it possible to perform complex queries (expressed as quantified first-order formulae) over the cinematographic properties of an animated scene. The process evaluates the range of solutions using interval-based spatio-temporal partitions and relies on spatio-temporal reasoning to identify and characterize classes of solutions.
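
The following simplified Python sketch conveys the flavour of interval-based evaluation: each cinematographic property is represented by the intervals on which it holds, and a conjunctive query is answered by interval intersection. The real system partitions space and time jointly; the one-dimensional time intervals and the sample data here are assumptions made for illustration only.

    # Time is partitioned into intervals over which each cinematographic
    # property is constant; a conjunctive query is evaluated by
    # intersecting the intervals on which its conjuncts hold.
    def intersect(intervals_a, intervals_b):
        """Intervals on which two properties hold simultaneously."""
        out = []
        for a0, a1 in intervals_a:
            for b0, b1 in intervals_b:
                lo, hi = max(a0, b0), min(a1, b1)
                if lo < hi:
                    out.append((lo, hi))
        return out

    # Intervals (in seconds) on which each character is visible on screen.
    visible = {"hero": [(0.0, 4.0), (6.0, 10.0)], "villain": [(2.0, 8.0)]}

    # Query: "there exists a time at which hero AND villain are visible".
    both = intersect(visible["hero"], visible["villain"])
    print(both)  # [(2.0, 4.0), (6.0, 8.0)]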

Then, moving towards more automated systems for camera control, we have proposed a spatial reasoning system that encodes cinematographic idioms (an idiom is a stereotypical way of conveying actions in movies) as filters over a set of potential visibility volumes defined around target objects in a dynamically evolving environment. The system [68] takes narrative elements as inputs (descriptions of actions or utterances) and plans a sequence of shots that appropriately conveys these elements, while maintaining visibility of the target objects, coherency in the application of idioms, and enforcement of an editing style.
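
A hypothetical sketch of the filtering idea in Python: starting from candidate camera placements around a target, each idiom or style constraint discards the placements that violate its convention. The candidate representation and the filter names are invented for the example; the actual system operates on visibility volumes in a dynamic 3D environment.

    # Candidate camera placements around the target; each idiom is a
    # filter that discards placements violating its convention.
    candidates = [
        {"target": "hero", "angle": "front", "size": "close", "visible": True},
        {"target": "hero", "angle": "back",  "size": "close", "visible": False},
        {"target": "hero", "angle": "front", "size": "long",  "visible": True},
    ]

    def visibility_filter(shots):
        return [s for s in shots if s["visible"]]

    def dialogue_idiom(shots):
        # A stereotypical dialogue shot frames the speaker from the front.
        return [s for s in shots if s["angle"] == "front"]

    def style_filter(shots, preferred_size="close"):
        # Editing style expressed as a preference, with a fallback.
        return [s for s in shots if s["size"] == preferred_size] or shots

    shot = style_filter(dialogue_idiom(visibility_filter(candidates)))[0]
    print(shot)  # the front, close, visible placement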

Finally, we considered several means of describing and specifying lighting in virtual environments, exploring a range of optimization techniques over the light parameters [24], in particular for example-based lighting.
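
As a toy illustration of optimizing light parameters against an example, the sketch below runs a simple stochastic local search to match a scalar image statistic. The "renderer" and the search strategy are placeholders chosen for brevity; the techniques studied in [24] are considerably richer.

    import random

    # Toy "renderer": a scalar image statistic as a function of two
    # light parameters (e.g. intensity and elevation), both in [0, 1].
    def render(intensity, elevation):
        return 0.8 * intensity + 0.2 * elevation

    target = render(0.7, 0.3)   # statistic extracted from the example image

    def loss(params):
        return (render(*params) - target) ** 2

    # Simple stochastic local search over the light parameters.
    best = (random.random(), random.random())
    for _ in range(2000):
        cand = tuple(min(1.0, max(0.0, p + random.gauss(0, 0.05)))
                     for p in best)
        if loss(cand) < loss(best):
            best = cand
    print(best, loss(best))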

