Section: Scientific Foundations
Virtual and mixed realities
Participants: François Sillion, Jean-Marc Hasenfratz, Jean-Dominique Gascuel, Alexandrina Orzan.
- Mixed reality
A set of techniques for adding real elements to a virtual world, or virtual elements to the real world.
The convergence of real and synthetic imagery became a reality a few years ago, with the availability of high-quality 3D graphics and real-time video input on consumer-grade computers. One fundamental issue in mixing real and synthetic imagery lies in the proper combination of the two image sources. Our focus is on lighting and shadow consistency: making sure that lighting effects are consistent between the synthetic and real parts of the image remains a challenge, especially for real-time applications.
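The basic combination step can be illustrated by standard matte-based compositing: a silhouette matte decides, per pixel, whether the output comes from the real video frame or the synthetic rendering. A minimal NumPy sketch (the function name and array layout are illustrative, not taken from the project's code):

```python
import numpy as np

def composite(real, synthetic, matte):
    """Blend a synthetic rendering over a real video frame.

    real, synthetic: HxWx3 float images with values in [0, 1]
    matte: HxW float array, 1 where the synthetic layer is opaque
    """
    alpha = matte[..., None]  # broadcast over the colour channels
    return alpha * synthetic + (1.0 - alpha) * real

# Tiny 1x2 image: left pixel keeps the real frame (red),
# right pixel takes the synthetic one (green).
real = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])
synth = np.array([[[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]])
matte = np.array([[0.0, 1.0]])
out = composite(real, synth, matte)
```

Fractional matte values blend the two sources linearly, which is what softens silhouette edges in practice.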
In the context of Augmented Reality, the goal of the CYBER-II project (supported by the ACI ``Masse de données'' of the French Department of Research) is to simulate, in real time, the presence of one or more persons (e.g. a TV presenter and his guests, or a teacher) in a virtual environment. This simulation consists mainly in visualizing the combined scene, and possibly in providing tools for interaction between the real person, the virtual environment, and the observer (e.g. a TV viewer or a pupil).
This requires integrating the actors as realistically as possible, and making them interact with the virtual environment in real time (i.e. at 25 frames per second).
To achieve a realistic immersion, we have to compute how the actor is relit by the virtual lights and how he casts shadows on the virtual objects. This requires a 3D model. Moreover, the integrated persons must have a realistic appearance, and we propose to use real-world images to texture the virtual model.
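The relighting step can be sketched with the simplest shading model: the real-world texture supplies the reflectance of a reconstructed surface point, and a virtual point light contributes Lambertian shading. This is only a hedged illustration of the idea (the project may use a richer model); all names here are invented for the example:

```python
import numpy as np

def relight(albedo, normal, point, light_pos, light_color):
    """Lambertian shading of one surface point by a virtual light.

    albedo: RGB reflectance sampled from the real-world texture, shape (3,)
    normal: unit surface normal at the point, shape (3,)
    point, light_pos: 3D positions in world space
    light_color: RGB intensity of the virtual light, shape (3,)
    """
    to_light = light_pos - point
    to_light = to_light / np.linalg.norm(to_light)
    # Clamp at zero so surfaces facing away from the light get nothing.
    n_dot_l = max(float(np.dot(normal, to_light)), 0.0)
    return albedo * light_color * n_dot_l

# A point facing straight up, lit from directly above: full intensity.
c = relight(np.array([0.5, 0.5, 0.5]),   # grey albedo from the texture
            np.array([0.0, 0.0, 1.0]),   # normal
            np.array([0.0, 0.0, 0.0]),   # surface point
            np.array([0.0, 0.0, 2.0]),   # virtual light position
            np.array([1.0, 1.0, 1.0]))   # white light
```

The same geometry (point, normal, light position) also determines where the actor's shadow falls on the virtual objects, which is why a 3D model is a prerequisite for both effects.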
The main technical requirement is thus a highly realistic visualization that runs in real time. We have proposed new methods to capture an actor without intrusive trackers and without any special environment such as a blue-screen set, to estimate the actor's 3D geometry, and to insert this geometry into a virtual world in real time. We use several cameras in conjunction with background subtraction to produce silhouettes of the actor as observed from the different camera viewpoints. These silhouettes allow the 3D geometry of the actor to be estimated by a voxel-based method. This geometry is rendered with the marching cubes algorithm and inserted into a virtual world. Shadows of the actor cast by the virtual lights are then added, and interactions with objects of the virtual world are proposed.
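The voxel-based reconstruction step amounts to computing the visual hull: a voxel is kept only if it projects inside the silhouette seen by every camera. A minimal sketch under simplifying assumptions (nearest-pixel sampling, calibrated 3x4 projection matrices; function and variable names are illustrative):

```python
import numpy as np

def carve(voxel_centers, cameras, silhouettes):
    """Keep voxels whose projection falls inside every silhouette.

    voxel_centers: (N, 3) voxel centre points in world space
    cameras: list of 3x4 camera projection matrices
    silhouettes: list of HxW boolean masks (True = foreground actor)
    Returns a boolean occupancy array of length N (the visual hull).
    """
    n = len(voxel_centers)
    occupied = np.ones(n, dtype=bool)
    homog = np.hstack([voxel_centers, np.ones((n, 1))])
    for P, mask in zip(cameras, silhouettes):
        proj = homog @ P.T                     # project into the image
        uv = (proj[:, :2] / proj[:, 2:3]).round().astype(int)
        h, w = mask.shape
        inside = ((uv[:, 0] >= 0) & (uv[:, 0] < w) &
                  (uv[:, 1] >= 0) & (uv[:, 1] < h))
        hit = np.zeros(n, dtype=bool)
        hit[inside] = mask[uv[inside, 1], uv[inside, 0]]
        occupied &= hit                        # intersect silhouette cones
    return occupied

# One orthographic camera looking down the z axis: (x, y, z) -> (x, y).
P = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0], [0, 0, 0, 1.0]])
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                          # 2x2 foreground square
voxels = np.array([[1.0, 1.0, 0.0], [3.0, 3.0, 0.0]])
occ = carve(voxels, [P], [mask])               # first voxel kept only
```

The marching cubes step then extracts a triangle mesh from this occupancy grid for rendering and shadow casting; the carving loop itself parallelizes naturally over voxels and cameras, which is what makes real-time rates reachable.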
The main originality of this work is a complete and scalable pipeline that can compute up to 30 frames per second. It has been published in the ``Vision, Video and Graphics'' workshop, and a more interactive version has been published in ``Virtual Environments''.