
Section: New Results

Efficient visualization of very large scenes

Participants : Sébastien Barbier, Georges-Pierre Bonneau, Antoine Bouthors, Eric Bruneton, Christian Boucheny, Cyril Crassin, Philippe Decaudin, Cédric Manzoni, Fabrice Neyret.

Visualisation of large numerical simulation data sets

Participants : Sébastien Barbier, Georges-Pierre Bonneau.

The energy industry sector has to perform numerical simulations on very large data sets, in thermodynamics, mechanics, aerodynamics, neutronics, etc. Visualization of the results of these simulations is crucial in order to gain understanding of the simulated phenomena. To be helpful for engineers, the visualization techniques need to be interactive, if not real-time; multiresolution techniques are therefore required to accelerate the visual exploration of the data sets. In the PhD thesis of Fabien Vivodtzev (who is now working at CEA on visualization systems) we developed multiresolution algorithms devoted to volumetric data sets based on tetrahedral grids, in which inner structures of dimension 2, 1 or 0 are preserved. Typically these algorithms are used to compute a sequence of simplified volumetric meshes with good properties. During his first PhD year, Sébastien Barbier worked on the interactive rendering of these simplified meshes. He is now integrating today's standard visualization algorithms - including slicing, iso-surfacing and volume rendering - with the multiresolution models developed previously. This work has been published in [Oops!] .
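As an illustration of the iso-surfacing step on tetrahedral grids, the following sketch (not the published algorithm; the function name and the single-tetrahedron scope are ours) computes where an iso-surface crosses one tetrahedron, the basic operation behind marching-tetrahedra-style extraction:

```python
def iso_cross_tetra(verts, scalars, iso):
    """Intersect one tetrahedron with the iso-surface {scalar == iso}.

    verts   -- four 3D points (tuples)
    scalars -- the scalar field value at each vertex
    Returns the intersection points on the crossed edges: an empty list,
    a triangle (3 points) or a quad (4 points, left unordered here).
    """
    edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
    pts = []
    for i, j in edges:
        a, b = scalars[i], scalars[j]
        if (a - iso) * (b - iso) < 0:          # edge crosses the iso-value
            t = (iso - a) / (b - a)            # linear interpolation along the edge
            pts.append(tuple(verts[i][k] + t * (verts[j][k] - verts[i][k])
                             for k in range(3)))
    return pts
```

Applied to every tetrahedron of a simplified mesh in the multiresolution sequence, this yields a coarser-to-finer family of iso-surfaces at interactive rates.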

Figure 11. Region of interest (ROI) extracted in a multiresolution volumetric mesh

Perceptive Visualization

Participants : Georges-Pierre Bonneau, Christian Boucheny.

This project is part of a collaboration with the research and development department of EDF, and with the LPPA (Laboratoire de Physiologie de la Perception et de l'Action, Collège de France). The focus of this project is on the following problem: how should human perception be taken into account in visualization algorithms, and more specifically in algorithms based on multiresolution techniques? Previous works in this area are mostly based on image analysis techniques, which measure important features in a static image produced by some visualization algorithm. These results do not take into account information about the specific person using the visualization system. We are especially interested in exploiting such information, such as the point at which the user is looking. We also want to insert dynamic parameters into the perceptive measure, such as the movement of the user's head, since such parameters greatly influence the actual perception of the rendered scene. In the framework of this collaboration, EDF is funding a PhD grant on these topics, started by Christian Boucheny in December 2005. Last year we worked on a perceptive evaluation of Direct Volume Rendering (DVR) techniques. We identified limitations of DVR techniques in the perception of depth, and showed how dynamic rendering can in some cases overcome these limitations. This work has been published in [Oops!] . An extended version has been submitted to the journal ACM TAP.

Efficient representation of landscapes

Participants : Eric Bruneton, Cédric Manzoni, Fabrice Neyret.

Figure 12. Real-time rendering and editing of large landscapes

The goal of this work is the real-time rendering and editing of large landscapes with forests, rivers, fields, roads, etc., with high rendering quality, especially in terms of detail and continuity. A first step toward this goal is the modeling, representation and rendering of the terrain itself. Since an explicit representation of the whole terrain elevation and texture at the maximum level of detail would be impossible, we generate them procedurally on the fly (completely from scratch, or based on low-resolution digital elevation models). Our main contribution, in this context, is to use vector-based data to efficiently and precisely model linear features of the landscape (such as rivers, hedges or roads), from which we can compute in real time the terrain texture and the terrain elevation (in order to correctly insert roads and rivers into the terrain - see figure  12 ). We demonstrate the scalability of our approach with a 100×100 km² terrain in the Alps. We also show how the vector data can be used to control the procedural generation of vegetation and other objects on the terrain (such as bridges).
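A minimal sketch of the idea of combining procedural elevation with vector data (the noise function, the road polyline format and the smoothstep blend are our illustrative choices, not the system's actual implementation): near a road polyline, the procedural height is blended toward a flat road profile so the road sits continuously in the terrain.

```python
import math

def base_elevation(x, y):
    """Stand-in for procedural terrain elevation (the real system uses
    fractal detail, optionally refined from low-resolution elevation models)."""
    return 0.5 + 0.25 * math.sin(3.1 * x + 1.7) * math.cos(2.3 * y - 0.4)

def dist_to_segment(p, a, b):
    """Euclidean distance from point p to segment [a, b] in the plane."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    l2 = dx * dx + dy * dy
    t = 0.0 if l2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / l2))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def elevation(x, y, road, road_height, half_width=0.05):
    """Blend the procedural elevation toward a flat road profile near the
    vector polyline describing the road."""
    d = min(dist_to_segment((x, y), road[i], road[i + 1])
            for i in range(len(road) - 1))
    t = max(0.0, 1.0 - d / half_width)   # 1 on the road, 0 away from it
    t = t * t * (3.0 - 2.0 * t)          # smoothstep: a continuous embankment
    return (1.0 - t) * base_elevation(x, y) + t * road_height
```

Because the blend is evaluated per sample, the elevation (and, analogously, the texture) can be computed on the fly at any level of detail without storing the whole terrain explicitly.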

Efficient representation of plants and trees

Participants : Philippe Decaudin, Fabrice Neyret.

This project is developed in the context of a European Marie Curie Outgoing International Fellowship funding (Marie Curie OIF, see Section  8.4 ) allowing Philippe Decaudin to work as a visiting researcher at CASIA (Chinese Academy of Science, Institute of Automation, in Beijing) for the first phase of the project, and to carry out the second phase at EVASION.

The main objective is to define a representation and an algorithm able to efficiently visualize 3D models of plants and trees such as those developed by the GreenLab team of CASIA. The representation and the algorithm must allow the interactive exploration of virtual landscapes and ecosystems, and must be able to render vegetation at interactive frame rates.

We have focused our work on mid-range and far-distance views (see Figure  13 ). In this context, texture-based volume rendering is an interesting alternative to polygon-based rendering for the visualization of plants and trees. This led us to the design of an efficient level-of-detail representation well suited to the real-time display of a large number of complex objects (a dense forest, for instance). It is an image-based representation, which means that it is independent of the geometric complexity of the object: its rendering cost mainly depends on the rasterization cost of the image projected on the screen. However, the representation remains truly 3D, which means that parallax effects are preserved (contrary to simple billboard representations, for instance), as well as the possible integration with polygonal elements. It can also rely on MIP-mapping to use a filtered version of the volume with respect to the projected size of the voxels on the screen, leading to an anti-aliased display of the object.
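The MIP-mapping idea can be sketched as follows (a simplified pinhole-camera model with names of our own choosing, not the actual renderer): pick the volume level whose filtered voxel projects to roughly one pixel, so distant trees are drawn from coarser, pre-filtered data.

```python
import math

def mip_level(voxel_size, distance, fov_y, screen_height, max_level):
    """Choose the MIP level of the volume so that a filtered voxel projects
    to roughly one pixel on screen (coarser levels for distant objects).

    voxel_size    -- world-space edge length of a voxel at the finest level
    distance      -- viewing distance to the object
    fov_y         -- vertical field of view, in radians
    screen_height -- viewport height in pixels
    """
    # world-space extent covered by one pixel at this viewing distance
    pixel_world = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    level = math.log2(max(pixel_world / voxel_size, 1.0))
    return min(max_level, int(level))
```

Sampling the volume at this level (with hardware trilinear filtering between levels) is what yields the anti-aliased display mentioned above.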

Figure 13. Texture-based volume rendering of trees.  Volume data generated by Marc Jaeger (Digiplante) from AMAP tree models.

In order to optimize the rendering of such volumes, we also developed a simple kd-tree-based space-partitioning scheme that efficiently removes the empty spaces from the volume data sets in a fast preprocessing stage. The splitting rule of the scheme is based on a simple yet effective cost function, evaluated through a fast approximation of the bounding volume of the non-empty regions. The scheme culls a large number of empty voxels and encloses the remaining data with a small number of axis-aligned bounding boxes (Figure  14 ), which are then used for interactive rendering. This work was co-developed with Vincent Vidal during his internship at CASIA.
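The following sketch illustrates the flavor of such a scheme (our own simplification, not the implemented preprocessing stage): split recursively along the axis whose midpoint cut best shrinks the total volume of the refitted child boxes, using the tight bounding box of the non-empty voxels as the cost function.

```python
from math import prod

def tight_bbox(voxels):
    """Axis-aligned bounding box of a set of non-empty voxel coordinates."""
    lo = tuple(min(v[a] for v in voxels) for a in range(3))
    hi = tuple(max(v[a] for v in voxels) for a in range(3))
    return lo, hi

def box_volume(lo, hi):
    return prod(hi[a] - lo[a] + 1 for a in range(3))

def split(voxels, max_boxes=8, min_gain=0.2):
    """Recursively split along the axis whose midpoint cut best shrinks the
    total volume of the (refitted) child bounding boxes; stop when the gain
    falls below min_gain or the box budget is exhausted."""
    lo, hi = tight_bbox(voxels)
    if max_boxes == 1:
        return [(lo, hi)]
    best = None
    for axis in range(3):
        mid = (lo[axis] + hi[axis]) // 2
        left = [v for v in voxels if v[axis] <= mid]
        right = [v for v in voxels if v[axis] > mid]
        if not left or not right:
            continue
        cost = box_volume(*tight_bbox(left)) + box_volume(*tight_bbox(right))
        if best is None or cost < best[0]:
            best = (cost, left, right)
    if best is None or best[0] > (1.0 - min_gain) * box_volume(lo, hi):
        return [(lo, hi)]      # not worth splitting further
    _, left, right = best
    return (split(left, max_boxes // 2, min_gain) +
            split(right, max_boxes - max_boxes // 2, min_gain))
```

The resulting small set of axis-aligned boxes is then rasterized instead of the full volume extent, so empty voxels are never sampled during rendering.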

Figure 14. Empty spaces removed from volume data sets

Real-time quality rendering of clouds layers

Participants : Antoine Bouthors, Eric Bruneton, Cyril Crassin, Fabrice Neyret.

Figure 15. Real-time clouds rendering

Antoine Bouthors continues his PhD on cumulus clouds. The purpose is to study and model a high-quality real-time illumination model embedding the main local and global lighting effects in reflectance and transmittance (halo, glory, pseudo-specular, diffusion, etc.) in the form of a local shader. This year, we generalized the model to take into account complex cloud shapes such as cumulus clouds, in collaboration with Nelson Max (UC Davis / LLNL), with whom Antoine spent his Eurodoc stay.

Real-time rendering of large detailed volumes

Participants : Cyril Crassin, Fabrice Neyret.

Cyril Crassin conducted his Master's project with Fabrice Neyret and Sylvain Lefebvre (REVES project) on the real-time rendering of very large and detailed volumes, taking advantage of GPU-adapted data structures and algorithms. The main targets are cases where detail is concentrated at the interface between free space and clusters of density, as found in many natural volume data such as a cloudy sky or vegetation, or in data represented as generalized parallax maps, hypertextures or volumetric textures. Our method is based on a dynamic N³-tree storing MIP-mapped 3D texture bricks in its leaves. We load onto the GPU, on the fly, only the necessary bricks at the necessary resolution, taking visibility into account. This keeps memory consumption low during interactive exploration and minimizes data transfer. Our ray-marching algorithm benefits from the multiresolution aspect of our data structure and provides real-time performance.
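The on-demand loading can be sketched as follows (a CPU-side toy model with names of our own invention; the actual system manages 3D texture bricks on the GPU): a sparse tree keyed by (level, cell) loads a brick only the first time a ray sample needs it, and the marching loop asks for coarser levels at larger distances.

```python
import math

N = 2  # branching factor per axis; an N^3-tree with N = 2 is an octree

class N3Tree:
    """Sparse N^3-tree whose leaves hold MIP-mapped texture bricks.
    Bricks are loaded lazily on first access, mimicking on-demand
    streaming driven by what the rays actually visit."""

    def __init__(self, max_level, brick_loader):
        self.max_level = max_level
        self.loader = brick_loader          # called once per missing brick
        self.bricks = {}                    # (level, ix, iy, iz) -> brick

    def fetch(self, x, y, z, level):
        """Brick covering point (x, y, z) in [0, 1)^3 at the given level."""
        level = min(level, self.max_level)
        cells = N ** level                  # bricks per axis at this level
        key = (level, int(x * cells), int(y * cells), int(z * cells))
        if key not in self.bricks:          # load only when first needed
            self.bricks[key] = self.loader(key)
        return self.bricks[key]

def level_for_distance(distance, finest_brick_size, max_level):
    """Ray marching picks coarser bricks for far samples: one level
    coarser for each doubling of the distance."""
    if distance <= finest_brick_size:
        return max_level
    return max(0, max_level - int(math.log2(distance / finest_brick_size)))
```

Since only visited bricks are resident, memory use tracks the visible working set rather than the full volume, which is the property the method relies on.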

Figure 16. Real-time rendering of large detailed volumes

