Team EVASION


Section: New Results

Efficient visualization of very large scenes

Participants : Yacine Amara, Sébastien Barbier, Georges-Pierre Bonneau, Christian Boucheny, Antoine Bouthors, Eric Bruneton, Philippe Decaudin, Thomas Félix, Fabrice Neyret.

Visualisation of large numerical simulation data sets

Participants : Sébastien Barbier, Georges-Pierre Bonneau, Thomas Félix.

Figure 18. BiResolution Visualization of Tetrahedral Meshes. From left to right: fine tetrahedral mesh, coarse mesh, mesh partition, BiResolution mesh with VoI.
IMG/BarbierFelixBonneau

Visualization of the results of scientific simulations is crucial in order to gain understanding of the phenomena that are simulated. The visualization techniques need to be interactive - if not real time - to be helpful for engineers. Therefore multiresolution techniques are required to accelerate the visual exploration of the data sets. Sébastien Barbier and Thomas Félix have developed a set of GPU-implemented algorithms that dynamically extract a BiResolution mesh from any given tetrahedral mesh. A specific out-of-core simplification algorithm is performed as a preprocessing step. During the exploration of the data, a single consistent mesh is extracted on the fly, combining a fine mesh inside a Volume of Interest (VoI) with a coarse contextual mesh outside the VoI. Massive tetrahedral meshes can thus be visualized interactively on a desktop PC. Figure 18 illustrates the approach. The results have been presented in two poster sessions, at Pacific Visualization 2008 and Pacific Graphics 2008, and in the paper [12].
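
As a rough illustration of the idea (not the team's GPU implementation), the following Python sketch selects fine tetrahedra inside the VoI and coarse tetrahedra outside it, using a hypothetical precomputed fine-to-coarse partition; the consistent stitching of the two resolutions at the interface, which is the difficult part, is omitted.

# Illustrative CPU sketch of the selection step of the bi-resolution extraction.
# The data layout and function names are hypothetical assumptions.
import numpy as np

def extract_biresolution(fine_cells, coarse_cells, cell_partition,
                         coarse_centroids, voi_center, voi_radius):
    """Keep fine tetrahedra inside the Volume of Interest (VoI) and coarse
    tetrahedra outside it. 'cell_partition' maps each fine cell to the index
    of the coarse cell it was simplified into during preprocessing."""
    # Coarse cells whose centroid lies inside the (spherical) VoI are refined.
    inside_voi = np.linalg.norm(coarse_centroids - voi_center, axis=1) < voi_radius
    keep_fine = inside_voi[cell_partition]   # fine cells whose parent is refined
    keep_coarse = ~inside_voi                # coarse cells kept as context
    return fine_cells[keep_fine], coarse_cells[keep_coarse]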

Perceptive Visualization

Participants : Georges-Pierre Bonneau, Christian Boucheny.

This project is part of a collaboration with EDF R&D and with the LPPA (Laboratoire de Physiologie de la Perception et de l'Action, Collège de France). EDF runs massive numerical simulations in hydrodynamics, mechanics, thermodynamics, neutronics, etc. Post-processing, and in particular visualization of the resulting avalanche of data, is a bottleneck in the engineering pipeline. Contrary to the numerical simulation itself, this post-processing is human-time consuming, with engineers spending several hours exploring the results of their simulations, trying to catch the knowledge hidden behind the numbers computed by the simulation. The focus of our collaboration with EDF and the Collège de France is to incorporate our knowledge of the human visual perception system into the development of more efficient visualization techniques. We also deal with the evaluation of existing visualization algorithms, based on perceptive criteria.

Figure 19. Hidden Point Removal and Eye Dome Lighting of a dense cloud of points. Left: point cloud. Right: rendering using HPR and EDL.
IMG/BouchenyBonneau

This year a journal paper on the perceptive evaluation of volume rendering algorithms has been accepted at ACM Transactions on Applied Perception [7]. Christian Boucheny has also developed two rendering techniques taking perceptive criteria into account. First, he has proposed a novel shading algorithm, called Eye Dome Lighting (EDL), that simulates ambient occlusion in image space and augments our perception of depth in arbitrary 3D data. Second, he has introduced a new Hidden Point Removal (HPR) algorithm that takes as input a cloud of scattered 3D points and outputs a subset of this cloud that intuitively simulates an opaque surface interpolating the points. Figure 19 illustrates HPR and EDL shading on a dense 3D set of points.
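
The following small Python/NumPy sketch conveys the flavour of the EDL idea under our own simplifying assumptions: a pixel is darkened according to how much deeper it is than its screen-space neighbours. The neighbourhood, strength parameter and border handling are illustrative, not the published shader.

# Hypothetical image-space sketch of the Eye Dome Lighting idea.
import numpy as np

def eye_dome_lighting(depth, strength=4.0):
    """depth: 2D array of per-pixel (log) depths. Returns a shading factor
    in [0, 1] to multiply with the rendered colour."""
    obscurance = np.zeros_like(depth)
    # Accumulate positive depth differences towards the 4 neighbours
    # (np.roll wraps at the borders; border handling is ignored for brevity).
    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
        neighbour = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        obscurance += np.maximum(0.0, depth - neighbour)
    return np.exp(-strength * obscurance / 4.0)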

Efficient representation of landscapes

Participants : Eric Bruneton, Fabrice Neyret.

Figure 20. Real-time rendering and editing of large landscapes
IMG/landscape

The goal of this work is the real-time rendering and editing of large landscapes with forests, rivers, fields, roads, etc., with high rendering quality, especially in terms of detail and continuity. A first step toward this goal is the modeling, representation and rendering of the terrain itself. Since an explicit representation of the whole terrain elevation and texture at the maximum level of detail would be impossible, we generate them procedurally on the fly (completely from scratch or based on low-resolution digital elevation models). Our main contribution in this context is to use vector-based data to efficiently and precisely model linear features of the landscape (such as rivers, hedges or roads), from which we can compute in real time the terrain texture and the terrain elevation (in order to correctly insert roads and rivers in the terrain - see Figure 20). This work has been published at the Eurographics conference [16].
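
As a hypothetical illustration of how a vector-defined road can locally override a procedural elevation (this is not the published algorithm), the sketch below flattens the terrain towards a road profile within a falloff distance of the road polyline; all widths, distances and the smoothstep blend are illustrative assumptions.

# Illustrative blend of procedural terrain elevation with a road profile.
import numpy as np

def smoothstep(edge0, edge1, x):
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)

def point_segment_distance(p, a, b):
    p, a, b = map(np.asarray, (p, a, b))
    t = np.clip(np.dot(p - a, b - a) / max(np.dot(b - a, b - a), 1e-9), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * (b - a))))

def elevation_with_road(p, procedural_height, road_height, road_segments,
                        road_width=4.0, falloff=20.0):
    """p: (x, y) sample; road_segments: list of ((x0, y0), (x1, y1)) edges."""
    d = min(point_segment_distance(p, a, b) for a, b in road_segments)
    # Flat on the road, smooth transition back to the procedural terrain.
    blend = smoothstep(road_width, road_width + falloff, d)
    return (1.0 - blend) * road_height(p) + blend * procedural_height(p)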

Efficient representation of plants and trees

Participants : Philippe Decaudin, Fabrice Neyret.

Figure 21. Our Volumetric Billboards model allows us to represent very complex, detailed scenes very efficiently and with high visual quality, handling a volumetric filtering of the 'geometry'.
IMG/PhD

We developed a new model for the efficient representation and filtering of complex data, typically vegetal elements in a landscape: the Volumetric Billboard, which is based on a multiscale volume of voxels. Our rendering algorithm is able to render seamlessly and efficiently a complex, self-intersecting distribution of such base volumes, relying on a common adaptive slicing of them parallel to the screen. Equivalent mesh-based scenes are more costly to render and more prone to aliasing. Moreover, volumes allow us to properly define the filtering of thin objects (which become fuzzy); see Figure 21. A paper has been submitted.
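
The sketch below illustrates, on the CPU and under our own assumptions, the common view-aligned slicing idea: all volumes are cut by the same set of screen-parallel planes and rendered slice by slice. The real renderer does this on the GPU with 3D textures and blending; the names and data layout here are hypothetical.

# Illustrative computation of common screen-parallel slices over many volumes.
import numpy as np

def view_aligned_slices(volumes, view_dir, n_slices=128):
    """volumes: list of (center, half_size) axis-aligned boxes.
    Returns (depth, indices of intersected volumes) in back-to-front order."""
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir /= np.linalg.norm(view_dir)
    centers = np.array([c for c, _ in volumes])
    # Half-extent of each box projected onto the view direction.
    extents = np.array([np.dot(np.abs(view_dir), h) for _, h in volumes])
    depths = centers @ view_dir
    near, far = (depths - extents).min(), (depths + extents).max()
    slice_depths = np.linspace(near, far, n_slices)
    return [(d, np.nonzero((depths - extents <= d) & (d <= depths + extents))[0])
            for d in slice_depths[::-1]]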

Real-time quality rendering of clouds

Participants : Antoine Bouthors, Eric Bruneton, Fabrice Neyret.

Figure 22. Real-time high-quality rendering of detailed, animatable clouds, taking into account anisotropic multiple Mie scattering.
IMG/clouds07

Antoine Bouthors defended his PhD in June 2008 [1]. His latest work on the real-time high-quality rendering of cumulus clouds (without precomputation, so as to allow for their animation), see Figure 22, has led to a paper at I3D'08 [14]. The model seems to interest the atmospheric physics community, so a journal submission in this field is under preparation. Note that Antoine has been hired by Weta Digital (New Zealand), and will continue working on atmospheric phenomena in the scope of special effects.

Atmosphere rendering

Participants : Eric Bruneton, Fabrice Neyret.

Figure 23. Real-time rendering of the atmosphere. Our method supports all light directions, all viewpoints from ground to space, and simulates multiple scattering and light shafts (see right image).
IMG/atmosphere

The goal of this work is the real-time and accurate rendering of the atmosphere from any viewpoint, from ground level to outer space, while taking Rayleigh and Mie multiple scattering into account. Our method reproduces many effects of the scattering of light, such as the daylight and twilight sky color and aerial perspective for all view and light directions, or the Earth and mountain shadows (light shafts) inside the atmosphere. Our method is based on a formulation of the light transport equation that is precomputable for all viewpoints, view directions and sun directions. We show how to store this data compactly and propose a GPU-compliant algorithm to precompute it in a few seconds. This precomputed data allows us to evaluate the light transport equation at runtime in constant time, without any sampling, while taking into account the ground for shadows and light shafts (see Figure 23). This work has been published at the Eurographics Symposium on Rendering (EGSR) [15].
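
As an illustration of the kind of precomputation involved (a simplified sketch, not the paper's actual tables), the code below tabulates the transmittance of the atmosphere over altitude and view angle, using standard literature values for the Rayleigh and Mie extinction coefficients; at runtime such a table is simply looked up. The parameterization and constants are assumptions for illustration only.

# Simplified precomputation of an atmospheric transmittance table.
import numpy as np

RT, RG = 6420e3, 6360e3                          # top-of-atmosphere / ground radii (m)
BETA_R = np.array([5.8e-6, 13.5e-6, 33.1e-6])    # Rayleigh extinction, RGB (1/m)
BETA_M = 4e-6                                    # Mie extinction (1/m)
HR, HM = 8000.0, 1200.0                          # Rayleigh / Mie scale heights (m)

def transmittance(r, mu, steps=64):
    """Transmittance from a point at radius r, along a ray whose cosine of the
    zenith angle is mu, up to the top of the atmosphere (ground hits ignored)."""
    # Length of the ray segment inside the atmosphere.
    d = -r * mu + np.sqrt(max(r * r * (mu * mu - 1.0) + RT * RT, 0.0))
    ts = np.linspace(0.0, d, steps)
    radii = np.sqrt(r * r + ts * ts + 2.0 * r * ts * mu)
    h = radii - RG                               # altitude above ground
    dx = d / (steps - 1)
    depth = BETA_R * np.exp(-h / HR).sum() * dx + BETA_M * np.exp(-h / HM).sum() * dx
    return np.exp(-depth)                        # RGB transmittance

# Precompute a small 2D table (altitude x view angle), queried at runtime.
table = np.array([[transmittance(RG + h, mu)
                   for mu in np.linspace(0.0, 1.0, 64)]
                  for h in np.linspace(0.0, RT - RG, 32)])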

This work was limited to clear sky conditions. We studied how this method could be extended to support clouds. We have designed and implemented a physical simulator to find, among all the interactions between the clouds, the atmosphere and the ground, those that are really important for visual realism and those that can be neglected. We have identified the important effects, but did not have time to propose a method to simulate them in real time. A new Master's thesis proposal has been submitted to continue this work next year.

Plant instancing on planet-sized terrains

Participants : Yacine Amara, Eric Bruneton.

Figure 24. Forests and isolated trees instanced and rendered with billboard clouds using our instancing algorithm, seen from several altitudes.
IMG/forests

The goal of this work is to render in real time planet-sized terrains populated with plants and trees. Since it is not possible to precompute and store the position of each plant (there are billions of them), we generate them on the fly. For this we generate candidate positions with a pseudo-random generator, and we test each candidate against a land cover classification (LCC) map in order to reject all positions that fall outside vegetation areas (our Earth LCC map is quite coarse - 1 km per pixel - so we amplify it on the fly with procedural noise to add small-scale variations). We then pack the validated positions using a GPU stream reduction algorithm, and we use this packed structure to draw many (> 100,000) plant instances with appropriate LOD using hardware instancing (see Figure 24). This work was done by Yacine Amara as part of his PhD, during a five-month visit to the EVASION team, based on previous work done in collaboration with Xavier Marsault in 2007. The result has been integrated into the new version of Proland (see Section 5.4).
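
The following CPU sketch, with hypothetical names and inputs, illustrates the candidate generation and rejection step; the real pipeline performs the equivalent packing with a GPU stream reduction and feeds the result to hardware instancing.

# Illustrative on-the-fly plant position generation for one terrain tile.
import numpy as np

def generate_plant_positions(tile_min, tile_size, n_candidates,
                             lcc_map, lcc_resolution, vegetation_codes,
                             noise, seed=0):
    """Deterministically generate plant positions inside one terrain tile."""
    rng = np.random.default_rng(seed)            # seeded per tile => reproducible
    candidates = tile_min + rng.random((n_candidates, 2)) * tile_size
    # Look up the coarse land cover class at each candidate position.
    texel = np.floor(candidates / lcc_resolution).astype(int)
    lcc = lcc_map[texel[:, 1] % lcc_map.shape[0], texel[:, 0] % lcc_map.shape[1]]
    keep = np.isin(lcc, vegetation_codes)
    # Amplify the coarse (1 km) classification with procedural noise so that
    # vegetation borders are not blocky at close range.
    keep &= noise(candidates) > 0.0
    # "Stream reduction": pack the surviving positions contiguously.
    return candidates[keep]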

