Team ARTIS


Section: New Results

Lighting and rendering

Participants : François Sillion, Cyril Soler, Nicolas Holzschuch, Jean-Marc Hasenfratz, Emmanuel Turquin, Samuel Hornus, David Roger, Lionel Atty.

Application of frequency analysis of light transport to photon mapping

In 2005, we derived a complete framework for the analysis of light transport in Fourier space, providing the tools and equations needed to compute frequency information about the distribution of light in a scene [29] . In 2006, we have been working on applying these principles to specific lighting simulation techniques, starting with photon mapping.

Whereas the main idea of photon mapping is to transport light as a density of light particles of unit energy, we additionally carry frequency information in the photons, so that frequency information can also be reconstructed in the computed image. Such information makes it possible to predict otherwise hard-to-foresee phenomena, such as the presence of shadows, the amount of blurring due to depth of field, or the low variation of light across diffuse surfaces, under the specific lighting conditions of the computed image.

This scheme can be applied to optimally adapt the sampling density, both in pixel space and in angular space (for secondary reconstruction rays), so as to avoid computations that would ordinarily be required only because no reliable estimate of the local frequencies is available, yet contribute nothing to the image itself. This ongoing work already provides interesting and promising results, as shown in Figure 1 .

Figure 1. Left: a simple configuration where blockers cause high frequencies in the lightfield on the receiving surface. The blue square gives an indication of what the spectrum looks like in space and angle. Right: from the spectrum, a suitable sampling density for the image is derived, allowing an optimal amount of CPU time to be allocated to computing the shadow on the receiver.
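The link between a local frequency estimate and a sampling density follows the Nyquist criterion. The sketch below (not the authors' implementation; the bandwidth values are hypothetical stand-ins for the local spectra that the frequency-carrying photons would provide) shows how a per-region bandwidth bound translates into an image sampling density:

```python
# Sketch: adaptive image sampling from a per-region frequency estimate,
# following the Nyquist criterion.  The bandwidths below are hypothetical
# inputs standing in for the spectra carried by the photons.

def samples_per_unit_area(local_bandwidth):
    """Minimum sampling density for a signal whose local spectrum is
    bounded by `local_bandwidth` (cycles per unit length): the Nyquist
    rate of 2B samples per unit length, squared for a 2D image."""
    rate_1d = 2.0 * local_bandwidth
    return rate_1d * rate_1d

# A smooth diffuse region (low frequency) needs far fewer samples than a
# sharp shadow boundary (high frequency).
bandwidths = {"diffuse wall": 0.5, "penumbra": 2.0, "hard shadow edge": 8.0}
for region, b in bandwidths.items():
    print(region, samples_per_unit_area(b))
```

The quadratic dependence on bandwidth is what makes a correct frequency clue so valuable: overestimating the local bandwidth by a factor of four costs sixteen times as many samples.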

Wavelet Radiance Transport for Interactive Indirect Lighting

We have developed an algorithm for real-time simulation of indirect lighting. Combined with separately computed direct lighting, our algorithm provides interactive global illumination in dynamic scenes.

We start by computing the global transport operator (GTO), which expresses the converged indirect lighting as a function of the direct lighting. This GTO is computed and expressed hierarchically, using a new multi-dimensional wavelet basis we have developed.

Figure 2. Our interactive global illumination algorithm. This scene runs at 15 fps.
(a) Maze scene (b) Direct lighting (c) Indirect lighting computed with our algorithm (d) Resulting global illumination

At runtime, we project the direct lighting onto this wavelet basis and apply the GTO to the projection, which yields the indirect lighting. This indirect lighting is then added to a direct lighting component computed on the GPU, giving interactive global illumination. This technique allows unprecedented freedom in the interactive manipulation of lighting for static scenes (see Figure 2 ). This work, done in cooperation with Janne Kontkanen of the Helsinki University of Technology, was published at the 2006 Eurographics Symposium on Rendering [20] .
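The runtime loop described above can be sketched in miniature: project the direct lighting into a wavelet basis, apply the precomputed operator, and transform back. This is a 1D illustration with a Haar basis and a hypothetical 4x4 diagonal operator, not the actual sparse, multi-dimensional GTO of the method:

```python
# Minimal 1D sketch of the runtime step: wavelet projection, operator
# application, reconstruction.  The GTO matrix here is a made-up example.

def haar_forward(v):
    """One level of the 1D Haar transform: pairwise averages, then differences."""
    avg = [(v[2 * i] + v[2 * i + 1]) / 2.0 for i in range(len(v) // 2)]
    dif = [(v[2 * i] - v[2 * i + 1]) / 2.0 for i in range(len(v) // 2)]
    return avg + dif

def haar_inverse(c):
    """Inverse of haar_forward: rebuild samples from averages and differences."""
    half = len(c) // 2
    out = []
    for a, d in zip(c[:half], c[half:]):
        out += [a + d, a - d]
    return out

def apply_gto(gto, coeffs):
    """Dense matrix-vector product; the real operator is sparse and hierarchical."""
    return [sum(row[j] * coeffs[j] for j in range(len(coeffs))) for row in gto]

direct = [1.0, 1.0, 0.0, 0.0]        # hypothetical direct lighting samples
gto = [[0.2, 0.0, 0.0, 0.0],         # hypothetical transport coefficients
       [0.0, 0.1, 0.0, 0.0],
       [0.0, 0.0, 0.1, 0.0],
       [0.0, 0.0, 0.0, 0.1]]
indirect = haar_inverse(apply_gto(gto, haar_forward(direct)))
final = [d + i for d, i in zip(direct, indirect)]
print(final)
```

The point of working in the wavelet basis is that most operator coefficients are negligible there, so the matrix-vector product touches only a small fraction of the entries at runtime.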

Real-Time Soft Shadows using Shadow Maps

We have developed a completely new algorithm for real-time computation of soft shadows. The algorithm is based on the shadow map method [39] : the shadow map is converted into a discrete representation of the occluders (see Figures 3 a and 3 b), and we compute the soft shadow cast by this discrete representation.

Figure 3. Our real-time Soft Shadow algorithm, based on shadow maps.
(a) The original model (b) The discretized occluder (c) Our largest test scene (565,203 polygons, renders at 20 fps)

Through several optimisations and the use of programmable graphics hardware, we achieve very efficient rendering of soft shadows, even in complex scenes. The algorithm scales well with the number of occluders and the complexity of the scene (see Figure 3 c). It is faster for large penumbra regions, exploiting the fact that these are low-frequency effects.
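The core quantity a soft-shadow algorithm estimates is the fraction of the area light source visible from each receiving point. A brute-force sketch of that computation, with occluders discretized into axis-aligned boxes as a shadow-map conversion would give (the geometry, sample counts, and box layout are illustrative, not the paper's GPU implementation), looks like this:

```python
# Sketch: soft shadow intensity as the visible fraction of an area light,
# with occluders given as discrete 2D axis-aligned boxes.

def segment_hits_box(p, q, box):
    """Slab test: does the segment p->q intersect the axis-aligned box
    (xmin, ymin, xmax, ymax)?"""
    x0, y0, x1, y1 = box
    tmin, tmax = 0.0, 1.0
    for a, b, lo, hi in ((p[0], q[0], x0, x1), (p[1], q[1], y0, y1)):
        d = b - a
        if abs(d) < 1e-12:
            if a < lo or a > hi:          # parallel to slab and outside it
                return False
        else:
            t1, t2 = (lo - a) / d, (hi - a) / d
            if t1 > t2:
                t1, t2 = t2, t1
            tmin, tmax = max(tmin, t1), min(tmax, t2)
            if tmin > tmax:               # entry after exit: miss
                return False
    return True

def light_visibility(receiver, light_samples, occluder_boxes):
    """Fraction of area-light samples visible from the receiver point."""
    visible = sum(1 for s in light_samples
                  if not any(segment_hits_box(receiver, s, b)
                             for b in occluder_boxes))
    return visible / len(light_samples)

# Five samples across a linear light at y = 4, one small occluder patch.
samples = [(-1.0, 4.0), (-0.5, 4.0), (0.0, 4.0), (0.5, 4.0), (1.0, 4.0)]
box = (-0.25, 1.9, 0.25, 2.1)
print(light_visibility((0.0, 0.0), samples, [box]))
```

A result strictly between 0 and 1 corresponds to a penumbra; the real-time method replaces this per-sample loop with an accumulation over the discretized occluder patches on the GPU.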

We have conducted extensive testing of our algorithm, comparing it with other soft-shadow algorithms and with ground truth. The description of the algorithm, along with this extensive testing, has been published in Computer Graphics Forum  [3] .

Precomputed Ambient Occlusion

Ambient occlusion is used widely for improving the realism of real-time lighting simulations, in video games and in special effects for motion pictures.

Figure 4. Our method for precomputed ambient occlusion greatly improves the realism of the scenes rendered in real-time, giving contact shadows and illumination from environment maps at a very low cost.
(a) Example of proximity shadows, computed using our algorithm for ambient occlusion. This scene runs at more than 200 fps. (b) Using the ambient occlusion information to compute illumination from an environment map. This scene runs at 30 fps.

We have developed a new, simple method for storing ambient occlusion values that is very easy to implement and uses very little CPU and GPU resources. The method stores and retrieves the percentage of occlusion, together with the average occluded direction.

This information is used to render occlusion from moving occluders, as well as to compute illumination from an environment map at a very small cost (see Figure 4 ).
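At shading time, the two stored quantities combine in a simple way: the occlusion percentage attenuates the incoming light, and the average occluded direction tells us where the environment is blocked. The sketch below illustrates one plausible combination (the shading formula and the environment function are hypothetical, not the published method's exact equations):

```python
import math

# Sketch: using a stored occlusion percentage plus average occluded
# direction to attenuate a (hypothetical) environment-map lookup.

def shade(ambient_occlusion, avg_occluded_dir, env_lookup):
    """Attenuate environment lighting by the stored occlusion.
    `avg_occluded_dir` points toward the occluders, so the environment
    is sampled in the opposite, unoccluded direction."""
    open_dir = tuple(-c for c in avg_occluded_dir)
    return (1.0 - ambient_occlusion) * env_lookup(open_dir)

def env(d):
    """Hypothetical environment: cosine-weighted sky, brighter toward +y."""
    n = math.sqrt(sum(c * c for c in d))
    return max(0.0, d[1] / n)

# Occluders below the point (e.g. the ground): the open direction is the sky.
print(shade(0.25, (0.0, -1.0, 0.0), env))
```

Because only two small values are fetched per shading point, this fits comfortably in a fragment shader, consistent with the low CPU/GPU cost reported above.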

The speed of our algorithm is independent of the complexity of both the occluder and the receiver, making it highly suitable for games and other real-time applications. This work has been accepted for publication in the Journal of Graphics Tools  [7] .

Real-Time Reflections on Curved Surfaces

We have developed an algorithm for real-time simulation of reflections on specular surfaces. We separate the reflector from the rest of the scene, then introduce a new projection function corresponding to the effect of the reflector on the objects in the scene. For curved reflectors, this projection function is non-linear and difficult to compute, but we have shown that it can be approximated using the programmability of the graphics card. This work has been published at the Eurographics 2006 conference [10] .

Figure 5. Example of reflections computed in real-time using our algorithm.
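For the simplest reflector, a plane, the projection function is an exact linear map: each scene point is mirrored across the plane, and the mirrored scene is rasterized as usual. The sketch below shows that baseline case only (it is an illustration of why planar reflections are easy, not the paper's curved-reflector approximation, which has no such closed form):

```python
# Sketch: the linear projection function for a planar mirror.  Curved
# reflectors admit no such closed-form map, hence the per-vertex GPU
# approximation described above.

def reflect_across_plane(p, n, d):
    """Mirror point p across the plane n . x + d = 0 (n must be unit length)."""
    dist = sum(pi * ni for pi, ni in zip(p, n)) + d
    return tuple(pi - 2.0 * dist * ni for pi, ni in zip(p, n))

# Mirror across the z = 0 plane: only the z coordinate flips.
print(reflect_across_plane((1.0, 2.0, 3.0), (0.0, 0.0, 1.0), 0.0))
```

Applying the map twice returns the original point, as a reflection should; the curved case loses this simple structure because the mirrored position depends on the viewpoint.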

Modelling and Rendering of Geometry with Relief Textures

We have developed a way to render geometry using an image-based representation. Geometric information is encoded in a texture with depth and rendered by rasterizing the bounding-box geometry (see Figure 6 ). For each resulting fragment, a shader computes the intersection of the corresponding ray with the geometry, using pre-computed information to accelerate the computation. Great care is taken to remain artifact-free even when zoomed in or at grazing angles. We integrate our algorithm with reverse perspective projection to represent a larger class of shapes. The extra texture requirement is small and the rendering cost is output-sensitive, so our representation can be used to model many parts of a 3D scene (see Figure 7 ). The paper was published at the Graphics Interface 2006 conference [14] .

Figure 6. (left) Color, normal and depth textures for a perspective distorted heightfield (right) Rendering of the bounding box with ray-intersection performed in a shader.
Figure 7. Rendering of an object represented by 6 relief textures.

Although other methods for rendering heightfields on the GPU have been published in the literature, ours was the first to focus on exactness. Consequently, it can be used to model any heightfield, regardless of the amplitude and screen size of the features it contains. It can thus render macroscopic objects, as opposed to micro-scale surface details. We continued this work by exploring classes of objects that fit the heightfield category.
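The per-fragment computation described above can be illustrated with a common ray/heightfield intersection scheme: linear marching until the ray dips below the surface, followed by binary-search refinement of the crossing. This 1D sketch uses a hypothetical heightfield in place of the depth texture and does not reproduce the paper's specific pre-computed acceleration data:

```python
# Sketch: ray vs. heightfield intersection by linear marching plus
# binary-search refinement, as a fragment shader might do it.

def height(x):
    """Hypothetical 1D heightfield standing in for the depth texture:
    a single plateau of height 0.5 over [0.4, 0.6]."""
    return 0.5 if 0.4 <= x <= 0.6 else 0.0

def intersect(ox, oy, dx, dy, steps=64, refinements=16):
    """March from (ox, oy) along (dx, dy) until the ray drops below the
    surface, then binary-search the crossing.  Returns the hit parameter
    t, or None if the ray misses within t in [0, 1]."""
    t_prev, t = 0.0, 0.0
    dt = 1.0 / steps
    for _ in range(steps):
        t_prev, t = t, t + dt
        if oy + t * dy <= height(ox + t * dx):
            break                      # first sample below the surface
    else:
        return None                    # never crossed: no hit
    lo, hi = t_prev, t                 # crossing lies in (lo, hi]
    for _ in range(refinements):
        mid = 0.5 * (lo + hi)
        if oy + mid * dy <= height(ox + mid * dx):
            hi = mid
        else:
            lo = mid
    return hi

# Ray from (0, 1) heading down-right hits the plateau's top at t = 0.5.
print(intersect(0.0, 1.0, 1.0, -1.0))
```

The refinement step is what controls accuracy at grazing angles, where a purely linear march of fixed step size would visibly miss or stair-step the surface.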

Realistic Water Volumes in Real-Time

We have investigated water surfaces over basins, ponds and the like. We extended the method of the previous section to handle two heightfields simultaneously (one for the water surface and one for the ground at the bottom). Because we perform ray-tracing, complex effects such as refraction, absorption, etc. can easily be integrated. We also incorporated simple GPU-based photon mapping to compute caustics. Combining all this, we are able to render realistic water in real time, entirely on the GPU (Figure 8 ). The results were published at the Symposium on Natural Phenomena '06 [13] .
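When a traced ray crosses the water surface heightfield, its direction is bent according to Snell's law before continuing toward the bottom heightfield. The vector form of that step is sketched below (a standard formulation, not the paper's GPU code; the index ratio for air-to-water is roughly 1/1.33):

```python
import math

# Sketch: Snell's law in vector form, as used when a traced ray crosses
# the water surface.  eta = n1 / n2 (about 1/1.33 entering water).

def refract(d, n, eta):
    """Refract unit direction d at a surface with unit normal n facing
    against d.  Returns the refracted direction, or None on total
    internal reflection (only possible when eta > 1)."""
    cos_i = -sum(di * ni for di, ni in zip(d, n))
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return None                    # total internal reflection
    return tuple(eta * di + (eta * cos_i - math.sqrt(k)) * ni
                 for di, ni in zip(d, n))

# A straight-down ray enters the water without changing direction.
print(refract((0.0, -1.0, 0.0), (0.0, 1.0, 0.0), 1.0 / 1.33))
```

The total-internal-reflection branch matters for rays travelling upward from under the water: beyond the critical angle they reflect back down instead of exiting, which is also what produces the characteristic bright underside of the surface.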

Figure 8. Rendering of water surfaces.

