Team ARTIS

Section: New Results

Lighting and Rendering

Participants : Nicolas Holzschuch, Charles de Rousiers, François Sillion, Cyril Soler, Kartic Subr.

Fourier Depth of Field

Participants : Cyril Soler, Kartic Subr.

The simplistic pinhole camera model used to teach perspective (and computer graphics) produces sharp images because every image element corresponds to a single ray in the scene. Real-life optical systems such as photographic lenses, however, must collect enough light to accommodate the sensitivity of the imaging system, and therefore combine light rays coming through a finite-sized aperture. Focusing mechanisms are needed to choose the distance of an “in-focus” plane, which will be sharply reproduced on the sensor, while objects appear increasingly blurry as their distance to this plane increases. The visual effect of focusing can be dramatic and is used extensively in photography and film, for instance to separate a subject from the background.

Figure 3. (a) The image sampling density predicts that the shiny regions of the trumpet, with high curvature and in focus, need to be sampled most densely in the image. (b) The aperture sampling density predicts that defocused regions need to be sampled densely over the aperture, while the ball in focus requires very few samples. (c) The image samples obtained from the image sampling density. (d) The image reconstructed from scattered radiance estimates.

Although the simulation of depth of field in computer graphics has been possible for more than two decades, this effect is still rarely used in practice because of its high cost: the lens aperture must be densely sampled to produce a high-quality image. This is particularly frustrating because the defocus produced by the lens does not increase the visual complexity, but rather removes detail! In this work, we propose to exploit the blurriness of out-of-focus regions to reduce the computational load. We study defocus from a signal-processing perspective and propose a new algorithm that estimates local image bandwidth. This allows us to reduce computation costs in two ways, by adapting the sampling rate over both the image and the lens aperture domain.

In image space, we exploit the blurriness of out-of-focus regions by downsampling them: we compute the final image color for only a subset of the pixels and interpolate. Our motivation for adaptive sampling over the lens comes from the observation that in-focus regions do not require a large number of lens samples because they do not get blurred, in contrast to out-of-focus regions, where the large variations of radiance through the lens require many samples. More formally, we derive a formula for the variance over the lens and use it to adapt sampling for a Monte Carlo integrator. Both image and lens sampling are derived from a Fourier analysis of depth of field that extends recent work on light transport [36]. In particular, we show how image and lens sampling correspond to the spatial and angular bandwidth of the light field.
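The actual sampling densities come from the paper's Fourier bandwidth analysis; purely as an illustration of the lens-sampling idea, the sketch below instead adapts the lens sample budget to the classical thin-lens circle of confusion. All function names, the `pixel_size` parameter, and the quadratic scaling heuristic are hypothetical, not the paper's method:

```python
import math

def circle_of_confusion(depth, focus_depth, aperture, focal_length):
    """Diameter (metres) of the thin-lens blur circle for a point at `depth`
    when the lens is focused at `focus_depth`."""
    magnification = focal_length / (focus_depth - focal_length)
    return aperture * magnification * abs(depth - focus_depth) / depth

def lens_sample_count(depth, focus_depth, aperture, focal_length,
                      pixel_size=36e-6, base_samples=4, max_samples=256):
    """Heuristic: grow the lens sample budget with the blur area in pixels,
    so in-focus pixels get few samples and defocused pixels get many."""
    coc_pixels = circle_of_confusion(depth, focus_depth,
                                     aperture, focal_length) / pixel_size
    return int(min(max_samples,
                   max(base_samples, base_samples * coc_pixels ** 2)))

# A point on the focal plane needs only the base budget; a strongly
# defocused point saturates the clamp.
print(lens_sample_count(2.0, 2.0, 0.025, 0.05))   # in focus
print(lens_sample_count(6.0, 2.0, 0.025, 0.05))   # far out of focus
```

The clamp mirrors a practical renderer budget: blurred pixels get many aperture samples, while in-focus pixels fall back to the minimum.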

Figure 3 shows an example of applying our technique to a scene with high depth of field variations. As predicted, the spatial sampling density is high in the regions with high specularity or depth discontinuities, and the angular sampling density is high where un-focused pixels are the result of averaging high variance regions of the incoming illumination. Spatial samples therefore stick to regions with high spatial frequencies.

This paper was published in the journal ACM Transactions on Graphics [18] and presented at the Siggraph'2009 conference.

Fourier Motion Blur

Participant : Nicolas Holzschuch.

Motion blur is crucial for high-quality rendering, but it is also very expensive. Our first contribution is a frequency analysis of motion-blurred scenes, including moving objects, specular reflections, and shadows. We show that motion induces a shear in the frequency domain, and that the spectrum of moving scenes is usually contained in a wedge. This allows us to compute adaptive space-time sampling rates to accelerate rendering. For uniform velocities and standard axis-aligned reconstruction, we show that the product of spatial and temporal bandlimits or sampling rates is constant, independent of velocity. Our second contribution is a novel sheared reconstruction filter that tightly packs the wedge of frequencies in the Fourier domain and enables even lower sampling rates (see Figure 4). We present a rendering algorithm that computes a sheared reconstruction filter per pixel, without any intermediate Fourier representation. This often permits synthesis of motion-blurred images with far fewer rendering samples than standard techniques require (see Figure 5). This work was published at the SIGGRAPH 2009 conference and in the journal ACM Transactions on Graphics [15].
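The shear itself is easy to verify numerically: a pattern translating at velocity v carries its spectral energy on the line Ω_t = −v·Ω_x. The self-contained sketch below (grid size, velocity, and frequency are arbitrary choices, with an integer velocity so the peak lands exactly on a DFT bin) compares two 2-D DFT coefficients of such a pattern:

```python
import cmath
import math

N = 32   # samples along x and t (arbitrary grid size)
v = 3    # velocity in grid units per frame (integer, so peaks land on DFT bins)
k = 2    # spatial frequency of the pattern

# A rigidly translating pattern: f(x, t) = cos(2*pi*k*(x - v*t)/N).
f = [[math.cos(2 * math.pi * k * (x - v * t) / N) for x in range(N)]
     for t in range(N)]

def dft_coeff(kx, kt):
    """Magnitude of the normalized 2-D DFT coefficient of f at (kx, kt)."""
    s = 0j
    for t in range(N):
        for x in range(N):
            s += f[t][x] * cmath.exp(-2j * math.pi * (kx * x + kt * t) / N)
    return abs(s) / (N * N)

# Motion shears the spectrum onto the line omega_t = -v * omega_x:
on_line = dft_coeff(k, (-v * k) % N)   # half the cosine's energy sits here
off_line = dft_coeff(k, 0)            # no static component at this kx
print(on_line, off_line)
```

The on-line coefficient is 0.5 (the positive-frequency half of the cosine) while the axis-aligned coefficient at the same spatial frequency is zero, which is exactly the shear the analysis above exploits.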

Figure 4. Our algorithm for efficient computation of motion blur: in a first step (left), we estimate the minimum and maximum velocity for each pixel in the picture. In a second step, we use this information to compute the filter width and sampling density. The final picture is reconstructed using sheared filters (right).
Figure 5. Comparison between our algorithm and other methods for rendering motion blurred images.

Single Scattering in Refractive Media with Triangle Mesh Boundaries

Participant : Nicolas Holzschuch.

Light scattering in refractive media is an important optical phenomenon for computer graphics. While recent research has focused on multiple scattering, there has been less work on accurate solutions for single or low-order scattering. Refraction through a complex boundary allows a single external source to be visible in multiple directions internally with different strengths; these are hard to find with existing techniques. This paper presents techniques to quickly find paths that connect points inside and outside a medium while obeying the laws of refraction. We introduce: a half-vector based formulation to support the most common geometric representation, triangles with interpolated normals; hierarchical pruning to scale to triangular meshes; and both a solver with strong accuracy guarantees and a faster method that is empirically accurate. A GPU version achieves interactive frame rates in several examples. See Figure 6 for our results, and Figure 7 for comparison with other rendering methods.
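The paper's solver handles triangles with interpolated normals via its half-vector formulation; as a much smaller illustration of the underlying path-connection problem, the sketch below finds the refraction point on a flat interface in 2-D by driving the Snell residual (equivalently, the derivative of Fermat's optical path length) to zero with bisection. The function name and the whole setup are illustrative, not the paper's algorithm:

```python
import math

def refraction_point(a, b, n1=1.0, n2=1.5):
    """Find the x-coordinate on the flat interface z = 0 where a ray from
    a = (ax, az), az > 0 (index n1), refracts toward b = (bx, bz), bz < 0
    (index n2). By Fermat's principle the optical path length
    n1*|A-P| + n2*|P-B| is stationary exactly when Snell's law holds, so we
    drive its derivative to zero with bisection."""
    ax, az = a
    bx, bz = b

    def residual(x):
        # n1*sin(theta1) - n2*sin(theta2); zero at the Snell-consistent point.
        d1 = math.hypot(x - ax, az)
        d2 = math.hypot(bx - x, bz)
        return n1 * (x - ax) / d1 - n2 * (bx - x) / d2

    lo, hi = min(ax, bx), max(ax, bx)  # residual is monotone on this bracket
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if residual(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Light entering a denser medium (n2 > n1) bends toward the normal, so the
# crossing point shifts toward the receiver's side of the midpoint.
x = refraction_point((-1.0, 1.0), (1.0, -1.0))
print(x)
```

Bisection suffices here because the residual is monotone across the bracket; the half-vector solver in the paper plays the analogous role for curved boundaries represented by triangles with interpolated normals, where no such simple 1-D bracket exists.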

Figure 6. The bending and focusing of light in refractive media creates distinctive rich details. The top row shows single scatter surface caustics in glass and water. The bottom row shows complex volumetric refractive caustics in amber and glass. All images were generated using our method, except the bottom right which used the common straight-line approximation that neglects shadow ray refraction.
(Top row: teapot, pool, glass tile, glass mosaic. Bottom row: amber, cuboctahedron, bumpy sphere, straight-line approximation.)
Figure 7. Back-lit bumpy sphere rendered with four algorithms. The straight-line approximation cannot capture the volume caustics. Path tracing required replacing the point light with a small area source, and even with 32 768 samples per pixel (compute time: 1.4 hours) it produces a very noisy result (the white region is a reflection of the area source). The photon map is much better and takes roughly the same time as our method, but even with ten million photons it still blurs out the finer details of the caustics, as shown in the bottom-row zoom-in. Our algorithm captures these fine details without the high memory or time requirements of the other methods.
