Section: New Results
Lighting and Rendering
Participants : Nicolas Holzschuch, Charles de Rousiers, François Sillion, Cyril Soler, Kartic Subr.
Fourier Depth of Field
Participants : Cyril Soler, Kartic Subr.
The simplistic pinhole camera model used to teach perspective (and computer graphics) produces sharp images because every image element corresponds to a single ray in the scene. Real-life optical systems such as photographic lenses, however, must collect enough light to accommodate the sensitivity of the imaging system, and therefore combine light rays coming through a finite-sized aperture. Focusing mechanisms are needed to choose the distance of an “in-focus” plane, which will be sharply reproduced on the sensor, while objects appear increasingly blurry as their distance to this plane increases. The visual effect of focusing can be dramatic and is used extensively in photography and film, for instance to separate a subject from the background.
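The blur described above can be quantified with the standard thin-lens model. The following sketch is an illustrative aside, not code from the paper; the function name, parameters, and units are our own choices. It computes the diameter of the circle of confusion for a scene point at a given depth:

```python
def circle_of_confusion(depth, focus_depth, aperture, focal_length):
    """Diameter of the blur circle, on the sensor, of a point at `depth`
    when a thin lens (aperture diameter `aperture`, focal length
    `focal_length`) is focused at `focus_depth`. Distances in metres;
    depths are assumed larger than the focal length."""
    # Thin-lens equation 1/f = 1/object_dist + 1/image_dist gives the
    # image-side distance of the sensor (focused plane) and of the point.
    sensor_dist = 1.0 / (1.0 / focal_length - 1.0 / focus_depth)
    image_dist = 1.0 / (1.0 / focal_length - 1.0 / depth)
    # Similar triangles between the aperture and the defocused image point.
    return abs(aperture * (image_dist - sensor_dist) / image_dist)
```

A point exactly on the in-focus plane yields a zero-diameter blur circle, and the diameter grows as the point moves away from that plane in either direction, matching the behaviour described above.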
Although the simulation of depth of field in computer graphics has been possible for more than two decades, this effect is still rarely used in practice because of its high cost: the lens aperture must be densely sampled to produce a high-quality image. This is particularly frustrating because the defocus produced by the lens does not increase the visual complexity, but rather removes detail! In this paper, we propose to exploit the blurriness of out-of-focus regions to reduce the computation load. We study defocus from a signal-processing perspective and propose a new algorithm that estimates local image bandwidth. This allows us to reduce computation costs in two ways, by adapting the sampling rate over both the image and the lens aperture domain.
In image space, we exploit the blurriness of out-of-focus regions by downsampling them: we compute the final image color for only a subset of the pixels and interpolate. Our motivation for adaptive sampling over the lens comes from the observation that in-focus regions do not require a large number of lens samples because they do not get blurred, in contrast to out-of-focus regions, where the large variations of radiance through the lens require many samples. More formally, we derive a formula for the variance over the lens and use it to adapt sampling for a Monte Carlo integrator. Both image and lens sampling are derived from a Fourier analysis of depth of field that extends recent work on light transport [36]. In particular, we show how image and lens sampling correspond to the spatial and angular bandwidth of the light field.
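To illustrate the general idea of variance-driven lens sampling, here is a minimal sketch of an adaptive Monte Carlo integrator over the aperture. It is a generic stand-in, not the paper's bandwidth-based formula: it estimates the variance empirically from a pilot batch rather than predicting it from a Fourier analysis, and all names and parameter values are hypothetical.

```python
import random

def estimate_pixel(radiance, pilot_samples=8, max_samples=256, tol=1e-3):
    """Adaptive Monte Carlo over the lens aperture: draw a pilot batch,
    estimate the variance of radiance across the lens, then choose the
    final sample count so the standard error falls below `tol`.
    `radiance(u, v)` returns the radiance through lens point (u, v)."""
    samples = [radiance(random.random(), random.random())
               for _ in range(pilot_samples)]
    mean = sum(samples) / len(samples)
    var = sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)
    # Standard error of an n-sample mean is sqrt(var / n); solving
    # sqrt(var / n) <= tol for n gives the target sample count.
    n = min(max_samples, max(pilot_samples, int(var / tol ** 2)))
    samples += [radiance(random.random(), random.random())
                for _ in range(n - len(samples))]
    return sum(samples) / len(samples)
```

An in-focus pixel sees nearly constant radiance across the lens, so the pilot variance is near zero and the pilot batch alone suffices; a defocused pixel averaging high-variance illumination triggers many more lens samples, which is exactly the behaviour motivated above.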
Figure 3 shows an example of applying our technique to a scene with large depth-of-field variations. As predicted, the spatial sampling density is high in regions with high specularity or depth discontinuities, and the angular sampling density is high where defocused pixels average high-variance regions of the incoming illumination. Spatial samples therefore concentrate in regions with high spatial frequencies.
This paper was published in the journal ACM Transactions on Graphics [18] and presented at the SIGGRAPH 2009 conference.
Fourier Motion Blur
Participant : Nicolas Holzschuch.
Motion blur is crucial for high-quality rendering but is also very expensive. Our first contribution is a frequency analysis of motion-blurred scenes, including moving objects, specular reflections, and shadows. We show that motion induces a shear in the frequency domain, and that the spectrum of moving scenes is usually contained in a wedge. This allows us to compute adaptive space-time sampling rates to accelerate rendering. For uniform velocities and standard axis-aligned reconstruction, we show that the product of spatial and temporal band-limits, or sampling rates, is constant, independent of velocity. Our second contribution is a novel sheared reconstruction filter that tightly packs the wedge of frequencies in the Fourier domain and enables even lower sampling rates (see Figure 4). We present a rendering algorithm that computes a sheared reconstruction filter per pixel, without any intermediate Fourier representation. This often permits synthesis of motion-blurred images with far fewer rendering samples than standard techniques require (see Figure 5). This work was presented at the SIGGRAPH 2009 conference and published in the journal ACM Transactions on Graphics [15].
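The shear itself is easy to verify numerically. A signal translating at uniform velocity, g(x - vt), has a space-time spectrum concentrated on the line ω_t = -v·ω_x. The sketch below (a toy verification of this standard Fourier fact, not code from the paper; the grid size, frequencies, and cosine signal are arbitrary choices) evaluates two 2D DFT coefficients of a moving cosine, one on the sheared line and one off it:

```python
from math import cos, pi
import cmath

n, k, v = 32, 3.0, 2.0  # grid size, spatial frequency, velocity (toy values)

def dft2(ox, ot):
    """Magnitude of the normalized 2D DFT coefficient of the moving cosine
    g(x - v*t) = cos(2*pi*k*(x - v*t)) at integer frequencies (ox, ot)."""
    total = 0j
    for i in range(n):
        for j in range(n):
            x, t = i / n, j / n
            total += cos(2 * pi * k * (x - v * t)) * cmath.exp(
                -2j * pi * (ox * x + ot * t))
    return abs(total) / n ** 2

# Energy sits on the sheared line omega_t = -v * omega_x:
on_shear = dft2(k, -k * v)   # half the cosine's energy lands here
off_shear = dft2(k, 0.0)     # same spatial frequency, zero temporal: empty
```

Here `on_shear` evaluates to 0.5 (one of the cosine's two spectral peaks) while `off_shear` is numerically zero, confirming that motion moves temporal frequency content in proportion to velocity rather than adding new spatial frequencies.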
Single Scattering in Refractive Media with Triangle Mesh Boundaries
Participant : Nicolas Holzschuch.
Light scattering in refractive media is an important optical phenomenon for computer graphics. While recent research has focused on multiple scattering, there has been less work on accurate solutions for single or low-order scattering. Refraction through a complex boundary allows a single external source to be visible in multiple directions internally, with different strengths; these paths are hard to find with existing techniques. This paper presents techniques to quickly find paths that connect points inside and outside a medium while obeying the laws of refraction. We introduce: a half-vector-based formulation to support the most common geometric representation, triangles with interpolated normals; hierarchical pruning to scale to triangular meshes; and both a solver with strong accuracy guarantees and a faster method that is empirically accurate. A GPU version achieves interactive frame rates in several examples. See Figure 6 for our results, and Figure 7 for a comparison with other rendering methods.
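For intuition about the path-finding problem, here is a minimal sketch for the simplest possible case, a single flat interface (the paper's contribution handles triangle meshes with interpolated normals via its half-vector formulation; this toy solver and its parameters are purely our own illustration). By Fermat's principle, the valid refraction point minimizes the optical path length, whose derivative is monotone on a flat boundary, so plain bisection suffices:

```python
from math import sqrt

def refraction_point(h1, h2, d, n1, n2, iters=60):
    """Horizontal position x on a flat interface (z = 0) where a ray from
    a source at height h1 refracts toward a receiver at depth h2, a
    horizontal distance d away, for refractive indices n1 -> n2.
    Minimizes n1*sqrt(x^2 + h1^2) + n2*sqrt((d-x)^2 + h2^2)."""
    def deriv(x):
        # Derivative of the optical path length; its root satisfies
        # Snell's law: n1*sin(theta1) = n2*sin(theta2).
        return (n1 * x / sqrt(x * x + h1 * h1)
                - n2 * (d - x) / sqrt((d - x) ** 2 + h2 * h2))
    lo, hi = 0.0, d  # deriv(0) < 0 < deriv(d), so the root is bracketed
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if deriv(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With equal indices the result is the straight-line midpoint, and with unequal indices the recovered point satisfies Snell's law. A curved triangle-mesh boundary with interpolated normals breaks the monotonicity that makes this one-dimensional search trivial, and can admit several valid solutions, which is precisely why the paper needs its guaranteed solver and hierarchical pruning.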
