Section: New Results
Efficient rendering of natural scenes
Adaptive surfels for real-time forest sceneries
Participant : Fabrice Neyret.
During his Master's thesis, Guillaume Gilet (advised by Alexandre Meyer and Fabrice Neyret) developed an adaptive model of surfels (a point-based representation): the size of the points (i.e., discs) depends on the distance and on the visibility, and each point represents a set of leaves (these sets are organized hierarchically for that purpose). Moreover, surfels switch to classical meshes for close viewpoints. This allows the interactive rendering of forests containing both close and distant trees, as well as continuous flyovers of entire forests (see Figure 9).
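The adaptive selection described above can be sketched as a traversal of the leaf-set hierarchy: a node is collapsed to a single disc when its projected size falls below a pixel threshold, refined otherwise, and replaced by the classical mesh when the viewer is very close. This is a minimal illustrative sketch; all function names, thresholds, and the tree layout are assumptions, not the paper's actual data structures.

```python
import math

def small_enough(node_size, distance, fov_y, screen_height, pixel_threshold=2.0):
    """Illustrative test: is this hierarchy node's projected size (in pixels)
    small enough to be drawn as a single surfel (disc)?"""
    # Approximate projected diameter in pixels of a node of world-space
    # size 'node_size' seen at 'distance' through a vertical FOV 'fov_y'.
    projected = node_size / distance * (screen_height / (2.0 * math.tan(fov_y / 2.0)))
    return projected <= pixel_threshold

def render_tree(node, distance, fov_y, screen_height, mesh_distance=5.0):
    """Traverse the (hypothetical) leaf-set hierarchy: very close nodes fall
    back to the classical mesh, nodes with a small projected size collapse
    to one surfel, and all other nodes are refined into their children."""
    if distance < mesh_distance:
        return ["mesh:" + node["name"]]
    if not node["children"] or small_enough(node["size"], distance, fov_y, screen_height):
        return ["surfel:" + node["name"]]
    prims = []
    for child in node["children"]:
        prims += render_tree(child, distance, fov_y, screen_height, mesh_distance)
    return prims
```

In this sketch the same distance is reused for every node; a real traversal would measure each node's own distance and visibility, as the text indicates.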
This work was published at EWNP'05.
Real-time rendering of a river surface
Participant : Fabrice Neyret.
In 2001, we developed a model for representing the features of the flow in animated rivers, based on quasi-stationary shock waves and ripples. This yields very precise features with very compact data (a vector representation of the features, with no data stored where no features are present).
The purpose of this work, done by Frank Rochet during his Master's thesis, is to render the water surface both efficiently and with fine details, based on this vector representation of features. For this, we developed two representations of the water surface: a geometry-based one for close viewpoints, and a bump-map-based one for distant viewpoints. In both cases, we generate geometric strips as a support for the shock waves. Upon these, the main wave and ripples are represented with feature-aligned polygons in the first case, and with a height-field profile (i.e., a 1D texture used to generate normal maps) in the second case. Only features visible in the view frustum are generated. The difficult issues are the transition between the two models and the intersection of primitives.
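The conversion of the 1D height-field profile into normals can be sketched with central finite differences: the normal of a height field y = h(x) is (-dh/dx, 1), normalized. This is an illustrative sketch only; the actual texture encoding and sampling used in the work are not specified here.

```python
import math

def profile_to_normals(heights, spacing=1.0):
    """Turn a 1D height profile (as could be stored in a 1D texture) into
    per-sample 2D surface normals via central finite differences.
    'heights' and 'spacing' are illustrative parameters."""
    normals = []
    n = len(heights)
    for i in range(n):
        h_prev = heights[max(i - 1, 0)]
        h_next = heights[min(i + 1, n - 1)]
        dx = (h_next - h_prev) / (2.0 * spacing)  # slope across the profile
        # Normal of the height field y = h(x): (-dh/dx, 1), normalized.
        length = math.hypot(dx, 1.0)
        normals.append((-dx / length, 1.0 / length))
    return normals
```

A flat profile yields straight-up normals, while a ramp tilts them against the slope, which is exactly what a normal map derived from such a profile would encode.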
With this model, we obtain real-time, high-resolution rendering of an animated river, which is totally impractical with ordinary grid-based methods, as illustrated in Figure 10.
Self shadowing of animated scenes
Self-shadowing is particularly important to convey an adequate impression of volume for complex natural objects such as hair (see Figure 11). We have developed an efficient self-shadowing method (submitted for publication) that is particularly well adapted to the rendering of animated objects, since it requires no geometry-based pre-computation. Our method is based on a 3D light-oriented density map, a novel structure that combines an optimized volumetric representation of hair with a light-oriented partition of space. Using this 3D map, accurate hair self-shadowing can be computed interactively (several frames per second for a full hairstyle) on a standard CPU. Beyond the fact that our approach is independent of any graphics hardware (and thus portable), it can easily be parallelized for better performance; a parallel implementation runs in real-time.
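The core idea of a light-oriented density map can be sketched as follows: slabs of a voxel grid are ordered along the light direction, density is accumulated slab by slab, and the light reaching each cell follows Beer-Lambert attenuation by everything in front of it. This is a minimal sketch of the general principle under assumed parameters (an absorption coefficient `sigma`, a per-slab depth), not the paper's actual data structure or constants.

```python
import math

def light_transmittance(density, sigma=1.0, cell_depth=1.0):
    """Given a light-oriented density grid density[slab][cell], where slab 0
    is closest to the light, return per-cell transmittance: the fraction of
    light reaching each cell after attenuation by the density in front of it.
    'sigma' and 'cell_depth' are illustrative parameters."""
    n_cells = len(density[0])
    accumulated = [0.0] * n_cells  # density integrated along the light axis
    transmittance = []
    for slab in density:
        row = []
        for c in range(n_cells):
            # Beer-Lambert attenuation by all slabs in front of this cell.
            row.append(math.exp(-sigma * accumulated[c] * cell_depth))
            accumulated[c] += slab[c]
        transmittance.append(row)
    return transmittance
```

Because each column is a simple running sum, the columns are independent of one another, which is one reason such a structure parallelizes easily, as noted above.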