Team ARTIS


Section: New Results

Expressive Rendering

Participants : François Sillion, Joëlle Thollot, Cyril Soler, Pascal Barla, David Vanderhaeghe, Matt Kaplan, Hedlena Bezerra, Adrien Bousseau, Pierre-Edouard Landes, Kaleigh Smith, David Lanier, Florent Moulin.

Interactive watercolor rendering

Watercolor offers a very rich medium for graphical expression. As such, it is used in a variety of applications including illustration, image processing and animation. The salient features of watercolor images, such as the brilliant colors, the subtle variation of color saturation and the visibility and texture of the underlying paper, are the result of the complex interaction of water, pigments and the support medium.

In this work, we present a set of tools that allow the creation of watercolor-like pictures and animations. Our emphasis is on the development of intuitive controls, placed in the hands of the artist, rather than on a physically-based simulation of the underlying processes. To this end, we focus on what we believe to be the most significant watercolor effects, and describe a pipeline where each of these effects can be controlled independently, intuitively and interactively (see Figure 12).
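
To give a concrete flavor of this kind of independent, per-effect control, here is a minimal Python sketch of one classic watercolor effect: a pigment-density modification that darkens or lightens a base color. The function and its interface are illustrative, not the paper's actual implementation.

    import numpy as np

    def pigment_density(color, d):
        # Simulate applying a pigment density d to a base color in [0, 1]:
        # d > 1 darkens (denser pigment), d < 1 lightens, d = 1 is neutral.
        # This kind of modification is common in watercolor NPR; the full
        # pipeline composes several such independently controlled effects.
        c = np.asarray(color, dtype=float)
        return np.clip(c - (c - c * c) * (d - 1.0), 0.0, 1.0)

    # Example: darken a light blue as if painted with denser pigment.
    print(pigment_density([0.6, 0.7, 0.9], 1.5))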

Our goal is the production of watercolor renderings either from images or 3D models, static or animated. In the case of animation, temporal coherence of the rendered effects must be ensured to avoid unwanted flickering and other artifacts. We describe two methods to address this well-known problem, which differ in the compromise they make between 2D and 3D.

This work has been published at the NPAR '06 conference [16].

Figure 12. Various watercolor-like images obtained either from a 3D model (a, b) or from a photograph (c) in the same pipeline.

X-Toon: An Extended Toon Shader

Over the past decade, toon shading has proven popular in a variety of 3D renderers, video games, and animations. The idea is simple but effective: extend the Lambertian shading model by using the computed illumination (a dot product between a light vector and the surface normal) to index into a 1D texture that describes how the final shading varies from dark to light regions. The designer controls the behavior of the shader by creating a 1D texture, typically composed of two or three regions of constant color, to mimic the flat regions of constant colors found in comics and traditional animation. Toon shading can be implemented efficiently via vertex and fragment programs on modern GPUs.
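
As this document contains no code, the following Python sketch (using NumPy) merely restates the lookup described above: a clamped Lambertian term indexes a small 1D ramp made of a few constant-color regions. The ramp contents and names are illustrative.

    import numpy as np

    def toon_shade(normal, light_dir, ramp):
        # Lambertian term, clamped to [0, 1] for back-facing points.
        d = max(float(np.dot(normal, light_dir)), 0.0)
        # Instead of shading continuously, index a 1D ramp whose few
        # constant-color regions mimic flat, comic-style shading.
        i = min(int(d * len(ramp)), len(ramp) - 1)
        return ramp[i]

    # A three-tone ramp: dark, mid and light regions of constant color.
    ramp = [(0.2, 0.1, 0.1)] * 3 + [(0.6, 0.3, 0.3)] * 4 + [(1.0, 0.8, 0.8)] * 3
    print(toon_shade(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.6, 0.8]), ramp))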

A limitation of toon shading is that it does not reflect the importance or desired level of detail (LOD) of a surface. Such LOD behavior plays an important role in traditional media, however. Often, some objects are considered more important (e.g. characters vs. background) and thus are depicted with greater detail. In paintings and drawings, an effect known as aerial perspective makes objects in the background appear desaturated and less detailed than those in the foreground. And in scientific illustration, a technique similar to depth-of-field is used to focus on a region of a shape by decreasing contrast or opacity in less important or out-of-focus parts of the surface.

Another limitation of ordinary toon shading is that it is view-independent, and so cannot represent plastic or metallic materials, for which view-dependent highlights are of primary importance. Similarly, it cannot support view-dependent backlighting effects, often used in traditional comics and animation, in which a virtual back light illuminates the surface near the silhouette.

Finally, in conventional toon shading, every surface location is rendered with full accuracy, so that even small shape details are depicted by the shading (in at least some views). This can be desirable, but often designers working traditionally apply a degree of abstraction so that small shape details are omitted. A similar ability to depict an abstracted version of the shape is thus desirable in an automatic toon shader.

In this work we describe X-Toon (see Figure 13), a toon shader that supports view-dependent effects through two extensions to conventional toon shading. The first incorporates a notion of tone detail, so that tone varies with depth or orientation relative to the camera. For this, we replace the conventional 1D texture used in toon shading with a 2D texture, where the second dimension corresponds to tone detail. We describe several ways to define the additional texture coordinate. Our second extension lets us vary the perceived shape detail of the shading. We achieve this by using a modified normal field defined by interpolating between normals of the original shape and normals of a highly abstracted shape. This approach has the advantage of abstracting the shading from a shape (while preserving silhouettes).
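
The sketch below illustrates these two extensions under assumptions of our own: the tone-detail coordinate is derived from depth (one of the possible mappings), and shape abstraction blends the original normal with that of an abstracted shape before the tone lookup.

    import numpy as np

    def xtoon_shade(normal, abstract_normal, light_dir, tex2d,
                    depth, z_min, z_max, blend):
        # Shape abstraction: interpolate between the original normal and
        # the normal of a highly abstracted shape, then renormalize.
        n = (1.0 - blend) * normal + blend * abstract_normal
        n = n / np.linalg.norm(n)
        tone = max(float(np.dot(n, light_dir)), 0.0)
        # Tone detail: a depth-based coordinate (near = full detail);
        # orientation-based mappings are equally possible.
        detail = np.clip((z_max - depth) / (z_max - z_min), 0.0, 1.0)
        # 2D lookup replaces the conventional 1D toon texture; tex2d is
        # assumed to be a (rows, cols, 3) RGB array.
        rows, cols = tex2d.shape[0], tex2d.shape[1]
        return tex2d[min(int(detail * rows), rows - 1),
                     min(int(tone * cols), cols - 1)]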

This work has been published at the NPAR '06 conference [15] and is the result of the Eurodoc grant obtained by Pascal Barla for his stay at the University of Michigan.

Figure 13. Some example effects achieved by our extended toon shader (from left to right): continuous levels of detail, abstraction of near-silhouette regions (smoothing and opacity), backlighting and highlighting (plastic and metallic).

A perception-based criterion for the automatic selection of feature lines

Our ability to recognize objects in images relies on many visual cues, including color, contrast and luminance. Edges, in particular, have been shown to be processed faster by our visual system than any other kind of information. That is the reason why line drawings, where strokes outline an object's features, allow an efficient and intuitive depiction of objects, as attested by cognitive studies showing that line drawings are sufficient for the recognition of familiar objects.

There is a large variety of methods in computer graphics to compute feature lines from a 3D model. These methods allow the extraction of different types of lines, such as contours, borders, suggestive contours, creases, ridges and valleys. However, since feature lines are extracted from geometric properties alone, there is no evidence that they actually convey relevant information, that is, information coherent with what we perceive. Moreover, since different types of lines may convey complementary information, they have to be combined to depict the entire object's shape; to our knowledge, no existing combination scheme ensures that the resulting drawing is not too dense, and no selection mechanism is provided to keep only the most relevant lines.

In this work we address the problem of producing, from a 3D model, a line drawing that correctly describes the geometry of the model. To this end, we use a psychovisual filter to evaluate the relevance of the feature lines extracted from the model (see Figure 14).

This work has been presented as a poster at NPAR '06 [25] and is part of the MIRO project (see section 8.1.4), in collaboration with IPARLA (a project-team from INRIA Futurs).

Figure 14. The selection of feature lines.

Stroke pattern analysis and synthesis

A particularly important class of non-photorealistic renderings is that of stroke-based images. Various styles such as etchings, pen-and-ink and oil painting renderings can be thought of as stroke-based styles. The rendered strokes can either be used to fill in 2D regions, as in painterly rendering, or to annotate 1D paths, as in some hatching patterns; in both cases, the generation of appropriate stroke arrangements remains a difficult or tedious process to date. Since the individual style of each artist has to be preserved but is not easy to translate into an algorithmic representation, we cannot simply rely on procedural methods to generate stroke patterns. Finding a compromise between automation and expressiveness is therefore crucial for such renderings to be used by artists.

Synthesis by example appears to be the best way to address this question. However, pixel-based texture synthesis is not well suited to stroke patterns, in part because each element of a stroke pattern is individually perceptible, in contrast to pixels. Organized stroke clusters such as those found in hatchings are difficult to extract and reproduce at the pixel level. Moreover, some variation in the reproduced pattern is desirable to avoid too much regularity, and it would be difficult to achieve such variation with pixel-based texture synthesis.

We therefore propose to use a vector-based description of an input stroke pattern supplied by the user. This allows for greater expressiveness and higher-level analysis than would be afforded by a per-pixel approach. The stroke geometry is represented explicitly as connected vertices with attributes such as width and color.

We have worked on two methods to address this question. The first one (published as a technical report [24]) bears similarities to parametric methods in texture synthesis, in that it performs a statistical analysis of properties of the input pattern (such as stroke positions, lengths and orientations). However, such methods are hard to extend to general patterns because the parameters depend heavily on the style and structure of the pattern. We have thus worked on a more general method (see Figure 15) that targets any kind of stroke pattern (stippling, hatching, brush strokes, small figures) with a quasi-uniform distribution of positions in 1D or 2D (along a path or inside a region). The stroke attributes can vary in non-uniform ways, and the only parameter required from the user is the scale of the meaningful elements of the pattern. Then, in a manner analogous to texture synthesis techniques, we organize our method in two stages: an analysis stage, where we identify the relevant stroke pattern elements and their distribution, and a synthesis stage, where these elements are placed in the image so as to reproduce an appearance similar to the reference pattern. This work has been published in Computer Graphics Forum [4] and is the result of the Eurodoc grant obtained by Pascal Barla for his stay at the University of Michigan.
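
The published method analyzes much richer element statistics; the Python sketch below only illustrates the two-stage structure under strong simplifications: analysis reduces to estimating the input's typical element spacing, and synthesis is a dart-throwing placement that copies attributes from randomly chosen exemplar elements.

    import numpy as np
    rng = np.random.default_rng(0)

    def analyze(elements):
        # Analysis stage (simplified): estimate the typical spacing of a
        # quasi-uniform pattern as the mean nearest-neighbor distance.
        pts = np.array([e['pos'] for e in elements])
        d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        np.fill_diagonal(d, np.inf)
        return d.min(axis=1).mean()

    def synthesize(elements, spacing, extent, tries=2000):
        # Synthesis stage (simplified): dart throwing preserves the measured
        # spacing; stroke attributes are copied from random exemplars.
        placed = []
        for _ in range(tries):
            p = rng.uniform(0.0, extent, size=2)
            if all(np.linalg.norm(p - q['pos']) >= spacing for q in placed):
                src = elements[rng.integers(len(elements))]
                placed.append({**src, 'pos': p})
        return placed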

Figure 15. Our method for pattern synthesis takes as input a reference vectorized stroke pattern, then analyzes it to extract relevant stroke pattern elements and properties in order to synthesize a similar pattern.

A dynamic drawing algorithm for interactive painterly rendering

Painterly rendering is a technique that takes inspiration from traditional paintings, usually focusing on effects similar to those achieved with oil or acrylic paint, where individual brush strokes remain more or less perceptible. The main idea is to render a scene projected on the image plane as a set of 2D vector paint strokes holding style attributes (color, texture, etc.). This representation abstracts the rendering by using primitives larger than pixels, and emphasizes the 2D nature of the image through 2D paint strokes.

In the general case, paint strokes simultaneously represent information about objects in the scene (such as the shape or reflective properties of a surface from the current point of view) while following a stroke style provided by the user (straight or curved brush strokes, thick or thin outline, etc.). During the animation they also follow the 2D or 3D motion of some scene elements. The main issues in painterly rendering originate from these conflicting goals. Temporal coherence of the strokes' motion is of primary interest: it comes from the desire to link the motion of a 2D primitive (a stroke) to that of a 3D primitive (e.g. a surface). Another important aspect is the density of strokes: when zooming in or out from an object, the number of strokes used to represent it must increase or decrease in order to maintain a uniform density in the picture plane while keeping a constant stroke thickness in image space. Finally, an ideal painterly renderer would let the user fully specify the stroke style in a way that is independent of the depicted scene, but should at the same time ensure that some properties of the scene, such as object silhouettes or lighting, are well represented.
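
As a toy illustration of the density issue alone (not our system's actual mechanism), the sketch below keeps the number of stroke particles proportional to an object's projected screen area, so that strokes retain a constant image-space size under zooming. All names are ours.

    import numpy as np
    rng = np.random.default_rng(1)

    def update_stroke_particles(particles, screen_area, target_density):
        # Keep roughly one particle per unit of projected screen area.
        wanted = max(1, int(target_density * screen_area))
        if len(particles) < wanted:
            # Zooming in: spawn new anchors by jittering existing ones
            # (a stand-in for proper resampling on the 3D surface).
            while len(particles) < wanted:
                base = particles[rng.integers(len(particles))]
                particles.append(base + rng.normal(scale=0.01, size=base.shape))
        else:
            # Zooming out: drop excess particles to avoid visual clutter.
            particles = particles[:wanted]
        return particles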

We present an object-space, particle-based system that extends the pioneering work of Meier [38]. Our main contribution is a fully view- and lighting-dependent behavior that explicitly performs a trade-off between a user-specified stroke style and the faithful representation of the depicted scene. This way, our system offers broader expressiveness by removing constraints usually found in previous approaches, while still ensuring temporal coherence.

This work has been presented as a poster at NPAR '06 [26] and as a sketch at SIGGRAPH '06 [22].

Figure 16. Three different painterly styles produced by our method at interactive rates. Left: long strokes are drawn along silhouettes using surface normals. Middle: strokes have a common global orientation set by the user. Right: surface principal curvature is used to orient thick strokes. Note that illumination features such as shading and highlights are correctly represented independently of the user's stylistic decisions.

Automated Style Analysis

Pierre-Edouard Landes started his PhD in late 2006. His work aims at understanding the mechanisms that could lead to an automated extraction of style in expressive renderings. During his DEA, he successfully applied style analogies, following the work of A. Hertzmann [35], to automatically relate the geometric features of a 3D model to the parameters of the NPR style used to render it.

At the beginning of his PhD, P.-E. Landes has been working on the automated extraction of frequently occurring objects in an image. Such objects may be partially masked by each other, or deformed in unpredictable ways. The work aims at automatically recovering the fact that such an image was generated by pasting a single, possibly modified model (deformed, recolored, etc.), as well as the range of parameters that describe these modifications.

Among the possible applications of this work is the "intelligent" analysis and synthesis of near-regular textures. More importantly, such a system could be applied to the extraction of style in expressive drawings, based on the assumption that frequent patterns belong to the set of drawing primitives rather than to the objects being depicted.

