Team ARTIS

Section: New Results

Expressive Rendering

Participants : Pierre Bénard, Hedlena Bezerra, Adrien Bousseau, Pierre-Edouard Landes, Alexandrina Orzan, Thierry Stein, Cyril Soler, Joëlle Thollot.

Pattern-Based Texture Analysis and Synthesis

Participants : Pierre-Edouard Landes, Cyril Soler.

In this research [32], we propose a new method for the analysis and resynthesis of a specific class of textures that we refer to as "high-level stochastic textures". Such textures consist of distributions of different, potentially overlapping shapes, or "patterns". The constituent distributions may be random or may obey some geometric placement rule. As a result, these textures form a class that comprises both arrangements of 2D primitives and near-regular textures.

Figure 14. Our method provides a complete framework for the analysis (a) and resynthesis (b) of pattern-based textures. Once the patterns have been detected and extracted thanks to their repetition throughout the input sample (a), synthesizing new textures that preserve such relevant shapes becomes straightforward (b).

Our proposed method first aims at extracting and capturing the distribution of relevant shapes. To achieve this, we rely on their repetition throughout the input sample. Once this analysis step is performed, visually similar and tileable textures can be resynthesized by reusing the obtained patterns. This is in complete contrast to classic pixel-based Markovian approaches, where such long-range, structure-preserving synthesis is hard if not impossible.
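As a minimal illustration of detecting patterns through their repetition, the following sketch locates repeated occurrences of a candidate patch in a grayscale texture by normalized cross-correlation. This is only a toy stand-in for the analysis step described above, not the paper's algorithm; the function name and the matching threshold are our own choices.

```python
import numpy as np

def find_repetitions(texture, patch, threshold=0.95):
    """Locate occurrences of `patch` in `texture` via normalized
    cross-correlation (brute-force scan over all window positions)."""
    ph, pw = patch.shape
    th, tw = texture.shape
    p = patch - patch.mean()
    pnorm = np.linalg.norm(p)
    hits = []
    for y in range(th - ph + 1):
        for x in range(tw - pw + 1):
            w = texture[y:y + ph, x:x + pw]
            wc = w - w.mean()
            denom = np.linalg.norm(wc) * pnorm
            if denom == 0:
                continue  # flat window, no structure to match
            if (wc * p).sum() / denom >= threshold:
                hits.append((y, x))
    return hits
```

A real system would of course discover the patterns themselves (rather than being given a patch) and use a far faster search, but the repetition criterion is the same.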

Spectrum-Preserving Texture Advection for Animated Fluid

Participants : Fabrice Neyret, Nicolas Holzschuch.

Qizhi Yu et al. have developed a Lagrangian model of texture advection, used for advecting small water-surface details while preserving their spectral characteristics (see Figure 15). Particles are distributed according to an animated Poisson-disk distribution, and each carries a local grid mesh that is distorted by advection and regenerated when a distortion metric is exceeded. This Lagrangian approach solves the problem of locally adaptive regeneration rates, provides a better spectrum and a better illusion of motion, and avoids the burden of blending several layers.

Figure 15. Results of our method to texture animated fluids. We use as input a velocity field and a texture. We produce as output an animated fluid texture. Left: an input procedural noise (flownoise) texture is advected and used in a procedural fire shader. Middle: an advected procedural noise (flownoise) texture is used in a cloud shader with displacement mapping. Right: advection of an input bubble texture in a river flow.
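The advect-then-regenerate cycle for one particle can be sketched as follows. This is an illustrative simplification under our own assumptions (a four-corner local grid and an edge-length-ratio distortion metric), not the authors' implementation.

```python
import numpy as np

def advect_particle(corners, velocity, dt, max_distortion=1.5):
    """Advect the local grid of one texture particle and regenerate it
    when the distortion metric is exceeded.

    corners  : (4, 2) array of grid-corner positions (a quad)
    velocity : function (x, y) -> (vx, vy)
    Returns (new_corners, regenerated_flag).
    """
    moved = np.array([c + dt * np.asarray(velocity(*c)) for c in corners])
    # distortion metric: ratio of longest to shortest edge of the quad
    edges = np.roll(moved, -1, axis=0) - moved
    lengths = np.linalg.norm(edges, axis=1)
    distortion = lengths.max() / lengths.min()
    if distortion > max_distortion:
        # regenerate: replace by an axis-aligned unit quad at the centroid
        c = moved.mean(axis=0)
        h = 0.5
        moved = c + np.array([[-h, -h], [h, -h], [h, h], [-h, h]])
        return moved, True
    return moved, False
```

A uniform flow translates the quad without triggering regeneration, while a strong shear flow stretches it past the threshold and resets it.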

Real-Time Coherent Stylization

Participants : Pierre Bénard, Adrien Bousseau, Joëlle Thollot.

Many non-photorealistic rendering approaches aim at depicting 3D scenes with styles that are traditionally produced on 2D media such as paper. The main difficulty faced by these methods is temporal coherence when stylizing dynamic scenes. This problem arises from the conflicting goals of depicting a 3D motion while preserving the 2D characteristics inherent to any style marks (pigments, strokes, etc.). Achieving these goals without introducing visual artifacts implies the concurrent fulfillment of three constraints. First, the style marks should have a constant size and density in the image in order to preserve the 2D appearance of the medium. Second, the style marks should follow the motion of the 3D objects they depict, to avoid the sliding of style features over the 3D scene. Finally, sufficient temporal continuity between adjacent frames is required to avoid popping and flickering.

We propose dynamic textures, a method that facilitates the integration of temporally coherent stylization in real-time rendering pipelines. This method uses textures as simple data structures to represent style marks, which makes our approach especially well suited to media with characteristic textures (e.g., watercolor, charcoal, stippling), while ensuring real-time performance thanks to the optimized texture management of modern graphics cards. Central to our technique is an object-space infinite-zoom mechanism that guarantees a quasi-constant size and density of the texture elements in screen space at any distance from the camera. This simple mechanism preserves most of the 2D appearance of the medium supported by the texture while maintaining a strong temporal coherence during animation.
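The core idea of such an infinite-zoom mechanism can be sketched in a few lines: pick the texture octave from the logarithm of the camera distance, and blend the two nearest octaves so that element size stays quasi-constant on screen. This is our own illustrative formulation, not the paper's shader code; parameter names are ours.

```python
import math

def infinite_zoom_weights(distance, base_distance=1.0):
    """Return ((scale_fine, weight_fine), (scale_coarse, weight_coarse)):
    the two texture octaves to sample and their blend weights, chosen so
    texel size in screen space stays roughly constant with distance."""
    level = math.log2(max(distance, 1e-6) / base_distance)
    lo = math.floor(level)
    t = level - lo          # fractional part -> blend factor
    return (2.0 ** lo, 1.0 - t), (2.0 ** (lo + 1), t)
```

Doubling the distance shifts the pair by exactly one octave, so the transition is continuous and the blend weights always sum to one.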

This work received the best paper award at the 21st AFIG conference and was presented at the ACM I3D 2009 symposium [26].

Appearance-guided Synthesis of Element Arrangements by Example

Participants : Pierre-Edouard Landes, Thomas Hurtut, Joëlle Thollot.

We present a technique for the analysis and re-synthesis of 2D arrangements of stroke-based vector elements. A posteriori analysis of a user's input as a way to capture his or her intent poses a formidable challenge, yet by-example approaches could become one of the most intuitive usage metaphors and greatly reduce the effort of the creation process. Here, we tackle this issue from a statistical point of view and take specific care to account for information usually overlooked in previous research, namely the elements' very appearance. We describe elements, composed of curve-like strokes, by a concise set of perceptually relevant features. After detecting dominant appearance traits, we can generate new arrangements that respect the captured appearance-related spatial statistics using multitype point processes. Our method faithfully reproduces visually similar arrangements and relies on neither heuristics nor post-processing to ensure statistical correctness. This work has been published at NPAR 2009 [29].

Figure 16. Given a reference arrangement composed of vector elements (top left), our analysis scheme divides the raw element set into appearance categories (bottom left). Spatial interactions based on appearance can then be learned by statistical modelling and exploited to yield visually similar arrangements (right).
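To give a feel for sampling a multitype point process, here is a toy dart-throwing sampler with hard-core (minimum-distance) interactions between element categories. It is only an illustrative stand-in for the statistical model of the paper; `min_dist`, the category labels, and the rejection scheme are our simplifying assumptions.

```python
import math
import random

def synthesize_arrangement(n_points, categories, min_dist,
                           width=1.0, height=1.0, seed=0, max_tries=10000):
    """Sample `n_points` positions with category labels such that any two
    elements of categories a and b are at least min_dist[(a, b)] apart
    (keys are sorted category pairs). Rejection sampling ("dart throwing")."""
    rng = random.Random(seed)
    placed = []  # list of (x, y, category)
    tries = 0
    while len(placed) < n_points and tries < max_tries:
        tries += 1
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        c = categories[len(placed) % len(categories)]
        # accept only if the hard-core constraint holds against all points
        if all(math.hypot(x - px, y - py) >= min_dist[tuple(sorted((c, pc)))]
               for px, py, pc in placed):
            placed.append((x, y, c))
    return placed
```

In the paper the interactions are learned from the example arrangement and the sampler is far more general; the point here is only how per-category-pair spacing constrains the output.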

Quality Assessment of Fractalized NPR Textures: a Perceptual Objective Metric

Participants : Pierre Bénard, Joëlle Thollot.

Texture fractalization is used in many existing approaches to ensure the temporal coherence of a stylized animation. This work presents the results of a psychophysical user study evaluating the relative distortion induced by a fractalization process on typical medium textures. We perform a ranking experiment, assess the agreement among the participants and study the criteria they used. Finally we show that the average co-occurrence error is an efficient quality predictor in this context.
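A co-occurrence error of this kind can be computed by comparing gray-level co-occurrence matrices of the original and the fractalized texture. The sketch below is our own minimal version: the choice of offsets, the number of gray levels, and the L1 comparison are illustrative assumptions, not the metric's published parameters.

```python
import numpy as np

def cooccurrence(img, levels, dx, dy):
    """Normalized gray-level co-occurrence matrix for one pixel offset."""
    m = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def cooccurrence_error(original, distorted, levels=4):
    """Average absolute difference between co-occurrence matrices of the
    two images, averaged over horizontal and vertical offsets."""
    errs = [np.abs(cooccurrence(original, levels, dx, dy)
                   - cooccurrence(distorted, levels, dx, dy)).mean()
            for dx, dy in [(1, 0), (0, 1)]]
    return float(np.mean(errs))
```

An undistorted texture scores exactly zero against itself, and the error grows as fractalization scrambles the local gray-level structure.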

This work has been published in APGV 09 (symposium on Applied perception in graphics and visualization) [27] .

Texture Design and Draping in 2D Images

Participants : Alexandrina Orzan, Joëlle Thollot.

We present a 2D vector image representation with native support for textures, complete with tools for creating and manipulating these textures. In particular, we describe methods for applying the textures directly to an image, without requiring full 3D information, a process we call texture-draping. Since the image representation is vectorial, it is compact and remains editable with convenient, high-level editing tools designed to support common artistic workflows. While we focus on regular and near-regular textures, our representation can be extended to handle other types of textures. We illustrate our approach with several textured vector images created by an artist using our system.

This work has been presented at EGSR 2009 (Eurographics Symposium on Rendering) [21] and is part of a collaboration with Adobe.

Virtual Immersion for Balance Control Analysis

Participants : Jean-Dominique Gascuel, Olivier Martin.

Both in real life and in virtual environments, the control of gaze and balance strongly depends on the processing of visual cues. We started a new research topic: using objective human balance measurements (force/torque applied by the feet, muscular activations, postural motion capture) to assess the impact of projected visual images on an immersed subject. For non-photorealistic rendering, such experimental setups should allow objective measurements of the effectiveness of different rendering styles, and might be useful to evaluate different policies for levels of abstraction.

In 2009, we focused on another type of application, in the medical domain: human balance also strongly depends on two other sensory signals, vestibular and muscular proprioception (inner ear and musculoskeletal feedback).

We published a first study [16] , [30] , [17] that assessed the specific effects of the dynamic 2D and 3D visual inputs on the oculomotor and balance reactive control. Thirteen subjects were immersed in a virtual environment using different 2D/3D visual flow conditions. Analysis of eye movement and postural adjustments shows that 2D and 3D flows induce specific measurable behavioral responses.
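Postural adjustments of this kind are typically summarized by scalar posturography measures, such as the total excursion of the center of pressure recorded by a force plate. As a hedged illustration (this is a standard measure, not necessarily the one used in the study), a minimal sketch:

```python
import math

def sway_path_length(cop_samples):
    """Total path length of the center of pressure (COP) trajectory:
    the sum of distances between consecutive (x, y) samples."""
    return sum(math.hypot(x1 - x0, y1 - y0)
               for (x0, y0), (x1, y1) in zip(cop_samples, cop_samples[1:]))
```

Larger values indicate more postural sway, so comparing this measure across 2D and 3D visual-flow conditions quantifies their effect on balance.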

These results will allow us to go further in medical applications and to design an immersive virtual visual environment for the diagnosis and treatment of balance disorders. To this end, we obtained in 2009 a clinical research grant from the DIRC (Direction de la recherche clinique et de l'innovation du CHU Grenoble Nord) to build a clinical experimental setup in order to validate the concept.
