Section: New Results
Modeling, editing and processing geometry
Participants : Sébastien Barbier, Adrien Bernhardt, Georges-Pierre Bonneau, Marie-Paule Cani, François Faure, Sahar Hassan, Franck Hétroy, Olivier Palombi, Adeline Pihuit, Damien Rohmer, Jamie Wither.
Multiresolution geometric modeling with constraints
This work is done in collaboration with Stefanie Hahmann from LJK; a collaboration on this topic is also under way with Prof. Gershon Elber from Technion. The purpose of this research is to allow complex nonlinear geometric constraints in multiresolution geometric modeling. This year we published two papers. The first is dedicated to the preservation of the volume enclosed by a multiresolution B-spline tensor-product surface; it is illustrated in Figure 4 and was published in the journal CAGD (Elsevier). The second is a state of the art on the modeling of smooth complex surfaces interpolating an arbitrary mesh, published at ASME 2008.
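For reference, the volume quantity that such a constraint preserves can be expressed with the divergence theorem; the sketch below uses notation assumed for this report rather than taken from the paper, for a closed tensor-product B-spline surface S(u,v) over a parametric domain Ω:

```latex
% Enclosed volume of a closed tensor-product B-spline surface S(u,v),
% expressed via the divergence theorem (illustrative notation):
V(S) \;=\; \frac{1}{3} \iint_{\Omega}
  S(u,v) \cdot \left( \frac{\partial S}{\partial u}
  \times \frac{\partial S}{\partial v} \right) \, du \, dv
```

Keeping V(S) constant while detail coefficients are edited is what makes the constraint nonlinear in the control points.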
The layered volumetric model we previously developed for virtual clay achieves the desired plausibility in real time, but raises the problem of providing intuitive interaction tools. Our recent work therefore focused on developing new interfaces for interacting with virtual clay, in particular interfaces that enable intuitive control of a virtual hand interacting with the clay.
We first developed a prototype where a soft ball attached to a force-feedback device (a Phantom) serves as an avatar for the virtual clay. The ball is augmented with force sensors to ease the control of the deformable virtual hand that sculpts the clay. This prototype enabled us to identify several issues, such as the restriction to small gestures and the fatigue caused by the constant pressure needed to hold an object.
We then developed a new interaction device, called the Hand Navigator, as it is an extension of a space ball. The user's hand rests on the space ball, which is augmented with cavities equipped with a force sensor for each finger. The user controls virtual hand gestures by applying directional pressure forces with his/her fingers, and controls the overall motion and orientation of the virtual hand with the palm. This both enables gestures that are not limited in scope and avoids fatigue, since the user's hand remains in a rest position (see Figure 5). We conducted a series of user studies for validation.
This project resulted in an INRIA patent registered in July 2008, and in a pre-industrialization project funded in 2008-2009 by the GRAVIT incubator (see Section 7.3). Note that this project is part of our contribution to the PPF "Multimodal interaction" (see Section 8.3.1).
Sketch-based 3D modeling is currently attracting more and more attention, as it is recognized as a fast and intuitive way of creating digital content. We are exploring this technique from two different viewpoints:
A first class of methods directly infers free-form 3D shapes from arbitrary progressive sketches, without any a priori knowledge of the objects being represented. In collaboration with Loic Barthes from the IRIT lab in Toulouse, we studied the use of convolution surfaces for achieving this goal: the user paints a 2D projection of the shape; a skeleton (or medial axis), taking the form of a set of branching curves, is reconstructed from this 2D region and converted into a closed-form convolution surface whose radius varies along the skeleton. The resulting 3D shape can be extended by sketching over it from a different viewpoint, while the blending operator adapts its action so that no detail is blurred in the process. This work was supported by a direct industrial contract with the firm Axiatec (see Section 7.4), leading to the development of the MaTISSe software (see Section 5.5). It will be continued within Adeline Pihuit's PhD project, co-supervised by Olivier Palombi and Marie-Paule Cani and focusing on the use of sketch-based interfaces for the interactive teaching of anatomy.
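To picture the underlying principle, the following is a minimal sketch of a convolution-style field summed over skeleton samples, with the shape defined as an iso-level of that field. The kernel, the radius parameterization and the function names are assumptions for this example, not the published formulation:

```python
import numpy as np

def convolution_field(p, skeleton_pts, radii):
    """Sum kernel contributions of all skeleton samples at point p.

    skeleton_pts: (N, 3) array of points sampled along the skeleton curves.
    radii:        (N,) desired radius at each sample; a larger radius
                  yields a larger contribution (illustrative choice).
    """
    d2 = np.sum((skeleton_pts - p) ** 2, axis=1)
    # Cauchy-like kernel: smooth, decays with squared distance
    return np.sum(radii ** 2 / (1.0 + d2) ** 2)

def inside(p, skeleton_pts, radii, iso=0.5):
    """A point belongs to the shape when the field exceeds the iso-value."""
    return convolution_field(p, skeleton_pts, radii) > iso
```

Sketching from a new viewpoint would add samples to `skeleton_pts`; because contributions are summed, the pieces blend smoothly by construction.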
Other sketching techniques are able to create a complex shape from a single sketch, using some a priori knowledge of the object being drawn to infer the missing 3D information. This was the topic of Jamie Wither's PhD thesis, advised by Marie-Paule Cani and defended on November 24, 2008. In 2008, we introduced the ideas of inferring the structure of a complex shape from its silhouette, of combining sketch-based control with the procedural modeling of details, and of sketching fine details locally and extending them to the whole shape. These ideas were exploited for the sketch-based modeling of clouds (see Figure 6) and of trees (submitted for publication). The latter work was conducted within the ANR Natsim project (see Section 8.2.2), in collaboration with the INRIA project-team Virtual Plants (AMAP lab).
Geometrical methods for skinning character animations
Skinning, which consists in computing how the vertices of a character mesh (representing its skin) move with the skeleton bones during a deformation, is currently the most tedious part of the skeleton-based character animation process. We propose new geometrical tools to enhance current methods. First, we developed a new skinning framework inspired by the mathematical concept of an atlas of charts: we segment a 3D model of a character into overlapping parts, each of them anatomically meaningful (e.g., a region for each arm, leg, etc., with overlaps around joints); during deformation, the position of each vertex in an overlapping area is then updated according to the movement of the neighboring bones. This work (submitted for publication) was done in collaboration with Boris Thibert from the MGMI team of the LJK, Cédric Gérot and Annick Montanvert from the GIPSA-Lab in Grenoble, and Lin Lu from the University of Hong Kong.
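The per-vertex update in an overlap region can be pictured as a weighted blend of the rigid transforms of the neighboring bones. The sketch below assumes simple linear blending with homogeneous 4x4 matrices; the actual chart-based parameterization of our framework is more involved:

```python
import numpy as np

def blend_vertex(rest_pos, bone_transforms, weights):
    """Deformed position of a vertex lying in an overlap region.

    rest_pos:        (3,) rest-pose position of the vertex.
    bone_transforms: list of 4x4 rigid transforms of the neighboring bones.
    weights:         per-bone influence weights, assumed to sum to 1.
    """
    p = np.append(rest_pos, 1.0)  # homogeneous coordinates
    blended = sum(w * (T @ p) for w, T in zip(weights, bone_transforms))
    return blended[:3]
```

Vertices outside overlaps would follow a single bone rigidly (a single weight equal to 1).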
Second, we developed, in collaboration with Stefanie Hahmann from the MGMI team of the LJK, a post-correction method for preserving volume in the standard smooth-skinning pipeline. As usual, the character is defined by a skin mesh in some rest pose and an animation skeleton. At each animation step, skin deformations are first computed using standard skeletal subspace deformation (SSD). Our method then corrects the result using a set of local deformations which model the fold-over-free, constant-volume behavior of soft tissues. This is done geometrically, without the need for any physically-based simulation. To make the method easily applicable, we also provide automatic ways to extract the local regions where volume is to be preserved and to compute adequate skinning weights, both based on the character's morphology.
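A building block of any volume-preserving correction is measuring the volume enclosed by the deformed skin mesh. A minimal sketch via the divergence theorem (summing signed tetrahedra to the origin), assuming a closed, consistently oriented triangle mesh; this is illustrative and is not the paper's correction scheme itself:

```python
import numpy as np

def mesh_volume(vertices, triangles):
    """Volume enclosed by a closed, consistently oriented triangle mesh.

    Each triangle (i, j, k) contributes the signed volume of the
    tetrahedron it forms with the origin; the signs cancel so that
    only the enclosed volume remains.
    """
    vol = 0.0
    for i, j, k in triangles:
        vol += np.dot(vertices[i], np.cross(vertices[j], vertices[k])) / 6.0
    return abs(vol)
```

Comparing this value between the rest pose and the current pose gives the volume deficit that the local corrective deformations must compensate.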
Detection and quantification of brain aneurysms
Aneurysms are excrescences on blood vessels. They can rupture, letting blood propagate outside the vessel, which often leads to death. In some cases, the blood clots fast enough for the patient to survive; however, a neurosurgeon or a neuroradiologist must then intervene very quickly in order to repair the vessel before the aneurysm ruptures again.
The purpose of this research is to help neurosurgeons and neuroradiologists plan surgery, by giving them quantitative information about the size, shape and position of aneurysms. This work is part of the PhD of Sahar Hassan. In 2008, Sahar enhanced the method developed during her Master's thesis by adding partial graph matching to locate aneurysms on the cerebral vascular tree and give accurate information on the location and size of aneurysm necks. This work has been evaluated by a radiologist at the Grenoble University Hospital. We plan to publish the method in the medical literature.
Reconstruction of the Linear Nucleus of the Medulla using interactive implicit modeling from tissue sections
Sectioning of biological material remains the only way to reveal both the three-dimensional organization and the function of brain structures: noninvasive techniques such as current 3D imaging explorations cannot provide the necessary resolution. Our goal is to produce the most realistic models possible of neuro-anatomical structures. We propose an interactive technique to reconstruct 3D shapes from manually contoured 2D structures using skeletal implicit surfaces. The fields generated in each slice are blended over all sections to build the final 3D surface. The anatomist can selectively edit contours to improve the relevance of the final model according to histological information contained in the tissue sections. Figure 7 shows the Linear Nucleus (in green) as an example.
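The slice-blending idea can be sketched as follows: each contoured slice contributes a field that decays with in-plane distance to the contour and with distance to the slice plane, and the per-slice fields are summed before extracting an iso-surface. The kernels and parameters below are assumptions for illustration, not the implemented skeletal implicit primitives:

```python
import numpy as np

def slice_field(p, contour_pts, z_slice, sigma=1.0):
    """Field of one contoured slice at 3D point p.

    contour_pts: (N, 2) in-plane samples of the manually drawn contour.
    z_slice:     height of the slice plane.
    The field decays with squared in-plane distance to the nearest
    contour sample, attenuated by distance to the slice plane.
    """
    d2 = np.min(np.sum((contour_pts - p[:2]) ** 2, axis=1))
    dz = p[2] - z_slice
    return np.exp(-d2) * np.exp(-dz ** 2 / (2.0 * sigma ** 2))

def total_field(p, slices):
    """Blend: the fields of all slices are summed; the final surface
    is an iso-level of this total field."""
    return sum(slice_field(p, contour, z) for contour, z in slices)
```

Because the blend is a plain sum, editing one contour only changes the field locally, which is what allows the anatomist to refine the model slice by slice.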
Participant : Franck Hétroy.
This work is done in collaboration with Carlos Andujar, Pere Brunet and Alvar Vinacua from the Universitat Politecnica de Barcelona, Spain. The purpose is to propose an efficient method to create 2-manifold meshes from real data, obtained as polygon soups with combinatorial, geometrical and topological noise. We propose to use a voxel structure called a discrete membrane, together with morphological operators, to compute possible topologies, from which the user then chooses.
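As a toy illustration of the morphological operators involved, the sketch below implements a 6-connected closing (dilation followed by erosion) on a boolean voxel grid, which fills one-voxel cracks and holes; the actual discrete-membrane structure and the operators we use are more elaborate:

```python
import numpy as np

def dilate(vox):
    """6-connected dilation of a boolean voxel grid (zero-padded borders)."""
    out = vox.copy()
    for ax in range(vox.ndim):
        for s in (1, -1):
            shifted = np.zeros_like(vox)
            src = [slice(None)] * vox.ndim
            dst = [slice(None)] * vox.ndim
            if s == 1:
                src[ax], dst[ax] = slice(0, -1), slice(1, None)
            else:
                src[ax], dst[ax] = slice(1, None), slice(0, -1)
            shifted[tuple(dst)] = vox[tuple(src)]
            out |= shifted
    return out

def erode(vox):
    # Erosion by duality; note: voxels outside the grid are treated
    # as foreground, which is acceptable for this sketch.
    return ~dilate(~vox)

def closing(vox):
    """Morphological closing = dilation then erosion."""
    return erode(dilate(vox))
```

Applying operators of increasing size would produce the family of candidate topologies among which the user selects.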