ALICE is one of the four teams in the Image Geometry and Computation group in INRIA Nancy Grand-Est.

ALICE is a project-team in Computer Graphics. The fundamental aspects of this domain concern the interaction of *light* with the *geometry* of the objects. The lighting problem consists in designing accurate and efficient *numerical simulation* methods for the light transport equation. The geometric problem consists in developing new solutions to *transform and optimize geometric representations*. Our original approach to both issues is to restate the problems in terms of *numerical optimization*. We try to develop solutions that are *provably correct*, *numerically stable* and *scalable*.

By provably correct, we mean that some properties/invariants of the initial object need to be preserved by our solutions.

By numerically stable, we mean that our solutions need to be resistant to the degeneracies often encountered in industrial data sets.

By scalable, we mean that our solutions need to be applicable to data sets of industrial size.

To reach these goals, our approach consists in transforming the physical or geometric problem into a numerical optimization problem, studying the properties of the objective function, and designing efficient minimization algorithms. To properly construct the required discretizations, we use the formalisms of finite element modeling, geometry and topology. We are also interested in fundamental concepts that were recently introduced into the geometry processing community, such as discrete exterior calculus, spectral geometry processing and sampling theory.

The main applications of our results concern scientific visualization. We develop cooperations with researchers and industrial partners, who apply our general solutions to various domains, including CAD, industrial design, oil exploration and plasma physics. Our solutions are distributed in both open-source software (Graphite, OpenNL, CGAL) and industrial software (Gocad, DVIZ).

Bruno Lévy obtained a Starting Grant from the European Research Council for his project GoodShape. The aim of the project is to study the problem of the optimal sampling of shapes. The project was one of the 300 funded projects, out of 10000 submissions (Europe-wide, across all disciplines of science). The 1.1 million Euro grant will fund four Ph.D. theses, two post-docs, one software developer and one international workshop over the duration of the project (5 years, August 2008 - August 2013).

This year, we proposed a new method to design tangent vector fields over surfaces, a fundamental problem in geometry processing, texture generation and surface fitting. Our work was published in ACM TOG and presented at Siggraph 2008. The main contribution is to exactly control the topology of the orientation field, by extending the Poincaré-Hopf theorem to orientations with symmetries, and by explicitly representing the topological degrees of freedom. Our method was also used to produce some figures of the book *Topology and Its Applications* by William F. Basener (Rochester Institute of Technology, School of Mathematical Sciences).

Computer Graphics is a quickly evolving domain of research. In the last few years, both acquisition techniques (e.g., range laser scanners) and computer graphics hardware (the so-called GPUs, for Graphics Processing Units) have made considerable advances. However, as shown in Figure , despite these advances, fundamental problems still remain open. For instance, a scanned mesh composed of a hundred million triangles cannot be used directly in real-time visualization or complex numerical simulation. To design efficient solutions for these difficult problems, ALICE studies two fundamental issues in Computer Graphics:

the representation of the objects, i.e., their geometry and physical properties;

the interaction between these objects and light.

Historically, these two issues have been studied by independent research communities. However, we think that they share a common theoretical basis. For instance, multi-resolution and wavelets were mathematical tools used by both communities. We develop a new approach, which consists in studying geometry and lighting from the *numerical analysis* point of view. In our approach, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. This type of formulation leads to more efficient algorithms. Our long-term research goal is to find a formulation that permits a unified treatment of geometry and illumination over this geometry.

Geometry processing emerged recently (in the mid-90's) as a promising strategy to solve the geometric modeling problems encountered when manipulating meshes composed of hundreds of millions of elements. Since a mesh may be considered to be a *sampling* of a surface - in other words, a *signal* - the *digital signal processing* formalism was a natural theoretical background for this subdomain (see e.g., ). Researchers of this domain then studied different aspects of this formalism applied to geometric modeling.

Although many advances have been made in the geometry processing area, important problems still remain open. Even if shape acquisition and filtering are much easier than 30 years ago, a scanned mesh composed of a hundred million triangles cannot be used directly in real-time visualization or complex numerical simulation. For this reason, automatic methods to convert those large meshes into higher-level representations are necessary. However, these automatic methods do not exist yet. For instance, the pioneer Henri Gouraud often mentions in his talks that the *data acquisition* problem is still open. Malcolm Sabin, another pioneer of the "Computer Aided Geometric Design" and "Subdivision" approaches, mentioned during several conferences of the domain that constructing the optimal control mesh of a subdivision surface so as to approximate a given surface is still an open problem. More generally, converting a mesh model into a higher-level representation, consisting of a set of equations, is a difficult problem for which no satisfying solution has been proposed. This is one of the long-term goals of international initiatives, such as the AIM@SHAPE European network of excellence.

Motivated by gridding applications for finite element modeling for oil and gas exploration, in the frame of the Gocad project, we started studying geometry processing in the late 90's and contributed to this area at the early stages of its development. We developed the LSCM method (Least Squares Conformal Maps) in cooperation with Alias Wavefront . This method has become the de-facto standard in automatic unwrapping, and was adopted by several 3D modeling packages (including Maya and Blender). We experimented with various applications of the method, including normal mapping, mesh completion and light simulation .
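To give the flavor of the least-squares formulation, here is a toy sketch in Python (not the team's implementation). A conformal map sends each triangle to a similar triangle, so for local complex coordinates P the unknown coordinates z over a triangle (i, j, k) satisfy z_i (P_k - P_j) + z_j (P_i - P_k) + z_k (P_j - P_i) = 0; an LSCM-style method minimizes the squared residuals of this identity with two pinned vertices:

```python
import numpy as np

# Toy LSCM-style conformal parameterization (a sketch of the idea only).
def lscm(P, triangles, pinned):          # P: complex local coords per vertex
    n = len(P)
    free = [i for i in range(n) if i not in pinned]
    col = {v: c for c, v in enumerate(free)}
    A = np.zeros((len(triangles), len(free)), dtype=complex)
    b = np.zeros(len(triangles), dtype=complex)
    for row, (i, j, k) in enumerate(triangles):
        # triangle area, used to weight the conformality residual
        area = abs(np.imag(np.conj(P[j] - P[i]) * (P[k] - P[i]))) / 2
        for v, w in ((i, P[k] - P[j]), (j, P[i] - P[k]), (k, P[j] - P[i])):
            w = w / np.sqrt(area)
            if v in pinned:
                b[row] -= w * pinned[v]  # move pinned terms to the RHS
            else:
                A[row, col[v]] += w
    z, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.array([pinned[v] if v in pinned else z[col[v]] for v in range(n)])

# Two triangles of a flat unit square: the map should be the identity.
P = np.array([0, 1, 1 + 1j, 1j], dtype=complex)
uv = lscm(P, [(0, 1, 2), (0, 2, 3)], {0: 0 + 0j, 1: 1 + 0j})
```

Since the input is already flat and the pins agree with it, the recovered (u, v) coordinates coincide with the input positions.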

However, classical mesh parameterization requires partitioning the considered object into a set of topological disks. For this reason, we designed a new method (Periodic Global Parameterization) that generates a continuous set of coordinates over the object . We also showed the applicability of this method by proposing the first algorithm that converts a scanned mesh into a Spline surface automatically . Both algorithms are demonstrated in Figure .

We are still not fully satisfied with these results, since the method remains quite complicated. We think that a deeper understanding of the underlying theory is likely to lead to both efficient and simple methods. For this reason, we studied last year several ways of discretizing partial differential equations on meshes, including Finite Element Modeling and Discrete Exterior Calculus. This year, we also explored Spectral Geometry Processing and Sampling Theory (more on this below).

Numerical simulation of light means solving for light intensity in the "Rendering Equation", an integral equation modeling energy transfers (or light *intensity* transfers). The Rendering Equation was first formalized by Kajiya , and is given by:

I(x, x') = g(x, x') [ ε(x, x') + ∫_S ρ(x, x', x'') I(x', x'') dx'' ]

where I(x, x') is the intensity of light traveling from point x' to point x, g(x, x') is the geometric (visibility) term, ε(x, x') is the emitted intensity, and ρ(x, x', x'') is the reflectance term.

In addition, these methods are challenged by more and more complex materials (see Figure ), which need to be taken into account in the simulation. The simple diffuse Lambert law has been replaced with much more complex reflection models. The goal is to create synthetic images that no longer have a synthetic aspect, in particular when human characters are considered.

One of the difficulties is finding efficient ways of evaluating the visibility term. This is typically a Computational Geometry problem, i.e., a matter of finding the right combinatorial data structure (the *visibility complex*), studying its complexity and deriving algorithms to construct it. To deal with this issue, several teams (including VEGAS, ARTIS and REVES) study the visibility complex.

The other terms of the Rendering Equation cannot be solved analytically in general, and many different numerical resolution methods have been used. The main difficulty of the discipline is that each time a new physical effect is to be simulated, the numerical resolution methods need to be adapted; in the worst case, it is even necessary to design a new ad-hoc numerical resolution method. For instance, in Monte-Carlo based solvers and in recent Photon-Mapping based methods, several sampling maps are used, one for each effect (one map for the diffuse part of lighting, another map for caustics, etc.). As a consequence, the discipline becomes a collection of (sometimes mutually exclusive) techniques, where each technique can only simulate a specific lighting effect.
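The Monte-Carlo flavor of these solvers can be illustrated on the simplest possible case (a toy sketch, unrelated to any specific renderer): estimating the hemispherical cosine integral, whose exact value is π, by uniform direction sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
# For directions sampled uniformly on the hemisphere (by solid angle),
# cos(theta) is uniformly distributed on [0, 1].
cos_theta = rng.random(n)
# The pdf of uniform hemisphere sampling is 1/(2*pi), hence the 2*pi factor.
estimate = 2 * np.pi * np.mean(cos_theta)   # exact value: pi
```

Each new lighting effect changes the integrand and the sampling density, which is exactly why such solvers tend to accumulate one sampling map per effect.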

The other difficulty is the classical problem of satisfying two somewhat antinomic objectives at the same time. On the one hand, we want to simulate complex physical phenomena (subsurface scattering, polarization, interferences, etc.), responsible for subtle lighting effects. On the other hand, we want to visualize the result of the simulation in real-time.

We first experimented with finite-element methods in parameter space, and developed the *Virtual Mesh* approach and a parallel solution mechanism for the associated hierarchical finite element formulation. The initial method was dedicated to scenes composed of quadrics. We combined this method with our geometry processing methods to improve the visualization .

One of our goals is now to design new representations of lighting coupled with the geometric representation. These representations of lighting need to be general enough to be easily extended when multiple physical phenomena are to be simulated. Moreover, we want to be able to use these representations of lighting in the context of real-time visualization. Our original approach to these problems consists in finding efficient function bases to represent the geometry and the physical attributes of the objects. We first experimented with this approach on the problem of image vectorization . We think that our dynamic function basis formulation is likely to lead to efficient light simulation algorithms. The originality is that the resulting optimization algorithm solves for approximation and sampling all together. Developing such an algorithm is the main goal of our ERC GoodShape project.

After having introduced the *geometry processing* and *light simulation* scientific domains, we now present the principles that we use to design a common mathematical framework that can be applied to both domains. Early approaches to geometry processing and light simulation were driven by a Signal Processing approach: the solution of the problem is obtained by applying a *filtering scheme* multiple times. This is for instance the case of the mesh smoothing operator defined by Taubin in his pioneering work . Recent approaches still inherit from this background. Even if the general trend moves towards Numerical Analysis, much work in geometry processing still studies the coefficients of the gradient of the objective function *one by one*. This intrinsically refers to *descent* methods (e.g., Gauss-Seidel), which are not the most efficient, and which do not converge in general when applied to meshes larger than a certain size (in practice, the limit appears to be around 10^{4} facets).
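The gap between coefficient-by-coefficient descent and a global solve can be seen on a toy 1D Laplacian system (illustrative sketch; the matrix and sizes are arbitrary):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import spsolve

n = 500
# 1D Poisson (Laplacian-like) system, a stand-in for a smoothing energy.
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# Coefficient-by-coefficient relaxation (Gauss-Seidel): converges, but the
# smooth error modes decay extremely slowly.
x_gs = np.zeros(n)
Ad = A.toarray()
for _ in range(100):
    for i in range(n):
        x_gs[i] = (b[i] - Ad[i, :i] @ x_gs[:i] - Ad[i, i + 1:] @ x_gs[i + 1:]) / Ad[i, i]

# Global sparse solve of the same system: exact in one step.
x_direct = spsolve(A, b)
err_gs = np.linalg.norm(x_gs - x_direct)   # still enormous after 100 sweeps
```

After 100 full sweeps the relaxed solution remains far from the minimizer, while the direct sparse solve reaches it immediately; the effect worsens quickly with system size.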

In the approach we develop in the ALICE project-team, geometry processing and light simulation are systematically restated as a (possibly non-linear and/or constrained) functional optimization problem. As a consequence, studying the properties of the minimum is easier: the minimizer of a multivariate function can be more easily characterized than the limit of multiple applications of a smoothing operator. This simple remark makes it possible to derive properties (existence and uniqueness of the minimum, injectivity of a parameterization, and independence from the mesh).

Besides helping to characterize the solution, restating the geometric problem as a numerical optimization problem has another benefit. It makes it possible to design efficient numerical optimization methods, instead of the iterative relaxations used in classic methods.

Richard Feynman (Nobel Prize in physics) mentions in his lectures that physical models are a "smoothed" version of reality. The global behavior and interaction of multiple particles is captured by physical entities of a larger scale. According to Feynman, the striking similarities between the equations governing various physical phenomena (e.g., Navier-Stokes in fluid dynamics and Maxwell in electromagnetism) are an illusion that comes from the way the phenomena are modeled and represented by "smoothed" larger-scale values (i.e., *fluxes* in the case of fluids and electromagnetism). Note that those larger-scale values do not necessarily correspond directly to a physical intuition; they can reside in a more abstract "computational" space. For instance, representing lighting by the coefficients of a finite element is a first step in this direction. More generally, our approach consists in trying to get rid of the limits imposed by the classic view of the existing solution mechanisms. The traditional approaches are based on an intuition driven by the laws of physics. Instead of trying to mimic the physical process, we try to restate the problem as an abstract numerical computation problem, on which more sophisticated methods can be applied (a plane flies like a bird, but it does not flap its wings). We try to consider the problem from a computational point of view, and focus on the link between the numerical simulation process and the properties of the solution of the Rendering Equation. Note also that the numerical computation problems yielded by our approach lie in a high-dimensional space (millions of variables). To ensure that our solutions scale up to scientific and industrial data from the real world, our strategy is to always use the best formalism and the best tools. The formalisms comprise Finite Element theory, differential geometry and topology; the tools comprise recent hardware, such as GPUs (Graphics Processing Units), with the associated highly parallel algorithms. To implement our strategy, we develop algorithmic, software and hardware architectures, and distribute these solutions in both open-source software (Graphite) and industrial software (Gocad, DVIZ).

Besides developing new solutions for geometry processing and numerical light simulation, we aim at applying these solutions to real-size scientific and industrial problems. In this context, scientific visualization is our main application domain. With the advances in acquisition techniques, the size of the data sets to be processed increases faster than Moore's law, and represents a scientific and technical challenge. To ensure that our processing and visualization algorithms scale up, we develop a combination of algorithmic, software and hardware architectures. Namely, we are interested in hierarchical function bases, and in parallel computation on GPUs (Graphics Processing Units).

Our developments in parallel processing and GPU programming permit our geometry processing and light simulation solutions to scale up, and to handle real-scale data from other research and industry domains. The following applications are developed within the MIS (Modelization, Interaction, Simulation) and AOC (Analysis, Optimization and Control) programs, which are supported by the "Contrat de Plan État-Région Lorraine".

This application domain is led by the Gocad consortium, created by Prof. Mallet and now headed by Guillaume Caumon. The consortium involves 48 universities and most of the major oil and gas companies. ALICE contributes to Gocad with numerical geometry and visualization algorithms for oil and gas engineering. The currently explored domains are the construction of complex and dynamic structural models, the exploration of extremely large seismic volumes, and drilling evaluation and planning. The solutions that we develop are transferred to the industry through Earth Decision Sciences. Several Ph.D. students were co-advised by researchers in GOCAD and ALICE, such as Laurent Castanié (defended in 2006, on novel visualization methods, published in IEEE Visualization ), Luc Buatois (defended this year, on high-performance numerical solvers on Graphics Processing Units), and more recently (last year) Thomas Viard, on the visualization of data with uncertainties.

Protein docking is a fundamental biological process that links two proteins. This link is typically defined by interactions between two large zones of the protein boundaries. Visualizing the interfaces where these interactions take place, based on 3D protein structures, helps to understand the process, to estimate the quality of docking simulation results, and to classify interactions in order to predict docking affinity between classes of interacting zones. Our developments take place in the VMD software (in cooperation with ORPAILLEUR and the Beckman Institute at the University of Illinois). In the frame of his Ph.D., Matthieu Chavent studied new means of visualizing molecular surfaces, which play an important role in better understanding the nano-scale mechanisms of life.

Graphite is a research platform for computer graphics, 3D modeling and numerical geometry. It comprises all the main research results of our "geometry processing" group. Data structures for cellular complexes, parameterization, multi-resolution analysis and numerical optimization are the main features of the software. Graphite has been publicly available since October 2003, and has been hosted by Inria GForge since September 2008 (1000 downloads in two months). Graphite is one of the common software platforms used in the frame of the European Network of Excellence AIM@SHAPE. This year, we developed and distributed new plugins for computing the Manifold Harmonics Transform that we introduced in , a plugin for our ARDECO vectorization algorithm , and a plugin for coupling the CGAL library with Graphite.

OpenNL is a standalone library for numerical optimization, especially well-suited to mesh processing. The API is inspired by the OpenGL graphics API, which makes the learning curve easy for computer graphics practitioners. The included demo program implements our LSCM mesh unwrapping method. It was integrated into Blender by Brecht Van Lommel and others to create automatic texture mapping methods. More recently, they implemented our ABF++ method (developed in cooperation with the University of British Columbia). Blender will shortly include the more recent linear ABF, which we developed in cooperation with Rhaleb Zayer (who was at that time with the Max-Planck-Institut für Informatik). Our mesh unwrapping algorithms have now become the de-facto standard for mesh unwrapping in several industrial mesh modeling packages (including Maya, Silo, Catia). OpenNL is extended with two specialized modules:
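The OpenGL-like "immediate mode" style of such an API can be sketched as follows. This is a self-contained Python mock of the *style* only; the class and method names are illustrative and are not OpenNL's actual C functions:

```python
import numpy as np

# Rows of a least-squares system are declared one coefficient at a time
# (begin_row / coefficient / end_row), then the system is solved globally.
class LeastSquaresBuilder:
    def __init__(self, n_vars):
        self.n, self.rows, self.rhs = n_vars, [], []
    def begin_row(self):
        self._row, self._b = np.zeros(self.n), 0.0
    def coefficient(self, i, a):
        self._row[i] += a
    def right_hand_side(self, b):
        self._b = b
    def end_row(self):
        self.rows.append(self._row)
        self.rhs.append(self._b)
    def solve(self):
        A, b = np.array(self.rows), np.array(self.rhs)
        x, *_ = np.linalg.lstsq(A, b, rcond=None)
        return x

# Overdetermined but consistent system: x0 = 1, x1 = 2, x0 + x1 = 3.
nl = LeastSquaresBuilder(2)
for coeffs, b in ([(0, 1.0)], 1.0), ([(1, 1.0)], 2.0), ([(0, 1.0), (1, 1.0)], 3.0):
    nl.begin_row()
    for i, a in coeffs:
        nl.coefficient(i, a)
    nl.right_hand_side(b)
    nl.end_row()
x = nl.solve()
```

The appeal of this style is that application code never manipulates the matrix directly: it simply "emits" equations, mirroring how OpenGL code emits vertices.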

CGAL parameterization package: this software library, developed in cooperation with Pierre Alliez and Laurent Saboret, is a CGAL package for mesh parameterization. It includes a special, generic version of OpenNL, compatible with CGAL's requirements of genericity.

Concurrent Number Cruncher: this software library extends OpenNL with parallel computing on the GPU, implemented using the CUDA API, as explained in our publication .

This year, we enhanced OpenNL by improving the management of sparse matrices (a 2x speedup compared to our previous implementation). We also interfaced it with GiNaC, a symbolic computation library. This allows computing the stiffness and mass matrices directly from the equation of the function basis and the operator.

Intersurf is a plugin of the VMD (Visual Molecular Dynamics) software. VMD is developed by the Theoretical and Computational Biophysics Group at the Beckman Institute at the University of Illinois. The Intersurf plugin has been released with the official version of VMD since release 1.8.3. It provides surfaces representing the interaction between two groups of atoms, and colors can be added to represent interaction forces between these groups of atoms. We plan to include in this package the new results obtained this year in molecular surface visualization by Matthieu Chavent.

Gocad is a 3D modeler dedicated to geosciences. It was developed by a consortium headed by Jean-Laurent Mallet, in the Nancy School of Geology. Gocad is now commercialized by Earth Decision Sciences (formerly T-Surf), a company which started as a spin-off of the project-team. Gocad is used by all major oil companies (Total-Fina-Elf, ChevronTexaco, Petrobras, etc.), and has become a de facto standard in geo-modeling. Last year, Laurent Castanié's work (CIFRE Earth Decision Sciences, defended in 2006) was successfully integrated in the VolumeExplorer plugin of Gocad. Luc Buatois's work on GPU-based numerical solvers will be integrated in Gocad's flow simulator.

We continued our work on Geometry Processing with the strategy of considering all three levels of abstraction in parallel, namely *formalization* (specification using functional analysis and topology), *discretization* (relations between the continuous problem and discretized linear models), and finally *implementation* (how to implement efficient solvers for these linear problems using modern hardware). This year's realizations for these three levels of abstraction are described in the following three paragraphs.

Many algorithms in texture synthesis, non-photorealistic rendering (hatching), or re-meshing require defining the orientation of some features (texture, hatches or edges) at each point of a surface. This is also the case of the quad-remeshing algorithms that we developed ( and ). In early works, tangent vector (or tensor) fields were used to define the orientation of these features. Extrapolating and smoothing such fields is usually performed by minimizing an energy composed of a smoothness term and a data fitting term. These approaches make it possible to smooth existing fields (such as curvature directions) and to interactively introduce directional constraints, but they fail to control the topology of the resulting field.
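In schematic form, such extrapolation/smoothing energies combine the two terms as follows (the notation is ours, not the exact formulation of the cited papers; C is the set of constrained vertices with prescribed directions c_i, and μ balances the two terms):

```latex
E(\mathbf{v}) \;=\;
\underbrace{\sum_{(i,j)\in \mathrm{edges}} \|\mathbf{v}_i - \mathbf{v}_j\|^2}_{\text{smoothness}}
\;+\;
\mu \underbrace{\sum_{i \in C} \|\mathbf{v}_i - \mathbf{c}_i\|^2}_{\text{data fitting}}
```

Minimizing E yields a smooth field that follows the constraints, but nothing in this energy constrains where singularities appear, which is precisely the limitation discussed above.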

We have proposed a structure called N-symmetry direction fields that makes the field topology explicit through a set of integer variables. This makes it possible to design direction fields with directional constraints that always respect the user-prescribed topology. This set of variables corresponds to the coefficients of a cohomology basis of the field rotation. On the one hand, controlling the topology makes it possible to have few singularities, even in the presence of high frequencies (fine details) in the surface geometry. On the other hand, the user has to explicitly specify all singularities, which can be a tedious task.
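The topological constraint behind this control is, in schematic form, the Poincaré-Hopf theorem extended to N-symmetry fields: on a closed surface S, each singularity carries an index that is a multiple of 1/N, and the indices must sum to the Euler characteristic:

```latex
\sum_{s \,\in\, \mathrm{singularities}} \frac{k_s}{N} \;=\; \chi(S),
\qquad k_s \in \mathbb{Z}
```

Fixing the integers k_s therefore pins down the admissible singularity configurations before any smoothing takes place.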

From a user's point of view, a better direction field design algorithm would let the singularities emerge naturally from the direction extrapolation and smoothing (as in previous approaches), while still providing some control over the topology. We have proposed such a weak control of the topology by smoothing the influence of the geometric details on the emergence of singularities. The idea is to restate the objective function such that the optimization algorithm does not try to minimize the part of the field curvature that is due to the Gaussian curvature of the surface. Some results are shown in Figure . This work has been accepted pending revision in ACM Transactions on Graphics.

With the increasing importance of numerical optimization in geometry processing, matrix algebra is becoming central to many applications in the field. Unfortunately, this importance does not translate into new approaches for the conception and realization of algorithmic solutions which still rely on classical mesh traversal data structures. This often leads to unnecessary duplication of data and redundancy in operations as several data structures need to work in concert. We advocate that sparse matrix data structures can play a key role both in mesh traversal and numerical computing on surfaces. In this way, parallelism for mesh processing comes at no additional cost as it can take advantage of readily available parallel matrix packages. Furthermore, we demonstrate that the functionality of classical mesh data structures can be recovered using sparse matrix algebra. Thinking within the sparse matrix formalism often offers an elegant and fast alternative for performing geometric computations on surface meshes.
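As a minimal illustration of this idea (a sketch, not the paper's implementation): on a two-triangle mesh, one-ring neighborhoods fall out of a single sparse matrix product, with no explicit traversal data structure.

```python
import numpy as np
from scipy.sparse import csr_matrix

# Two triangles sharing an edge (a square split along its diagonal).
triangles = np.array([[0, 1, 2], [0, 2, 3]])
n_v, n_t = 4, len(triangles)

# Vertex-to-triangle incidence matrix M: M[v, t] = 1 iff v belongs to t.
rows, cols = triangles.ravel(), np.repeat(np.arange(n_t), 3)
M = csr_matrix((np.ones(3 * n_t), (rows, cols)), shape=(n_v, n_t))

# Vertex adjacency via M M^T: entry (i, j), i != j, counts the triangles
# shared by i and j, so a positive entry means "one-ring neighbours".
A = (M @ M.T).toarray()
one_ring_of_0 = sorted(j for j in range(n_v) if j != 0 and A[0, j] > 0)
# A[0, 2] == 2: vertices 0 and 2 share two triangles, i.e. an interior edge.
```

Because the traversal is now a matrix product, it parallelizes for free through any parallel sparse matrix package, which is exactly the point made above.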

In the frame of the Ph.D. of Luc Buatois, we continued working on efficient parallel numerical solvers on the GPU. We generalized our CNC (Concurrent Number Cruncher) implementation, based on the CUDA programming language for NVidia GPUs, as explained in our publication . The CNC is a highly optimized parallel sparse conjugate gradient solver that uses a sparse block-compressed row format to make optimal use of the GPU's memory. To our knowledge, this is the first general-purpose sparse linear solver for Graphics Processing Units. We successfully applied it to geometry processing problems (mesh fairing and mesh parameterization). Our CNC outperforms leading-edge CPU implementations of the same algorithms by up to a factor of 10 for significant sizes of linear systems. The implementation is available as an open-source package. Luc Buatois defended his Ph.D. this year , including this result and previous ones on the visualization of unstructured grids.
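For reference, the (unpreconditioned) conjugate gradient iteration that such a solver parallelizes looks as follows on the CPU; this is the textbook algorithm with a dense matrix for brevity, not the CNC's sparse GPU code:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for symmetric positive-definite A."""
    x = np.zeros_like(b)
    r = b - A @ x          # residual
    p = r.copy()           # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])   # small SPD test system
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

The dominant cost per iteration is the matrix-vector product `A @ p`, which is why a GPU-friendly sparse storage format (block-compressed rows in the CNC) drives the overall performance.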

In the frame of our GoodShape project, we study geometry processing problems from the specific point of view of computing an optimal function basis. To reach this goal, we explore different strategies and revisit them with the formalism of numerical optimization. As a means of computing an efficient function basis, we study Centroidal Voronoi Tessellations and spectral methods, as described in the following two paragraphs. The resulting function basis will be used as the fundamental tool for the new light simulation methods that we are trying to develop (see below).

Optimization techniques for faster Centroidal Voronoi Tessellation (CVT): CVT is an essential tool in many scientific fields that can be used to compute the optimal sampling of a given signal. In Figure , we show a CVT adapted to a background density function, computed by the algorithm mentioned below. For large-scale problems, the popular Lloyd relaxation is too slow to reach a local minimum, due to its linear convergence rate. Our previous work shows that limited-memory quasi-Newton methods (for instance, L-BFGS) are better suited, preserving sparsity and simplicity in our CVT program. However, the Hessian approximated by L-BFGS captures only limited curvature information, and the method only has an r-linear convergence rate in theory. We investigated the state of the art in large-scale optimization techniques, such as Truncated Newton, Modified Newton and other LMQN methods, for CVT computation, and integrated a preconditioned L-BFGS method which uses the true Hessian and is faster than the traditional L-BFGS method. The goal of our work is to develop super-linear optimization methods for fast CVT computation, both in convergence rate and in computational time. Our algorithm is described in an article (accepted pending revisions in ACM Transactions on Graphics).
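Lloyd relaxation itself is easy to state; here is a 1D toy version with uniform density (the team's setting is 2D/3D with quasi-Newton acceleration, so this only illustrates the fixed-point iteration and its slow, linear convergence):

```python
import numpy as np

def lloyd_1d(seeds, iters):
    """Lloyd relaxation for a CVT on [0, 1] with uniform density."""
    x = np.sort(np.asarray(seeds, dtype=float))
    for _ in range(iters):
        # Voronoi cell boundaries are midpoints between consecutive seeds.
        mid = (x[:-1] + x[1:]) / 2
        lo = np.concatenate(([0.0], mid))
        hi = np.concatenate((mid, [1.0]))
        x = (lo + hi) / 2      # move each seed to its cell centroid
    return x

# Badly clustered seeds converge to the uniform CVT (2i+1)/(2k).
x = lloyd_1d([0.05, 0.1, 0.6, 0.62], 500)
```

Hundreds of iterations are needed even for four seeds on a segment, which is the behavior that motivates replacing Lloyd with (quasi-)Newton minimization of the CVT energy.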

Isotropic and anisotropic CVT under the Euclidean metric are already widely used in computer graphics and numerical simulation. Under other metrics, the behavior and usage of CVT are not as well known and studied; for instance, CVT under a suitable metric can be a good way to produce a nice quadrilateral tiling. Inspired by the concept of CVT, we are pursuing nice 2D quadrilateral meshing and 3D hexahedral meshing with the aid of CVT. We propose a novel generalized CVT concept based on a convex distance function and on directional and stretch information. The present work includes approximate Voronoi tessellation computation via graphics hardware, dynamic seed insertion and removal, and quad and hexahedral cell generation.

We continued the research program on spectral geometry processing methods that we started in 2006. As a result, we designed the Manifold Harmonics Transform , a formalism to compute on 3D meshes the equivalent of the Fourier transform (or, to be more precise, the Discrete Cosine Transform). This makes it possible to completely port the signal processing framework from the setting of 2D images to the more complicated setting of manifolds of arbitrary genus and curvature. The approach is also described in the Ph.D. thesis of Bruno Vallet , which he defended this year. We also made the software implementation of this tool available in our Graphite platform. We are now starting to investigate non-linear spectral mesh processing, shown in Figure and outlined in the next paragraph.
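The idea of the transform can be illustrated on the simplest "manifold", a cycle graph, whose Laplacian eigenvectors play the role of the Fourier basis (a sketch only; on a real mesh the cotangent Laplacian of the surface is used instead):

```python
import numpy as np

# Laplacian of a cycle graph with n vertices (a circulant matrix).
n = 64
I = np.eye(n)
L = 2 * I - np.roll(I, 1, axis=0) - np.roll(I, -1, axis=0)
eigvals, eigvecs = np.linalg.eigh(L)     # the "harmonics" of the cycle

t = np.arange(n)
signal = np.sin(2 * np.pi * t / n)                     # smooth component
noisy = signal + 0.3 * np.sin(2 * np.pi * 20 * t / n)  # high-freq ripple

coeffs = eigvecs.T @ noisy               # forward transform
coeffs[8:] = 0.0                         # keep the 8 lowest harmonics
filtered = eigvecs @ coeffs              # inverse transform
```

Low-pass filtering in this basis removes the high-frequency ripple exactly and keeps the smooth component, which is the mesh analogue of DCT-based image filtering.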

With the ever-increasing drive towards highly detailed meshes, the cost of computing solutions to mesh editing problems often becomes prohibitive. It is therefore desirable to reduce the size of the original problem for increased efficiency. In this context, we investigate the potential of modal analysis for interactive deformation using the underlying shell model description. By encoding the relationship between input (force excitation) and output (deformation response) in terms of the frequency response function, the deformation problem reduces to representing the deformation in terms of the most significant eigenmodes of the original surface. In this framework, the mesh vertices enjoy six degrees of freedom, and the behavior of the deformation is controlled through material properties assigned to the mesh as well as the combination of bending and membrane effects encoded in the shell model. For the discretization, we take advantage of a generic approach which allows the construction and evaluation of several deformation models without much programming effort. The governing deformation modes are the significant modes associated with the generalized eigenvalue problem for the stiffness and mass matrices. As mesh editing might require large displacements, it is imperative to extend the modal analysis to the nonlinear case, where the stiffness varies throughout the deformation process. Besides direct editing of surface meshes, we plan to investigate the potential of these techniques in animation reconstruction from video, by analyzing the optical flow field.
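The core computation is the generalized eigenproblem K φ = λ M φ for the stiffness and mass matrices; here is a minimal sketch on a 3-DOF spring chain (not a shell model), showing how a load is reconstructed by modal superposition:

```python
import numpy as np
from scipy.linalg import eigh

# Stiffness and mass of a tiny fixed-fixed spring chain (unit parameters).
K = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])
M = np.eye(3)

# Generalized eigenproblem K phi = lambda M phi; eigh returns ascending
# eigenvalues and M-orthonormal mode shapes.
eigvals, modes = eigh(K, M)

# Static deformation under a load f, once directly and once by summing
# the modal contributions (phi^T f / lambda) phi.
f = np.array([0.0, 1.0, 0.0])
u_full = np.linalg.solve(K, f)
u_modal = modes @ ((modes.T @ f) / eigvals)
```

With all modes kept the two solutions coincide; model reduction consists in truncating the sum to the few lowest-frequency modes, trading accuracy for interactivity.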

Light simulation is a very active topic in the computer graphics community. In the frame of his Ph.D. (started in Oct. 2008), Vincent Nivoliers studies a dynamic basis formulation of the problem. Among the methods used to obtain satisfactory results, radiosity aims at finding an approximate solution to the general light transport equation. The formulation of this problem fits well into the dynamic function basis framework, which could be used to quickly find both a good sampling of the scene and the best approximation on this sampling. This method would avoid the use of discontinuity meshing, and provide a lighting solution without requiring hierarchical sampling. The illumination of a scene can be stated as an integral equation. The general solution of this equation cannot be computed in closed form; the usual method is therefore to restrict the problem to a specific function space that both approximates the general L^2 function space of the solution and has a simple basis on which to project. Most approaches to the problem use hierarchical function bases, to refine the solution where needed and to compute large-scale interactions with fewer coefficients. In the dynamic function basis formalism, the function basis changes during the optimization step to fit the solution and enhance the accuracy of the approximation.
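The projection step can be illustrated on a toy 1D problem. The target function and the Gaussian basis below are hypothetical; a dynamic basis method would additionally move and reshape the basis functions during optimization to reduce the residual.

```python
import numpy as np

# L2-best approximation of a target "illumination" function on a fixed
# basis, by solving the normal equations G c = b, where G_ij = <phi_i, phi_j>
# and b_i = <phi_i, f> (inner products approximated on a dense grid).
x = np.linspace(0.0, 1.0, 1000)
f = np.exp(-10 * (x - 0.3) ** 2)          # hypothetical target function

# Hypothetical fixed basis: a few Gaussian bumps.
centers = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
Phi = np.exp(-20 * (x[:, None] - centers[None, :]) ** 2)

dx = x[1] - x[0]
G = Phi.T @ Phi * dx                      # Gram matrix <phi_i, phi_j>
b = Phi.T @ f * dx                        # <phi_i, f>
c = np.linalg.solve(G, b)                 # basis coefficients
approx = Phi @ c                          # best approximation in the basis
```

In the dynamic setting, the `centers` (and widths) would themselves become optimization variables, traded off against the coefficients `c`.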

Last year, we introduced
*geometry textures*, a novel geometric representation for surfaces based on height maps. The visualization is done through a GPU ray casting algorithm applied to the whole object. At rendering time, the fine-scale details (mesostructures) are reconstructed while preserving the original quality. Visualizing surfaces with geometry textures allows a natural LOD behavior. This year, we explored new applications of geometry textures, and published them in
. We extended the application of geometry textures beyond their initial motivation (mesostructure visualization), presenting three new applications: rendering of solid models, visualization of geological surfaces, and surface smoothing.
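The core of height-map ray casting can be sketched on the CPU (our actual algorithm runs on the GPU; the height function, domain, and fixed-step marching below are illustrative simplifications).

```python
import numpy as np

def heightfield(x, y):
    """Hypothetical mesostructure height map over the patch [0,1]^2."""
    return 0.1 * (np.sin(8 * x) * np.cos(8 * y) + 1.0) / 2.0

def raycast(origin, direction, n_steps=512, t_max=3.0):
    """Fixed-step ray marching: return the first sample below the height map."""
    o = np.asarray(origin, float)
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)
    for t in np.linspace(0.0, t_max, n_steps):
        p = o + t * d
        if 0.0 <= p[0] <= 1.0 and 0.0 <= p[1] <= 1.0:
            if p[2] <= heightfield(p[0], p[1]):
                return p          # approximate hit point on the mesostructure
    return None                   # ray missed the textured patch

hit = raycast(origin=[0.5, 0.5, 1.0], direction=[0.0, 0.0, -1.0])
```

A production ray-caster would refine the hit with a binary search between the last two samples and adapt the step count to screen-space size, which is what gives the natural LOD behavior.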

We continued our work on the visualization of large-scale CAD models for industrial structures. In the specific case of the oil and gas industry, structures comprise a huge number of pipes. We developed techniques to recover the equations of cylinders, cones and toric sections from triangle soups, and a GPU-oriented ray-caster for high-quality and efficient rendering of these structures. We published the method in . Some results are shown in Figure .
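One classical ingredient of such primitive recovery can be sketched as follows: the unit normals of a cylinder's lateral surface all lie in the plane orthogonal to its axis, so the axis direction can be estimated as the least-significant principal direction of the normal distribution. The synthetic noisy normals below stand in for normals gathered from a real triangle soup; this is an illustration of the principle, not our published method.

```python
import numpy as np

rng = np.random.default_rng(1)
axis = np.array([0.0, 0.0, 1.0])                 # ground-truth cylinder axis

# Synthetic face normals of a z-aligned cylinder, plus triangle-soup noise.
theta = rng.uniform(0, 2 * np.pi, 200)
normals = np.stack(
    [np.cos(theta), np.sin(theta), np.zeros_like(theta)], axis=1)
normals += 0.01 * rng.standard_normal(normals.shape)

# The right-singular vector with the smallest singular value of the normal
# matrix is the direction least represented by the normals: the axis.
_, _, vt = np.linalg.svd(normals, full_matrices=False)
est_axis = vt[-1]
```

Radius and apex recovery (for cones and toric sections) then reduce to small least-squares problems in the frame defined by the estimated axis.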

We continued our work on interactive molecule visualization. Our system visualizes the so-called MSS (Molecular Skin Surface). Our approach is based on a GPU ray-caster that directly uses the equation of the MSS (a piecewise quadric surface), rather than discretizing it into triangles (Figure -left). This year, we improved the efficiency of the algorithm and published an article in the Journal of Molecular Graphics . We also implemented an efficient version of a specific global illumination effect (screen-space ambient occlusion), as shown in Figure -right. This was done in the frame of the Ph.D. of Mathieu Chavent and the training period of Cécile Poisot.
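The basic operation of such a quadric ray-caster, intersecting a ray with a quadric x^T Q x = 0 in homogeneous coordinates, can be sketched as follows; a unit sphere stands in for one quadric piece of the MSS, and our actual implementation runs in a GPU fragment shader.

```python
import numpy as np

def ray_quadric(o, d, Q):
    """Nearest intersection parameter t of ray o + t*d with x^T Q x = 0.

    Substituting the homogeneous ray point into the quadric yields a scalar
    quadratic a t^2 + b t + c = 0 (assumes a nondegenerate quadratic term).
    """
    o = np.append(np.asarray(o, float), 1.0)   # homogeneous point
    d = np.append(np.asarray(d, float), 0.0)   # homogeneous direction
    a = d @ Q @ d
    b = 2.0 * (o @ Q @ d)
    c = o @ Q @ o
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                            # ray misses the quadric
    return (-b - np.sqrt(disc)) / (2 * a)      # nearest root

# Unit sphere x^2 + y^2 + z^2 - 1 = 0 written as a 4x4 quadric matrix.
Q = np.diag([1.0, 1.0, 1.0, -1.0])
t = ray_quadric(o=[0.0, 0.0, -3.0], d=[0.0, 0.0, 1.0], Q=Q)  # t = 2.0
```

Per-pixel evaluation of this closed-form intersection is what lets the ray-caster render the exact surface without triangulation.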

The company Earth Decision Sciences (formerly T-Surf) develops and commercializes the Gocad modeler. Gocad is a 3D modeler dedicated to geosciences. The company was initially created as a start-up of the National School of Geology and of members of the ISA and ALICE project-teams. It now has 200 employees in seven countries (France, United States, Brazil, Dubai, Canada, ...). It was recently acquired by the Paradigm company.

The ScalableGraphics company was created in January 2007 by Xavier Cavin (an ALICE researcher on secondment). Its objective is to provide high-performance visualization solutions based on graphics PC clusters. The DViz software is based on industrialized results from the research on high-performance visualization done in ALICE.

Our proposal
*Geometric Intelligence* on Geometry Processing was selected by Microsoft Research Cambridge (in the frame of the Microsoft call for proposals
*tools for advancing science*). This year, in the frame of this project, we developed advanced visualization algorithms for molecular graphics, symmetry detection algorithms, and vectorization techniques.

The Gocad software is developed in the context of a consortium that includes more than forty universities and thirty oil and gas companies around the world. This software is dedicated to modeling and visualizing the underground. ALICE studies the mathematical aspects of geo-modeling, and develops efficient numerical algorithms to solve the underlying optimization problems. The cooperation is formalized by several co-advised Ph.D. theses (Laurent Castanié, Luc Buatois, Thomas Viard), and by courses on numerical optimization given by ALICE researchers at the school of geology. Guillaume Caumon (head of the Gocad consortium) is an external collaborator of the ALICE project-team.

We have signed Non-Disclosure Agreements with ATI and NVidia. We experiment with their new APIs to implement high-performance GPGPU computations, i.e., using the graphics board as a high-performance numerical computation engine (Luc Buatois).

In the frame of the MIS program (Modeling, Interaction and Simulation) of the CPER (“Contrat de Plan État-Région Lorraine”), we coordinate the MOVIS action, with participants from ALICE, ScalableGraphics, ORPAILLEUR, and Gocad. The goal of this action is to design new algorithms for modeling and visualizing both industrial and manufactured objects. Regarding visualization, in 2008 we continued to develop algorithms for visualizing molecular surfaces, industrial structures, and detailed objects. We also made our GPU linear solver more efficient and more general.

In the frame of the AOC program (Analysis, Optimization and Control) of the CPER (“Contrat de Plan État-Région Lorraine”), we participate in the “swimmer” action, coordinated by Marius Tucsnak (CORIDA project-team). The goal of this action is to simulate and visualize the complex fluid-solid interactions caused by a swimming fish. Last year (2007), we designed a new software library for extending MATLAB, as a module of our OpenNL library. This library, currently under development, will allow the user to easily implement finite element solvers for coupled fluid-solid dynamics. This year, we enhanced it by improving the management of sparse matrices (a 2x speedup over our previous implementation). We also interfaced GiNaC, a formal computation library. This allows computing the stiffness and mass matrices directly from the equations of the basis functions and the operator. Our final goal is to test the validity of our approach by implementing a 3D Navier-Stokes solver with solid-fluid interactions.
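The idea of deriving element matrices from symbolic basis functions can be sketched in Python, with sympy standing in for the C++ formal computation library; the 1D linear element below is illustrative only, not the library's actual interface.

```python
import sympy as sp

# Symbolic derivation of the mass and stiffness matrices of a 1D linear
# finite element on [0, h], directly from the shape functions: the same
# kind of computation the formal-computation interface automates.
x, h = sp.symbols('x h', positive=True)
phi = [1 - x / h, x / h]                 # linear shape functions

n = len(phi)
# Mass matrix:      M_ij = integral of phi_i * phi_j over the element.
M = sp.Matrix(n, n, lambda i, j: sp.integrate(phi[i] * phi[j], (x, 0, h)))
# Stiffness matrix: K_ij = integral of phi_i' * phi_j' over the element.
K = sp.Matrix(n, n, lambda i, j: sp.integrate(
    sp.diff(phi[i], x) * sp.diff(phi[j], x), (x, 0, h)))

# M = (h/6) * [[2, 1], [1, 2]],   K = (1/h) * [[1, -1], [-1, 1]]
```

Changing the operator or the basis (e.g. higher-order elements) only changes the symbolic expressions, which is the point of computing the matrices formally rather than hand-coding them.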

We work in cooperation with the Gocad group. The Ph.D. theses of L. Buatois and T. Viard are co-advised by the ENSG/Gocad (Nancy School of Geology) and ALICE.

L. Alonso is secretary of the national AGOS association of INRIA.

The GoodShape project (Numerical Geometric Abstraction: from bits to equations), funded by the European Research Council, involves several fundamental aspects of 3D modeling and computer graphics. GoodShape takes a new approach to the classic, essential problem of sampling, i.e., the digital representation of objects in a computer. This new approach proposes to simultaneously consider the problem of approximating the solution of a partial differential equation and the optimal sampling problem. The proposed approach, based on the theory of numerical optimization, is likely to lead to new algorithms that are more efficient than existing methods. Possible applications are envisioned in reverse engineering and oil exploration.
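The coupling of sampling and optimization can be illustrated by its simplest instance, 1D Lloyd relaxation, which moves each sample to the centroid of its Voronoi cell and thereby decreases a quantization energy. This is a rough sketch of the principle, not the project's actual algorithms.

```python
import numpy as np

def lloyd_1d(samples, n_iters=50, resolution=10_000):
    """Lloyd relaxation on [0, 1] with uniform density.

    Each iteration assigns dense grid points to their nearest sample
    (a discrete Voronoi diagram) and moves every sample to the mean
    (centroid) of its cell.
    """
    x = np.linspace(0.0, 1.0, resolution)       # dense evaluation grid
    s = np.sort(np.asarray(samples, float))
    for _ in range(n_iters):
        owner = np.argmin(np.abs(x[:, None] - s[None, :]), axis=1)
        s = np.array([x[owner == i].mean() for i in range(len(s))])
    return s

samples = lloyd_1d([0.05, 0.1, 0.2, 0.9])
# After relaxation the samples are nearly uniform on [0, 1].
```

Replacing the uniform density with one derived from a PDE solution is, in spirit, how sampling and approximation become a single optimization problem.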

Greg Turk's team and ALICE belong to two different scientific communities: texture synthesis for Greg Turk and geometry processing for ALICE. We noticed that, at a fundamental level, both communities are interested in the same type of problems, i.e., optimizing locally-defined criteria. In texture synthesis, the criterion is the similarity to a reference image, while in geometry processing, it is defined by discretized differential equations. In Greg Turk's team, the optimizations are performed by stochastic approaches (Monte Carlo algorithms, ...), while the specific properties of the criteria optimized by ALICE (smoothness, convexity) allow using efficient numerical methods (conjugate gradient, Newton, etc.). We believe that major contributions could be achieved by combining both approaches. This is illustrated by our first joint work: an interactive texturing tool based on a stochastic approach to define the texture, and a geometry processing approach to define the mapping of this texture onto a surface. This first success leads us to believe that the coupling of the stochastic and numerical approaches could be made at a more fundamental level. Our ambition is therefore to define a novel unified framework to optimize complex energy functionals. Since this new framework will include both stochastic and numerical aspects, we expect important advances in geometry processing and in scientific visualization.

Feb. 4-5, 2008, Julien Tierny (LIFL) visits us and gives a seminar on the Reeb skeleton.

Feb. 12, 2008, Younis Hijazi (University of Kaiserslautern) visits us and gives a seminar on interval arithmetic in visualization and computer graphics.

Feb. 28-29, 2008, Greg Turk and Chris Wojtan (Georgia Tech) visit us.

March 3-4, 2008, G. Drettakis and S. Lefebvre (INRIA-Reves) visit us. Sylvain Lefebvre gives a seminar on tile trees.

August 26, 2008, Jean-Michel Dischler and G. Gilet (LSIIT / Strasbourg) visit us.

October 8, 2008, Dominique Bechmann (LSIIT / Strasbourg) visits us and gives a seminar on topologically-based modeling.

D. Sokolov teaches “Numerical analysis for computer sciences” (L3) and “Logics and computing models” (M1) at Nancy1 university.

L. Buatois teaches “C2I” (basics of computers) at Nancy2 university.

B. Lévy teaches “Numerical Algorithms” at the ENSG (School of Geology - INPL).

B. Lévy was program co-chair of the ACM Symposium on Solid and Physical Modeling.

B. Lévy was member of the program committee of ACM SIGGRAPH, IEEE Visualization, EUROGRAPHICS, ACM/EG Symposium on Geometry Processing, IEEE Shape Modeling International and Pacific Graphics.

Members of the team attended ACM SIGGRAPH, EUROGRAPHICS, ACM SPM, IEEE SMI, and ISVC.

B. Lévy attended the workshop Rendez-vous de l'INRIA, towards an EU framework for technology transfer.

B. Lévy attended the ERC conference at the Collège de France.

B. Lévy attended a reception organized by Valérie Pécresse (minister of research and higher education) for ERC Starting Grant grantees, and participated in a roundtable with the grantees organized by François Fillon (prime minister) and Valérie Pécresse.

B. Lévy was a member of the Ph.D. committees of Julien Tierny (LIFL-Lille), Pierre Kraemer (LSIIT-Strasbourg), Marc Fournier (LSIIT-Strasbourg), and Guilhem Dupuiy (Pau).

B. Lévy gave an invited tutorial at the European Conference on Computer Vision (ECCV), an invited talk at the Stony Brook Modeling Week on Next Generation Geometry Processing, and an invited talk at the IEEE Shape Modeling International (SMI) minisymposium on Spectral Geometry Processing.

Nov. 14-17, 2008: for the “Fête de la science”, V. Nivoliers and B. Lévy showed demonstrations and gave talks on geometry processing.

Dec. 4-5, 2008: B. Lévy gave a poster presentation at the welcome reception for new INRIA researchers.