Research carried out by the Geometrica project team is dedicated to Computational Geometry and Topology and follows three major directions: (a) mesh generation and geometry processing; (b) topological and geometric inference; (c) data structures and robust geometric computation. The overall objective of the project-team is to give computational geometry and topology solid mathematical and algorithmic foundations, to provide solutions to key problems, and to validate theoretical advances through extensive experimental research and the development of software packages that may serve as steps toward a standard for reliable and effective geometric computing. Most notably, Geometrica, together with several partners in Europe, plays a prominent role in the development of cgal, a large library of computational geometry algorithms.

A new Inria research team called TITANE, devoted to the geometric modeling of 3D environments, is being created; its creation is expected in 2013.

Best Paper Award for "The Simplex Tree: An Efficient Data Structure for General Simplicial Complexes" at ESA 2012.
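The simplex tree of the award-winning paper stores each simplex as a path in a trie over sorted vertex labels. The following is a minimal illustrative sketch of the idea in Python (our naming, not the paper's or any library's implementation), supporting insertion with all faces and membership queries:

```python
from itertools import combinations

class SimplexTree:
    """A minimal simplex tree: a trie over sorted vertex labels.
    Each path from the root spells a simplex."""
    def __init__(self):
        self.root = {}

    def insert(self, simplex):
        # Insert the simplex together with all of its faces.
        verts = sorted(simplex)
        for k in range(1, len(verts) + 1):
            for face in combinations(verts, k):
                node = self.root
                for v in face:
                    node = node.setdefault(v, {})

    def contains(self, simplex):
        node = self.root
        for v in sorted(simplex):
            if v not in node:
                return False
            node = node[v]
        return True

    def num_simplices(self):
        def count(node):
            return sum(1 + count(child) for child in node.values())
        return count(self.root)

st = SimplexTree()
st.insert([0, 1, 2])          # a triangle: 3 vertices, 3 edges, 1 face
print(st.num_simplices())     # 7
print(st.contains([0, 2]))    # True
print(st.contains([1, 3]))    # False
```

One node per simplex yields memory linear in the total number of simplices, while queries on a k-simplex take O(k log n) time.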

Meshes are becoming commonplace in a number of applications ranging from engineering to multimedia through biomedicine and geology. For rendering, the quality of a mesh refers to its approximation properties. For numerical simulation, a mesh is not only required to faithfully approximate the domain of simulation, but also to satisfy size and shape constraints. The elaboration of algorithms for automatic mesh generation is a notoriously difficult task, as it involves numerous geometric components: complex data structures and algorithms, surface approximation, and robustness as well as scalability issues. The recent trend to reconstruct domain boundaries from measurements adds even further hurdles. Armed with our experience on triangulations and algorithms, and with components from the cgal library, we aim at devising robust algorithms for 2D, surface, 3D and anisotropic mesh generation. Our research in mesh generation primarily focuses on the generation of simplicial meshes, i.e. triangular and tetrahedral meshes. We investigate both greedy approaches based upon Delaunay refinement and filtering, and variational approaches based upon energy functionals and their associated minimizers.
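To give a flavor of the variational approach, Lloyd relaxation alternates between computing the Voronoi cells of the mesh sites and moving each site to its cell's centroid. In one dimension with uniform density this has a simple closed form; the toy sketch below (ours, far simpler than an actual mesh generator) converges to a uniform distribution of sites:

```python
def lloyd_1d(points, a=0.0, b=1.0, iterations=2000):
    """One-dimensional Lloyd relaxation on [a, b]: each site moves to the
    centroid (here the midpoint) of its Voronoi cell."""
    pts = sorted(points)
    for _ in range(iterations):
        # Voronoi cell boundaries are the midpoints between consecutive sites.
        bounds = [a] + [(p + q) / 2 for p, q in zip(pts, pts[1:])] + [b]
        pts = [(lo + hi) / 2 for lo, hi in zip(bounds, bounds[1:])]
    return pts

# Four badly distributed sites converge to 1/8, 3/8, 5/8, 7/8.
sites = lloyd_1d([0.05, 0.1, 0.12, 0.9])
```

In 2D and 3D the same fixed-point iteration minimizes the centroidal Voronoi tessellation energy, which is what drives the quality of variationally optimized meshes.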

The search for new methods and tools to process digital geometry is motivated by the fact that previous attempts to adapt common signal processing methods have led to limited success: shapes are not just another signal but a new challenge to face due to distinctive properties of complex shapes such as topology, metric, lack of global parameterization, non-uniform sampling and irregular discretization. Our research in geometry processing ranges from surface reconstruction to surface remeshing through curvature estimation, principal component analysis, surface approximation and surface mesh parameterization. Another focus is on the robustness of the algorithms to defect-laden data. This focus stems from the fact that acquired geometric data obtained through measurements or designs are rarely directly usable by downstream applications. This generates bottlenecks, i.e., parts of the processing pipeline which are too labor-intensive or too brittle for practitioners. Beyond reliability and theoretical foundations, our goal is to design methods which are also robust to raw, unprocessed inputs.
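A recurring building block in curvature estimation and reconstruction is principal component analysis over a local neighborhood of a point cloud. A self-contained 2D sketch (illustrative only; real pipelines work in 3D with weighted neighborhoods) estimates a normal as the least-significant principal direction, using the closed-form eigendecomposition of the 2x2 covariance matrix:

```python
import math

def estimate_normal_2d(neighbors):
    """Estimate the normal of a 2D point-cloud patch by PCA: the normal is
    the eigenvector of the covariance matrix with the smallest eigenvalue."""
    n = len(neighbors)
    cx = sum(p[0] for p in neighbors) / n
    cy = sum(p[1] for p in neighbors) / n
    a = sum((p[0] - cx) ** 2 for p in neighbors) / n
    b = sum((p[0] - cx) * (p[1] - cy) for p in neighbors) / n
    c = sum((p[1] - cy) ** 2 for p in neighbors) / n
    # Closed-form eigenvalues of the symmetric matrix [[a, b], [b, c]].
    mean, gap = (a + c) / 2, math.hypot((a - c) / 2, b)
    lam_min = mean - gap
    # Eigenvector for lam_min: (lam_min - c, b) solves both rows of
    # (A - lam*I)v = 0; handle the already-diagonal case b == 0.
    if abs(b) > 1e-12:
        vx, vy = lam_min - c, b
    else:
        vx, vy = (0.0, 1.0) if a >= c else (1.0, 0.0)
    norm = math.hypot(vx, vy)
    return vx / norm, vy / norm

# Points along the x-axis with slight noise: the normal should be (0, ±1).
patch = [(i / 10.0, 0.001 * (-1) ** i) for i in range(10)]
nx, ny = estimate_normal_2d(patch)
```

The smallest eigenvalue itself is a standard local indicator of deviation from flatness, which is why the same computation feeds curvature estimators.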

Due to the fast evolution of data acquisition devices and computational power, scientists in many areas are asking for efficient algorithmic tools for analyzing, manipulating and visualizing more and more complex shapes or complex systems from approximate data. Many of the existing algorithmic solutions, which come with few theoretical guarantees, provide unsatisfactory and/or unpredictable results. Since these algorithms take as input discrete geometric data, it is mandatory to develop concepts that are rich enough to robustly and correctly approximate continuous shapes and their geometric properties by discrete models. Ensuring the correctness of geometric estimations and approximations on discrete data is a sensitive problem in many applications.

Since data sets are often represented as point sets in high-dimensional spaces, there is considerable interest in analyzing and processing data in such spaces. Although these point sets usually live in high-dimensional spaces, one often expects them to be located around unknown, possibly non-linear, low-dimensional shapes. These shapes are usually assumed to be smooth submanifolds or, more generally, compact subsets of the ambient space. It is then desirable to infer topological (dimension, Betti numbers,...) and geometric characteristics (singularities, volume, curvature,...) of these shapes from the data. The hope is that this information will help to better understand the underlying complex systems from which the data are generated. In spite of recent promising results, many problems remain open; addressing them requires tight collaboration between mathematicians and computer scientists. In this context, our goal is to contribute to the development of mathematically well-founded and algorithmically efficient geometric tools for data analysis and the processing of complex geometric objects. Our main targeted areas of application include machine learning, data mining, statistical analysis, and sensor networks.
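The most elementary instance of topological inference is estimating the number of connected components (the Betti number β0) of a shape from a finite sample: build a neighborhood graph at some scale r and count its components. A toy sketch of this idea, using union-find:

```python
import math

def betti0_at_scale(points, r):
    """Number of connected components (Betti_0) of the graph connecting
    sample points at pairwise distance <= r."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            if math.dist(points[i], points[j]) <= r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(len(points))})

# Two clusters far apart: one component per cluster at small scale,
# a single component once r bridges the gap.
pts = [(0, 0), (0.1, 0), (0, 0.1), (5, 5), (5.1, 5)]
print(betti0_at_scale(pts, 0.2))   # 2
print(betti0_at_scale(pts, 10))    # 1
```

The dependence of the answer on the scale r is precisely what motivates the persistence-based tools discussed later in this report.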

Geometrica has a large expertise in algorithms and data structures for geometric problems. We pursue efforts to design algorithms that are efficient from a theoretical point of view, but we also put effort into the effective implementation of these results.

In the past years, we made significant contributions to algorithms for computing Delaunay triangulations (which underlie the meshes discussed above). We are still working on the practical efficiency of existing algorithms that compute or exploit classical Euclidean triangulations in 2 and 3 dimensions, but the current focus of our research is on extending triangulations in several new research directions.

One of these directions is the triangulation of non-Euclidean spaces such as periodic or projective spaces, with various potential applications ranging from astronomy to granular material simulation.
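In the periodic setting, the Euclidean metric is replaced by the flat-torus metric, where distances are taken up to translations by the period. A minimal sketch of the minimum-image distance underlying such triangulations (the actual algorithms work with periodic copies of the points rather than with this formula directly):

```python
def torus_dist(p, q, period=1.0):
    """Minimum-image distance between two points of the flat torus
    [0, period)^d: in each coordinate, go the short way around."""
    s = 0.0
    for a, b in zip(p, q):
        d = abs(a - b) % period
        d = min(d, period - d)
        s += d * d
    return s ** 0.5

# Two points near opposite edges of the unit square are close on the torus.
print(torus_dist((0.05, 0.5), (0.95, 0.5)))   # ~0.1, not 0.9
```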

Another direction is the triangulation of moving points, with potential applications to fluid dynamics, where the points represent particles of an evolving physical material, and to variational methods devised to optimize point placement for meshing a domain with high-quality elements.

Increasing the dimension of space is also a stimulating direction of research: triangulating points in medium dimension (say 4 to 15) has potential applications and raises new challenges, namely trading the exponential complexity of the problem in the dimension for effective and practical results in reasonably small dimensions.

On the complexity analysis side, we pursue efforts to analyze algorithms in practical situations involving randomized or stochastic hypotheses. On the algorithm design side, we are looking for new paradigms to exploit parallelism on modern multicore hardware architectures.

Finally, all this work is done while keeping in mind concerns related to effective implementation, practical efficiency and robustness, which have become a background task in all of Geometrica's work.

Modeling 3D shapes is required for all visualization applications where interactivity is a key feature, since the observer can change the viewpoint and get immediate feedback. This interactivity enhances the descriptive power of the medium significantly. For example, visualization of complex molecules helps drug designers to understand their structure. Multimedia applications also involve interactive visualization and include e-commerce (companies can present their products realistically), 3D games, animation and special effects in motion pictures. The uses of geometric modeling also cover the spectrum of engineering, computer-aided design and manufacture applications (CAD/CAM). More and more stages of the industrial development and production pipeline are now performed by simulation, due to the increased performance of numerical simulation packages. Geometric modeling therefore plays an increasingly important role in this area. Another emerging application of geometric modeling with high impact is medical visualization and simulation.

In a broad sense, shape reconstruction consists of creating digital models of real objects from points. Example application areas where such a process is involved are Computer Aided Geometric Design (making a car model from a clay mockup), medical imaging (reconstructing an organ from medical data), geology (modeling underground strata from seismic data), and cultural heritage projects (making models of ancient or fragile objects or sites). The availability of accurate and fast scanning devices has also made the reproduction of real objects more effective, bringing additional fields of application within reach. The members of Geometrica have a long experience in shape reconstruction and have contributed several original methods based upon Delaunay and Voronoi diagrams.

Meshes are the basic tools for scientific computing using finite element methods. Unstructured meshes are used to discretize domains bounded by complex shapes while allowing local refinements. Geometrica contributes to mesh generation of 2D and 3D, possibly curved, domains. Most of our methods are based upon Delaunay triangulations, Voronoi diagrams and their variants. Anisotropic meshes are also investigated. We investigate in parallel both greedy and variational mesh generation techniques. The greedy algorithms consist of inserting vertices into an initial coarse mesh using the Delaunay refinement paradigm, while the variational algorithms consist of minimizing an energy related to the shape and size of the elements. Our goal is to show the complementarity of these two paradigms. Quadrangle surface meshes are also of interest for reverse engineering and geometry processing applications. Our goal is to control the final edge alignment, the mesh sizing and the regularity of the quadrangle tiling.

With the collaboration of
Hervé Brönnimann,
Manuel Caroli,
Pedro Machado Manhães de Castro,
Frédéric Cazals,
Frank Da,
Christophe Delage,
Andreas Fabri,
Julia Flötotto,
Philippe Guigue,
Michael Hemmer,
Samuel Hornus,
Menelaos Karavelas,
Sébastien Loriot,
Abdelkrim Mebarki,
Naceur Meskini,
Andreas Meyer,
Sylvain Pion,
Marc Pouget,
François Rebufat,
Laurent Rineau,
Laurent Saboret,
Stéphane Tayeb,
Jane Tournois,
Radu Ursu, and
Camille Wormser.

cgal is a C++ library of geometric algorithms and data structures.
Its development has been initially funded and further supported by several
European projects (CGAL, GALIA, ECG, ACS, AIM@SHAPE) since 1996. The long term
partners of the project are research teams from the following institutes: Inria
Sophia Antipolis - Méditerranée, Max-Planck Institut Saarbrücken, ETH Zürich,
Tel Aviv University, together with several others. In 2003, cgal became an
Open Source project (under the LGPL and QPL licenses), and it is
commercialized by Geometry Factory, a *Born of Inria* company founded by
Andreas Fabri.

The aim of the cgal project is to create a platform for geometric computing supporting usage in both industry and academia. The main design goals are genericity, numerical robustness, efficiency and ease of use. These goals are enforced by a review of all submissions managed by an editorial board. As the focus is on fundamental geometric algorithms and data structures, the target application domains are numerous: from geological modeling to medical imaging, from antenna placement to geographic information systems, etc.

The cgal library consists of a kernel, a list of algorithmic packages,
and a support
library. The kernel is made of classes that represent elementary
geometric objects (points, vectors, lines, segments, planes,
simplices, isothetic boxes, circles, spheres, circular arcs...),
as well as affine transformations and
a number of predicates and geometric constructions over these objects.
These classes exist in dimensions 2 and 3 (static dimension) and
in arbitrary dimension (dynamic dimension).
A number of packages provide geometric data structures as
well as algorithms. The data structures are polygons, polyhedra,
triangulations, planar maps, arrangements and various search
structures (segment trees, kd-trees...).

Finally, the support library provides random generators and code for interfacing with other libraries, tools, or file formats (ASCII files, Qt or LEDA windows, OpenGL, Open Inventor, PostScript, Geomview...). Partial interfaces with Python, Scilab and the Ipe drawing editor are now also available.

Geometrica is particularly involved in general maintenance, in the arithmetic issues that arise in the treatment of robustness, in the kernel, and in the triangulation packages and their close applications such as alpha shapes and meshes. Three researchers of Geometrica are members of the cgal Editorial Board, whose main responsibilities are controlling the quality of cgal, making decisions about technical matters, and coordinating communication and promotion of cgal.

cgal is about 700,000 lines of code and supports various platforms: GCC (Linux, Mac OS X, Cygwin...), Visual C++ (Windows), Intel C++... A new version of cgal is released twice a year, and it is downloaded about 10000 times a year. Moreover, cgal is directly available as packages for the Debian, Ubuntu and Fedora Linux distributions.

More numbers about cgal: there are now 14 editors on the editorial board, with approximately 20 additional developers. The user discussion mailing list has more than 1000 subscribers, with relatively high traffic of 5-10 mails a day. The announcement mailing list has more than 3000 subscribers.

The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function, called the feature size. This work aims to bound the feature size integral in terms of an easy-to-compute property of a suitable ordering of the point set. The key idea is to consider the pacing of an ordered point set, a measure of the rate of change of the feature size as points are added one at a time. In previous work, Miller et al. showed that if an ordered point set has pacing

Triangle meshes are nearly ubiquitous in computer graphics, and a large body of data structures and geometry processing algorithms based on them has been developed in the literature. At the same time, quadrilateral meshes, especially semi-regular ones, have advantages for many applications, and significant progress has been made in quadrilateral mesh generation and processing over the last several years. In this work, we discuss the advantages and problems of techniques operating on quadrilateral meshes, including surface analysis and mesh quality, simplification, adaptive refinement, alignment with features, parametrization, and remeshing.

We propose a practical method to compute a mesh of the octagon, in the
Poincaré disk, that respects its symmetries. This is obtained by
meshing the Schwarz triangle

OpenVolumeMesh is a data structure able to represent heterogeneous 3-dimensional polytopal cell complexes, general enough to also represent non-manifolds without incurring undue overhead. Extending the idea of half-edge based data structures for two-manifold surface meshes, all faces, i.e. the two-dimensional entities of a mesh, are represented by a pair of oriented half-faces. The concept of using directed half-entities induces an orientation on the meshes in an intuitive and easy-to-use manner. We pursue the idea of encoding connectivity by storing first-order top-down incidence relations per entity, i.e. for each entity of dimension d, a list of links to the respective incident entities is stored. For instance, each half-face as well as its orientation is uniquely determined by a tuple of links to its incident half-edges, and each 3D cell by the set of its incident half-faces. This representation allows for handling non-manifolds as well as mixed-dimensional mesh configurations. No entity is duplicated according to its valence; instead, it is shared by all incident entities in order to reduce memory consumption. Furthermore, an array-based storage layout is used in combination with direct index-based access. This guarantees constant access time to the entities of a mesh. Although bottom-up incidence relations are implied by the top-down incidences, our data structure provides the option to explicitly generate and cache them in a transparent manner. This allows for accelerated navigation in the local neighborhood of an entity. We provide an open-source and platform-independent implementation of the proposed data structure written in C++ using dynamic typing paradigms. The library is equipped with a set of STL-compliant iterators, a generic property system to dynamically attach properties to all entities at run-time, and a serializer/deserializer supporting a simple file format.
Due to its similarity to the OpenMesh data structure, it is easy to use, in particular for those familiar with OpenMesh. Since the presented data structure is compact, intuitive, and efficient, it is suitable for a variety of applications, such as meshing, visualization, and numerical analysis. OpenVolumeMesh is open-source software licensed under the terms of the LGPL.
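The top-down incidence scheme can be sketched in a few lines (an illustrative Python toy with our own names, not OpenVolumeMesh's C++ API): half-entities 2i and 2i+1 are the two orientations of entity i, and each entity of dimension d stores an index list of its incident half-entities of dimension d-1.

```python
class PolyhedralMesh:
    """Sketch of an OpenVolumeMesh-style top-down incidence structure:
    array-based storage, index-based access, orientation via half-entities."""
    def __init__(self):
        self.edges = []   # edge index  -> (vertex, vertex)
        self.faces = []   # face index  -> [half-edge indices]
        self.cells = []   # cell index  -> [half-face indices]

    def add_edge(self, v0, v1):
        self.edges.append((v0, v1))
        return len(self.edges) - 1

    def add_face(self, halfedges):
        self.faces.append(list(halfedges))
        return len(self.faces) - 1

    def add_cell(self, halffaces):
        self.cells.append(list(halffaces))
        return len(self.cells) - 1

    def halfedge_vertices(self, h):
        v0, v1 = self.edges[h // 2]
        return (v0, v1) if h % 2 == 0 else (v1, v0)

    def cell_vertices(self, c):
        # Bottom-up information recovered from the top-down links.
        vs = set()
        for hf in self.cells[c]:
            for he_idx in self.faces[hf // 2]:
                vs.update(self.halfedge_vertices(he_idx))
        return vs

m = PolyhedralMesh()
# A single tetrahedron on vertices 0..3: 6 edges, 4 faces, 1 cell.
e = {frozenset(p): m.add_edge(*p) for p in
     [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]}

def he(a, b):  # half-edge directed from a to b
    i = e[frozenset((a, b))]
    return 2 * i if m.edges[i] == (a, b) else 2 * i + 1

f0 = m.add_face([he(0, 1), he(1, 2), he(2, 0)])
f1 = m.add_face([he(0, 1), he(1, 3), he(3, 0)])
f2 = m.add_face([he(1, 2), he(2, 3), he(3, 1)])
f3 = m.add_face([he(0, 2), he(2, 3), he(3, 0)])
c = m.add_cell([2 * f0, 2 * f1 + 1, 2 * f2, 2 * f3 + 1])
print(sorted(m.cell_vertices(c)))   # [0, 1, 2, 3]
```

Since every entity is an index into a flat array, access is constant-time and nothing is duplicated per incidence, mirroring the memory layout described above.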

In collaboration with Luca Castelli Aleardi (LIX, Palaiseau) and Jarek Rossignac (Georgia Tech).

We consider the problem of designing space-efficient solutions for representing the connectivity information of manifold triangle meshes. Most mesh data structures are quite redundant, storing a large amount of information in order to efficiently support mesh traversal operators. Several compact data structures have been proposed to reduce storage cost while supporting constant-time mesh traversal. Some recent solutions are based on a global re-ordering approach, which implicitly encodes a map between vertices and faces. Unfortunately, these compact representations do not support efficient updates, because local connectivity changes (such as edge contractions, edge flips or vertex insertions) require re-ordering the entire mesh. Our main contribution is a new way of designing compact data structures which can be dynamically maintained. In our solution, we push the limits of the re-ordering approaches further: the main novelty is to allow re-ordering of vertex data (such as vertex coordinates), and to exploit this vertex permutation to easily maintain the connectivity under local changes. We describe a new class of data structures, called Editable SQuad (ESQ), offering the same navigational and storage performance as previous works while supporting local editing in amortized constant time. As far as we know, our solution provides the most compact dynamic data structure for triangle meshes. We propose a linear-time and linear-space construction algorithm, and provide worst-case bounds for storage and time cost.

We present a method for reconstructing surfaces from point sets. The main novelty lies in a structure-preserving approach where the input point set is first consolidated by structuring and resampling the planar components, before reconstructing the surface from both the consolidated components and the unstructured points. The final surface is obtained by solving a graph-cut problem formulated on the 3D Delaunay triangulation of the structured point set, where the tetrahedra are labeled as inside or outside cells. Structuring facilitates the surface reconstruction, as the point set is substantially reduced and the points are enriched with structural meaning related to adjacency between primitives. Our approach departs from the common dichotomy between smooth/piecewise-smooth and primitive-based representations by gracefully combining canonical parts from detected primitives and free-form parts of the inferred shape. Our experiments on a variety of inputs illustrate the potential of our approach in terms of robustness, flexibility and efficiency.

In collaboration with Fernando de Goes and Mathieu Desbrun from Caltech.

We introduce a robust and feature-capturing surface reconstruction and simplification method that turns an input point set into a low triangle-count simplicial complex. Our approach starts with a (possibly non-manifold) simplicial complex filtered from a 3D Delaunay triangulation of the input points. This initial approximation is iteratively simplified based on an error metric that measures, through optimal transport, the distance between the input points and the current simplicial complex, both seen as mass distributions. Our approach exhibits robustness to noise and outliers as well as preservation of sharp features and boundaries (Figure ). Our new feature-sensitive metric between point sets and triangle meshes can also be used as a post-processing tool that, from the smooth output of a reconstruction method, recovers sharp features and boundaries present in the initial point set.

Denoising surfaces is a crucial step in the surface processing pipeline. It is even more challenging when no underlying structure of the surface is known, that is, when the surface is represented as a set of unorganized points. We introduce a denoising method based on *local similarities*. The contributions are threefold: first, we do not denoise the point positions directly but use a low/high frequency decomposition and denoise only the high frequency. Second, we introduce a local surface parameterization which is provably stable. Finally, the method works directly on point clouds, thus avoiding building a mesh of a noisy surface, which is a difficult problem. Our approach denoises a height vector field by comparing the neighborhood of a point with the neighborhoods of other points on the surface (Figure ). It falls into the non-local denoising framework that has been extensively used in image processing, and extends it to unorganized point clouds.
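The non-local principle is easiest to see on a 1D signal (a toy sketch of non-local means with our own parameter names; the paper's method works on height fields over point clouds): each sample is averaged with samples whose surrounding *patches* look similar, wherever they are on the signal.

```python
import math
import random

def nonlocal_means_1d(signal, patch=2, h=0.1):
    """Non-local means: replace each sample by a weighted average of all
    samples, weighted by the similarity of their surrounding patches."""
    n = len(signal)
    out = []
    for i in range(patch, n - patch):
        pi = signal[i - patch:i + patch + 1]
        wsum = vsum = 0.0
        for j in range(patch, n - patch):
            pj = signal[j - patch:j + patch + 1]
            d2 = sum((a - b) ** 2 for a, b in zip(pi, pj)) / len(pi)
            w = math.exp(-d2 / (h * h))   # similar patches get large weight
            wsum += w
            vsum += w * signal[j]
        out.append(vsum / wsum)
    return out

# Noisy step signal: similar patches are pooled from the whole signal,
# so flat regions are denoised without blurring the step.
random.seed(0)
clean = [0.0] * 20 + [1.0] * 20
noisy = [v + random.uniform(-0.1, 0.1) for v in clean]
out = nonlocal_means_1d(noisy)
inner = clean[2:-2]
mse_noisy = sum((a - b) ** 2 for a, b in zip(noisy[2:-2], inner)) / len(inner)
mse_out = sum((a - b) ** 2 for a, b in zip(out, inner)) / len(inner)
```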

In collaboration with Adrien Maglo, Clément Courbet and Céline Hudelot from Ecole Centrale Paris.

We present a new algorithm for the progressive compression of surface polygon meshes. The input surface is decimated by several traversals that generate successive levels of detail through a specific patch decimation operator which combines vertex removal and local remeshing. This operator encodes the mesh connectivity through a transformation that generates two lists of Boolean symbols during face and edge removals. The geometry is encoded with a barycentric error prediction of the removed vertex coordinates. In order to further reduce the size of the geometry and connectivity data, we propose a curvature prediction method and a connectivity prediction scheme based on the mesh geometry. We also include two methods that improve the rate-distortion performance: a wavelet formulation with a lifting scheme and an adaptive quantization technique. Experimental results demonstrate the effectiveness of our approach in terms of compression rates and rate-distortion performance. Our approach compares favorably to compression schemes specialized to triangle meshes.
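The barycentric prediction step can be sketched as follows (hypothetical helper names; the actual codec additionally quantizes and entropy-codes the residual): the removed vertex is predicted as the barycenter of its one-ring neighbors, so only a small residual needs to be stored.

```python
def encode_vertex(removed, ring):
    """Barycentric prediction: predict the removed vertex as the barycenter
    of its one-ring neighbors and keep only the (small) residual."""
    n = len(ring)
    pred = tuple(sum(p[k] for p in ring) / n for k in range(3))
    return tuple(r - p for r, p in zip(removed, pred))

def decode_vertex(residual, ring):
    """Invert the prediction from the same ring, known to the decoder."""
    n = len(ring)
    pred = tuple(sum(p[k] for p in ring) / n for k in range(3))
    return tuple(p + r for p, r in zip(pred, residual))

ring = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
v = (0.5, 0.5, 0.1)
res = encode_vertex(v, ring)          # residual (0.0, 0.0, 0.1)
assert decode_vertex(res, ring) == v  # lossless round trip
```

Because the decoder sees the same simplified mesh as the encoder, the prediction context is shared for free, and only residuals, which are near zero on smooth surfaces, must be transmitted.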

In collaboration with Dominique Attali (Gipsa-lab), Ulrich Bauer (Göttingen Univ.), and André Lieutier (Dassault Systèmes).

We consider the problem of deciding whether the persistent homology group
of a simplicial pair

As a consequence, we show that it is NP-hard to simplify level and
sublevel sets of scalar functions on

In collaboration with Vin de Silva (Pomona College).

We give a self-contained treatment of the theory of persistence modules indexed over the real line. We give new proofs of the standard results. Persistence diagrams are constructed using measure theory. Linear algebra lemmas are simplified using a new notation for calculations on quiver representations. We show that the stringent finiteness conditions required by traditional methods are not necessary to prove the existence and stability of the persistence diagram. We introduce weaker hypotheses for taming persistence modules, which are met in practice and are strong enough for the theory still to work. The constructions and proofs enabled by our framework are, we claim, cleaner and simpler.

In collaboration with Vin de Silva (Pomona College).

We study the properties of the homology of different geometric filtered complexes (such as Vietoris–Rips, Čech and witness complexes) built on top of precompact spaces. Using recent developments in the theory of topological persistence we provide simple and natural proofs of the stability of the persistent homology of such complexes with respect to the Gromov–Hausdorff distance. We also exhibit a few noteworthy properties of the homology of the Rips and Čech complexes built on top of compact spaces.
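In the simplest case, the 0-dimensional persistent homology of a Rips filtration reduces to Kruskal's minimum-spanning-tree algorithm: every point is born at scale 0, and a component dies at the length of the edge that first merges it into another. An illustrative sketch (ours, not the paper's machinery):

```python
import math
from itertools import combinations

def h0_barcode(points):
    """Barcode of H_0 for the Rips filtration of a Euclidean point set:
    process edges by increasing length; each union kills one component."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted((math.dist(points[i], points[j]), i, j)
                   for i, j in combinations(range(n), 2))
    bars = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            bars.append((0.0, d))    # one component dies at scale d
            parent[ri] = rj
    bars.append((0.0, math.inf))     # the last component never dies
    return bars

# Three nearby points and one far outlier: two short bars, one long bar,
# and one infinite bar.
pts = [(0, 0), (1, 0), (0, 1), (10, 10)]
print(h0_barcode(pts))
```

The long finite bar quantifies how separated the outlier is, which is exactly the kind of scale information that the stability results above show to be robust under Gromov–Hausdorff perturbation of the input.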

For points sampled near a compact set

Along the way, we develop new techniques for manipulating and comparing persistence barcodes from zigzag modules. We give methods for reversing arrows and removing spaces from a zigzag. We also discuss factoring zigzags and a kind of interleaving of two zigzags that allows their barcodes to be compared. These techniques were developed to provide our theoretical analysis of the signal-to-noise ratio of Rips-like zigzags, but they are of independent interest as they apply to zigzag modules generally.

In collaboration with Tamal Dey (Ohio State University).

The persistent homology with

In collaboration with Sivaraman Balakrishnan, Alessandro Rinaldo, Aarti Singh and Larry A. Wasserman (Carnegie Mellon University).

Often, high dimensional data lie close to a low-dimensional submanifold and it is of interest to understand the geometry of these submanifolds. The homology groups of a manifold are important topological invariants that provide an algebraic summary of the manifold. These groups contain rich topological information, for instance, about the connected components, holes, tunnels and sometimes the dimension of the manifold. We consider the statistical problem of estimating the homology of a manifold from noisy samples under several different noise models. We derive upper and lower bounds on the minimax risk for this problem. Our upper bounds are based on estimators which are constructed from a union of balls of appropriate radius around carefully selected points. In each case, we establish complementary lower bounds using Le Cam's lemma.

The Vietoris-Rips filtration is a versatile tool in topological data analysis. Unfortunately, it is often too large to construct in full. We show how to construct an

We show that filtering the barycentric decomposition of a Čech complex by the cardinality of the vertices captures precisely the topology of

In collaboration with Primoz Skraba (Ljubljana Univ.) and Amit Patel (Rutgers Univ.).

Using topological degree theory, we present and prove correctness of a fast algorithm for computing the well diagram, a quantitative property, of a vector field on Euclidean space.

In collaboration with Luca Castelli Aleardi and Éric Fusy (LIX, Palaiseau).

We extend the notion of canonical orderings to cylindric triangulations. This allows us to extend the incremental straight-line drawing algorithm of
de Fraysseix et al. to this setting. Our algorithm yields in linear time
a crossing-free straight-line drawing of a cylindric triangulation

In collaboration with Menelaos Karavelas (University of Crete).

In the literature, the generic way to address degeneracies in
computational geometry is the *Symbolic Perturbation* paradigm:
the input is made dependent on some parameter

The usual way of using this approach is what we will call
*Algebraic Symbolic Perturbation* framework.
When the function to be evaluated is a polynomial of the input,
its perturbed version is seen as a polynomial in

We propose to address the handling of geometric degeneracies in a
different way, namely by means of what we call the
*Qualitative Symbolic Perturbation* framework.
We no longer use a single perturbation that must remove all
degeneracies,
but rather a sequence of perturbations, such that the next
perturbation is used only if the previous ones have not removed
the degeneracies.
The new perturbation is considered as *symbolically smaller* than
the previous ones. This approach allows us to use simple elementary
perturbations whose effect can be analyzed and evaluated: (1) by
geometric reasoning instead of algebraic development of the
predicate polynomial in

We apply our framework to predicates used in the computation of Apollonius diagrams in 2D and 3D, as well as the computation of trapezoidal maps of circular arcs.
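The idea can be illustrated on the classical 2D orientation predicate (a toy of our own, far simpler than the Apollonius predicates above): each successive perturbation is analyzed geometrically, by inspecting its first-order effect on the determinant, and is invoked only if the previous ones leave the predicate degenerate.

```python
def orient2d(p, q, r):
    """Sign of the orientation determinant of points p, q, r."""
    d = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (d > 0) - (d < 0)

def orient2d_perturbed(p, q, r):
    """Toy qualitative-perturbation scheme: if the exact predicate is
    degenerate (collinear points), resolve it by a sequence of simple
    symbolic perturbations of r, each tried only if the previous ones
    were neutral."""
    s = orient2d(p, q, r)
    if s != 0:
        return s
    # First perturbation: r -> (r_x, r_y + eps). Its first-order effect on
    # the determinant is +(q_x - p_x) * eps, so the sign is sign(q_x - p_x).
    s = ((q[0] - p[0]) > 0) - ((q[0] - p[0]) < 0)
    if s != 0:
        return s
    # Second, symbolically smaller perturbation: r -> (r_x + eps', r_y).
    # Effect: -(q_y - p_y) * eps'; used only when the first one was neutral.
    return ((p[1] - q[1]) > 0) - ((p[1] - q[1]) < 0)

print(orient2d_perturbed((0, 0), (1, 0), (2, 0)))   # horizontal line: 1
print(orient2d_perturbed((0, 0), (0, 1), (0, 2)))   # vertical line: -1
```

The point of the qualitative framework is that each perturbation's sign is obtained by elementary reasoning, with no algebraic expansion of the full perturbed polynomial.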

In collaboration with Gert Vegter (Johan Bernoulli Institute, Groningen University).

A previous algorithm computed the Delaunay triangulation of the flat torus using a 9-sheeted covering space. We propose a modification of the algorithm using only an 8-sheeted covering space, which allows working with 8 periodic copies of the input points instead of 9. The main interest of our contribution is not only this result, but above all the method itself: this new construction of covering spaces generalizes to Delaunay triangulations of surfaces of higher genus.

We study Delaunay complexes and Voronoi diagrams in the Poincaré ball, a conformal model of hyperbolic space, in any dimension. We elaborate on our earlier work on the space of spheres, giving a detailed description of the algorithms and presenting static and dynamic variants. All proofs are based on geometric reasoning; they do not resort to the analytic formula for the hyperbolic distance. We also study algebraic and arithmetic issues, observing that only rational computations are needed. This allows for an exact and efficient implementation in 2D. All degenerate cases are handled. The implementation will be submitted to the cgal editorial board for future integration into the cgal library.
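Although the proofs above deliberately avoid it, the analytic hyperbolic distance in the Poincaré disk model is handy for illustration (standard formula, not part of the algorithms):

```python
import math

def poincare_dist(u, v):
    """Hyperbolic distance between points u, v of the Poincaré disk
    (|u|, |v| < 1): d = arcosh(1 + 2|u-v|^2 / ((1-|u|^2)(1-|v|^2)))."""
    duv2 = (u[0] - v[0]) ** 2 + (u[1] - v[1]) ** 2
    nu2 = u[0] ** 2 + u[1] ** 2
    nv2 = v[0] ** 2 + v[1] ** 2
    return math.acosh(1 + 2 * duv2 / ((1 - nu2) * (1 - nv2)))

# Distances blow up near the boundary circle of the disk.
print(poincare_dist((0, 0), (0.5, 0)))    # log 3 ≈ 1.0986
print(poincare_dist((0, 0), (0.99, 0)))   # ≈ 5.29
```

The formula involves square roots and transcendental functions, which is precisely why an exact implementation prefers predicates that reduce to rational computations.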

In collaboration with Arijit Ghosh (Indian Statistical Institute, Kolkata, India).

We introduce a parametrized notion of genericity for Delaunay
triangulations which, in particular, implies that the Delaunay
simplices of

In collaboration with Arijit Ghosh (Indian Statistical Institute, Kolkata, India).

This work is the algorithmic counterpart of our previous paper. We describe an algorithm to construct an intrinsic Delaunay triangulation of a smooth closed submanifold of Euclidean space. We also provide a counterexample to the results announced by Leibon and Letscher on Delaunay triangulations on Riemannian manifolds. In general the nerve of the intrinsic Voronoi diagram is not homeomorphic to the manifold. The density of the sample points alone cannot guarantee the existence of a Delaunay triangulation. To circumvent this issue, we use results established in our companion paper on the stability of Delaunay triangulations on δ-generic point sets. We establish sampling criteria which ensure that the intrinsic Delaunay complex coincides with the restricted Delaunay complex and also with the recently introduced tangential Delaunay complex. The algorithm generates a point set that meets the required criteria while the tangential complex is being constructed. In this way the computation of geodesic distances is avoided, the runtime is only linearly dependent on the ambient dimension, and the Delaunay complexes are guaranteed to be triangulations of the manifold.

In collaboration with Arijit Ghosh (Indian Statistical Institute, Kolkata, India).

It is a well-known fact that the restricted Delaunay and witness complexes may differ when the landmark and witness sets are located on submanifolds of R^d of dimension 3 or more. Currently, the only known way of overcoming this issue consists of building some crude superset of the witness complex and applying a greedy sliver exudation technique on this superset. Unfortunately, the construction time of the superset depends exponentially on the ambient dimension, which makes the witness complex based approach to manifold reconstruction impractical. This work provides an analysis of the reasons why the restricted Delaunay and witness complexes fail to include each other. From this, a new set of conditions naturally arises under which the two complexes are equal.

In collaboration with Xavier Goaoc (EPI vegas).

Average-case analysis of data structures or algorithms is commonly used in computational geometry when the more classical worst-case analysis is deemed overly pessimistic. Since these analyses are often intricate, the models of random geometric data that can be handled are often simplistic and far from “realistic inputs”. We present a new, simple scheme for the analysis of geometric structures. While this scheme only produces results up to a polylogarithmic factor, it is much simpler to apply than the classical techniques and therefore succeeds in analyzing new input distributions related to smoothed complexity analysis.

We illustrate our method on two classical structures, convex hulls and Delaunay triangulations, by giving short and elementary proofs of classical results on their expected complexity.
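To illustrate the kind of quantity such average-case analyses bound, here is a small self-contained experiment (ours, not taken from the paper): it estimates the expected size of the convex hull of n uniform random points in a square, computed with Andrew's monotone chain algorithm. For this distribution the expected hull size is known to grow only logarithmically in n, far below the worst case.

```python
import random

def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 for a left turn
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def convex_hull(points):
    # Andrew's monotone chain: hull vertices in counterclockwise order.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

def mean_hull_size(n, trials=30, seed=0):
    # Average hull size over several samples of n uniform points in a square.
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        pts = [(rng.random(), rng.random()) for _ in range(n)]
        total += len(convex_hull(pts))
    return total / trials

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(n, mean_hull_size(n))  # grows slowly (O(log n)), not linearly
```

The observed averages stay in the tens even for ten thousand points, which is the phenomenon that average-case analyses of convex hulls make precise.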

In collaboration with Nicolas Broutin (EPI rap).

Walking strategies are a standard tool for point location in a triangulation of size n. Such strategies were previously analyzed [*Comp Geom–Theor Appl*, vol. 29, 2004] in the case of the so-called *straight walk*, which has the very specific property that deciding whether a given (Delaunay) triangle belongs to the walk may be determined without looking at the other sites. We analyze a different walking strategy that follows vertex-neighbour relations to move towards the query; we call this walk the *cone vertex walk*. We prove a bound on the number of triangles that cone vertex walk visits.
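For intuition, the following self-contained sketch (ours, not the paper's cone vertex walk) implements the simpler *visibility walk* on a hand-built fan triangulation of a convex polygon: at each step the walk crosses an edge whose supporting line separates the current triangle from the query point. All names and the example data are illustrative.

```python
def orient(a, b, c):
    # Orientation predicate: > 0 iff a, b, c make a left (counterclockwise) turn.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def build_neighbors(tris):
    # Neighbor of triangle t across its directed edge (u, v) is the triangle
    # owning the opposite directed edge (v, u), or None on the boundary.
    owner = {}
    for t, (i, j, k) in enumerate(tris):
        for u, v in ((i, j), (j, k), (k, i)):
            owner[(u, v)] = t
    return {(t, u, v): owner.get((v, u)) for (u, v), t in owner.items()}

def visibility_walk(pts, tris, nbr, start, q):
    # Walk from triangle `start` towards query point q; returns the triangle
    # containing q and the list of visited triangles.
    t, visited = start, [start]
    while True:
        i, j, k = tris[t]
        for u, v in ((i, j), (j, k), (k, i)):
            if orient(pts[u], pts[v], q) < 0:   # q beyond edge (u, v): cross it
                t = nbr[(t, u, v)]
                if t is None:
                    return None, visited        # q outside the triangulation
                visited.append(t)
                break
        else:
            return t, visited                   # no separating edge: q found

# A fan triangulation (counterclockwise triangles) of a convex hexagon.
pts  = [(0, 0), (2, 0), (3, 2), (2, 4), (0, 4), (-1, 2)]
tris = [(0, 1, 2), (0, 2, 3), (0, 3, 4), (0, 4, 5)]
nbr  = build_neighbors(tris)
found, path = visibility_walk(pts, tris, nbr, 0, (0.5, 3))
print(found, path)  # the walk crosses the fan to the triangle containing (0.5, 3)
```

The analysis question studied in the paper is precisely how long such a `path` is in expectation for random inputs; the cone vertex walk differs in that it steps along vertex neighbours rather than across triangle edges.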

In collaboration with Xavier Goaoc and Guillaume Moroz (EPI vegas) and Matthias Reitzner (Universität Osnabrück, Germany).

We show that, for planar convex sets, the result can be obtained by a *random sampling* argument.

Point processes have demonstrated efficiency and competitiveness for object recognition problems in vision. However, simulating these mathematical models is a difficult task, especially on large scenes. Existing samplers suffer from limited performance in terms of computation time and stability. We propose a new sampling procedure based on a Monte Carlo formalism. Our algorithm exploits the Markovian properties of point processes to perform the sampling in parallel. This procedure is embedded into a data-driven mechanism so that the points are non-uniformly distributed in the scene. The performance of the sampler is analyzed through a set of experiments on various object recognition problems from large scenes, and through comparisons to existing algorithms.

We present a novel and robust method for modeling cities from 3D point data. Our algorithm provides a more complete description than existing approaches by simultaneously reconstructing buildings, trees and topologically complex grounds. A major contribution of our work is the original way of modeling buildings, which guarantees a high generalization level while producing semantized and compact representations. Geometric 3D primitives such as planes, cylinders, spheres or cones describe regular roof sections, and are combined with mesh patches that represent irregular roof components. The various urban components interact through a non-convex energy minimization problem in which they are propagated under arrangement constraints over a planimetric map. Our approach is experimentally validated on complex buildings and on large urban scenes of millions of points, and is compared to state-of-the-art methods.

In collaboration with Johan Hidding, Rien van de Weygaert, Bernard J.T. Jones (Kapteyn Institute, Groningen University) and Gert Vegter (Johann Bernoulli Institute, Groningen University).

We highlight the application of Computational Geometry to our understanding of the formation and dynamics of the Cosmic Web. The emergence of this intricate and pervasive weblike structure of the Universe on Megaparsec scales can be approximated by a well-known equation from fluid mechanics, Burgers' equation. The solution to this equation can be obtained from a geometrical formalism. We have extended and improved this method by invoking weighted Delaunay and Voronoi tessellations. The duality between these tessellations finds a remarkable and profound reflection in the description of physical systems in Eulerian and Lagrangian terms.
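The geometric formalism can be summarized as follows (a standard formulation, not quoted from the paper; Φ₀ denotes the initial velocity potential):

```latex
% Burgers' equation for the velocity field u (adhesion model), inviscid limit:
\partial_t \mathbf{u} + (\mathbf{u}\cdot\nabla)\mathbf{u}
    = \nu \nabla^2 \mathbf{u}, \qquad \nu \to 0 .
% Hopf's maximization formula for the velocity potential, over Lagrangian
% coordinates q:
\Phi(\mathbf{x},t) = \max_{\mathbf{q}}
    \left[ \Phi_0(\mathbf{q}) - \frac{\|\mathbf{x}-\mathbf{q}\|^2}{2t} \right].
```

Expanding the squared norm shows that the maximization partitions Eulerian space into the cells of a power (weighted Voronoi) diagram of the Lagrangian points; its dual is a weighted (regular) Delaunay triangulation, which is one way to see the Eulerian/Lagrangian duality referred to above.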

The initial development phase of the cgal library was carried out by a
European consortium. In order to achieve the transfer and diffusion
of cgal in industry, a company called Geometry Factory was
founded in January 2003 by Andreas Fabri
(http://

The goal of this company is to pursue the development of the library and to offer services in connection with cgal (maintenance, support, teaching, advice). Geometry Factory is a link between the researchers from the computational geometry community and the industrial users.

It offers licenses to interested companies, and provides support. There are contracts in various domains such as CAD/CAM, medical applications, GIS, computer vision...

Geometry Factory is keeping close contacts with the original consortium members, and in particular with Geometrica.

In 2012, Geometry Factory had the following new customers for cgal packages developed by Geometrica: Archivideo (GIS, 2D Constrained Delaunay), Gamesim (games, 2D Constrained Delaunay), Medicm (medical imaging, 2D Constrained Delaunay, Belgium), Tecosim (CAD/CAM, 3D Delaunay, Germany), and Midland Valley (surface mesher, UK).

Moreover, research licenses (in-house research usage for all of cgal) have been purchased by: ROI Bologna (medical imaging, Italy), Technicolor (France), U Southampton (medical imaging, UK), ZIB (medical imaging, Germany).

The main goal of this collaboration is to develop indoor models that are more accurate, meaningful and complete than those produced by existing methods. The conventional way of modeling indoor scenes is based on plane arrangements. This type of representation is quite limited and must be improved by developing more complex geometric entities adapted to a detailed and semantized description of scenes.

- Starting date: April 2012

- Duration: 3 years

In collaboration with Jane Tournois from Geometry Factory.

CGALmesh is an Inria technological development action started in March 2009, in collaboration with Geometry Factory. Building upon components from cgal, we are implementing a generic mesh generation framework for surfaces and 3D domains. We primarily target applications which involve data acquired from the physical world: geology, medicine, 3D cartography and reverse engineering. In 2012 we devised a new parallel 3D mesh generation and optimization algorithm for multi-core architectures with shared memory, and an algorithm for anisotropic mesh generation.

- Starting date: March 2009

- Duration: 3 years

We participate in the Présage project funded by the anr. The project involves:

the Inria vegas team,

the University of Rouen, and

the Geometrica team.

This project brings together computational and probabilistic geometers to tackle new probabilistic geometry problems arising from the design and analysis of geometric algorithms and data structures. We focus on properties of discrete structures induced by or underlying random continuous geometric objects. This raises questions such as:

What does a random geometric structure (convex hulls, tessellations, visibility regions...) look like?

How can we analyze and optimize the behavior of classical
geometric algorithms on *usual* inputs?

How can we generate randomly *interesting* discrete geometric
structures?

- Starting date: 31 December 2011

- Duration: 4 years

GIGA stands for Geometric Inference and Geometric Approximation. GIGA aims at designing mathematical models and algorithms for analyzing, representing and manipulating discretized versions of continuous shapes without losing their topological and geometric properties. By shapes, we mean submanifolds or compact subsets of (possibly high-dimensional) Riemannian manifolds. This research project is divided into tasks which have Geometric Inference and Geometric Approximation as a common thread. Shapes can be represented in three ways: a physical representation (known only through measurements), a mathematical representation (abstract and continuous), and a computerized representation (inherently discrete). The GIGA project aims at studying the transitions from one type to the other, as well as the associated discrete data structures.

Some tasks are motivated by problems coming from data analysis, which arise when studying data sets in high-dimensional spaces. They are dedicated to the development of mathematically well-founded models and tools for the robust estimation of topological and geometric properties of data sets sampled around an unknown compact set in Euclidean space or around Riemannian manifolds.

Some tasks are motivated by problems coming from data generation, which can be found when studying data sets in lower dimensional spaces (Euclidean spaces of dimension 2 or 3). The proposed research activities aim at leveraging some concepts from computational geometry and harmonic forms to provide novel algorithms for generating discrete data structures either from mathematical representations (possibly deriving from an inference process) or from raw, unprocessed discrete data. We target both isotropic and anisotropic meshes, and simplicial as well as quadrangle and hexahedron meshes.

This project, coordinated by Geometrica, also involves researchers from the Inria project-team ABS, CNRS (Grenoble), and a representative from industry (Dassault Systèmes).

- Starting date: October 2009.

- Duration: 4 years.

The primary purpose of this project is to bring about a close collaboration between the chair holder Dr Vin de Silva and Digiteo teams working on the development of topological and geometric methods in Computer Science. The research program is motivated by problems arising from the increasing need to study and analyze the (often huge) data sets that are now available in many scientific and economic domains. Indeed, thanks to improvements in measurement devices and data storage tools, the available data about complex shapes or complex systems are growing very fast. Since these data are often represented as point clouds in high-dimensional (or even infinite-dimensional) spaces, there is considerable interest in analyzing and processing data in such spaces. Despite the high dimensionality of the ambient space, one often expects the data to be located around an unknown, possibly nonlinear, low-dimensional shape. It is then appealing to infer and analyze topological and geometric characteristics of that shape from the data, in the hope that this information will help to process the data more efficiently and to better understand the underlying complex systems from which they are generated. In the last few years, topological and geometric approaches to obtaining such information have attracted increasing interest. The goal of this project is to bring together the complementary expertise in computational topology and geometry of the involved Digiteo teams and the expertise in applied geometry and algebraic topology of V. de Silva, in order to develop new topological approaches to the problems mentioned above. The project intends to develop both the theoretical and practical sides of this subject. The other partners of the project are the École Polytechnique (L. Castelli-Aleardi and F. Nielsen) and the CEA (E. Goubault).

- Starting date: January 2009.

- Duration: 3 years.

The GDR ISIS young researcher project on "scene analysis from Lidar" consists in reconstructing large-scale 3D city models from airborne Lidar scans. This project is in collaboration with Clément Mallet and Bruno Vallet from the MATIS Laboratory, IGN [http://www.ign.fr].

- Starting date: January 2010

- Duration: 3 years

Culture 3D Clouds is a cloud computing platform for 3D scanning, documentation, preservation and dissemination of cultural heritage. The motivation stems from the fact that the field of 3D scanning of heritage artifacts evolves slowly and only provides resources for researchers and specialists: the technology and equipment used for 3D scanning are sophisticated and require highly specialized skills, so the cost is significant and limits widespread practice. Culture 3D Clouds aims at providing photographers with a value chain to commercialize 3D reproductions on demand for their customers and to expand the market valuation of business assets (commercial publishers, general public).

- Starting date: September 2012

- Duration: 3 years

Title: Computational Geometric Learning

Type: COOPERATION (ICT)

Defi: FET Open

Instrument: Specific Targeted Research Project (STREP)

Duration: November 2010 - October 2013

Coordinator: Friedrich-Schiller-Universität Jena (Germany)

Others partners: National and Kapodistrian University of Athens (Greece), Technische Universität Dortmund (Germany), Tel Aviv University (Israel), Eidgenössische Technische Hochschule Zürich (Switzerland), Rijksuniversiteit Groningen (Netherlands), Freie Universität Berlin (Germany)

See also: http://

Abstract: The Computational Geometric Learning project aims at extending the success story of geometric algorithms with guarantees to high dimensions. This is not a straightforward task: for many problems, no efficient algorithms exist that compute the exact solution in high dimensions. This behavior is commonly called the curse of dimensionality. We try to address the curse of dimensionality by focusing on inherent structure in the data, like sparsity or low intrinsic dimension, and by resorting to fast approximation algorithms.
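One facet of the curse of dimensionality can be illustrated numerically (our own sketch, not part of the project): pairwise distances between random points concentrate around their mean as the dimension grows, which degrades nearest-neighbor-type reasoning.

```python
import math
import random

def relative_distance_spread(dim, n=50, seed=1):
    # Sample n points uniformly in the unit cube [0,1]^dim and return
    # (max - min) / mean over all pairwise Euclidean distances.
    rng = random.Random(seed)
    pts = [[rng.random() for _ in range(dim)] for _ in range(n)]
    dists = [math.dist(p, q)
             for i, p in enumerate(pts) for q in pts[i + 1:]]
    return (max(dists) - min(dists)) / (sum(dists) / len(dists))

if __name__ == "__main__":
    # The relative spread shrinks as the dimension grows: in high dimensions
    # "near" and "far" points become nearly indistinguishable.
    for d in (2, 20, 200):
        print(d, round(relative_distance_spread(d), 3))
```

This concentration is precisely why the project looks for sparsity or low intrinsic dimension rather than relying on raw ambient distances.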

Title: Robust Geometry Processing

Type: IDEAS

Instrument: ERC Starting Grant (Starting)

Duration: January 2011 - December 2015

Coordinator: Pierre Alliez, Inria Sophia Antipolis - Méditerranée (France)

See also: http://

Abstract: The purpose of this project is to bring forth the full scientific and technological potential of Digital Geometry Processing by consolidating its most foundational aspects. Our methodology will draw from and bridge the two main communities (computer graphics and computational geometry) involved in discrete geometry to derive algorithmic and theoretical contributions that provide both robustness to noisy, unprocessed inputs, and strong guarantees on the outputs. The intended impact is to make the digital geometry pipeline as generic and ironclad as its Digital Signal Processing counterpart.

Title: Computational Methods for the analysis of high-dimensional data

Inria principal investigator: Steve Y. Oudot

International Partner:

Institution: Stanford University (United States)

Laboratory: Computer Science Department

Researcher: Leonidas J. Guibas

International Partner:

Institution: Ohio State University (United States)

Laboratory: Computer Science and Engineering

Researcher: Yusu Wang

Duration: 2011 - 2013

See also: http://

CoMeT is an associate team between the Geometrica group at Inria, the Geometric Computing group at Stanford University, and the Computational Geometry group at the Ohio State University. Its focus is on the design of computational methods for the analysis of high-dimensional data, using tools from metric geometry and algebraic topology. Our goal is to extract enough structure from the data to get a higher-level, informative understanding of these data and of the spaces they originate from. The main challenge is to go beyond mere dimensionality reduction and topology inference, without the need for a costly explicit reconstruction. To validate our approach, we intend to set our methods against real-life data sets coming from a variety of applications, including (but not restricted to) clustering, image or shape segmentation, sensor field monitoring, shape classification and matching. The three research groups involved in this project have been active contributors to the field of Computational Topology in recent years, and some of their members have had long-standing collaborations. We believe this associate team can help create new synergies between these groups.

Misha Belkin (Ohio State University).

Mikhail Bessmeltsev (University of British Columbia).

Mark Blome (Zuse-Institut Berlin).

Benjamin Burton (School of Mathematics and Physics, University of Queensland, Brisbane).

Dengfeng Chai (Zhejiang University).

Mathieu Desbrun (Caltech).

Paweł Dłotko (Jagiellonian University, Krakow).

Leo Guibas (Stanford University).

Sun Jian (Tsinghua University, Beijing).

Leif Kobbelt (RWTH Aachen).

Sylvain Lazard (EPI vegas).

Michael Lesnick (Stanford University).

Jeff Phillips (University of Utah).

Alla Sheffer (University of British Columbia).

Vin de Silva (Pomona College).

Gert Vegter (Johann Bernoulli Institute, Groningen University).

Yusu Wang (Ohio State University).

Ricard Campos (six months)

Topic: Reconstruction of 3D underwater scenes

Institution: University of Girona (Spain)

Andrea Tagliasacchi (three months)

Topic: surface reconstruction through optimal transportation

Institution: Simon Fraser University (Canada)

P. Alliez is an associate editor of
*ACM Transactions on Graphics* and
*Graphical Models*.

J-D. Boissonnat is a member of the editorial board of the
*Journal of the ACM*,
*Discrete and Computational Geometry*,
*Algorithmica*, the
*International Journal of Computational Geometry and Applications* and the electronic
*Journal of Computational Geometry*. He is also
a member of the editorial advisory board of the Springer Verlag book series *Geometry and Computing*.

F. Chazal is an associate editor of *Graphical Models* and *SIAM journal on Imaging Science*.

S. Oudot is co-editor with T. Dey of the special issue of *Discrete and Computational Geometry* on SoCG 2012.

Olivier Devillers is a member of the Editorial Board of *Graphical Models*.

Monique Teillaud is a member of the Editorial Boards of
CGTA, *Computational Geometry: Theory and Applications*,
and of IJCGA, *International Journal of Computational Geometry and Applications*.

M. Yvinec is a member of the editorial board of *Journal of Discrete Algorithms*.

P. Alliez, M. Teillaud (review manager until April 30th), and M. Yvinec are members of the cgal editorial board.

Pierre Alliez was a program committee member of EUROGRAPHICS main conference, EUROGRAPHICS Symposium on Geometry Processing and International Workshop on Point Cloud Processing (a CVPR workshop).

Steve Oudot was a program committee member of SoCG 2012 (ACM Symposium on Computational Geometry).

Frédéric Chazal was a program committee member of ATMCS 2012 (5th conf. on Algebra and Topology: Methods, Computation and Science), the SMAI-SIGMA 2012 conference, the SMAI-MAIRCI workshop 2012, and ICPRAM 2012 and 2013 (International Conference on Pattern Recognition Applications and Methods).

Frédéric Chazal is a program committee member of SoCG 2013 (ACM Symposium on Computational Geometry).

David Cohen-Steiner was a program committee member of the EUROGRAPHICS Symposium on Geometry Processing.

J-D. Boissonnat was a program committee member of “Mathematical Methods for Curves and Surfaces”, Geometric Modeling and Processing (GMP 2012), and the EUROGRAPHICS Symposium on Geometry Processing (SGP 2012).

Florent Lafarge was program chair of IEEE CVPR workshop on Point Cloud Processing and a program committee member of EUROGRAPHICS short papers and ISPRS congress.

Monique Teillaud is a member of the Computational Geometry Steering Committee.

Pierre Alliez was a referee and member of the Ph.D. defense committee of Nicolas Mellado (Inria Bordeaux) and David Bommes (RWTH Aachen).

Steve Oudot was a member of the Ph.D. defense committee of Arijit Ghosh (Inria Sophia-Antipolis).

Jean-Daniel Boissonnat was a member of the HDR defense committee of Éric Colin de Verdière (ENS Ulm).

Olivier Devillers was a member of the Ph.D. defense committee of Daniela Maftuleac (Univ. Marseille).

F. Chazal is the chair of the “Commission scientifique” at Inria Saclay - Île-de-France.

Monique Teillaud is a member of the Inria Evaluation Committee.

She coordinated the Evaluation Committee's working group on the life of software. She is a member of the working group on the Inria software database (BIL).

She was a member of the national Inria CR2 and DR2 recruitment committees, as well as a member of the Inria Starting and Advanced Research Positions committees.

She is also a member of both Inria Sophia CDT (Committee for Technologic Development) and Inria national CDT.

Florent Lafarge is a member of the comité de Suivi Doctoral (CSD) of Inria Sophia.

Pierre Alliez is a member of the comité de médiation scientifique of Inria Sophia Antipolis - Méditerranée.

J.-D. Boissonnat is a member of the working groups GP1 (Modèles et calcul) and GP2 (Logiciels et systèmes informatiques) of the Alliance des sciences et technologies du numérique (Allistène).

J.-D. Boissonnat is a member of the AERES Board (Evaluation Agency for Research and Higher Education).

J-D. Boissonnat chaired the Visiting Committee of the Geometric Modeling and Scientific Visualization Center of the King Abdullah University of Science and Technology (KAUST, Saudi Arabia, 2012).

J-D. Boissonnat chaired the recruitment committee of ENS Lyon (PU n°4037 / CS 27).

F. Chazal is a member of the Scientific Council of AMIES (Agence pour les Mathématiques en Interaction avec l'Industrie et la Société).

F. Chazal is a member of the “Comité de pilotage” of the MAIRCI and SIGMA (ex-AFA) groups of the SMAI.

F. Chazal is a member of the “Bureau du Comité des Projets” at Inria Saclay.

M. Glisse is a member of the experts group of AFNOR for the standardization of the C++ language within the ISO/WG21 working group.

M. Teillaud was a member of the LIAMA Visiting Committee.

J.-D. Boissonnat chairs the scientific committee of the Jacques Morgenstern Colloquium.

Monique Teillaud organized the *Minisymposium on publicly available geometric/topological software* (with Menelaos Karavelas), Chapel Hill, USA, June 17 and 19, 2012.

Monique Teillaud was a member of the scientific committee of the workshop SIGMA'2012: Signal, Image, Geometry, Modelling, Approximation, Marseille, November 19-23.

CGAL developers meeting, Inria, December 17-21.

M. Teillaud is maintaining the Computational Geometry Web Pages
http://

We give here the details of the courses. Web pages of the graduate courses can be found on the web site:

Traitement numérique de la géométrie, Lorsque nos maths s'incarnent dans les ordinateurs, Monique Teillaud, 3h, Centre de Formation d'Apprentis de l'Industrie, Avignon.

“à quoi sert un triangle ?”, Monique Teillaud, 2x2h, Collège Le Prés des Roures, Le Rouret, in the framework of the national Week of Mathematics.

Master: Computational Geometry: from Theory to Applications, Steve Oudot, 20h ETD, M1, École Polytechnique.

Master: Géométrie algorithmique, Olivier Devillers, 13h ETD (2011-2012) 24h ETD (2012-2013), M1, Université de Nice.

Master: Algorithmes géométriques: théorie et pratique, Pierre Alliez, Olivier Devillers, and Monique Teillaud, 28h ETD, M2, Université de Nice.

Master: Computational Geometry Learning, J.-D. Boissonnat, F. Chazal and M. Yvinec, 25h, M2, MPRI (Paris).

Master: Maillages 3D et applications, P. Alliez, 21h, Ecole des Ponts ParisTech (Paris).

Master: Mathématiques pour la géométrie, P. Alliez, 24h, EFREI (Ecole d'Ingénieur des Technologies de l'Information et de la Communication, Paris).

Doctorat: Geometric Inference, F. Chazal and D. Cohen-Steiner, 20h, Université Paris Sud (Orsay).

Master: Markov Random Fields, F. Lafarge, 9h, EPU Sophia Antipolis.

Master: Image segmentation, F. Lafarge, 11h, University of Nice Sophia Antipolis.

Doctorat: Introduction to Geometric Inference, F. Chazal, 3h, Neuchâtel University.

Arijit Ghosh, Piecewise Linear Reconstruction and Meshing of Submanifolds of Euclidean Space, Université de Nice-Sophia Antipolis, May 30, 2012, J-D. Boissonnat.

Bertrand Pellenard, Generation of quadrangle meshes, Université de Nice-Sophia Antipolis, Dec 18, 2012, P. Alliez.

Claire Caillerie, Sélection de modèles pour l'inférence géométrique, Université Paris XI, started September 2008, Frédéric Chazal and Pascal Massart (in progress).

Alexandre Bos, Topological methods for geometric data classification, Université Paris XI, started September 2010, Frédéric Chazal and Steve Oudot (in progress).

Mikhail Bogdanov, Triangulations in non-Euclidean spaces, started October 1, 2010, Monique Teillaud (in progress).

Yannick Verdié, Urban scene analysis from unstructured point data, Université de Nice-Sophia Antipolis, started November 2010, Florent Lafarge (in progress).

Mickael Buchet, Topological and geometric inference from measures, Université Paris XI, started October 2011, Frédéric Chazal and Steve Oudot (in progress).

Ross Hemsley, Probabilistic methods for the efficiency of geometric structures and algorithms, started October 1, 2011, Olivier Devillers (in progress).

Clément Maria, Data structures for simplicial complexes, Université de Nice-Sophia Antipolis, started October 1, 2011, J-D. Boissonnat (in progress).

Simon Giraudot, Robust Shape Reconstruction, Université de Nice-Sophia Antipolis, started November 2011, Pierre Alliez (in progress).

Xavier Rolland-Névière, Robust watermarking of 3D Shape, Université de Nice-Sophia Antipolis, started November 2011, Pierre Alliez and Gwenael Doerr from Technicolor (in progress).

Sven Oesau, Reconstruction of indoor scenes, Université de Nice-Sophia Antipolis, started April 2012, Pierre Alliez and Florent Lafarge (in progress).

Manish Mandad, Robust Shape Approximation, Université de Nice-Sophia Antipolis, started November 2012, Pierre Alliez (in progress).

Rémy Thomasse, Smoothed complexity of geometric structures and algorithms, started December 1, 2012, Olivier Devillers (in progress).

Internship proposals can be found on the web at
http://

Manish Mandad, Shape approximation with guarantees, Master CBB (France).

Mathieu Schmitt, Meshing of the hyperbolic octagon, L3 ENS Lyon (Monique Teillaud).

Claudia Werner, Triangulations on the sphere, Hochschule für Technik Stuttgart (Monique Teillaud).

Frédéric Chazal, “Topological Data Analysis using distance based functions”, 4th International Workshop on Computational Topology in Image Context, Bertinoro, Italy, February 2012.

Olivier Devillers, “Delaunay triangulation, theory vs practice”, EuroCG 2012, March 19.

Pierre Alliez, “Robust Shape Reconstruction”, Dive with an Expert Seminar, Schlumberger Montpellier Technology Center, May 2012.

J-D. Boissonnat, “Aspects algorithmiques de la triangulation des variétés”, Colloque Mathématiques et Formation des Ingénieurs, Coetquidan, June 2012.

Frédéric Chazal, “Topological Data Analysis using distance based functions”, Dynamics, Topology and Computations (DyToComp 2012, satellite conference of the 6th European Congress of Mathematics), Bedlewo, Poland, June 2012.

Frédéric Chazal, “Detection and Approximation of Linear Structures in Metric Spaces”, Workshop on Algorithms for Modern Massive Data Sets (MMDS 2012), Stanford, USA, July 2012.

J-D. Boissonnat, “Delaunay-type structures for manifolds”, Workshop on Applied and Computational Topology (ATMCS), Edinburgh, July 2012.

Pierre Alliez, “Advances in Architectural Geometry”, Paris, September.

Pierre Alliez, “MICCAI workshop on Mesh Processing in Medical Image Analysis”, Nice, October.

Pierre Alliez, “3DIMPVT: 3D Imaging, Modeling, Processing, Visualization and Transmission”, Zurich, October.

J-D. Boissonnat, “Provably good meshes”, Dive with an Expert Seminar, Schlumberger Montpellier Technology Center, October 2012.

Frédéric Chazal, “Persistence Stability for Geometric complexes”, Workshop on Algebraic Topology and Machine Learning, NIPS 2012, Lake Tahoe, USA, December 2012.

Members of the project have presented their published articles at conferences. The reader can refer to the bibliography to obtain the corresponding list. We list below all other talks given in seminars, summer schools and other workshops.

Olivier Devillers, «Constructions incrémentielles randomisées», journées Présage, January 12-13.

Marc Glisse, «Complexité moyenne et Visibilité 3D», journées Présage, January 12-13.

Olivier Devillers, «Delaunay triangulations, theory vs. practice», journées de géométrie algorithmique, April 2-6.

Clément Maria, “A Data Structure to Represent Simplicial Complexes”, journées de géométrie algorithmique, April 2-6.

Ross Hemsley, “Pivot Walk - A Faster Walking Strategy for Point Location”, journées de géométrie algorithmique, April 2-6.

Ramsay Dyer, “Stability of Delaunay-type structures for manifolds”, journées de géométrie algorithmique, April 2-6.

Frédéric Chazal, “Geometric Inference using distance-like functions”, Minisymposium on Computational Geometric Learning, Exploring geometric structures in high dimensions, CG:APT, SoCG 2012.

Frédéric Chazal, “Persistence Stability for Geometric complexes”, Workshop on Topological data analysis and machine learning theory, Banff, Canada, October 2012.

Olivier Devillers, «Hypergraphes de régions vides I : point de vue de la géométrie algorithmique», journées Présage, October 18-19.

Marc Glisse, «Complexité moyenne en visibilité, sans les logs», journées Présage, October 18-19.

Frédéric Chazal, “Detection and Approximation of Linear Structures in Metric Spaces”, SMAI-SIGMA Conference 2012, CIRM Luminy France, November 2012.

Ramsay Dyer, “Stability of Delaunay-type structures for manifolds”, SMAI-SIGMA Conference 2012, CIRM

Clément Maria, “A space and time efficient implementation for computing persistent homology”, Computational Geometric Learning Workshop, Berlin, December 2012.

Pierre Alliez, “The CGAL Library and Mesh Generation”, Graduate school of the EUROGRAPHICS Symposium on Geometry Processing, July 2012.

http://

The Geometrica seminar featured presentations from the following visiting scientists:

Xianhai Meng (Beihang University, China). Mesh Generation and 3D Geological Modeling.

Thijs van Lankveld (University of Utrecht). Reconstructing urban geometry per surface from point samples. March 19.

Vincent Vidal (LIRIS, Lyon, France). Modèles graphiques pour segmenter et remailler les maillages surfaciques triangulaires. April 11.

Mark Blome (Konrad-Zuse-Zentrum für Informationstechnik, Berlin). Modeling of complex 3D nano-photonic devices using Computer Aided Design techniques. July 3.

Benjamin Burton (University of Queensland, Australia). Unknot recognition and the elusive polynomial time algorithm. October 3.

Clément Maria visited Prof. Coutsias (University of New Mexico) for one week in September and Prof. Dey (Ohio State University) for one week.

Pierre Alliez, RWTH Aachen, October 10-12.

Olivier Devillers, EPI rap, November 19-21.

Ross Hemsley, EPI rap, November 19-21.

Florent Lafarge, University of Auckland, December 17- January 11.