Team Geometrica

Section: New Results

Data Structures and Robust Geometric Computation

Parallel Geometric Algorithms for Multi-Core Computers

Participant : Sylvain Pion.

In collaboration with Vicente Batista (former INRIA intern), David Millman from University of North Carolina at Chapel Hill, and Johannes Singler from Universität Karlsruhe.

Computers with multiple processor cores using shared memory are now ubiquitous. We present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as typically used to preprocess the input of incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points into Delaunay triangulations, for mesh generation algorithms or simply for computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on CGAL. This work is a step towards what we hope will become a parallel mode for CGAL, where algorithms automatically use the available parallel resources without requiring significant user intervention [25].
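To illustrate why kd-tree construction parallelizes well, here is a minimal sequential sketch in Python (the actual implementations are in C++ on top of CGAL): the two recursive calls operate on disjoint point sets, so a parallel version can simply dispatch them to different cores.

```python
def build_kdtree(points, depth=0):
    """Median-split kd-tree construction.

    The two recursive calls work on disjoint point sets, so in a parallel
    implementation they can be handed to separate cores; this Python
    sketch keeps them sequential.
    """
    if not points:
        return None
    axis = depth % len(points[0])                 # cycle through the d axes
    pts = sorted(points, key=lambda p: p[axis])   # real code avoids re-sorting
    mid = len(pts) // 2
    return {
        "point": pts[mid],
        "left": build_kdtree(pts[:mid], depth + 1),
        "right": build_kdtree(pts[mid + 1:], depth + 1),
    }

tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
```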

Delaunay Triangulation of Points in Higher Dimensions

Participants : Jean-Daniel Boissonnat, Olivier Devillers, Samuel Hornus.

We propose a new C++ implementation of the well-known incremental algorithm for constructing Delaunay triangulations in any dimension. Our implementation follows the exact computing paradigm and is hence robust. An extensive series of comparisons has shown that our implementation outperforms the best available implementations for convex hulls and Delaunay triangulations, and that it can handle large point sets in spaces of dimension up to 6 [26].

To circumvent prohibitive memory usage, we also propose a modification of the algorithm that uses and stores only the Delaunay graph (the edges of the full triangulation). We show that a careful implementation of the modified algorithm runs only 6 to 8 times slower than the original while drastically reducing memory usage in dimension 4 and above.

Filtering Relocations on a Delaunay Triangulation

Participants : Pierre Alliez, Pedro Machado Manhães de Castro, Olivier Devillers, Jane Tournois.

Updating a Delaunay triangulation when its vertices move is a bottleneck in several application domains. Surprisingly, rebuilding the whole triangulation from scratch is a very viable option compared to relocating the vertices; this can be explained by several recent advances in the efficient construction of Delaunay triangulations. However, when all points move with a small magnitude, or when only a fraction of the vertices move, rebuilding is no longer the best option. This work considers the problem of efficiently updating a Delaunay triangulation when its vertices move under small perturbations. The main contribution is a set of filters based on the concept of vertex tolerance. Experiments show that, under such conditions, filtering relocations is faster than rebuilding the whole triangulation from scratch [21].
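The filtering idea can be sketched as follows. The tolerance values and the two callbacks are hypothetical placeholders: computing real vertex tolerances requires the triangulation itself, and the point is only that a displacement below the tolerance needs no connectivity update at all.

```python
import math

def filtered_relocate(tolerances, moves, relocate, update_in_place):
    """Apply small moves to triangulation vertices, relocating only when needed.

    tolerances[v] is a precomputed safety radius: while vertex v stays within
    that distance of its original position, the connectivity of the Delaunay
    triangulation is unchanged, so its coordinates can simply be updated.
    Returns the number of (expensive) true relocations performed.
    """
    relocated = 0
    for v, delta in moves.items():
        if math.hypot(*delta) <= tolerances[v]:
            update_in_place(v, delta)   # cheap: connectivity untouched
        else:
            relocate(v, delta)          # expensive: remove and reinsert
            relocated += 1
    return relocated

calls = []
n_relocated = filtered_relocate(
    tolerances={0: 1.0, 1: 0.1},
    moves={0: (0.5, 0.0), 1: (0.5, 0.0)},
    relocate=lambda v, d: calls.append(("relocate", v)),
    update_in_place=lambda v, d: calls.append(("update", v)),
)
```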

Point Location Strategies in Delaunay Triangulation

Participants : Pedro Machado Manhães de Castro, Olivier Devillers.

Point location in a spatial subdivision is one of the best-known problems in computational geometry. In the case of triangulations of $\mathbb{R}^d$, we revisit the problem to exploit a possible coherence between the query points.

For a single query, walking in the triangulation is a classical strategy with good practical behavior and expected complexity $O(n^{1/d})$ if the points are evenly distributed. For a batch of query points, the main idea is to use previous queries to improve the current one; we compare various strategies that influence the constant hidden in the big-O notation.
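A minimal Python sketch of such a walk (the so-called visibility walk, run here on a small hand-built triangulation) illustrates the strategy: repeatedly step across any edge that separates the current triangle from the query point.

```python
def orient(a, b, c):
    """Sign of twice the signed area of triangle abc (the basic 2-D predicate)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def visibility_walk(triangles, neighbors, start, q):
    """Walk from triangle `start` to the triangle containing query point q.

    triangles[t] is a CCW triple of points; neighbors[t][i] is the triangle
    across the edge opposite vertex i (None on the convex hull).  Starting
    from the triangle of a previously located nearby query is how coherence
    between successive queries pays off.  On a Delaunay triangulation this
    walk is guaranteed to terminate.
    """
    t = start
    while True:
        a, b, c = triangles[t]
        for i, (p, r) in enumerate(((b, c), (c, a), (a, b))):
            # q strictly beyond edge (p, r): step into the neighbor.
            if orient(p, r, q) < 0 and neighbors[t][i] is not None:
                t = neighbors[t][i]
                break
        else:
            return t  # no separating edge left: q is inside triangle t

# Toy triangulation of the rectangle [0,2] x [0,1], all triangles CCW.
A, B, C = (0, 0), (1, 0), (2, 0)
D, E, F = (0, 1), (1, 1), (2, 1)
triangles = [(A, B, E), (A, E, D), (B, C, F), (B, F, E)]
neighbors = [(3, 1, None), (None, None, 0), (None, 3, None), (None, 0, 2)]
```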

To further improve a query that is close to a previously located one, we show how the Delaunay hierarchy can be used to answer a query $q$ with randomized expected complexity $O(\log\#(pq))$, where $\#(pq)$ denotes the number of simplices crossed by the segment $pq$ and $p$ is a previously located query. The data structure has $O(n\log n)$ construction complexity and $O(n)$ memory complexity [58].

Efficient Static and Dynamic Proximity Queries using Locality-Sensitive Hashing

Participant : Steve Oudot.

This work has been carried out in collaboration with David Arthur and Aneesh Sharma, both from Stanford University.

The approximate Nearest Neighbor search problem (NN) asks to pre-process a given set of points P in such a way that, given any query point q, one can retrieve a point in P that is approximately closest to q. Of particular interest is the case of points lying in high dimensions, which has seen rapid developments since the introduction of the Locality-Sensitive Hashing (LSH) data structure by Indyk and Motwani. Combined with a space decomposition by Har-Peled, the LSH data structure can answer approximate NN queries in sub-linear time using polynomial (in both d and n) space. Unfortunately, it is not known whether Har-Peled's space decomposition can be maintained efficiently under point insertions and deletions, so the above solution only works in a static setting.

In this work we present a variant of Har-Peled's decomposition, based on random semi-regular grids, which can achieve the same query time with the added advantage that it can be maintained efficiently even under adversarial point insertions and deletions. The outcome is a new data structure to answer approximate NN queries efficiently in dynamic settings.
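The role hashing plays in making the structure dynamic can be sketched with a simplified LSH scheme. The random-hyperplane hashes below are purely illustrative (the work itself builds on the Euclidean LSH family of Indyk and Motwani): because a point's bucket is a function of the point alone, insertions and deletions are constant-time dictionary updates.

```python
import random

def make_hash(dim, k, seed=0):
    """One LSH hash function: the sign pattern of k random projections.

    Random-hyperplane hashing is a classical LSH family, used here only
    to illustrate the bucketing principle.
    """
    rng = random.Random(seed)
    planes = [[rng.gauss(0, 1) for _ in range(dim)] for _ in range(k)]
    return lambda p: tuple(
        sum(a * b for a, b in zip(plane, p)) >= 0 for plane in planes)

h = make_hash(dim=3, k=8)
buckets = {}

def insert(p):
    buckets.setdefault(h(p), set()).add(p)

def delete(p):
    buckets.get(h(p), set()).discard(p)

def candidates(q):
    # Approximate-NN candidates: the stored points colliding with q.
    return buckets.get(h(q), set())

insert((1.0, 0.0, 0.0))
insert((0.0, 1.0, 0.0))
```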

Another related problem, known as Reverse Nearest Neighbor search (RNN), is to find the influence set of a given query point q, i.e., the subset of points of P that have q as their nearest neighbor. Although this problem finds many practical applications, very little is known about its complexity. In particular, no algorithm is known to solve it in high dimensions in sub-linear time using sub-exponential space. In this work we show how to pre-process the data points so that Har-Peled's space decomposition, combined with modified LSH data structures, can solve an approximate variant of the RNN problem efficiently, using polynomial space. The query time of our approach is bounded by two terms: the first one is sub-linear in the size of P and corresponds roughly to the incompressible time needed to locate the query point in the data structure; the second one is proportional to the size of the output, which is a set of points, as opposed to a single point for (approximate) NN queries. An interesting feature of our RNN solution is that it is flexible enough to be applied equally well in monochromatic and bichromatic settings [50].

Are Extreme Points Robust to Perturbations?

Participant : Olivier Devillers.

This work has been done in collaboration with Dominique Attali (GipsaLab) and Xavier Goaoc (Loria).

Figure 3. Noise reduces the complexity.

Assume that X' is a noisy version of a point set X in convex position (all points of X are vertices of their convex hull). How many extreme points does X' have? (See Figure 3.)

We consider the case where X is an $(\epsilon,\kappa)$-sample of a sphere in $\mathbb{R}^d$ and the noise is random and uniform: X' is obtained by replacing each point $x\in X$ by a point chosen uniformly at random in some region $R(x)$ of size $\delta$ around $x$. We give upper and lower bounds on the expected number of extreme points of X' when $R(x)$ is a ball (in arbitrary dimension) or an axis-parallel square (in the plane). Our bounds depend on the size $n$ of X and on $\delta$, and are tight up to a polylogarithmic factor. These results extend naturally in various directions (more general point sets, other regions $R(x)$, more general distributions, ...).

We also present experimental results showing that our bounds for random noise provide good estimators of the behavior of snap-rounding, where X' is obtained by rounding each point of X to the nearest point on a grid of step $\delta$ [51].
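The planar setting is easy to reproduce. The following self-contained sketch (parameters chosen arbitrarily, not those of the paper) counts the extreme points of a circle sample before and after uniform square noise:

```python
import math
import random

def hull_size(points):
    """Number of extreme points, via Andrew's monotone-chain convex hull."""
    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out[:-1]
    pts = sorted(set(points))
    if len(pts) <= 2:
        return len(pts)
    return len(half(pts)) + len(half(pts[::-1]))  # lower + upper hull

rng = random.Random(1)
n, delta = 2000, 0.01          # arbitrary experiment parameters
circle = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
          for i in range(n)]
noisy = [(x + rng.uniform(-delta, delta), y + rng.uniform(-delta, delta))
         for x, y in circle]
clean_extreme = hull_size(circle)   # == n: every sample point is extreme
noisy_extreme = hull_size(noisy)    # typically far fewer extreme points
```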

Deletion in Two Dimensional Delaunay Triangulation: Asymptotic Complexity is Pointless

Participant : Olivier Devillers.

The theoretical complexity of vertex removal in a Delaunay triangulation is often given in terms of the degree $d$ of the removed point, with usual results $O(d)$, $O(d\log d)$, or $O(d^2)$. In fact, the asymptotic complexity is of little interest since $d$ is usually quite small. In this work, we carefully design code for small degrees $3\le d\le 7$; this improves the global behavior of removal for random points by a factor of 2 [59].
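For the smallest case, degree 3, the dedicated code path amounts to replacing the three triangles incident to the vertex by a single one, with no hole re-triangulation. A schematic version (triangles as vertex index triples, ignoring the real triangulation data structure):

```python
def remove_degree3(triangles, v):
    """Remove a degree-3 vertex from a 2-D triangulation.

    The three triangles incident to v surround it with a triangle (a, b, c);
    deleting v just replaces those three triangles by (a, b, c).  Dedicated
    code paths like this, one per small degree, avoid the overhead of the
    general removal routine.
    """
    incident = [t for t in triangles if v in t]
    assert len(incident) == 3, "vertex is not of degree 3"
    ring = {p for t in incident for p in t if p != v}
    assert len(ring) == 3
    return [t for t in triangles if v not in t] + [tuple(sorted(ring))]

# Vertex 4 sits inside triangle (0, 1, 2); triangle (0, 2, 3) is untouched.
tris = [(0, 1, 4), (1, 2, 4), (2, 0, 4), (0, 2, 3)]
```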

The new method is implemented and will be submitted for inclusion into CGAL.

Triangulation on the Sphere

Participants : Manuel Caroli, Pedro Machado Manhães de Castro, Monique Teillaud.

This work has been done in collaboration with Sébastien Loriot (ABS), Camille Wormser (ETH Zürich) and Olivier Rouiller (École Centrale Lille).

We propose two ways to compute the Delaunay triangulation of points on a sphere, or of rounded points close to a sphere, both based on the classic incremental algorithm initially designed for the plane (see Figure 4). We use the so-called space of circles as the mathematical background for this work. We present a fully robust implementation built upon existing generic algorithms provided by the CGAL library. The efficiency of the implementation is established by benchmarks [54].

Figure 4. Delaunay triangulation of 20,950 weather stations all around the world.
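One way to see why the space of circles helps: for points lying exactly on a sphere, the in-circle test of the spherical Delaunay triangulation reduces to a 3-D orientation predicate (equivalently, the triangulation is the boundary of the points' convex hull). A small sketch of that predicate, not taken from the actual implementation:

```python
def orient3d(a, b, c, d):
    """Sign of det[b-a; c-a; d-a]: which side of plane (a, b, c) holds d."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    w = [d[i] - a[i] for i in range(3)]
    det = (u[0] * (v[1] * w[2] - v[2] * w[1])
         - u[1] * (v[0] * w[2] - v[2] * w[0])
         + u[2] * (v[0] * w[1] - v[1] * w[0]))
    return (det > 0) - (det < 0)

# a, b, c on the unit sphere; a fourth point on the sphere lies inside their
# circumcircle (on the sphere) exactly when it is on the far side of the
# plane (a, b, c) as seen from the origin.
a, b, c = (1, 0, 0), (0, 1, 0), (0, 0, 1)
inside = (0.577, 0.577, 0.577)   # approximately on the sphere, inside the cap
outside = (-1, 0, 0)             # on the sphere, outside the cap
```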

