Team geometrica


Section: New Results

Data Structures and Robust Geometric Computation

The Size of some Trees Constructed on Planar Point Sets

Participants : Pedro Machado Manhães de Castro, Olivier Devillers.

The Euclidean minimal k-insertion tree (EMIT_k) of a set of n points is obtained by linking each inserted point to the closest among the k previously inserted points. EMIT_1 is simply the chain of points in insertion order and EMIT_n is the minimum spanning tree. If the weight w of an edge e is its Euclidean length raised to the power $\alpha$, we show that $\sum_{e\in \mathrm{EMIT}_k} w(e)$ is $O(n\cdot k^{-\alpha/d})$ in the worst case, where d is the dimension, for $d\ge 2$ and $0<\alpha<d$. We also analyze the expected size of EMIT_k and of some stars when the points are evenly distributed inside the unit ball, for any $\alpha>0$ [16], [48]. These results are used in the next section.
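
To make the definition concrete, here is a small brute-force sketch (ours, not taken from [16] or [48]; the point type and function name are illustrative) that computes the total weight of EMIT_k for a planar point sequence, with edge weight |e|^alpha:

```cpp
#include <algorithm>
#include <cmath>
#include <cstddef>
#include <limits>
#include <vector>

struct Pt { double x, y; };

// Total weight of EMIT_k: each point, in insertion order, is linked to its closest
// predecessor among the k most recently inserted points; the edge weight is |e|^alpha.
double emit_k_weight(const std::vector<Pt>& pts, std::size_t k, double alpha) {
  double total = 0.0;
  for (std::size_t i = 1; i < pts.size(); ++i) {
    double best = std::numeric_limits<double>::infinity();
    std::size_t first = (i > k) ? i - k : 0;           // window of the k last inserted points
    for (std::size_t j = first; j < i; ++j) {
      double dx = pts[i].x - pts[j].x, dy = pts[i].y - pts[j].y;
      best = std::min(best, dx * dx + dy * dy);        // squared distance to candidate
    }
    total += std::pow(std::sqrt(best), alpha);         // weight of the chosen edge
  }
  return total;
}
```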

Simple and Efficient Distribution-Sensitive Point Location in Triangulations

Participants : Pedro Machado Manhães de Castro, Olivier Devillers.

Point location in a spatial subdivision is one of the most studied problems in computational geometry. In the case of triangulations of $\mathbb{R}^d$, we revisit the problem to exploit a possible coherence between the query points.

For a single query, walking in the triangulation is a classical strategy with good practical behavior and expected complexity $O(n^{1/d})$ if the points are evenly distributed. Building on this strategy, we analyze, implement, and evaluate a distribution-sensitive point location algorithm based on the classical Jump & Walk, called Keep, Jump, & Walk. For a batch of query points, the main idea is to use previous queries to improve the current one. In practice, Keep, Jump, & Walk is a very competitive method for locating points in a triangulation.

Regarding point location in a Delaunay triangulation, we show how the Delaunay hierarchy can be used to answer, under some hypotheses, a query q with randomized expected complexity $O(\log \#(pq))$, where p is a previously located query and $\#(s)$ denotes the number of simplices crossed by the line segment s.

The Delaunay hierarchy has $O(n\log n)$ time complexity and $O(n)$ memory complexity in the plane, and under certain realistic hypotheses these complexities generalize to any finite dimension.

Finally, we combine the good distribution-sensitive behavior of Keep, Jump, & Walk with the good complexity of the Delaunay hierarchy into a novel point location algorithm called Keep, Jump, & Climb. To the best of our knowledge, Keep, Jump, & Climb is the first practical distribution-sensitive algorithm for Delaunay triangulations that works both in theory and in practice: it is faster than the Delaunay hierarchy regardless of the spatial coherence of the queries, and significantly faster when the queries have strong spatial coherence [16], [49].
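
As an illustration of the "keep" idea only (a generic sketch, not the authors' implementation), one can reuse the cell returned for the previous query as the starting point of the walk for the next one; cgal's 2D Delaunay triangulation already accepts such a starting face as a hint to locate(). The paper's Keep, Jump, & Walk and Keep, Jump, & Climb strategies are more elaborate (sampling of previous queries, use of the Delaunay hierarchy).

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K>                   Delaunay;
typedef K::Point_2                                          Point;

// Locate a batch of queries, starting each walk from the face found for the previous query.
std::vector<Delaunay::Face_handle>
locate_with_previous_hint(const Delaunay& dt, const std::vector<Point>& queries) {
  std::vector<Delaunay::Face_handle> result;
  Delaunay::Face_handle hint;                 // default-constructed: the first walk starts anywhere
  for (const Point& q : queries) {
    hint = dt.locate(q, hint);                // the walk starts from the previous query's face
    result.push_back(hint);
  }
  return result;
}
```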

Delaunay Triangulation of Imprecise Points, Preprocess and Actually Get a Fast Query Time

Participant : Olivier Devillers.

Given a set of disks, we can preprocess them so that, given one point in each disk, we can compute the Delaunay triangulation of these points in linear time, provided the disks are disjoint unit disks [45]. The proposed method is much simpler than previous methods of this kind and is in practice faster than computing the Delaunay triangulation from scratch (without knowledge of the disks).

Oja Medians and Centers of Gravity

Participant : Olivier Devillers.

This work has been done in collaboration with Dan Chen and Pat Morin (Carleton Univ.), John Iacono (Polytechnic, NY), and Stefan Langerman (Univ. Bruxelles).

Given a point set S, various notions of depth can be defined. The Oja depth of a query point is the sum of the volumes of all simplices formed by the query and points of S, and an Oja center is a point minimizing the Oja depth. In this work, relations between the center of gravity and Oja centers are explored [28].
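
For concreteness, here is a brute-force sketch of the Oja depth in the plane (illustrative code, not from [28]): it sums the areas of the triangles formed by the query point and every pair of points of S, in $O(|S|^2)$ time per query.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Pt { double x, y; };

// Oja depth of q with respect to a planar point set S: sum of the areas of all triangles (q, a, b)
// over pairs {a, b} of points of S.
double oja_depth_2d(const Pt& q, const std::vector<Pt>& S) {
  double depth = 0.0;
  for (std::size_t i = 0; i < S.size(); ++i)
    for (std::size_t j = i + 1; j < S.size(); ++j) {
      // Twice the signed area of triangle (q, S[i], S[j]) via a cross product.
      double cross = (S[i].x - q.x) * (S[j].y - q.y) - (S[i].y - q.y) * (S[j].x - q.x);
      depth += 0.5 * std::fabs(cross);
    }
  return depth;
}
```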

Delaunay Triangulations of Point Sets in Closed Euclidean d -Manifolds

Participants : Manuel Caroli, Monique Teillaud.

We give a definition of the Delaunay triangulation of a point set in a closed Euclidean d-manifold, i.e. a compact quotient space of the Euclidean space by a discrete group of isometries (a so-called Bieberbach group, or crystallographic group). We describe a geometric criterion to check whether a partition of the manifold actually forms a triangulation (which in particular implies that it is a simplicial complex). We provide an algorithm to compute the Delaunay triangulation of the manifold for a given set of input points, if it exists. Otherwise, the algorithm returns the Delaunay triangulation of a finitely sheeted covering space of the manifold. The algorithm has optimal randomized worst-case time and space complexity.

Whereas there was prior work for the special case of the flat torus, as far as we know this is the first result for general closed Euclidean d-manifolds. This research is motivated by application fields, such as computational biology, that need to perform simulations in quotient spaces of the Euclidean space by more general groups of isometries than those generated by d independent translations [43], [26].
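
The following sketch illustrates only the notion of triangulating a finitely sheeted covering space, in the classical special case of the 2D flat torus (the prior work mentioned above): each point of the unit square is replicated into a 3x3 block of translated copies, and an ordinary Euclidean Delaunay triangulation of the copies is computed with cgal. The general algorithm described above is considerably more subtle, in particular in deciding when fewer sheets suffice for a given Bieberbach group.

```cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Delaunay_triangulation_2.h>
#include <vector>

typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Delaunay_triangulation_2<K>                   Delaunay;
typedef K::Point_2                                          Point;

// Delaunay triangulation of a 9-sheeted covering space of the unit-square flat torus:
// every input point is copied into the sheets (dx, dy) with dx, dy in {-1, 0, 1}.
Delaunay delaunay_of_9_sheeted_cover(const std::vector<Point>& pts_in_unit_square) {
  std::vector<Point> copies;
  for (const Point& p : pts_in_unit_square)
    for (int dx = -1; dx <= 1; ++dx)
      for (int dy = -1; dy <= 1; ++dy)
        copies.push_back(Point(p.x() + dx, p.y() + dy));  // translated copy in sheet (dx, dy)
  Delaunay dt;
  dt.insert(copies.begin(), copies.end());                // triangulation of the covering space
  return dt;
}
```

The triangulation returned lives in the covering space; identifying the copies to obtain a triangulation of the torus itself is precisely where a geometric criterion such as the one described above comes into play.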

Parallel Geometric Algorithms for Multi-Core Computers

Participant : Sylvain Pion.

In collaboration with Vicente Batista (former INRIA intern), David Millman from the University of North Carolina at Chapel Hill, Johannes Singler from Universität Karlsruhe, and Marc Jeanmoungin (INRIA intern from ENS Paris).

Computers with multiple processor cores using shared memory are now ubiquitous. We present several parallel geometric algorithms that specifically target this environment, with the goal of exploiting the additional computing power. The d-dimensional algorithms we describe are (a) spatial sorting of points, as typically used for preprocessing before running incremental algorithms, (b) kd-tree construction, (c) axis-aligned box intersection computation, and finally (d) bulk insertion of points into Delaunay triangulations, for mesh generation algorithms or simply for computing Delaunay triangulations. We show experimental results for these algorithms in 3D, using our implementations based on cgal. This work is a step towards what we hope will become a parallel mode for cgal, in which algorithms automatically use the available parallel resources without requiring significant user intervention [17].

We also started work on parallel mesh generation, built on top of our work just described.
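
As a flavor of ingredient (b) only, here is a minimal parallel kd-tree construction sketch using plain C++11 threads; it is not the paper's cgal-based implementation, and the depth threshold that controls thread creation is an arbitrary illustrative choice.

```cpp
#include <algorithm>
#include <array>
#include <future>
#include <memory>
#include <vector>

typedef std::array<double, 3> Point3;

struct KdNode {
  Point3 p;
  std::unique_ptr<KdNode> left, right;
};

typedef std::vector<Point3>::iterator It;

// Build a kd-tree over [first, last); subtrees near the root are built concurrently,
// deeper subtrees sequentially.
std::unique_ptr<KdNode> build(It first, It last, int depth) {
  if (first == last) return nullptr;
  const int axis = depth % 3;                         // cycle through x, y, z
  It mid = first + (last - first) / 2;
  std::nth_element(first, mid, last,
                   [axis](const Point3& a, const Point3& b) { return a[axis] < b[axis]; });
  std::unique_ptr<KdNode> node(new KdNode);
  node->p = *mid;
  if (depth < 3) {                                    // spawn tasks only near the root
    auto left = std::async(std::launch::async,
                           [first, mid, depth] { return build(first, mid, depth + 1); });
    node->right = build(mid + 1, last, depth + 1);
    node->left  = left.get();
  } else {
    node->left  = build(first, mid, depth + 1);
    node->right = build(mid + 1, last, depth + 1);
  }
  return node;
}
```

Spawning tasks only near the root keeps the number of concurrent tasks small (of the order of the number of cores) while leaving most of the work in cache-friendly sequential recursions.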

The Design of Core 2: A Library for Exact Numeric Computation in Geometry and Algebra

Participant : Sylvain Pion.

In collaboration with Jihun Yu (New York University), Chee Yap (New York University), Zilin Du (New York University) and Hervé Brönnimann (Polytechnic University Brooklyn).

There is growing interest in numeric-algebraic techniques in the computer algebra community, as such techniques can speed up many applications. This paper is concerned with one such approach, called Exact Numeric Computation (ENC). The ENC approach to algebraic number computation is based on iterative verified approximations combined with constructive zero bounds. This paper describes Core 2, the latest version of the Core Library, a package designed for applications such as non-linear computational geometry. The adaptive complexity of ENC, combined with filters, makes such libraries practical.

Core 2 smoothly integrates our algebraic ENC subsystem with transcendental functions and $\epsilon$-accurate comparisons. This paper describes how the design of Core 2 addresses key software issues such as modularity, extensibility, and efficiency in a setting that combines algebraic and transcendental elements. Our redesign preserves the original goals of the Core Library, namely to provide a simple and natural interface for ENC computation, to support rapid prototyping and exploration. We present examples, experimental results, and timings for our new system, released as Core Library 2.0 [34].
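
The filtering idea mentioned above can be illustrated on the simplest possible predicate (a generic sketch, not Core 2 code): evaluate a 2x2 determinant in floating point with a crude error bound, and fall back to exact arithmetic only when the bound cannot certify the sign. The exact fallback below assumes integer input coordinates and a compiler providing __int128 (GCC/Clang); Core 2 handles general algebraic and transcendental expressions.

```cpp
#include <cmath>
#include <limits>

// Exact fallback, assuming the coordinates are moderate integers stored exactly in doubles.
static int orientation_exact(long long ax, long long ay, long long bx, long long by,
                             long long cx, long long cy) {
  __int128 det = (__int128)(bx - ax) * (cy - ay) - (__int128)(by - ay) * (cx - ax);
  return (det > 0) - (det < 0);
}

// Filtered orientation predicate: fast floating-point evaluation with a rough error bound,
// exact computation only when the filter fails to certify the sign.
int orientation(double ax, double ay, double bx, double by, double cx, double cy) {
  double det = (bx - ax) * (cy - ay) - (by - ay) * (cx - ax);
  double mag = std::fabs((bx - ax) * (cy - ay)) + std::fabs((by - ay) * (cx - ax));
  double eps = 8.0 * std::numeric_limits<double>::epsilon() * mag;  // crude, conservative bound
  if (det >  eps) return  1;
  if (det < -eps) return -1;
  return orientation_exact((long long)ax, (long long)ay, (long long)bx,
                           (long long)by, (long long)cx, (long long)cy);
}
```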

On the Complexity of Sets of Free Lines and Line Segments Among Balls in Three Dimensions

Participant : Marc Glisse.

This work has been done in collaboration with Sylvain Lazard from EPI Vegas .

We present two new fundamental lower bounds on the worst-case combinatorial complexity of sets of free lines and sets of maximal free line segments in the presence of balls in three dimensions. We first prove that the set of maximal non-occluded line segments among n disjoint unit balls has complexity $\Omega(n^4)$, which matches the trivial $O(n^4)$ upper bound. This improves the trivial $\Omega(n^2)$ bound and also a previously known $\Omega(n^3)$ lower bound for the restricted setting of arbitrary-size balls. This result settles, negatively, the natural conjecture that this set of line segments, or, equivalently, the visibility complex, has smaller worst-case complexity for disjoint fat objects than for skinny triangles. We also prove an $\Omega(n^3)$ lower bound on the complexity of the set of non-occluded lines among n balls of arbitrary radii, improving on the trivial $\Omega(n^2)$ bound. This new bound almost matches the $O(n^{3+\epsilon})$ upper bound obtained recently by Rubin [29].

Reverse Nearest Neighbors Search in High Dimensions using Locality-Sensitive Hashing

Participant : Steve Oudot.

In collaboration with David Arthur (Stanford then Google).

We investigate the problem of finding reverse nearest neighbors efficiently. Although provably good solutions exist for this problem in low or fixed dimensions, to date the methods proposed in high dimensions are mostly heuristic. We introduce a method that is both provably correct and efficient in all dimensions, based on a reduction of the problem to one instance of $\epsilon$-nearest neighbor search plus a controlled number of instances of exhaustive r-PLEB, a variant of Point Location among Equal Balls in which all the r-balls centered at the data points that contain the query point are sought, not just one. The former problem has been extensively studied and elegantly solved in high dimensions using Locality-Sensitive Hashing (LSH) techniques. By contrast, the complexity of the latter problem is still not fully understood. We revisit the analysis of the LSH scheme for exhaustive r-PLEB using a somewhat refined notion of locality-sensitive family of hash functions, which brings out a meaningful output-sensitive term in the complexity of the problem. Our analysis, combined with a non-isometric lifting of the data, enables us to answer exhaustive r-PLEB queries (and, down the road, reverse nearest neighbor queries) efficiently. Along the way, we obtain a simple algorithm for answering exact nearest neighbor queries, whose complexity is parametrized by a condition number measuring the inherent difficulty of a given instance of the problem [41].
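
For readers unfamiliar with the problem, the following brute-force sketch (illustrative only, unrelated to the LSH machinery of [41]) computes the reverse nearest neighbors of a query directly from the definition, in $O(n^2)$ time: a data point p is reported when the query q is at least as close to p as every other data point.

```cpp
#include <cstddef>
#include <vector>

struct Pt { std::vector<double> coord; };   // all points are assumed to have the same dimension

static double sq_dist(const Pt& a, const Pt& b) {
  double s = 0.0;
  for (std::size_t i = 0; i < a.coord.size(); ++i) {
    double d = a.coord[i] - b.coord[i];
    s += d * d;
  }
  return s;
}

// Indices of the data points having q as a nearest neighbor.
std::vector<std::size_t> reverse_nearest_neighbors(const std::vector<Pt>& data, const Pt& q) {
  std::vector<std::size_t> result;
  for (std::size_t i = 0; i < data.size(); ++i) {
    bool q_is_nearest = true;
    for (std::size_t j = 0; j < data.size() && q_is_nearest; ++j)
      if (j != i && sq_dist(data[i], data[j]) < sq_dist(data[i], q))
        q_is_nearest = false;               // some other data point is strictly closer to data[i]
    if (q_is_nearest) result.push_back(i);  // q is a nearest neighbor of data[i]
  }
  return result;
}
```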

Certified Complex Root Isolation via Adaptive Root Separation Bounds

Participant : Michael Hemmer.

In collaboration with Michael Sagraloff from MPII and Michael Kerber from IST.

We address the problem of root isolation for polynomial systems: for an affine, zero-dimensional polynomial system of N equations in N variables, we describe an algorithm to encapsulate all complex solutions into disjoint regions, each containing precisely one solution (called isolating regions). Our approach also computes the multiplicity of each solution. The main novelty is a new approach to certify that a set of computed regions is indeed isolating. It is based on an adaptive root separation bound obtained by combining information about the approximate location of the roots with resultant calculus. A simple subdivision method is used to determine the number of roots within certain regions. The resultant calculus takes place only over prime fields, which avoids the disadvantageous coefficient growth of symbolic methods without sacrificing the exactness of the output. The presented approach is complete for uni- and bivariate systems and, in general, applies in higher dimensions as well, possibly after a coordinate change.

A Complete, Exact and Efficient Implementation for Computing the Edge-Adjacency Graph of an Arrangement of Quadrics

Participant : Michael Hemmer.

In collaboration with Sylvain Petitjean and Laurent Dupont from EPI Vegas and Elmar Schömer from the University of Mainz.

Figure 7. Arrangement of quadrics

We present a complete, exact and efficient implementation to compute the edge-adjacency graph of an arrangement of quadrics, i.e. surfaces of algebraic degree 2 (Figure 7). This is a major step towards the computation of the full 3D arrangement. We enhanced an implementation for the exact parameterization of the intersection curves of two quadrics, so that we can compute the exact parameter values of intersection points and, from those, the edge-adjacency graph of the arrangement. Our implementation is complete in the sense that it can handle all kinds of inputs, including all degenerate ones, i.e. singularities or tangential intersection points. It is exact in that it always computes the mathematically correct result. It is efficient in terms of running times, i.e. it compares favorably to the only previous implementation [19].

Constructing the Exact Voronoi Diagram of Arbitrary Lines in Space, with Fast Point-Location

Participant : Michael Hemmer.

In collaboration with Ophir Setter and Dan Halperin from the University of Tel Aviv.

Supplementary material, and in particular the prototype code of our implementation, can be found on the website: http://acg.cs.tau.ac.il/projects/internal-projects/3d-lines-vor/project-page

We introduce a new, efficient, and complete algorithm, and its exact implementation, to compute the Voronoi diagram of lines in space (Figure 8). This is a major milestone towards the robust construction of the Voronoi diagram of polyhedra. As we follow the exact geometric-computation paradigm, it is guaranteed that we always compute the mathematically correct result. The algorithm is complete in the sense that it can handle all configurations, in particular all degenerate ones. The algorithm requires $O(n^{3+\epsilon})$ time and space, where n is the number of lines. The Voronoi diagram is represented by a data structure that permits answering point-location queries in $O(\log^2 n)$ expected time. The implementation employs the cgal packages for constructing arrangements and lower envelopes together with advanced algebraic tools [30], [46].

Figure 8. Voronoi diagram of lines

A Generic Algebraic Kernel for Non-linear Geometric Applications

Participant : Michael Hemmer.

In collaboration with Eric Berberich from MPII and Michael Kerber from IST.

We report on a generic (uni- and bivariate) algebraic kernel that becomes publicly available with cgal 3.7. It comprises complete, correct, and efficient state-of-the-art implementations for polynomials, roots of polynomial systems, and the analysis of algebraic curves defined by bivariate polynomials. The kernel comes with a ready-to-use interface for arrangements induced by algebraic curves, which has already been used as the basis for various geometric applications, such as arrangements on Dupin cyclides or the triangulation of algebraic surfaces. We present two novel applications: arrangements of rotated algebraic curves (Figure 9) and Boolean set operations on polygons bounded by segments of algebraic curves. We also provide exhaustive experiments showing that our implementation is competitive and often outperforms existing implementations for non-linear curves available in cgal, which demonstrates the general usefulness of the presented software [42].

Figure 9. Arrangements of rotated algebraic curves
