## Section: New Results

### Geometric analysis

#### Semantic analysis of non-coherent geometry

Aurélien Martinet started his PhD in 2003, under the supervision of Cyril Soler and Nicolas Holzschuch, working on the automatic extraction of semantic information from non-coherent geometry. This work aims at answering a recurrent need in computer graphics: most researchers work with 3D scene data from which they need high-level information, such as which groups of polygons form connected shapes, correspond to human-recognizable objects, exhibit symmetries, or resemble each other (also known as *instancing* information). Unfortunately, such high-level information is most of the time absent from 3D geometry files, either because it was lost during format conversions, or because it was never defined in that form by the designer of the model.

The question to be solved is thus how to automatically retrieve such high-level (also named *semantic*) information from a *polygon soup*, i.e. a list of polygons without any information about how these polygons are related to each other. During the past year, Aurélien has focused on developing a new technique for automatically extracting instantiation information from a scene, building on the work he previously performed for extracting symmetries of objects [8]. Figure 19 shows an example of an instancing graph automatically obtained with this method: this structure is a Directed Acyclic Graph where each node is associated with a "generic object" instantiated in the scene, and each edge represents the geometric transformation of one instance.

Figure 19. From the input model (top center), we compute a hierarchy of instances which gives a "structure" to the model. This structure is a Directed Acyclic Graph where each node is associated with a "generic object" instantiated in the scene, and each edge represents the geometric transformation of one instance. For clarity, we replace n multiple edges between two nodes with a single edge labeled n. On each side of the figure, we present two examples of instances detected by our method. Our basic assumption is that the input model is completely unstructured and is therefore given as a polygon soup.
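To fix ideas, the instancing structure described above can be sketched as a small graph data type. The names below (`Node`, `add_instance`) are hypothetical illustrations, not the implementation used in this work: nodes stand for generic objects, and each edge stores the transformation placing one instance of a child object.

```python
# Minimal sketch (hypothetical names) of an instancing DAG: each node is a
# "generic object", each outgoing edge carries the transformation that places
# one instance of a child object.
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    # List of (child Node, transformation) pairs; one entry per instance.
    children: list = field(default_factory=list)

    def add_instance(self, child, transform):
        """Record that `child` is instantiated under this node with `transform`."""
        self.children.append((child, transform))

    def count_instances(self, target):
        """Total number of times `target` is instantiated in the subgraph rooted here."""
        total = sum(1 for c, _ in self.children if c is target)
        for c, _ in self.children:
            total += c.count_instances(target)
        return total

# Example: a table whose four legs are instances of one generic "leg" object.
leg = Node("leg")
table = Node("table")
for dx, dz in [(-1, -1), (-1, 1), (1, -1), (1, 1)]:
    table.add_instance(leg, ("translate", dx, 0, dz))

print(table.count_instances(leg))  # → 4
```

In the figure's convention, the four parallel edges from `table` to `leg` would be drawn as one edge labeled 4.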

#### On Exact Error Bounds for View-Dependent Simplification

In view-dependent simplification, an object is simplified so that the difference between the original and simplified versions, as seen from a given viewcell, is bounded by a given error. The error is the maximum reprojection error, that is, the distance between the projection of a point in the image and the projection of its counterpart in the simplified version.
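The reprojection error for a single point can be made concrete with a small sketch. The pinhole camera model and the function names below are illustrative assumptions, not the paper's exact setup: we project both the original point and its moved counterpart from one viewpoint and measure the image-space distance between the two projections.

```python
# Hedged sketch of the per-point reprojection error under a simple pinhole
# camera looking down +z (an assumed model, for illustration only).
import math

def project(point, eye, focal=1.0):
    """Project a 3D point to 2D image coordinates from viewpoint `eye`."""
    x, y, z = (p - e for p, e in zip(point, eye))
    return (focal * x / z, focal * y / z)

def reprojection_error(original, simplified, eye):
    """Image-space distance between the projections of the two points."""
    u0, v0 = project(original, eye)
    u1, v1 = project(simplified, eye)
    return math.hypot(u1 - u0, v1 - v0)

eye = (0.0, 0.0, 0.0)
p = (1.0, 0.0, 10.0)   # original surface point
q = (1.0, 0.2, 10.0)   # position after simplification
print(reprojection_error(p, q, eye))  # → 0.02 for this viewpoint
```

The simplification error bound requires this quantity to stay below a threshold for *every* viewpoint in the viewcell, not just the one sampled here.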

To guarantee an error bound, one must know how far a point can be moved from its original position while still satisfying the reprojection error bound. This defines the validity region of the point. Surprisingly, finding this region is a very difficult geometric problem. Elmar Eisemann worked on it during his Master's thesis and obtained important results. For example, the error bound cannot be checked only at the vertices of the mesh. Also, the maximum reprojection error is not necessarily observed at one of the corners of the viewcell. Finally, he showed how to compute the validity region exactly in the 2D case and opened the way to an extension to the 3D case. The proof is elegant and very innovative. It provides the first exact bound on view-dependent simplification error; in contrast, previously published bounds were often only approximate (though sufficient for the considered application). The results have been accepted as a journal paper [6].
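Since the maximum error over a viewcell need not occur at a corner, a common approximate (non-exact) alternative is to sample the viewcell densely. The sketch below illustrates this for a 2D line viewcell with a simple pinhole projection; all names and the camera model are assumptions for illustration, not the exact method of the paper.

```python
# Approximate check of the reprojection error bound over a 2D line viewcell by
# dense sampling (an illustration of why exact bounds are valuable: sampling
# is only as reliable as its density).

def project(point, eye, focal=1.0):
    """2D pinhole projection: points and eyes are (x, z) pairs, image is 1D."""
    x, z = (p - e for p, e in zip(point, eye))
    return focal * x / z

def reprojection_error(original, simplified, eye):
    return abs(project(simplified, eye) - project(original, eye))

def max_error_sampled(original, simplified, cell_a, cell_b, n=1000):
    """Max reprojection error over n+1 viewpoints sampled on segment [a, b]."""
    worst = 0.0
    for i in range(n + 1):
        t = i / n
        eye = tuple(a + t * (b - a) for a, b in zip(cell_a, cell_b))
        worst = max(worst, reprojection_error(original, simplified, eye))
    return worst

p, q = (1.0, 10.0), (1.3, 10.5)   # original point and its simplified position
a, b = (-2.0, 0.0), (2.0, 0.0)    # endpoints of the line viewcell
sampled = max_error_sampled(p, q, a, b)
corners = max(reprojection_error(p, q, e) for e in (a, b))
print(sampled >= corners)  # → True: sampling never reports less than corners alone
```

An exact characterization of the validity region, as derived in [6], removes the dependence on sampling density entirely.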

Figure 20. Validity regions of points that bound the reprojection error as seen from a line viewcell. We are able to describe these regions exactly.
