
Section: New Results

Matching and localization

Participants: Marie-Odile Berger, Vincent Gaudilliere, Gilles Simon, Frédéric Sur, Matthieu Zins.

View synthesis for efficient and accurate pose computation

Estimating the pose of a camera from a scene model is challenging when the camera lies in a position not covered by the views used to build the model, because feature matching then becomes difficult. Several viewpoint simulation techniques have recently been proposed in this context, but they generally come with a high computational cost, are limited to specific scenes such as urban environments or object-centred scenes, or need an initial pose guess. A new method based on viewpoint simulation is presented in [15]. In this article, we show that view synthesis dramatically improves pose computation and that both the synthesis process and the pose computation can be carried out very efficiently. Two major problems are specifically addressed: the positioning of the virtual viewpoints with respect to the scene, and the synthesis of geometrically consistent patches. Experiments show that patch synthesis substantially improves pose accuracy in difficult registration cases, at a limited computational cost.
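The virtual-viewpoint positioning step can be illustrated with a minimal sketch: virtual cameras are sampled on a sphere around the scene and each one is oriented toward the scene centroid with a look-at rotation. This is only a generic illustration of viewpoint placement, not the actual positioning strategy of [15]; the function names and the sampling scheme are our own assumptions.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """World-to-camera rotation R and translation t for a camera at `eye`
    whose optical axis (+z) points toward `target` (OpenCV convention)."""
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    z = target - eye
    z = z / np.linalg.norm(z)        # optical axis
    x = np.cross(up, z)
    x = x / np.linalg.norm(x)        # camera x-axis (right)
    y = np.cross(z, x)               # camera y-axis completes the frame
    R = np.stack([x, y, z])          # rows are the camera axes in world coords
    t = -R @ eye
    return R, t

def virtual_viewpoints(centroid, radius, n_azimuth=8, elevation_deg=20.0):
    """Sample virtual camera poses on a ring of a sphere around the scene."""
    el = np.deg2rad(elevation_deg)
    poses = []
    for az in np.linspace(0.0, 2 * np.pi, n_azimuth, endpoint=False):
        eye = centroid + radius * np.array(
            [np.cos(el) * np.cos(az), np.sin(el), np.cos(el) * np.sin(az)])
        poses.append(look_at(eye, centroid))
    return poses
```

Each returned pose maps the scene centroid onto the positive optical axis of its virtual camera, so a synthesized patch rendered from that pose faces the scene region of interest.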

Localization from objects

We are interested in AR applications that take place in man-made, GPS-denied environments such as industrial or indoor scenes. In such environments, relocalization may fail due to repeated patterns and to large changes in appearance that occur even for small changes in viewpoint. This year, we have investigated a new relocalization method that operates at the level of objects and takes advantage of the impressive progress made in object detection. Recent works have opened the way towards object-oriented reconstruction from elliptic approximations of objects detected in images. We have gone beyond that and proposed a new method for pose computation based on ellipse/ellipsoid correspondences. In [18], we proved that a closed-form estimate of the translation can be uniquely inferred from the rotation matrix of the pose; the pose cannot, however, be uniquely computed from a single correspondence. When two or more correspondences are available, the rotation matrix is deduced through an optimization problem with three degrees of freedom. In [19], we consider the common practical case where an initial guess of the rotation matrix is known, for instance from an inertial sensor or from the estimation of orthogonal vanishing points [10]; the translation is then recovered as in [18], [24]. We demonstrated the effectiveness of the method on real scenes, using sets of object detections generated by YOLO [33]. Overall, considering pose at the level of objects avoids common failures due to repeated structures. In addition, since object correspondences induce a small combinatorial search space, our method is well suited to fast rough localization, even in large environments.
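The geometric backbone of ellipse/ellipsoid correspondences is the classical projection of a dual quadric: an ellipsoid written as a 4x4 dual quadric Q* projects through a camera matrix P to the dual conic C* = P Q* P^T of its image ellipse. The sketch below illustrates only this standard relation, not the pose-recovery derivations of [18], [19]; the function names are our own.

```python
import numpy as np

def ellipsoid_dual_quadric(center, axes, R=np.eye(3)):
    """4x4 dual quadric Q* of an ellipsoid with given center, semi-axes
    (a, b, c) and orientation R: Q* = Z diag(a^2, b^2, c^2, -1) Z^T."""
    Z = np.eye(4)
    Z[:3, :3] = R
    Z[:3, 3] = center
    D = np.diag([axes[0] ** 2, axes[1] ** 2, axes[2] ** 2, -1.0])
    return Z @ D @ Z.T

def project_to_dual_conic(P, Q_dual):
    """Image of the ellipsoid under the 3x4 camera P: C* = P Q* P^T."""
    return P @ Q_dual @ P.T

def ellipse_center(C_dual):
    """Center of the ellipse encoded by a dual conic C* (homogeneous form)."""
    return C_dual[:2, 2] / C_dual[2, 2]
```

As a sanity check, a sphere centered on the optical axis of a camera P = K [I | 0] projects to an ellipse centered exactly at the principal point of K.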

A patent on this method was filed in May 2019 [27]. An Inria technological transfer action (ATT) on object-based localization will start in January 2020, with the aim of producing a demonstrator for industrial maintenance in complex environments.