Team arobas


Section: New Results

Advanced perception for mobile robotics

Participants : Ezio Malis, Pascal Morin, Adan Salazar, Rémi Desouche.

The realization of complex robotic applications, such as the autonomous exploration of large-scale environments or observation and surveillance by aerial robots, requires developing and combining methods from various research domains: sensor modeling, active perception, visual tracking and servoing, etc. This raises several issues.

Self-calibration of central omnidirectional cameras

Omnidirectional cameras are important in areas where large visual-field coverage is needed, such as motion estimation and obstacle avoidance. Their main advantage is the ability to see a large part of the surrounding scene. However, their practical use is often hindered by the calibration phase, which can be time consuming and requires an experienced user. Accurate calibration of a vision system is necessary for any computer vision task that extracts metric information about the environment from 2D images. The present work was motivated by the desire to facilitate the adoption of central omnidirectional cameras in robotics by avoiding awkward calibration steps. Although omnidirectional camera calibration is well understood, no method is known that can robustly self-calibrate any central omnidirectional camera on-line. Most existing self-calibration methods are off-line and assume a specific mirror (e.g. hyperbolic or parabolic) or projection model (skewness, alignment errors, ...). This research therefore concentrates on the on-line self-calibration of any central omnidirectional camera.

Another motivation was the lack of a theoretical proof of the uniqueness of the camera calibration. In most works on omnidirectional camera calibration it has been observed that, in the case of a non-planar mirror, two images acquired from different points of view suffice to calibrate an omnidirectional camera. However, to our knowledge, no theoretical proof of the uniqueness of this solution had yet been provided.
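For reference, the standard unified model for central catadioptric cameras (projection of a 3D point onto the unit sphere, followed by a perspective projection whose center is shifted by a mirror parameter ξ) can be sketched as follows; the parameter values are hypothetical and chosen purely for illustration:

```python
import math

def project_unified(X, Y, Z, xi, f, u0, v0):
    """Project a 3D point with the unified central catadioptric model:
    the point is first projected onto the unit sphere, then projected
    perspectively from a center shifted by xi along the optical axis.
    xi = 0 reduces to a standard perspective camera; xi = 1 corresponds
    to a parabolic mirror."""
    rho = math.sqrt(X * X + Y * Y + Z * Z)
    denom = Z + xi * rho                 # shifted perspective division
    x, y = X / denom, Y / denom          # normalized image coordinates
    return f * x + u0, f * y + v0        # pixel coordinates

# Hypothetical intrinsic parameters, for illustration only.
u, v = project_unified(0.5, 0.2, 2.0, xi=0.8, f=400.0, u0=320.0, v0=240.0)
```

With xi = 0 the same function behaves as a pinhole projection, which is one way to see why a single parameterization can cover all central catadioptric systems.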

Three new results have been obtained this year:

Algorithm for the visual tracking of a plane with an uncalibrated catadioptric camera

An algorithm has been proposed to efficiently track a plane in an omnidirectional image without requiring prior calibration of the sensor. The approach is very promising because all estimated parameters are integrated into a single global warping function. The resulting non-linear optimization problem can be solved for the small displacements between two images typical of video-rate acquisition by a camera mounted on a robot. This algorithm can be very helpful in any application for which camera calibration is impossible or hard to obtain. This result is reported in the publication [30] presented at the IROS 2009 conference.
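The spirit of such warp-based tracking can be conveyed by a deliberately simplified sketch: Gauss-Newton minimization of a sum-of-squared-differences cost for a pure 1-D translation, standing in for the single global warping function of the actual algorithm (which also folds in the catadioptric parameters). All signals and values below are synthetic:

```python
import math

def sample(signal, x):
    """Linearly interpolate a 1-D 'image' at a real-valued coordinate."""
    i = int(x)
    if i < 0 or i + 1 >= len(signal):
        return 0.0
    a = x - i
    return (1 - a) * signal[i] + a * signal[i + 1]

def track_translation(ref, cur, t0=0.0, iters=30):
    """Gauss-Newton minimization of the squared photometric error between
    ref and the warped cur, for the toy warp w(x; t) = x + t. The real
    tracker optimizes a single global warping function of the plane and
    the (uncalibrated) catadioptric projection instead of a mere shift."""
    t = t0
    for _ in range(iters):
        num = den = 0.0
        for x in range(1, len(ref) - 1):
            # Gradient of the warped current image (central difference).
            g = 0.5 * (sample(cur, x + t + 1) - sample(cur, x + t - 1))
            r = sample(cur, x + t) - ref[x]   # photometric residual
            num += g * r
            den += g * g
        if den == 0.0:
            break
        t -= num / den                        # Gauss-Newton update on t
    return t

# Synthetic check: cur is ref shifted by 2.5 samples.
def f(x):
    return math.exp(-(x - 15.0) ** 2 / 20.0)

ref = [f(x) for x in range(30)]
cur = [f(x - 2.5) for x in range(30)]
t_est = track_translation(ref, cur)
```

As in the full algorithm, convergence relies on the displacement between the two images being small, which is the case for consecutive frames acquired at video rate.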

Direct approach for the self-calibration of catadioptric cameras

A simplification of the calibration phase is proposed, via a direct approach to the on-line self-calibration of catadioptric cameras. The plane-tracking algorithm described above is applied, and several of the tracked views are used to calibrate the sensor. The proposed method is more flexible than standard procedures since it avoids tedious calibration steps, which should facilitate the adoption of omnidirectional sensors in robotics. This result is reported in the publication [31] presented at the OMNIVIS 2009 Workshop.

Proof of the uniqueness of the solution for the calibration of catadioptric cameras

This is an important contribution because no theoretical proof of the uniqueness of the solution for the calibration of catadioptric cameras had been proposed so far. We have formalized the calibration problem using a unified model that is valid for all central catadioptric systems. Furthermore, we have shown that the uniqueness of the solution can be derived from the solution of a set of non-linear equations, which we have solved in the general case.

Fusion of visual and inertial data for pose estimation

Estimating the pose (i.e. position and orientation) of a vehicle with respect to its environment from onboard sensor measurements is a fundamental problem for many robotic applications. In the case of aerial robotics, for example, this problem is critical because fully autonomous control modes require an accurate estimate of the vehicle's pose. For the small vehicles typically used in robotics, purely inertial solutions cannot be used due to the prohibitive cost and weight of the associated sensors. Solutions based on GPS and low-cost/low-weight IMUs (Inertial Measurement Units) have been developed recently, but the pose estimation accuracy thus obtained remains limited and, obviously, these solutions cannot be used when GPS information is unavailable. For these reasons there is growing interest in the development of vision-based algorithms, which can provide alternative solutions with limited weight and cost. Due to the relatively low bandwidth of visual sensors and the problems associated with illumination changes, there is an interest in complementing artificial vision with an IMU. We have initiated this year a study on the fusion of visual and inertial data for pose estimation, with the objective of applying this solution to aerial robotics. The approach that we have investigated consists in using the ESM algorithm [1] to obtain a first pose estimate, which is then used as a correction term in a nonlinear observer driven by the IMU measurements. The state of this observer is augmented so as to allow for the estimation of biases in the IMU measurements. This work, reported in [36], has been carried out as part of Rémi Desouche's internship. Glauco Scandaroli, who has just started a Ph.D. in our team, will take over this study.
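The structure of such an observer can be illustrated on a scalar toy problem: an angle estimate propagated with a biased gyro measurement and corrected by lower-rate vision measurements, with the gyro bias as an extra observer state. The gains, rates, and scalar setting below are illustrative assumptions, not the observer of [36]:

```python
def observer_step(theta_hat, b_hat, omega_meas, theta_vis, dt, k1=2.0, k2=1.0):
    """One step of an observer driven by a (biased) rate-gyro measurement.
    When a vision-based angle estimate is available, the innovation
    corrects the angle estimate and drives the gyro-bias estimate."""
    err = (theta_vis - theta_hat) if theta_vis is not None else 0.0
    theta_hat += dt * (omega_meas - b_hat + k1 * err)
    b_hat += dt * (-k2 * err)   # a lagging estimate (err < 0) raises b_hat
    return theta_hat, b_hat

# Synthetic run: constant true rate, constant gyro bias,
# vision available at one tenth of the IMU rate.
dt, bias, omega_true = 0.01, 0.3, 1.0
theta = theta_hat = b_hat = 0.0
for k in range(5000):
    theta += dt * omega_true                      # true angle
    omega_meas = omega_true + bias                # biased gyro reading
    theta_vis = theta if k % 10 == 0 else None    # low-rate vision fix
    theta_hat, b_hat = observer_step(theta_hat, b_hat,
                                     omega_meas, theta_vis, dt)
```

With these gains both the angle error and the bias estimation error converge over the simulated 50 s; the actual observer operates on the full pose (position and orientation on SE(3)) rather than a single angle.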
