Team arobas


Section: New Results

Perception and autonomous navigation

Participants : Patrick Rives, Pascal Morin, Andrew Comport, Tarek Hamel, Alexandre Chapoulie, Gabriela Gallegos, Cyril Joly, Maxime Meilland, Glauco Scandaroli.

Nonlinear state estimation for feedback control

Nonlinear filter design for pose estimation and online IMU calibration

This work deals with pose (i.e. position and attitude) estimation via the fusion of sensory measurements. Given pose and inertial measurements, the objective is to obtain estimates that best exploit the characteristics of the different measurements, while coping with measurement biases. This issue is fundamental for ground and aerial robotic applications, for which a precise knowledge of the robot location is often required. In general, high-frequency measurements, typically from 50 Hz up to 1 kHz, of rotational velocity and specific translational acceleration are provided by an inertial measurement unit (IMU). As a result of the manufacturing characteristics of micro-electro-mechanical sensors (MEMS), IMU measurements are corrupted by additive noise and by an offset, also known as measurement bias. As a consequence, estimates obtained by integrating IMU data alone tend to drift away after a few seconds. This drift can be reduced via a good calibration (estimation) of the sensor biases, even though the time-invariant model used for the bias is only an approximation which, moreover, does not account for temperature variation effects. To this aim, other sensors that provide explicit or implicit attitude and position measurements must be employed. A nonlinear observer that employs the passive complementary filter for attitude estimation, together with a novel observer for position estimation and accelerometer bias calibration, has been developed. Global exponential stability of this observer is proved independently of the angular velocity. The implementation of the proposed method is simpler than that of Kalman-based estimators. Experimental validation of this solution using a camera for pose measurements is in progress. Furthermore, online calibration of other parameters (such as the rigid transformations between the different sensor frames) is also investigated. Part of this work has been submitted for publication at the next ICRA (IEEE Conf. on Robotics and Automation). This work is also being used within the Eco-Industrie collaborative project RAPACE, and is part of Glauco Scandaroli's Ph.D. work.
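To illustrate the kind of fusion described above, the following minimal sketch shows a complementary-filter-style pose observer that combines high-rate IMU readings with low-rate pose measurements (e.g. from a camera) while adapting an accelerometer bias estimate. It is not the observer of the paper: the gain values, correction terms and bias adaptation law are illustrative assumptions.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix such that skew(v) @ w == np.cross(v, w)."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def expm_so3(phi):
    """Rodrigues' formula: matrix exponential of skew(phi)."""
    angle = np.linalg.norm(phi)
    if angle < 1e-9:
        return np.eye(3) + skew(phi)
    K = skew(phi / angle)
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def imu_pose_observer_step(R, p, v, b_a, omega, a, R_meas, p_meas, dt,
                           k_R=1.0, k_p=1.0, k_v=2.0, k_b=0.1):
    """One step of an illustrative pose observer (assumed gains and structure).
    R, p, v, b_a  : current estimates (rotation, position, velocity, accel bias)
    omega, a      : gyro and accelerometer readings (body frame)
    R_meas, p_meas: pose measurement from the complementary sensor (e.g. vision)
    """
    g = np.array([0.0, 0.0, -9.81])
    # Attitude correction in the spirit of the passive complementary filter
    R_err = R.T @ R_meas
    w_corr = 0.5 * np.array([R_err[2, 1] - R_err[1, 2],
                             R_err[0, 2] - R_err[2, 0],
                             R_err[1, 0] - R_err[0, 1]])
    R = R @ expm_so3((omega + k_R * w_corr) * dt)
    # Translational dynamics with bias-corrected accelerometer and position correction
    p_err = p_meas - p
    a_world = R @ (a - b_a) + g
    p = p + dt * (v + k_p * p_err)
    v = v + dt * (a_world + k_v * p_err)
    b_a = b_a - dt * k_b * (R.T @ p_err)   # slow bias adaptation (illustrative)
    return R, p, v, b_a
```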

Observers on the Lie group SL(3) of image homographies

A key tool in mono-camera vision and vision-based control is the Lie group SL(3), which can be associated with the set of homography matrices that relate image points of two views of a planar scene. Homographies have been used in many vision-based control schemes, for different types of robotic systems, because they implicitly contain the Euclidean information that can be retrieved from a mono-camera visual system. Several algorithms have been proposed in the past for computing the homography between two views of a planar scene. Understandably, these algorithms only provide in practice an imperfect estimate of the "true" homography. Furthermore, for visual-servoing applications which require high-rate real-time estimates, algorithms based on optimization methods may fail to provide sufficiently accurate results. For all these reasons, it can be useful to use complementary sensors to improve the quality of the computed homography. For example, inertial sensors constitute an interesting possibility, since their high bandwidth can compensate for the relatively low bandwidth of visual sensors.
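As a concrete reminder of the Euclidean information a homography encodes, the sketch below builds the calibrated homography from a relative pose and a plane, and normalizes it onto SL(3) (unit determinant). Function names and the normalization convention are illustrative assumptions.

```python
import numpy as np

def homography_from_pose(R, t, n, d):
    """Euclidean homography H ~ R + (1/d) t n^T, projected onto SL(3) (det(H) = 1).
    Assumes the observed plane stays in front of the camera, so det(H) > 0."""
    H = R + np.outer(t, n) / d
    return H / np.cbrt(np.linalg.det(H))

# Example: a pure rotation (t = 0) gives H = R, which already has unit determinant.
H = homography_from_pose(np.eye(3), np.array([0.1, 0.0, 0.0]),
                         np.array([0.0, 0.0, 1.0]), d=2.0)
```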

In this work, gyrometer measurements, possibly complemented with velocity measurements, are used to improve homography estimates. Nonlinear observers with semi-global stability properties are derived. Different cases of practical interest are considered, depending on whether velocity measurements are available or not. Under some conditions on the camera motion, these observers can also provide complementary information on the scene structure, such as the normal to the observed plane. This joint work with R. Mahony (Australian National University) and E. Malis (RoboCortex) will soon be submitted for publication.
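The following minimal sketch, which is an assumption and not the submitted observer, illustrates how gyrometer data can propagate a homography estimate between vision updates: under a rotation-only motion model, the calibrated homography evolves with the camera rotation, and the prediction is re-projected onto SL(3) after each step.

```python
import numpy as np
from scipy.linalg import expm

def skew(w):
    return np.array([[0, -w[2], w[1]], [w[2], 0, -w[0]], [-w[1], w[0], 0]])

def predict_homography(H, omega, dt):
    """Propagate an SL(3) homography estimate with a gyro reading (rotation-only model)."""
    # Sign and side of the multiplication depend on the frame convention (assumption).
    H_pred = expm(-skew(omega) * dt) @ H
    return H_pred / np.cbrt(np.linalg.det(H_pred))
```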

Simultaneous Localization And Mapping (SLAM)

6-DOF Bearing-only SLAM

Bearing-only SLAM is the classic formulation of the SLAM problem in the case of monocular vision, where the range is not directly observable. The work started last year with an omnidirectional camera was pursued. We developed an enhanced version of the Inverse Depth Parameterization first introduced in [43]. Inverse Depth Parameterization is generally used with an overparameterized Extended Kalman Filter (EKF), which can lead to inconsistencies. It has been shown that, in the Smoothing and Mapping (SAM) formulation (where the full trajectory is updated at each time step, in contrast to the EKF, which filters only the current robot pose), it is possible to define a non-overparameterized state which leads to much better results than with the EKF. The approach was tested and validated with data provided by an omnidirectional camera mounted on our indoor mobile robot. In a first experiment, the robot was driven on a ramp (3D trajectory), which highlights the inconsistency of the EKF and, conversely, the accuracy of the SAM formulation. In a second experiment, the robot was constrained to follow a planar trajectory (the estimates of the roll and pitch angles and of the z coordinate should be zero). Once again, only our algorithm respected this property, although no hypothesis was made concerning the trajectory (Figure 3). These results were presented at the ICINCO'10 conference [32].
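For readers unfamiliar with the parameterization, the sketch below shows the standard conversion from inverse-depth parameters (an anchor position, a bearing and an inverse depth) back to a 3D landmark. The angle convention and variable names are illustrative assumptions, not the exact formulation of our enhanced version.

```python
import numpy as np

def inverse_depth_to_point(anchor, azimuth, elevation, rho):
    """Recover the 3D landmark position from inverse-depth parameters.
    anchor: camera position at first observation; rho: inverse depth (small = far)."""
    direction = np.array([np.cos(elevation) * np.cos(azimuth),
                          np.cos(elevation) * np.sin(azimuth),
                          np.sin(elevation)])
    return anchor + direction / rho
```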

Figure 3. Results of the 6-DOF visual SAM algorithm for a 2D trajectory. Left: the altitude estimate is very close to zero. Right: the roll and pitch estimates are also very close to zero.
Auto-calibration of the camera with respect to the odometry for 2D trajectories

In the case of 2D trajectories, fusing visual data and odometry information to solve the SLAM problem is a well-known approach. The idea is to use the odometry as the input of the motion prediction equation and the visual data in the measurement equation. However, such a method requires the system to be well calibrated: the position of the camera with respect to the odometry frame has to be known. To our knowledge, there is no method to perform this calibration automatically. We propose to augment the state of the SAM formulation with the unknown camera parameters (expressed with respect to the odometry frame). This method requires adapting a few Jacobians with respect to the original SAM algorithm, which assumes that these parameters are known. However, this approach can lead to instabilities or divergence if the augmented system is not observable. To deal with this issue, a complete observability analysis was performed. We demonstrated that the camera parameters are observable if and only if the radius of curvature of the robot's trajectory is not constant; in practice, the robot has to make several turns. Results are presented in Figure 4. The initial values of the camera parameters are set to zero, and the reference values were measured by hand on the robot. It can be seen that the estimation of the camera parameters starts when the radius of curvature changes, which was the expected result. The final values provided by the algorithm are very close to the reference values, and the robot trajectory and the map obtained are consistent. This work has been submitted to the International Conference on Robotics and Automation 2011 (ICRA'11).
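The sketch below shows the composition that sits behind the augmented state: the camera pose in the world frame is the 2D rigid composition of the odometry pose with the unknown camera offset (x_c, y_c, theta_c) that the SAM estimator recovers. Variable names are illustrative assumptions.

```python
import numpy as np

def camera_pose_in_world(x_r, y_r, theta_r, x_c, y_c, theta_c):
    """Compose the 2D robot (odometry) pose with the camera-to-odometry offset."""
    c, s = np.cos(theta_r), np.sin(theta_r)
    x = x_r + c * x_c - s * y_c
    y = y_r + s * x_c + c * y_c
    return x, y, theta_r + theta_c
```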

Figure 4. Estimation of the x coordinate of the camera and variation of the curvature radius of the trajectory.

Indoor SLAM relying on a hybrid laser/omnidirectional sensor

Although the SLAM problem has been addressed with many different approaches, important problems remain, often directly linked to the sensors used. Laser range finders cannot always estimate the translational motion of a robot moving along a straight line inside a corridor without running into observability-related difficulties. Mapping in dynamic environments is also hard using laser data only, because of the 2D nature of the measurements and the slow acquisition rate. On the other hand, using exclusively visual sensors introduces issues such as correctly propagating the scale factor. We have developed a novel composite-sensor approach that combines the information given by an omnidirectional camera and a laser range finder to efficiently solve the indoor SLAM problem. The complementarity of the sensors allows us to build a 3D representation of the environment. The planar map is obtained from the laser scans using an Enhanced Polar Scan Matching algorithm, which is a generalization of Diosi's PSM algorithm. The vertical lines of the 3D scene are estimated from the radial lines extracted in the omnidirectional images. Finally, thanks to the generic projection model of the omnidirectional camera and the depth information provided by the laser range finder, we are able to build a basic wire-frame 3D representation of the environment in a global coordinate system. This work [30] was published at the IEEE International Conference on Robotics and Automation (ICRA 2010), held in Anchorage, Alaska, May 3-8, 2010.
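A minimal sketch of the generic projection step mentioned above is given below: a 3D point, for instance a laser return expressed in the camera frame, is projected onto the unit sphere, shifted by a mirror parameter, and mapped to the image with the pinhole intrinsics. This is the classical unified sphere model; the calibration values used here are placeholder assumptions.

```python
import numpy as np

def unified_projection(X, xi, K):
    """Project a 3D point X (camera frame) into the omnidirectional image."""
    Xs = X / np.linalg.norm(X)              # point on the unit sphere
    x, y, z = Xs
    m = np.array([x / (z + xi), y / (z + xi), 1.0])
    u = K @ m                               # homogeneous pixel coordinates
    return u[:2]

# Placeholder intrinsics and mirror parameter (assumptions, not our calibration).
K = np.array([[400.0, 0.0, 320.0],
              [0.0, 400.0, 240.0],
              [0.0, 0.0, 1.0]])
pixel = unified_projection(np.array([1.0, 0.2, 0.5]), xi=0.8, K=K)
```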

The results above were extended in a second conference paper [29], published at the IEEE International Conference on Intelligent Robots and Systems (IROS 2010), held in Taipei, Taiwan, October 18-22, 2010. We proposed an efficient hybrid laser/vision appearance-based SLAM approach providing a reliable 3D odometry, robust to illumination changes and to the presence of occluding and moving objects. The robot trajectory is correctly estimated and the drift is minimized. Furthermore, a 3D textured representation of the environment with a good resolution was obtained. Figure 5 shows the 3D textured reconstruction of the robotic hall, obtained by merging about 3,300 images and properly synchronized laser scans (videos of the results can be viewed at https://www-sop.inria.fr/arobas/videos/HybridLaserOmni_IROS10.mp4).

Figure 5. 3D textured reconstruction of the robotic hall.

Outdoor Visual SLAM

In order to navigate safely and autonomously in complex urban environments, a precise localization of the robot is necessary. Classical methods such as odometry, typically computed from wheel encoders or inertial sensors, are prone to drift and not suitable for large environments. Low-cost GPS receivers are inaccurate, and satellite masking occurs too frequently in urban environments to obtain a reliable localization.

Our approach consists of a vision-based system associated with a database of spherical images acquired during an offline phase, which yields a robust, drift-free localization.

The database consists of spherical images augmented with depth and geo-located in a GIS (Geographic Information System). This spherical, robot-centered representation accurately captures all the information necessary for vision-based navigation. Furthermore, the model is generic, which means that any kind of camera sensor can be registered against it. During online navigation, the current vehicle position is obtained by comparing the current camera view with the closest reference sphere extracted from the database.

An acquisition system for augmented spheres has been developed and tested on our Cycab vehicle (Figure 6, left). This system is composed of six overlapping wide-angle stereo cameras, which makes it possible to extract depth information by dense correspondence. Since depth is available, we are able to construct 360-degree spherical images with a unique center of projection (Figure 6, right). These 3D spheres are then used in an image-based spherical odometry algorithm to obtain the vehicle's trajectory (a result submitted to the International Conference on Robotics and Automation 2011 (ICRA'11)), to fuse the spheres and to construct the database.
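To make the re-projection onto a single center of projection concrete, the sketch below maps a 3D point reconstructed by the stereo rig to azimuth/elevation coordinates of an equirectangular spherical image. The image resolution and the equirectangular layout are illustrative assumptions, not necessarily the representation used in the database.

```python
import numpy as np

def point_to_sphere_pixel(X, width=2048, height=1024):
    """Map a 3D point (sphere-centered frame) to equirectangular pixel coordinates."""
    x, y, z = X / np.linalg.norm(X)
    azimuth = np.arctan2(y, x)           # in (-pi, pi]
    elevation = np.arcsin(z)             # in [-pi/2, pi/2]
    u = (azimuth + np.pi) / (2 * np.pi) * (width - 1)
    v = (np.pi / 2 - elevation) / np.pi * (height - 1)
    return u, v
```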

During online navigation, we consider a vehicle equipped with a simple camera (perspective, omnidirectional, etc.). The aim is to register the current view onto the closest sphere stored in the database. To achieve this, we have developed a spherical image-based registration method derived from [26]. This method uses all the image information to compute the registration, which leads to a robust and accurate localization; however, since the images are of high resolution, the computational cost can be too high for real-time applications. To address this, we have proposed an efficient information selection based on geometric and luminance information, which speeds up the registration algorithm [33] without degrading localization accuracy.
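The sketch below illustrates one simple way such an information selection could work; it is an assumption for illustration, not the exact criterion of [33]. Pixels are ranked by a saliency score combining intensity gradients (luminance information) and depth gradients (geometric information), and only the best N are kept for the registration.

```python
import numpy as np

def select_pixels(intensity, depth, n_keep=5000):
    """Return the (row, col) indices of the n_keep most informative pixels."""
    gy, gx = np.gradient(intensity.astype(np.float64))   # luminance gradients
    dy, dx = np.gradient(depth.astype(np.float64))        # geometric (depth) gradients
    saliency = np.hypot(gx, gy) + np.hypot(dx, dy)
    best = np.argsort(saliency.ravel())[::-1][:n_keep]
    return np.unravel_index(best, saliency.shape)
```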

Figure 6. Left: Spherical system mounted on a Cycab robot. Right: Spherical image.

Loop closure detection for spherical view based SLAM

One of the major problems in SLAM is loop closure detection: a robot should be able to determine whether the place it is currently visiting has already been visited. In a metric approach, loop closure allows error reduction by back-propagation; in a topological approach, it enables graph construction and self-localization in the environment. This year, we started a new research axis which aims at extending a loop closure algorithm previously developed by Adrien Angeli and David Filliat (co-advisor of A. Chapoulie's thesis). This algorithm performs well when the place is revisited in the same direction as before (same point of view); an example is the detection of a loop around buildings. We extend this approach to the case of spherical views, in order to allow loop closure detection independently of the direction of travel (different points of view). This takes advantage of a full 360° view of the environment, an essential feature for a topological, multi-viewpoint loop closure detection and map construction algorithm. Further work will focus on a real-time version of the algorithm and on improving the accuracy of the loop closure detection.
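As a minimal sketch of the kind of appearance-based test involved, and purely as an illustrative assumption (not the Angeli/Filliat algorithm itself), each view can be summarized by a normalized visual-word histogram, and a loop closure candidate declared when the similarity with a past view exceeds a threshold.

```python
import numpy as np

def loop_closure_candidates(current_hist, past_hists, threshold=0.8):
    """Return indices of past views whose cosine similarity exceeds the threshold."""
    h = current_hist / (np.linalg.norm(current_hist) + 1e-12)
    candidates = []
    for i, past in enumerate(past_hists):
        p = past / (np.linalg.norm(past) + 1e-12)
        if float(h @ p) >= threshold:
            candidates.append(i)
    return candidates
```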

