Section: New Results
Visual servoing
Visual features from a spherical projection model
Participants : Roméo Tatsambon Fomena, François Chaumette.
This study is directly related to the search for adequate visual features, as described in Section 3.1. The approach we developed this year is based on the spherical projection model, since it provides interesting geometrical properties. It also allows the same modeling to be used for classical perspective cameras and for omnidirectional vision sensors such as fish-eye and catadioptric sensors. This year, we have considered a set of three points as visual features. Decoupling properties have been obtained by using the distances between the points projected on the sphere, which are invariant to any rotation. The three other features, used to control the robot orientation, are based on a particular rotation matrix. This choice of features allowed us to revisit the classical singular configurations, which had been exhibited long ago but only through a very complex demonstration. Furthermore, simulation results have shown a wide convergence domain, even in the presence of errors in the estimated point depths.
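The rotation invariance mentioned above can be illustrated with a short sketch. The snippet below is not the method of the paper, only a minimal numerical check of the underlying property: the pairwise distances between the spherical projections of three points are unchanged by any rotation of the camera frame. All function names are illustrative.

```python
import numpy as np

def spherical_projection(P):
    """Project a 3D point onto the unit sphere centered at the camera."""
    return P / np.linalg.norm(P)

def invariant_features(points):
    """Pairwise distances between the spherical projections of three
    points; these distances are invariant to any camera rotation
    (illustrative sketch, not the actual feature set of the study)."""
    s = [spherical_projection(p) for p in points]
    pairs = [(0, 1), (0, 2), (1, 2)]
    return np.array([np.linalg.norm(s[i] - s[j]) for i, j in pairs])
```

Rotating all three points by the same rotation matrix leaves these three features numerically unchanged, which is what makes them suitable for decoupling translation from rotation control.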
Photometric visual servoing
Participant : Éric Marchand.
One of the main problems in visual servoing is to extract and robustly track the image measurements that are used to build the visual features (see equation (1)) involved in the control scheme. This may require complex and time-consuming image processing algorithms, and may increase the effect of image noise. To cope with this problem, we proposed to use photometric features directly as input of the control scheme. More precisely, the luminance of all the pixels in the image is used as input of the control scheme.
This year, in collaboration with Christophe Collewet, who is now a member of the Fluminance team, we proposed a way to perform visual servoing tasks from color attributes. This approach can be seen as an extension of our previous work based on luminance. Indeed, as for luminance, the color attributes are used directly in the control law, thereby avoiding any complex image processing such as feature extraction or matching. We proposed several potential color features, and then a way to select a priori the best choice among them with respect to the scene being observed [31].
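The idea of using raw intensities as features can be sketched with the generic visual servoing law, where the error is simply the difference between the current and desired pixel values. This is only an illustrative skeleton: the interaction matrix `L` is assumed to be given, whereas computing it from the image gradients is the actual technical core of the photometric approach.

```python
import numpy as np

def photometric_control(I, I_star, L, lam=0.5):
    """One step of a luminance-based visual servoing law (sketch).

    I, I_star : flattened current and desired pixel intensities
    L         : interaction matrix linking intensity variations to the
                6-dof camera velocity (assumed precomputed here)
    Returns the camera velocity screw v = -lam * L^+ (I - I*).
    """
    e = I - I_star                      # photometric error: no feature extraction
    v = -lam * np.linalg.pinv(L) @ e    # least-squares velocity command
    return v
```

When the current image equals the desired one, the error vanishes and the commanded velocity is zero, which is the convergence condition of the task.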
Mutual information-based visual servoing
Participants : Amaury Dame, Éric Marchand.
This work is related to the photometric feature modeling described in the previous section (Section 6.1.2). The goal remains the same: positioning a robot at a desired pose using only one image taken from that pose, without any feature extraction. Visual servoing is achieved by using the information of the image directly. In this study, mutual information is used as the visual feature and is involved in a new control law that we have developed to control the 6 degrees of freedom of a robot. Among other advantages, this approach requires neither a matching nor a tracking step, is robust to large illumination variations, and allows considering, within the same task, different image modalities [34]. The same work on second-order derivatives as in the tracking problem described in Section 6.4.5 has been performed; it allows reaching the desired position more accurately with fewer parameters, yielding a smoother trajectory of the camera in 3D space. Experiments have been realized on the Afma 6 robot (see Figure 2.a) to demonstrate these advantages.
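To make the feature itself concrete, mutual information between two images can be computed from their joint gray-level histogram. The sketch below shows only this scalar measure, not the control law built on it; the bin count and histogram estimator are illustrative choices.

```python
import numpy as np

def mutual_information(img1, img2, bins=16):
    """Mutual information between two equally-sized gray-level images,
    estimated from their joint histogram (illustrative sketch)."""
    h, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    p = h / h.sum()                       # joint probability
    px = p.sum(axis=1, keepdims=True)     # marginal of img1
    py = p.sum(axis=0, keepdims=True)     # marginal of img2
    nz = p > 0                            # avoid log(0)
    return float((p[nz] * np.log(p[nz] / (px @ py)[nz])).sum())
```

The measure is maximal when the two images are perfectly aligned and degrades gracefully as they drift apart, which is what makes it usable as a servoing feature across different image modalities.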
Design of new control schemes
Participants : Mohammed Marey, François Chaumette.
This study is devoted to the design of new kinematic control schemes. This year, we have developed a new projection operator for the redundancy framework. Classically, this operator keeps only the components of any secondary task that do not perturb the regulation to zero of the primary task. This requires that some degrees of freedom remain available, which is very restrictive in practice. The new projection operator does not consider all the components of the main task but only its norm, which imposes a single scalar constraint instead of several, while preserving the stability properties of the system. The new projection operator has been validated using, as the main task, a visual homing that involves all six degrees of freedom of the system, with trajectory following as the secondary task. Current work is devoted to joint limit avoidance.
Visual servoing for aircrafts
Participants : Laurent Coutard, Xiang Wang, François Chaumette.
This study aims at developing visual servoing control schemes for fixed-wing aircraft. The first application we considered was automatic landing, in the scope of the European FP6 Pegase project (see Section 8.3.1). After modeling decoupled visual features based on the measurements that can be extracted from the image of the runway (typically, its border and central lines), a simple lateral control scheme has been derived. A longitudinal controller using both 2D and 3D data has also been designed. Both controllers have been integrated in the Pegase simulator and assessed by the industrial partners.
At the end of the year, we have started a new study devoted to the automatic landing on an aircraft carrier.
Multi sensor-based control
Participants : Olivier Kermorgant, François Chaumette.
This study is realized within the ANR Psirob Scuav project (see Section 8.2.2). We are interested in fusing the data provided by several sensors directly in the control law, instead of first estimating the state vector. To this end, we have first considered autocalibration methods to estimate the intrinsic and extrinsic parameters of sensors such as cameras and inertial measurement units. The method we have developed is based on simultaneous measurements of the robot velocity and of the feature velocities in the sensor space. It has been validated through experimental results using a classical camera.
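Calibration from simultaneous velocity measurements typically reduces to a linear least-squares problem once the sensor model is chosen. The sketch below is a deliberately generic version of that step, under the assumption that each measurement yields a linear relation `A_i(v_i) @ theta = sdot_i` between the robot velocity and the feature velocity; the actual parametrization in the study depends on the sensor model and is not reproduced here.

```python
import numpy as np

def estimate_parameters(A_blocks, sdot_blocks):
    """Least-squares estimate of calibration parameters theta from
    stacked velocity measurements (generic sketch). Each A_i is built
    from one measured robot velocity, and sdot_i is the corresponding
    feature velocity observed in the sensor space."""
    A = np.vstack(A_blocks)            # stack all linear relations
    b = np.concatenate(sdot_blocks)
    theta, *_ = np.linalg.lstsq(A, b, rcond=None)
    return theta
```

Accumulating measurements over several distinct motions is what makes the stacked system well conditioned, so the robot should excite all the degrees of freedom involved in the parametrization.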
Visual servoing of non-holonomic mobile robots
Participants : Andrea Cherubini, François Chaumette.
This long-term study is devoted to appearance-based navigation from an image database. It is carried out in the scope of the ANR Tosa Cityvip project (see Section 8.2.4 ). The navigation relies on a monocular camera, and the navigation path is represented as a set of reference images, acquired during a preliminary teaching phase. This year, we have developed a new control scheme that is based on a sliding reference, instead of a set of static references. It allows reaching a good compromise between small 3D tracking errors and memory storage constraints, while providing a smoother behavior, as validated through experiments obtained with the Cycab vehicle [28] .
We have also started a study on avoiding potential obstacles during navigation. For that, we use a pan-tilt camera, so that the robot remains able to observe the environment along the path while avoiding the obstacles. A new control scheme has been developed and validated through simulation results.
MEMS micro-assembly
Participant : Éric Marchand.
This work has been done in collaboration with FEMTO-ST/AS2M in Besançon. Robotic microassembly is a promising way to build micro-metric components based on 3D compound products in which the materials or the technologies are incompatible: structures, devices, MEMS, MOEMS, etc. In this work, the relevance of real-time 3D visual tracking and servoing has been demonstrated. The poses of the MEMS are supplied in real time by the 3D model-based tracking algorithm we developed a few years ago [5].
The assembly of 400 µm × 400 µm × 100 µm parts by their 100 µm × 100 µm × 100 µm notches, with a mechanical play of 3 µm, is achieved at a rate of 41 seconds per assembly. The assembly of a complex compound has also been demonstrated [40], [41].