Section: Scientific Foundations
Transverse research themes
Participants: Ezio Malis, Pascal Morin, Patrick Rives, Claude Samson, Tiago Ferreira Goncalves, Melaine Gautier.
Robustness of Sensor-based Control
Interacting with the physical world requires perception and control to be addressed within a coherent framework. Visual servoing and, more generally, sensor-based robot control consists in using exteroceptive sensor information in feedback control loops that monitor the dynamic interactions between a robot and its environment. Since the early 1990s, a large body of work has addressed sensor-based control of fully-actuated holonomic systems. The control of these systems is much simplified by the fact that instantaneous motion along any direction of the configuration space is possible and can be monitored directly [55]. This no longer holds for nonholonomic or underactuated systems, like most ground, marine, or aerial robots. New research directions have to be investigated to extend the sensor-based control framework to this class of mechanisms.
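To make the fully-actuated case concrete, the classical image-based visual servoing law underlying the standard methods cited above computes a camera velocity screw from the error between current and desired image features, v = -gain * L^+ * (s - s_des). The following Python sketch is a minimal illustration under standard assumptions (normalized point features with known depths); the feature values and gain are hypothetical, and it is a sketch rather than a definitive implementation.

    import numpy as np

    def interaction_matrix(x, y, Z):
        # Interaction matrix of a normalized image point (x, y) at depth Z,
        # relating the point's image velocity to the camera velocity screw.
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
        ])

    def ibvs_velocity(s, s_des, depths, gain=0.5):
        # Classical image-based visual servoing law: v = -gain * L^+ (s - s_des).
        # Meaningful for a fully-actuated system that can realize any
        # instantaneous velocity screw (v_x, v_y, v_z, w_x, w_y, w_z).
        error = (s - s_des).reshape(-1)
        L = np.vstack([interaction_matrix(x, y, Z)
                       for (x, y), Z in zip(s, depths)])
        return -gain * np.linalg.pinv(L) @ error

With four point features this yields the usual 8 x 6 stacked interaction matrix, whose pseudo-inverse handles the redundancy; for a fully-actuated system the resulting velocity command can be applied directly, which is precisely what fails for the underactuated systems discussed above.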
Robustness is needed to ensure that the controlled system will behave as expected. It is an absolute requirement for most applications, not only to guarantee the proper execution of the assigned tasks, but also for safety reasons, especially when these tasks involve direct interaction with humans (robot-aided surgery, automated driving, etc.). A control law can be called "robust" if it performs the assigned stabilization task despite modeling and measurement errors. Determining the "size" of "admissible" errors is understandably important in practice. However, carrying out this type of analysis is usually technically difficult. For standard vision-based control methods [55], only partial results have been obtained in a limited number of cases [51]. Recently, we have studied the robustness of classical vision-based control laws (relying on feedback linearization) [3] with respect to uncertainties in structure parameters, and proved that small estimation errors on these parameters can render the control laws unstable [64]. This study has been extended to central catadioptric cameras [67]. One of our objectives is to develop tools for evaluating the robustness properties of sensor-based control schemes for generic vision devices, by extending existing results.
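The kind of instability mentioned above can be checked numerically. When the control uses an interaction matrix L_hat built from estimated depths, the linearized closed-loop error dynamics take the form de/dt = -gain * L * L_hat^+ * e, so local stability requires the nonzero eigenvalues of L L_hat^+ to have positive real parts. The sketch below evaluates this condition for a hypothetical feature configuration and hypothetical depth estimates; it only reproduces the flavor of the analysis in [64], not its exact setting.

    import numpy as np

    def interaction_matrix(x, y, Z):
        # Same point-feature interaction matrix as in the previous sketch.
        return np.array([
            [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
            [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
        ])

    def closed_loop_eigs(points, true_Z, est_Z):
        # Eigenvalues of L(true_Z) @ pinv(L(est_Z)): the closed loop
        # de/dt = -gain * L L_hat^+ e is locally stable only if its
        # nonzero eigenvalues all have positive real parts.
        L_true = np.vstack([interaction_matrix(x, y, Z)
                            for (x, y), Z in zip(points, true_Z)])
        L_est = np.vstack([interaction_matrix(x, y, Z)
                           for (x, y), Z in zip(points, est_Z)])
        return np.linalg.eigvals(L_true @ np.linalg.pinv(L_est))

    # Hypothetical square target at 1 m, with a correct estimate and a
    # badly wrong depth distribution.
    pts = np.array([[-0.2, -0.2], [0.2, -0.2], [0.2, 0.2], [-0.2, 0.2]])
    true_Z = np.ones(4)
    for est_Z in (true_Z, np.array([0.2, 5.0, 0.2, 5.0])):
        print(np.sort(closed_loop_eigs(pts, true_Z, est_Z).real))
        # A negative real part (beyond the zeros due to the 8x6 rank of L)
        # signals that the depth estimation error destabilizes the loop.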
Mimetic Approach to Sensor-based Navigation
Sensor-based robot tasks were originally designed in the context of manipulation, with the control objective stated in terms of positioning and stabilizing the end-effector of a manipulator with respect to a structured object in the environment. Autonomous navigation in an open indoor or outdoor environment requires the conceptualization and definition of new control objectives. To this aim, a better understanding of the natural facilities that animals and human beings demonstrate when navigating in various and complex environments can be a source of inspiration. Few studies have addressed this type of issue with a focus on how to define navigation control objectives and formulate them mathematically in a form that can be exploited at the control level using methods and techniques of Control Theory. Numerous questions arise. For instance, what is the right balance between planned (open-loop) and reactive (feedback) navigation?

Also, what is the relative importance of topological versus metric information during navigation? Intuitively, topological aspects encompassing the accessibility of the environment seem to play an important role. They allow for a navigation that does not heavily rely on the knowledge of Cartesian distances. For example, when navigating along a corridor, information about possibilities of access matters more than precise distances to the walls (a toy illustration of such metric-free reactive navigation is sketched below).

The nature of the "percepts" at work in animal or human autonomous navigation is still poorly understood. However, it would seem that the implicit use of an ego-centered reference frame, with one of its axes aligned with the gravitational direction, is ubiquitous for attitude (heading and trim) control, and that specific inertial and visual data are somehow directly acquired in this frame. In [73], we exploited a similar idea for the automatic landing of an aerial vehicle by implementing a visual feedback that uses features belonging to the plane at infinity (vanishing point and horizon line); a minimal sketch of this kind of horizon-based attitude measurement is also given below. It is also probable that the pre-attentive and early cognitive vision emphasized by Gestalt theory provides useful inputs to the navigation process in terms of velocity, orientation, or symmetry vector fields. Each of these "percepts" contributes to the constitution of sub-goals and elementary behaviors that can be adaptively inhibited or reinforced during the navigation process. Currently, little is known about the way animals and humans handle these different, and sometimes antagonistic, sub-goals to produce "effective" motions. Monitoring concurrent sub-goals within a unified sensor-based control framework is still an open problem which involves both perception and control issues.
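As a toy example of the metric-free navigation evoked by the corridor discussion above, the well-known bio-inspired strategy of balancing lateral optic flow keeps a vehicle centered without ever computing distances to the walls: at forward speed v, the translational flow induced by a wall at distance d scales as v/d, so equalizing the left and right flows equalizes the distances. A minimal sketch, in which the flow measurements, sign convention, and gain are all hypothetical:

    def centering_yaw_rate(flow_left, flow_right, gain=0.8):
        # Bio-inspired corridor centering: steer away from the side whose
        # translational optic flow is larger (i.e. whose wall is closer).
        # Hypothetical convention: positive output means turning left.
        return gain * (flow_right - flow_left)

Only the imbalance between the two flows matters, which is precisely why no Cartesian distance needs to be estimated.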
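The use of features on the plane at infinity, as in the landing experiments of [73], can also be made concrete: for a calibrated pinhole camera, the image of the horizon line depends only on the camera orientation relative to gravity, so roll and pitch can be read off it directly. The sketch below assumes the horizon points have already been detected and uses illustrative sign conventions; it is an illustration of the general idea, not the algorithm of [73].

    import numpy as np

    def attitude_from_horizon(pts, f, u0, v0):
        # Estimate roll and pitch from detected horizon points.
        # pts: (N, 2) array of (u, v) pixel coordinates on the horizon;
        # f: focal length in pixels; (u0, v0): principal point.
        # Assumes a calibrated pinhole camera with v increasing downwards.
        u, v = pts[:, 0], pts[:, 1]
        a, b = np.polyfit(u, v, 1)        # least-squares horizon line v = a*u + b
        roll = -np.arctan(a)              # line slope gives the bank angle
        v_h = a * u0 + b                  # horizon height at the principal point
        pitch = np.arctan2(v_h - v0, f)   # offset from image center (small-roll approx.)
        return roll, pitch

Because both angles come from a single image line anchored to the gravitational direction, this measurement fits naturally in the ego-centered, gravity-aligned frame discussed above.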