2021
Activity report
Project-Team
ACENTAURI
RNSR: 202124072D
In partnership with:
CNRS, Université Côte d'Azur
Team name:
Artificial intelligence and efficient algorithms for autonomous robotics
In collaboration with:
Laboratoire informatique, signaux systèmes de Sophia Antipolis (I3S)
Domain
Perception, Cognition and Interaction
Theme
Robotics and Smart environments
Creation of the Project-Team: 2021 May 01

Keywords

  • A3.4.1. Supervised learning
  • A3.4.3. Reinforcement learning
  • A3.4.4. Optimization and learning
  • A3.4.5. Bayesian methods
  • A3.4.6. Neural networks
  • A3.4.8. Deep learning
  • A5.4.4. 3D and spatio-temporal reconstruction
  • A5.4.5. Object tracking and motion analysis
  • A5.4.7. Visual servoing
  • A5.10.2. Perception
  • A5.10.3. Planning
  • A5.10.4. Robot control
  • A5.10.5. Robot interaction (with the environment, humans, other robots)
  • A5.10.6. Swarm robotics
  • A5.10.7. Learning
  • A6.2.3. Probabilistic methods
  • A6.2.4. Statistical methods
  • A6.2.5. Numerical Linear Algebra
  • A6.2.6. Optimization
  • A6.4.2. Stochastic control
  • A6.4.3. Observability and Controllability
  • A6.4.4. Stability and Stabilization
  • A6.4.6. Optimal control
  • A7.1.4. Quantum algorithms
  • A8.2. Optimization
  • A8.3. Geometry, Topology
  • A8.11. Game Theory
  • A9.2. Machine learning
  • A9.5. Robotics
  • A9.6. Decision support
  • A9.10. Hybrid approaches for AI
  • B5.1. Factory of the future
  • B5.6. Robotic systems
  • B7.2. Smart travel
  • B7.2.1. Smart vehicles
  • B7.2.2. Smart road
  • B8.2. Connected city

1 Team members, visitors, external collaborators

Research Scientists

  • Ezio Malis [Team leader, Inria, Senior Researcher, from May 2021, HDR]
  • Philippe Martinet [Inria, Senior Researcher, from May 2021, HDR]
  • Patrick Rives [Inria, Emeritus, from May 2021, HDR]

PhD Students

  • Luis Guardini [Renault, CIFRE, from May 2021, Co-supervised with CHROMA]
  • Maria Kabtoul [Inria, from May 2021 until Nov 2021, Co-supervised with CHROMA]
  • Ziming Liu [Inria, from May 2021]
  • Diego Navarro Tellez [Cerema - Centre d'études et d'expertise sur les risques, l'environnement, la mobilité et l'aménagement, from Oct 2021]

Technical Staff

  • Emmanuel Alao [Inria, Engineer, from May 2021]

Interns and Apprentices

  • Hermes Mcgriff [Univ Côte d'Azur, from Apr 2021 until Sep 2021]
  • Daniel Nieto [Inria, from May 2021 until Jul 2021]

Administrative Assistant

  • Patricia Riveill [Inria, from May 2021]

External Collaborator

  • Guillaume Allibert [Univ de Nice - Sophia Antipolis, from May 2021 until Oct 2021]

2 Overall objectives

The goal of ACENTAURI is to study and to develop intelligent, autonomous and mobile robots that collaborate with each other to achieve challenging tasks in dynamic environments. The team focuses on perception, decision and control problems for multi-robot collaboration by proposing an original hybrid model-driven / data-driven approach to artificial intelligence and by studying efficient algorithms. The team focuses on robotic applications such as environment monitoring and the transportation of people and goods. In these applications, several robots will share multi-sensor information, possibly coming from the infrastructure. The team will demonstrate the effectiveness of the proposed approaches on real robotic systems such as cars, AGVs and UAVs, together with industrial partners.

The scientific objectives that we want to achieve are to develop:

  • robots that are able to perceive unstructured and changing environments (in space and time) in real-time through their sensors, and are able to build large-scale semantic representations taking into account the uncertainty of interpretation and the incompleteness of perception. The main scientific bottlenecks are (i) how to go beyond purely geometric maps to achieve a semantic understanding of the scene and (ii) how to share these representations between robots having different sensorimotor capabilities so that they can collaborate to perform a common task.
  • autonomous robots in the sense that they must be able to accomplish complex tasks by taking high-level cognitive-based decisions without human intervention. The robots evolve in an environment possibly populated by humans, possibly in collaboration with other robots or communicating with the infrastructure (collaborative perception). The main scientific bottlenecks are (i) how to anticipate unexpected situations created by unpredictable human behavior using the collaborative perception of robots and infrastructure and (ii) how to design robust sensor-based control laws to ensure robot integrity and human safety.
  • intelligent robots in the sense that they must (i) decide their actions in real-time on the basis of the semantic interpretation of the state of the environment and their own state (situation awareness), (ii) manage uncertainty in sensing, control and the dynamic environment, (iii) predict in real-time the future states of the environment taking into account their security and human safety, and (iv) acquire new capacities and skills, or refine existing skills, through learning mechanisms.
  • efficient algorithms able to process large amounts of data and solve hard problems in robotic perception, learning, decision and control. The main scientific bottlenecks are (i) how to design new efficient algorithms to reduce the processing time with ordinary computers and (ii) how to design new quantum algorithms to reduce the computational complexity in order to solve problems that cannot be solved in reasonable time with ordinary computers.

3 Research program

The research program of ACENTAURI focuses on intelligent autonomous systems, which must be able to sense, analyze, interpret, know and decide what to do in the presence of a dynamic, living environment. Defining a robotic task in a living and dynamic environment requires setting up a framework where interactions between the robot or the multi-robot system, the infrastructure and the environment can be described from a semantic level down to a canonical space at different levels of abstraction. This description will be dynamic and based on the use of sensory memory and short/long-term memory mechanisms. This will require expanding and developing (i) the knowledge on the interaction between robots and the environment (using both model-driven and data-driven approaches), (ii) the knowledge on how to perceive and control these interactions, (iii) situation awareness, and (iv) hybrid architectures (using both model-driven and data-driven approaches) for monitoring the global process during the execution of the task.

Figure 1 gives an overview of the global system, highlighting the core topics. For the sake of simplicity, we decompose our research program into three axes related to Perception, Decision and Control. However, it should be noted that these axes are highly interconnected (e.g. there is a duality between perception and control) and all problems should be addressed in a holistic approach. Moreover, Machine Learning is in fact transversal to all the robot's capacities. Our objective is the design and development of a parameterizable architecture for Deep Learning (DL) networks incorporating a priori model-driven knowledge. We plan to do this by choosing specialized architectures depending on the task assigned to the robot and on the input (from standard to future sensor modalities). These DL networks must be able to encode spatio-temporal representations of the robot's environment. Indeed, the tasks we are interested in involve the evolution of the environment over time, since the data coming from the sensors may vary in time even for static elements of the environment. We are also interested in developing a novel network for situation awareness applications (mainly in the field of autonomous driving and proactive navigation).

Figure 1: Intelligent autonomous mobile robot system overview highlighting core axes of research and methodologies, Machine Learning and Efficient Algorithms.

Another transversal issue concerns the efficiency of the algorithms involved. Either we must process a large amount of data (for example, with a standard full HD camera (1920x1080 pixels) the data size to process is around 5 Terabits/hour), or the problem is hard to solve (for example, path optimization problems for multiple robots are all NP-complete, even when the underlying graph is planar). A particular emphasis will be given to efficient numerical analysis algorithms (in particular for optimization), which are omnipresent in all research axes. We will also explore a completely different and radically new methodology with quantum algorithms. Several quantum basic linear algebra subroutines (qBLAS) (Fourier transforms, finding eigenvectors and eigenvalues, solving linear equations) exhibit exponential quantum speedups over their best known classical counterparts. This qBLAS translates into quantum speedups for a variety of algorithms including linear algebra, least-squares fitting, gradient descent and Newton's method. The quantum methodology is completely new to the team, therefore the practical interest of pursuing such a research direction will have to be validated in the long term.
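The camera data-rate figure quoted above can be checked with a quick back-of-the-envelope computation (assuming uncompressed 24-bit RGB at 30 frames per second; these two values are assumptions, not stated in the text):

```python
# Quick check of the camera data-rate figure: uncompressed full HD RGB.
width, height = 1920, 1080
bits_per_pixel = 24            # 8 bits per RGB channel (assumption)
fps = 30                       # typical frame rate (assumption)

bits_per_second = width * height * bits_per_pixel * fps
terabits_per_hour = bits_per_second * 3600 / 1e12
print(f"{terabits_per_hour:.1f} Tb/hour")  # ~5.4 Tb/hour, consistent with ~5 Tb/hour above
```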

The research program of ACENTAURI will be decomposed into the following three research axes:

3.1 Axis A: Augmented spatio-temporal perception of complex environments

The long-term objective of this research axis is to build accurate and composite models of large-scale environments that mix metric, topological and semantic information. Ensuring the consistency of these various representations during robot exploration, and merging/sharing observations acquired from different viewpoints by several collaborative robots or by sensors attached to the infrastructure, are very difficult problems. This is particularly true when different sensing modalities are involved and when the environments are time-varying. A recent trend in Simultaneous Localization And Mapping (SLAM) is to augment low-level maps with a semantic interpretation of their content. Indeed, the semantic level of abstraction is the key element that will allow us to build the robot's environmental awareness (see Axis B). For example, so-called semantic maps have already been used in mobile robot navigation to improve path planning methods, mainly by providing the robot with the ability to deal with human-understandable targets. New studies to derive efficient algorithms for manipulating these hybrid representations (merging, sharing, updating, filtering) while preserving their consistency are needed for long-term navigation.
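The three representation levels discussed above can be sketched as a small data structure, a minimal illustration only: the class names and fields below are assumptions, not the team's actual representation.

```python
from dataclasses import dataclass, field

# Sketch of a hybrid map mixing the three levels discussed above: metric
# poses, a topological graph of places and semantic labels.

@dataclass
class Place:
    name: str
    pose: tuple                                   # metric: (x, y, theta)
    labels: set = field(default_factory=set)      # semantic annotations

@dataclass
class HybridMap:
    places: dict = field(default_factory=dict)
    edges: set = field(default_factory=set)       # topological connectivity

    def add_place(self, place):
        self.places[place.name] = place

    def connect(self, a, b):
        self.edges.add(frozenset((a, b)))

    def find(self, label):
        # query by a human-understandable target, e.g. "charger"
        return [p.name for p in self.places.values() if label in p.labels]

m = HybridMap()
m.add_place(Place("hall", (0.0, 0.0, 0.0), {"entrance", "door"}))
m.add_place(Place("lab", (12.5, 3.0, 1.57), {"workbench", "charger"}))
m.connect("hall", "lab")
print(m.find("charger"))  # -> ['lab']
```

Merging, sharing and updating such maps between robots, while preserving consistency, is the hard part discussed above.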

3.2 Axis B: Situation awareness for decision and planning

The long-term objective of this research axis is to design and develop a decision-making module that is able to (i) plan the mission of the robots (global planning), (ii) generate the sub-tasks (local objectives) necessary to accomplish the mission based on Situation Awareness and (iii) plan the robot paths and/or sets of actions to accomplish each subtask (local planning). Since we have to face uncertainties, the decision module must be able to react efficiently in real-time based on the available sensor information (on-board or attached to an IoT infrastructure) in order to guarantee the safety of humans and things. For some tasks, it is necessary to coordinate a multi-robot system (centralized strategy), while for others each robot evolves independently with its own decentralized strategy. In this context, Situation Awareness is at the heart of an autonomous system in order to feed the decision-making process, but it can also be seen as a way to evaluate the performance of the global process of perception and interpretation in order to build a safe autonomous system. Situation Awareness is generally divided into three parts: perception of the elements in the environment (see Axis A), comprehension of the situation, and projection of future states (prediction and planning). When planning the mission of the robots, the decision-making module will first assume that the configuration of the multi-robot system is known in advance, for example one robot on the ground and two robots in the air. However, in our long-term objectives, the number of robots and their configurations may evolve according to the application objectives to be achieved, particularly in terms of performance, but also to take into account the dynamic evolution of the environment.

3.3 Axis C: Advanced multi-sensor control of autonomous multi-robot systems

The long-term objective of this research axis is to design multi-sensor (on-board or attached to an IoT infrastructure) based control of potentially multi-robot systems for tasks where the robots must navigate in a complex dynamic environment, including in the presence of humans. This implies that the controller design must explicitly deal not only with uncertainties and inaccuracies in the models of the environment and of the sensors, but must also consider constraints to deal with unexpected human behavior. To deal with uncertainties and inaccuracies in the models, two strategies will be investigated. The first is to use Stochastic Control techniques that assume a known probability distribution of the uncertainties. The second is to use system identification and reinforcement learning techniques to deal with differences between the models and the real systems. To deal with unexpected human behavior, we will investigate Stochastic Model Predictive Control (MPC) techniques and Model Predictive Path Integral (MPPI) control techniques in order to anticipate future events and take optimal control actions accordingly. A particular emphasis will be given to the theoretical analysis (observability, controllability, stability and robustness) of the control laws.
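The MPPI idea mentioned above can be sketched on a toy system: sample perturbed control sequences, roll them out through the dynamics, and average them with exponential weights on the rollout costs. Everything below (1D double-integrator model, horizon, gains) is an illustrative assumption, not the controllers studied by the team.

```python
import numpy as np

# Minimal Model Predictive Path Integral (MPPI) sketch on a 1D double integrator.

def dynamics(x, u, dt=0.1):
    pos, vel = x
    return np.array([pos + vel * dt, vel + u * dt])

def stage_cost(x, u):
    pos, vel = x
    return pos**2 + 0.1 * vel**2 + 0.01 * u**2  # drive the state to the origin

def mppi(x0, u_nom, rng, n_samples=256, sigma=1.0, lam=1.0):
    H = len(u_nom)
    noise = rng.normal(0.0, sigma, size=(n_samples, H))
    costs = np.zeros(n_samples)
    for k in range(n_samples):
        x = x0.copy()
        for t in range(H):
            u = u_nom[t] + noise[k, t]
            costs[k] += stage_cost(x, u)
            x = dynamics(x, u)
    w = np.exp(-(costs - costs.min()) / lam)  # importance weights
    w /= w.sum()
    return u_nom + w @ noise                  # weighted update of the sequence

rng = np.random.default_rng(0)
x, u = np.array([2.0, 0.0]), np.zeros(15)
for _ in range(60):                           # receding-horizon loop
    u = mppi(x, u, rng)
    x = dynamics(x, u[0])                     # apply the first control only
    u = np.append(u[1:], 0.0)                 # shift the horizon
print("final state:", x)
```

Because only sampled forward rollouts are needed, the same scheme accommodates non-smooth costs and constraints that are awkward for gradient-based MPC.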

4 Application domains

ACENTAURI focuses on two main applications in order to validate our research using the robotic platforms described in section 6. We are aware that ethical questions may arise when addressing such applications. ACENTAURI follows the recommendations of the Inria ethical committee, for example on confidentiality issues when processing data (GDPR).

4.1 Environment monitoring with a collaborative robotic system

The first application that we will consider concerns monitoring the environment using an autonomous multi-robot system composed of ground robots and aerial robots. The ground robots will patrol along a planned trajectory and will collaborate with the aerial drones to perform tasks in structured (e.g. industrial sites), semi-structured (e.g. presence of bridges, dams, buildings) or unstructured environments (e.g. agricultural areas, forests, disaster areas). In order to provide deported perception to the ground robots, one aerial drone will be in operation while the second one recharges its batteries on the ground vehicle. Coordinated and safe autonomous take-off and landing of the aerial drones will be a key factor in ensuring continuity of service over a long period of time. Such a multi-robot system can be used to localize survivors in case of disaster or rescue, to localize and track people or animals (for surveillance purposes), to follow the evolution of vegetation (or even invasions of insects or parasites), to follow the evolution of structures (bridges, dams, buildings, electrical cables) and to control actions in the environment, for example in agriculture (fertilization, pollination, harvesting, ...), in forests (rescue) or on land (planning firefighting). Successfully achieving such an application will require building a representation of the environment and localizing the robots in the map (see Axis A in section 3.1), re-planning the tasks of each robot when unpredictable events occur (see Axis B in section 3.2) and controlling each robot to execute the tasks (see Axis C in section 3.3). Depending on the application field, the scale and the difficulty of the problems to be solved increase. In the Smart Factories field, we have a relatively small environment, mostly structured, highly instrumented (sensors) and connected, with the possibility to communicate with it.
In the Smart Territories field, we have large semi-structured or unstructured environments that are not instrumented. To set up demonstrations of this application, we intend to collaborate with industrial partners and local institutions. For example, we plan to set up a collaboration with the Parc Naturel Régional des Préalpes d'Azur to monitor the evolution of fir trees infested by bark beetles.

Figure 2: Environment monitoring with a collaborative robotic system composed of aerial and ground robots.

4.2 Transportation of people and goods with autonomous connected vehicles

The second application that we will consider concerns the transportation of people and goods with autonomous connected vehicles. ACENTAURI will contribute to the development of Autonomous Connected Vehicles (e.g. learning, mapping, localization, navigation) and the associated services (e.g. towing, platooning, taxi). We will develop efficient algorithms to select on-line connected sensors from the infrastructure in order to extend and enhance the embedded perception of a connected autonomous vehicle. In cities, there exist situations where visibility is very poor, for historical reasons or simply occasionally because of traffic congestion, service delivery (trucks, buses) or roadworks. There are also situations where the danger is greater and where a connected system or intelligent infrastructure can help to enhance perception and thus reduce the risk of accidents (see Axis A in section 3.1). In ACENTAURI, we will also contribute to the development of assistance and service robotics by re-using the same technologies required in autonomous vehicles. By adding the social level to the representation of the environment, and by using techniques of proactive and social navigation, we will give the robot the possibility to adapt its behavior in the presence of humans (see Axis B in section 3.2). ACENTAURI will study sensing technology on SDVs (Self-Driving Vehicles) used for material handling to improve efficiency and safety as products are moved around Smart Factories. These types of robots have the ability to sense and avoid people, as well as unexpected obstructions, in the course of doing their work (see Axis C in section 3.3). The ability to automatically avoid these common disruptions is a powerful advantage that keeps production running optimally. To set up demonstrations of this application, we will continue the collaboration with industrial partners (Renault) and with the Communauté d'Agglomération Sophia Antipolis (CASA).
Experiments with 2 autonomous Renault Zoe cars will be carried out in a dedicated space lent by CASA. Moreover, we propose, with the help of the Inria Service d'Expérimentation et de Développement (SED), to set up a demonstration of an autonomous shuttle to transport people on the future extended Inria/UCA site.

Figure 3: Transportation of people and goods with autonomous connected vehicles in human populated environments.

5 Social and environmental responsibility

ACENTAURI is concerned with reducing the environmental footprint of the team's activities and is involved in several research projects related to environmental challenges.

5.1 Footprint of research activities

The main footprint of our research activities comes from travel and power consumption (computers and computer cluster). Concerning travel, it has been considerably limited due to the COVID-19 pandemic. Concerning power consumption, besides classical actions to reduce the waste of energy, our research focuses on efficient optimization algorithms to minimize the computation time of the computers on board our robotic platforms.

5.2 Impact of research results

We have planned to propose several projects related to environmental challenges. Below we give two examples of the most advanced projects, which will be proposed in 2022.

The first concerns the monitoring of forests, in collaboration with the Parc Naturel Régional des Préalpes d'Azur, the ONF and the DRAAF.

The second concerns autonomous vehicles in agricultural applications, in collaboration with INRAE Clermont-Ferrand in the context of the PEPR "Agroécologie et numérique". The project aims to develop robotic approaches for the realization of new cropping practices, capable of acting as a lever for agroecological practices.

6 New software and platforms

ACENTAURI develops and maintains the following robotic platforms and software.

With respect to the robotic platforms:

  • The ICAV platform is composed, to date, of 2 autonomous cars (AGVs) and one instrumented car.
  • The DRONIX platform is composed, to date, of a flying room equipped with a Qualisys localization system and 1 drone (UAV).

With respect to the software platforms:

  • The Perception360 software is a collection of libraries and applications for robot vision-based localization with omnidirectional RGB-D sensors or standard perspective cameras.

6.1 New software

6.1.1 Perception360

  • Name:
    Robot vision and 3D mapping with omnidirectional RGB-D sensors.
  • Keywords:
    Depth Perception, Localization, 3D reconstruction, Realistic rendering, Sensors, Image registration, Robotics, Computer vision, 3D rendering
  • Functional Description:
    This software is a collection of libraries and applications for robot vision and 3D mapping with omnidirectional RGB-D sensors or standard perspective cameras. This project provides the functionality to do image acquisition, semantic annotation, dense registration, localization and 3D mapping. The omnidirectional RGB-D sensors used within this project were developed at Inria Sophia Antipolis by the LAGADIC team. Modifications to the software have been made by the ACENTAURI team in order to make certain features of the software easily accessible in the form of a library.
  • Contact:
    Nicolas Chleq

6.2 New platforms

Participants: Ezio Malis, Philippe Martinet, Emmanuel Alao, Nicolas Chleq, Nejma Elkoudarchi.

ICAV platform

The ICAV platform has been funded by the PUV@SOPHIA project (CASA, PACA Region and the French state), self-funding, the Digital Reference Center from UCA, and Academy 1 from UCA. We now have two autonomous vehicles, one instrumented vehicle, many sensors (RTK GPS, Lidars, cameras), communication devices (C-V2X, IEEE 802.11p), and one standalone localization and mapping system.

The ICAV platform is composed of:

  • ICAV1 is an old-generation ZOE. It was bought fully robotized and instrumented. It is equipped with a Velodyne Lidar VLP16, a low-cost IMU and GPS, three cameras and one embedded computer.
  • ICAV2 is a new-generation ZOE which was instrumented and robotized in 2021. It is equipped with a Velodyne Lidar VLP16, a low-cost IMU and GPS, three cameras, two solid-state Lidars RS-M1, one embedded computer and one NVIDIA Jetson AGX Xavier.
  • ICAV3 will be instrumented with different LIDARs and a multi-camera system (LADYBUG5+).
  • A ground-truth RTK system. An RTK GPS base station has been installed and a local server configured inside the Inria center. Each vehicle is equipped with an RTK GPS receiver and connected to the local server in order to achieve centimeter-level localization accuracy.
  • A standalone localization and mapping system, composed of a Velodyne Lidar VLP16, a low-cost IMU and GPS, and one NVIDIA Jetson AGX Xavier.
  • A V2X communication system based on the C-V2X and IEEE 802.11p technologies.
  • Different lidar sensors (Ouster OS2-128, RS-LIDAR16, RS-LIDAR32, RS-Ruby), and one multi-camera system (LADYBUG5+).

The main applications of this platform are:

  • dataset acquisition
  • localization, mapping, depth estimation, semantization
  • autonomous navigation (path following, parking, platooning, ...), proactive navigation in shared spaces
  • situation awareness and decision making
  • V2X communication
  • autonomous landing of UAVs on the roof
 
Figure 4: Overview of the ICAV platform (ICAV1, ICAV2, ICAV3)

ICAV2 was used by Maria Kabtoul to demonstrate the effectiveness of the autonomous navigation of a car in a crowd.

Indoor autonomous mobile platform

The mobile robot platform has been funded by the MOBI-DEEP project in order to demonstrate autonomous navigation capabilities in encumbered and crowded environments. This platform is composed of:

  • one omnidirectional mobile robot (SCOUT MINI with mecanum wheels from AgileX)
  • one NVIDIA Jetson AGX Xavier for deep learning algorithm implementation
  • one general-purpose laptop
  • one Robosense RS-LIDAR16
  • one Ricoh Theta Z1 360° camera
  • one Intel RealSense D455 RGB-D camera
Figure 5: Overview of MOBIDEEP platform

The main applications of this platform are:

  • indoor dataset acquisition
  • localization, mapping, depth estimation, semantization
  • proactive navigation in shared spaces
  • pedestrian detection and tracking

This platform is used in the MOBI-DEEP project for the integration of the different contributions from the consortium. It is used to demonstrate new results on social navigation.

E-Wheeled platform

E-WHEELED is an Inria AMDT project (2019-22) coordinated by Philippe Martinet. The aim is to provide mobility to things by implementing connectivity techniques. It makes an Inria expert engineer (Nicolas Chleq) available to ACENTAURI in order to demonstrate a proof of concept using a small-size demonstrator. Due to COVID-19, the project has been delayed.

Figure 6: Overview of E-wheeled platform

7 New results

Autonomous parking using a multi-sensor-based approach

Participants: David Perez Morales (LS2N), Olivier Kermorgant (LS2N), Salvador Dominguez Quijada (LS2N), Philippe Martinet.

Autonomous parking has mainly been addressed from a path planning point of view, and not often in a generic way able to deal with all types of parking maneuvers (perpendicular and diagonal for both forward and backward motions, and parallel for backward motions). An alternative way is to address the parking problem from a control point of view. The main problems to be solved are to find an empty spot, to parametrize the autonomous parking framework for the type of parking, and to adapt the behavior with regard to the dynamic evolution of the environment. Generally, not all these points are taken into account in state-of-the-art solutions. To address all the mentioned problems, we have proposed a single common Multi-Sensor-Based Predictive Control (MSBPC) framework [4] in the context of the PhD of David Perez Morales.

The contribution of this work is the formalization of parking operations under a common MSBPC framework, allowing the vehicle to park autonomously in perpendicular and diagonal parking spots with both forward and backward motions, and in parallel ones with backward motions. By considering an additional auxiliary subtask and a predictive approach, the presented technique is capable of performing multiple maneuvers (if necessary) in order to park successfully in constrained workspaces. The auxiliary subtask is key, since it accounts for the potential motions that essentially go against the final goal (i.e. drive the vehicle away from the parking spot) but ultimately allow the vehicle to park successfully.

Platooning

Participants: Ahmed Khalifa (LS2N), Olivier Kermorgant (LS2N), Salvador Dominguez Quijada (LS2N), Philippe Martinet.

Car-sharing systems, autonomous shuttles and taxis are now available in our society, even if some legislative issues remain to be addressed in most countries. In order to make these systems efficient, it is very important to regulate the availability in space and time of individual vehicles to the users. To this end, using platooning facilities to fill and refill different places in the city, and to collect or deliver empty vehicles, appears to be an adequate solution. Vehicle platoons have been studied first on highways and then, little by little, in inner cities. The problem of platooning on highways is considered to be well solved. However, this is not completely the case in inner cities, due to the nature of the environment (density of traffic, presence of pedestrians and/or human-driven electrical mobility devices), the nature of the path to follow (high curvature), the varying longitudinal velocity, and the use or not of a human to manually drive the leader of the platoon. In platooning, the main problems to be solved are to propose new models and controllers able to make the platoon string-stable while preserving individual stability, to develop techniques to rebuild the state of the platoon in case of loss of communication, to deal with the heterogeneity of the platoon, and to propose new controllers ensuring robustness with regard to communication delays, lags, actuator dynamics and unexpected events.

The main contributions of the proposed solution [10] (published in 2021) with respect to previous works are a control algorithm considering varying velocity, high curvature and a hybrid PLF topology, the study of the conditions for both internal and string stability under the effect of communication and sensor delays and actuator dynamics, and the validation of the framework with realistic simulations and experiments with 3 or 4 commercial cars.
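The string-stability requirement discussed above can be illustrated with a classic constant time-headway (CTH) spacing policy on a chain of point-mass vehicles: with gains satisfying (kd + kp*h)^2 >= kd^2 + 2*kp, the spacing-error peaks do not amplify down the platoon. The model and gains below are textbook assumptions, not those of [10].

```python
import numpy as np

# Constant time-headway (CTH) platoon sketch: each follower regulates the
# gap to its predecessor toward d0 + h * v_i.

def simulate_platoon(n=4, dt=0.05, T=30.0, kp=1.0, kd=2.0, h=0.8, d0=5.0):
    steps = int(T / dt)
    pos = -d0 * np.arange(n, dtype=float)       # start at rest, spaced by d0
    vel = np.zeros(n)
    err = np.zeros((steps, n - 1))
    for t in range(steps):
        v_ref = 10.0 if t * dt > 1.0 else 0.0   # leader speed step at t = 1 s
        vel[0] += 2.0 * (v_ref - vel[0]) * dt   # leader tracks its reference
        for i in range(1, n):
            e = pos[i - 1] - pos[i] - d0 - h * vel[i]   # CTH spacing error
            vel[i] += (kp * e + kd * (vel[i - 1] - vel[i])) * dt
        pos += vel * dt
        for i in range(1, n):
            err[t, i - 1] = pos[i - 1] - pos[i] - d0 - h * vel[i]
    return err

err = simulate_platoon()
peaks = np.abs(err).max(axis=0)
print("error peaks down the platoon:", peaks)
```

With kp=1, kd=2, h=0.8 the condition (2.8)^2 = 7.84 >= 6 holds, so the error peaks decrease from follower to follower; shrinking h below the bound would make the peaks grow along the string.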

Autonomous navigation in human-populated environments

Participants: Maria Kabtoul, Anne Spalanzani, Philippe Martinet.

The work [9] focuses on developing a navigation system for autonomous vehicles operating around pedestrians. The suggested solution is a proactive framework capable of anticipating pedestrian reactions and exploiting their cooperation to optimize performance while ensuring pedestrians' safety and comfort. A cooperation-based model for pedestrian behaviors around a vehicle is proposed. The model starts by evaluating the pedestrian's tendency to cooperate with the vehicle through a time-varying factor. This factor is then used in combination with the space measurements to predict the future trajectory.

The model is exploited in the navigation system to control both the velocity and the local steering of the vehicle. Firstly, the longitudinal velocity is proactively controlled. Two criteria are considered to control the longitudinal velocity. The first is a safety criterion using the minimum distance between an agent and the vehicle's body. The second is a proactive criterion using the cooperation measure of the surrounding agents. The latter is essential to exploit any cooperative behavior and avoid the freezing of the vehicle in dense scenarios. Finally, the optimal control is derived using the gradient of a cost function combining the two previous criteria.
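As an illustration of the velocity part: a scalar cost combining a safety term (penalizing speed when the minimum agent distance is small) and a proactivity term (penalizing speed around uncooperative agents) can be minimized by gradient descent. The cost shapes, weights and numbers below are invented for the sketch; they are not the actual criteria of [9].

```python
# Scalar proactive velocity selection by gradient descent on a cost mixing
# a safety term, a proactivity term (coop in [0, 1]) and speed tracking.

def cost(v, d_min, coop, v_ref=5.0, d_safe=3.0, w_s=10.0, w_p=1.0):
    safety = w_s * max(0.0, d_safe - d_min) ** 2 * v**2
    proactive = w_p * (1.0 - coop) * v**2
    return safety + proactive + (v - v_ref) ** 2

def optimal_velocity(d_min, coop, v0=5.0, lr=0.01, iters=300):
    v, eps = v0, 1e-4
    for _ in range(iters):
        # central-difference gradient of the cost in v
        grad = (cost(v + eps, d_min, coop) - cost(v - eps, d_min, coop)) / (2 * eps)
        v = max(0.0, v - lr * grad)       # speeds are kept non-negative
    return v

# A cooperative agent far away barely slows the vehicle; an uncooperative
# agent nearby forces it to slow down markedly.
v_far = optimal_velocity(d_min=10.0, coop=0.9)
v_near = optimal_velocity(d_min=1.5, coop=0.1)
print(v_far, v_near)
```

The proactivity term is what prevents the "frozen vehicle" behavior: a highly cooperative crowd keeps the optimal speed close to the nominal one instead of driving it to zero.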

Minimizing injury risk in motion planning

Participants: Luis Guardini, Anne Spalanzani (CHROMA), Philippe Martinet, Christian Laugier (CHROMA), Anh-Lam Do (Renault).

In motion planning, the main goal is generally to find a safe path free from any collision. While this appears clear and more or less easy in a static environment, it is no longer the case when we consider dynamic human-populated environments. The main problems remain to take into account unexpected events, which arise most of the time from the human side and occasionally from natural phenomena, or to face situations where the environment is unknown and/or not fully observable. In all these situations, it is difficult to guarantee full global safety over a prediction horizon, except by being very conservative. There thus exists a risk of collision that must be taken into account. Among the research work done recently, some has considered the evaluation of the injury risk associated with a particular global situation, and the Probability of Collision with Injury Risk (PCIR) has been defined, making collision mitigation an important element in motion planning.

Despite the rich set of functionalities of Advanced Driver-Assistance Systems (ADAS), there is still a gap in evaluating the best decision globally. A novel motion planning framework has been developed to generate emergency maneuvers in complex and risky scenarios using active mitigation. This framework is built on the recently developed Model Predictive Path Integral (MPPI) technique.

Sampling-Based Vision-Based Control

Participants: Ihab Mohamed, Guillaume Allibert (I3S), Philippe Martinet.

Visual servoing control schemes, such as Image-Based (IBVS), Position-Based (PBVS) or Hybrid-Based (HBVS), have been extensively developed over the last decades, making their use possible in a large number of applications. It is well known that the main problems to be handled concern the presence of local minima or singularities, the visibility constraint, the joint limits, etc. Some problems are related to the need to compute the inverse of the interaction model used in the control scheme: either the robot Jacobian matrix, which links the 3D motion to the joint motion, or the interaction matrix, which links the variations of the sensor information to the 3D motion. Recent state-of-the-art surveys report that it is still necessary to instantiate visual servoing for dedicated applications, to develop new nonlinear control strategies (e.g. multi-sensor-based control, NMPC, SMPC, …), and to explore new direct visual servoing approaches using a data-driven methodology. In addition, the study of the stability, singularity locus and local minima of visual servoing schemes remains an open problem to be addressed in depth. To avoid computing the inverse of the interaction matrix, one can instead compute a control sequence over a horizon, following the strategy commonly used in Model Predictive Control (MPC). A similar approach is used when performing motion planning directly in joint space. The main problem encountered is then to develop an importance sampling technique sufficiently relevant with regard to the nonlinearities of the studied system.

A global MPPI-VS framework 6 based on path integral (PI) control theory has been developed. More precisely, a real-time and inversion-free control method for image-based, 3D point-based (3DVS), and position-based visual servoing schemes has been studied and validated on a 6-DoF Cartesian robot with an eye-in-hand camera. A sampling-based MPC approach is proposed to predict the future behavior of the VS system without solving an online optimization problem, which usually exceeds the real system sampling time and suffers from a high computational burden.
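The sampling-based, inversion-free idea can be sketched on a toy system. The sketch below uses a hypothetical 1-D feature error with integrator dynamics; all parameters, costs and names are illustrative, not the published MPPI-VS framework. Control sequences are sampled, rolled out through the model, and the applied control is the exponentially cost-weighted average of the first sampled controls.

```python
import numpy as np

rng = np.random.default_rng(0)

def mppi_control(e0, horizon=15, samples=256, dt=0.1, sigma=0.5, lam=1.0):
    # Sample candidate control sequences (no interaction-matrix inversion,
    # no online optimization solver).
    U = rng.normal(0.0, sigma, size=(samples, horizon))
    e = np.full(samples, e0)
    costs = np.zeros(samples)
    for t in range(horizon):
        e = e + U[:, t] * dt                  # forward rollout of the model
        costs += e**2 + 0.01 * U[:, t]**2     # tracking error + control effort
    w = np.exp(-(costs - costs.min()) / lam)  # path-integral weights
    w /= w.sum()
    return float(w @ U[:, 0])                 # weighted first control
```

Applying `e += mppi_control(e) * dt` in closed loop drives the feature error toward zero without ever inverting the (here trivial) interaction model.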

Singularity analysis in pose estimation

Participants: Beatriz Pascual-Escudero (LS2N), Abhilash Nayak (LS2N), Sebastien Briot (LS2N), Olivier Kermorgant (LS2N), Mohab Safey El Din (LIP6), Francois Chaumette (RAINBOW), Philippe Martinet.

Pose estimation is a classical problem in computer vision, and visual servoing is its dual problem in control. Due to the duality between control and observation, when using particular features (points, lines, …), it is possible either to recover the pose or to control it. When image points serve as image features and are matched with their corresponding 3D points, this problem is referred to as PnP (Perspective-n-Point). Although many solutions have been developed, the failure cases have unfortunately not received much attention from the community. To solve PnP problems, many methods use a Jacobian matrix that is part of the interaction matrix used in classical visual servoing techniques. Finding the singularities of the interaction matrix is therefore crucial in order to avoid inaccuracy when estimating the pose of the object, or control problems in visual servoing due to the loss of rank of the interaction matrix.

When observing n image points, we have explained in 3 how to compute a basis of the rows of this interaction matrix, a basis that further simplifies the calculation of the singularity conditions. The full P4P problem has been studied, and the determination of the singular configurations is then done from the 28 (6 × 6) minors of this basis. This turns out to be a better approach in terms of analytical computations than considering the interaction matrix directly, and it allowed us to obtain the equations determining the singular configurations for each choice of 4 points. Using this approach, we are able to prove that, for any relative orientation between the object and camera frames and for any generic choice of four points, singular positions for the camera center do exist, which is a new result. These singular positions are just a finite number of isolated points in 3D space, which can be computed as the intersection of four cylinders. More precisely, there are at least two and at most six singular positions. They have been obtained through Gröbner basis computations.
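The role of the interaction matrix rank can be illustrated numerically with the classical 2×6 interaction matrix of a normalized image point; the point coordinates below are arbitrary illustrative values. Stacking the 2×6 blocks of four points gives the matrix whose rank drop characterizes the singular configurations discussed above.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Classical 2x6 interaction matrix of a normalized image point (x, y)
    # with depth Z, linking image motion to the 6-DoF camera velocity.
    return np.array([
        [-1/Z, 0.0, x/Z, x*y, -(1 + x**2), y],
        [0.0, -1/Z, y/Z, 1 + y**2, -x*y, -x],
    ])

def stacked_interaction(points):
    # Stack the 2x6 blocks of several points into a 2n x 6 matrix.
    return np.vstack([interaction_matrix(x, y, Z) for (x, y, Z) in points])

pts = [(0.1, 0.2, 1.0), (-0.3, 0.1, 2.0), (0.2, -0.2, 1.5), (0.0, 0.3, 1.2)]
print(np.linalg.matrix_rank(stacked_interaction(pts)))  # 6 for a generic P4P case
```

A singular configuration is precisely one where this stacked matrix loses rank (for instance, four coincident points give rank 2).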

Representation of the environment in autonomous driving applications

Participants: Ziming Liu, Ezio Malis, Philippe Martinet.

Visual odometry is an important task for the localization of autonomous robots, and several approaches have been proposed in the literature. We consider visual odometry approaches that use stereo images, since the depth of the observed scene can then be correctly estimated. These approaches do not suffer from the scale-factor problem of monocular visual odometry, which can only estimate the translation up to a scale factor. Traditional model-based visual odometry approaches are generally divided into two steps. Firstly, the depths of the observed scene are estimated from the disparity obtained by matching the left and right images. Then, the depths are used to obtain the camera pose. The depths can be computed for selected features, as in sparse methods, or for all pixels in the image, as in dense direct methods 1.
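For a rectified stereo pair, the depth follows directly from the disparity through Z = f·b/d, which is why stereo avoids the monocular scale ambiguity. A minimal sketch, with illustrative KITTI-like focal length and baseline values:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_m):
    # Rectified-stereo relation Z = f * b / d; non-positive disparities
    # (no match / point at infinity) are mapped to +inf.
    d = np.asarray(disparity, dtype=float)
    Z = np.full_like(d, np.inf)
    valid = d > 0
    Z[valid] = focal_px * baseline_m / d[valid]
    return Z

# Illustrative KITTI-like parameters: f = 721 px, b = 0.54 m.
print(depth_from_disparity([10.0, 50.0], 721.0, 0.54))  # about 38.9 m and 7.8 m
```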

Recently, more and more end-to-end deep learning visual odometry approaches have been proposed, including supervised and unsupervised models. However, it has been shown that hybrid visual odometry approaches can achieve better results by combining a neural network for depth estimation with a model-based visual odometry approach. Moreover, recent works have shown that deep learning based approaches can perform much better than the traditional stereo matching approaches and provide more accurate depth estimation.

State-of-the-art supervised stereo depth estimation networks can be trained with ground-truth disparity (depth) maps. However, ground-truth depth maps are extremely difficult (almost impossible for dense depth maps) to obtain for all pixels in real and variable environments. Therefore, some works explore training a model in simulated environments and reducing its gap with real-world data. However, only a few works focus on unsupervised stereo matching. Besides using ground-truth depth maps or unsupervised training, one can also build a temporal image reconstruction loss using the ground-truth camera pose, which is easy to obtain.

The main contributions of our work are (i) a novel pose-supervised network (named PDENet) for dense stereo depth estimation, which can achieve state-of-the-art (SOTA) results without ground-truth depth maps, and (ii) a dense hybrid stereo visual odometry system, which combines the deep stereo depth estimation network with a robust model-based Dense Pose Estimation module (named DPE).

Take-off and Landing of UAVs on a Mobile Robotic Platform

Participants: Daniel Nieto, Ezio Malis, Philippe Martinet.

Multi-agent systems consisting of both ground and aerial vehicles present numerous advantages for applications including surveillance, precision agriculture, or wildfire detection and fighting.

An autonomous take-off and landing solution for UAVs on moving platforms would solve the issue of the low battery life of aerial vehicles by allowing them to use ground vehicles as charging stations, without compromising the tasks carried out by the robots on which the platforms sit, thus improving the autonomy and lifetime of the overall system.

Several works in the literature have approached this problem by defining robust platform localization strategies or by utilizing more descriptive dynamic models and control laws that can inherently counteract the perturbations undergone by a UAV moving at high speed in outdoor environments. The current work proposes a novel approach that, in addition to considering all of these aspects, attempts to estimate the effect of disturbances acting on the UAV directly, so that they can be compensated for by controlling the vehicle accordingly.

A nonlinear observer for external wrench estimation is derived from an advanced dynamic model of the UAV and used in conjunction with a linear MPC controller to define a system capable of taking off from, tracking, and landing on a moving platform, using the estimates provided by a vision-based relative platform localization and velocity estimation system. Simulation results validate the different components of the system, both separately and in conjunction, at various speeds.
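The wrench-estimation idea can be sketched on a single translational axis with a momentum-based observer. This is a hypothetical 1-D stand-in for the nonlinear observer of this work (mass, gain and force values are illustrative): the estimate tracks the mismatch between the measured momentum and the integral of the applied and estimated forces, without differentiating the velocity.

```python
def simulate(F_ext=1.5, m=1.0, K=5.0, dt=0.01, steps=2000):
    # True dynamics: m * dv/dt = u + F_ext, with F_ext unknown.
    # Observer:      F_hat = K * (m*v - integral(u + F_hat)),
    # whose error obeys dF_hat/dt = K * (F_ext - F_hat) and so converges.
    v, F_hat, integ = 0.0, 0.0, 0.0
    for _ in range(steps):
        u = 0.0                       # no commanded force on this axis
        v += (u + F_ext) / m * dt     # plant (Euler integration)
        integ += (u + F_hat) * dt     # observer internal integral
        F_hat = K * (m * v - integ)   # momentum-based wrench estimate
    return F_hat

print(round(simulate(), 3))  # converges to 1.5, the unknown external force
```

After 20 simulated seconds the estimate matches the constant external force; such an estimate can then be fed forward to the MPC controller.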

Path planning using Quantum computing

Participants: Hermes Mcgriff, Guillaume Allibert (I3S), Ezio Malis.

Path planning plays an important role in the navigation of autonomous mobile robots. Indeed, a standard approach to robot navigation consists of guiding a robot towards several way-points designed on a map of the environment. Path planning algorithms are usually based on configuration-space representations and optimization algorithms that minimize given criteria. One of the major problems to be solved is the computational time needed to find optimal paths (e.g. in terms of time and/or distance). Indeed, path planning is an NP-hard problem and the computation time can range from several hours to several days for very large problems. The first objective of this work is to study and implement the most efficient state-of-the-art path planning approaches for mobile robots. The second objective is to design and implement a path planning algorithm on a quantum computer and compare it to the state of the art.

Quantum computing is revolutionizing several scientific disciplines and their applications, such as cryptography and scientific computing, and has potential applications in robotics. The quantum optimization approaches that will be used to build the quantum path planning algorithm promise a quadratic speedup over classical approaches.

The considered application concerns the generation of the trajectory of a mobile robot that has to move in a known environment (a city for example) while necessarily passing through a certain number of predefined crossing points that change every day (e.g. a delivery service or on-demand transportation). It is therefore related to the vehicle routing problem (VRP), a combinatorial optimization and integer programming problem that generalizes the well-known traveling salesman problem (TSP). The objective of the VRP is generally to minimize the total route cost.

The first problem to be solved is to model the problem as a combinatorial optimization problem that can be solved by a quantum computer. In order to express the problem in a way understandable by the quantum computer, we first express it as a constraint-dependent cost function of a binary matrix. Then, in order to force the solution binary matrix, after optimization, to respect some specific constraints, we translate these constraints into additional costs so that they fit in the cost function. This form is called the QUBO formulation, for "Quadratic Unconstrained Binary Optimization". It has been studied for a lot of different optimization problems and is easily translatable into a "Hamiltonian" form.
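As a concrete illustration of this encoding, here is a toy QUBO for a 3-city TSP (the distance matrix and penalty weight are illustrative, not from the project): binary variable x[i, t] = 1 iff city i is visited at step t, and the constraints "one city per step" and "one step per city" become quadratic penalties added to the cyclic tour-length cost. With a sufficiently large penalty weight, every invalid assignment costs more than any valid tour.

```python
import numpy as np

def tsp_qubo_cost(x, D, penalty=10.0):
    n = D.shape[0]
    cost = 0.0
    for t in range(n):                  # cyclic tour length
        for i in range(n):
            for j in range(n):
                cost += D[i, j] * x[i, t] * x[j, (t + 1) % n]
    for t in range(n):                  # exactly one city per step
        cost += penalty * (x[:, t].sum() - 1) ** 2
    for i in range(n):                  # each city visited exactly once
        cost += penalty * (x[i, :].sum() - 1) ** 2
    return cost

D = np.array([[0, 1, 4], [1, 0, 2], [4, 2, 0]], float)  # toy distance matrix

# Brute-force the 2^9 binary matrices: the unconstrained minimizer is a
# valid permutation matrix encoding the shortest cycle.
best = min((tsp_qubo_cost(np.array(b).reshape(3, 3), D), b)
           for b in np.ndindex(*(2,) * 9))
print(best[0])  # prints 7.0, the length of the shortest cycle
```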

After creating an appropriate Hamiltonian, the eigenvector associated with its smallest eigenvalue corresponds to the shortest Hamiltonian cycle of the graph, i.e. the solution of our problem. To calculate the eigenvalues and eigenvectors of a given Hamiltonian, we studied the Variational Quantum Eigensolver (VQE), a hybrid quantum-classical algorithm. It can leverage the computing power of small quantum computers. It was initially used in quantum chemistry to solve the Schrödinger equation of small molecules, a costly operation in classical computing. Since then, it has found many applications, notably in combinatorial optimization, which is why we are interested in it here.
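The link between a QUBO and its Hamiltonian can be checked classically on a toy 2-variable instance (the matrix values are illustrative): the diagonal Hamiltonian whose entries are the QUBO costs of the 2^n bitstrings has the optimal bitstring's basis vector as ground state, which is exactly what VQE searches for variationally on a quantum device.

```python
import numpy as np

Q = np.array([[-1.0, 2.0], [0.0, -1.5]])  # toy 2-variable QUBO matrix

def qubo_cost(bits, Q):
    x = np.array(bits, float)
    return float(x @ Q @ x)

# Diagonal Hamiltonian: H[z, z] is the QUBO cost of bitstring z.
costs = [qubo_cost(((z >> 1) & 1, z & 1), Q) for z in range(4)]
H = np.diag(costs)

eigvals, eigvecs = np.linalg.eigh(H)            # eigenvalues in ascending order
ground = int(np.argmax(np.abs(eigvecs[:, 0])))  # basis index of the ground state
print(ground, costs[ground])  # prints: 1 -1.5 (bitstring (0, 1) is optimal)
```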

The algorithm has been validated on standard computers as well as on existing open-access IBM quantum computers programmed with the Qiskit language. Even though the method can be applied to any number of points, today we do not exceed 5 way-points, and the quantum algorithm is still quite slow compared to traditional methods. This seems to be linked to the simulated nature of the experiments and the inability to test the algorithm on instances large enough for the complexity advantage to pay off.

8 Bilateral contracts and grants with industry

8.1 Bilateral contracts with industry

Participants: Ezio Malis, Philippe Martinet.

ACENTAURI will be responsible for two research contracts with Naval Group, signed at the end of the year and beginning in 2022.

8.2 Bilateral Grants with Industry

Participants: Philippe Martinet, Christian Laugier (CHROMA), Anne Spalanzani (CHROMA).

Renault (2018 - 2021) Participant: Philippe Martinet (in collaboration with A. Spalanzani and C. Laugier from CHROMA)

This contract (CHROMA: 45k€, ACENTAURI: 15k€ for supervision) is linked to the PhD thesis of Luis Guardini (CIFRE thesis). The objective is to develop contextualized emergency trajectory planning with minimum criticality by employing dynamic probabilistic occupancy grids.

9 Partnerships and cooperations

Participants: Ezio Malis, Philippe Martinet, Patrick Rives.

9.1 International initiatives

9.1.1 Participation in other International Programs

Since 2020, Inria Sophia-Antipolis (ACENTAURI, Hephaistos) has been an associated partner of the Erasmus Mundus European Master MIR. The Marine and Maritime Intelligent Robotics (MIR) Master innovatively combines robotics and artificial intelligence in the context of advancing marine and maritime science and their technological applications. Unfortunately, due to COVID-19, the start of the master has been postponed to 2022.

9.2 European initiatives

9.2.1 Horizon Europe

In 2021, two main actions have been conducted:

  • Decathlon (E. Malis, P. Martinet). In the framework of the HORIZON-CL5-2022-D6-01 call, ACENTAURI has contributed to preparing a proposal (IA: Innovation Action) as a partner. The proposal was submitted on January 12th, 2022. In DECATHLON, ACENTAURI will contribute to planning and control.
  • euROBIN (E. Malis). In the framework of the HORIZON-CL4-2021-DIGITAL-EMERGING-01 call, a proposal for a Network of Excellence in the field of Robotics and AI (RIA: Research & Innovation Action), in which Inria is a partner, has been submitted. ACENTAURI is involved in the networking activities.

9.3 National initiatives

ANR projects

  • HIANIC: (18-22) Human Inspired Autonomous Navigation In Crowds: Inria (CHROMA, R-ITS), LS2N. (P. Martinet). In collaboration with CHROMA (Anne Spalanzani), we are involved in human cooperability estimation, vehicle-human interaction, and proactive navigation of a car in crowded environments. One PhD thesis (Maria Kabtoul, working on Proactive and Social Navigation For Autonomous Vehicles In Shared Spaces) was defended in December 2021.
  • MOBIDEEP: (17-23) technology-aided MOBIlity by semantic Deep learning: INRIA (ACENTAURI), GREYC, INJA, SAFRAN (Group, Electronics & Defense). (P. Martinet, P. Rives). We are involved in personal assistance for blind people, proactive navigation of a robot in human-populated environments, and deep learning for depth estimation and semantic learning.
  • ANNAPOLIS: (22-25) AutoNomous Navigation Among Personal mObiLity devIceS: INRIA (ACENTAURI, CHROMA), LS2N, HEUDIASYC. (E. Malis, P. Martinet, P. Rives). This project was accepted in 2021. We will be involved in augmented perception using Road Side Units, PPMP detection and tracking, attention map prediction, and autonomous navigation in the presence of PPMPs.
  • SAMURAI: (22-26) ShAreable Mapping using heterogeneoUs sensoRs for collAborative robotIcs: INRIA (ACENTAURI), LS2N, MIS. (E. Malis, P. Martinet, P. Rives). This project was accepted in 2021. We will be involved in building shareable maps of a dynamic environment using heterogeneous sensors, collaborative tasks of heterogeneous robots, and updating the shareable maps.
  • TIRREX (21-29) is an EQUIPEX+ funded by the ANR and coordinated by N. Marchand. It is composed of six thematic axes (XXL, Humanoid, Aerial, Autonomous Land, Medical, Micro-Nano) and three transverse axes (Prototyping & Design, Manipulation, and Open Infrastructure). The kick-off took place in December 2021. ACENTAURI is involved in:
    • The Autonomous Land axis (ROB@t), coordinated by P. Bonnifait and R. Lenain, covering autonomous vehicles and agricultural robots (E. Malis, P. Martinet, P. Rives).
    • Aerial Axis is coordinated by I. Fantoni and F. Ruffier (E. Malis, P. Martinet).
  • PEPR: agroecology and digital (E. Malis, P. Martinet, P. Rives). In the framework of this PEPR, ACENTAURI is involved in the coordination (R. Lenain (INRAE), P. Martinet (INRIA), Yann Perrot (CEA)) of a proposal called NINSAR (New ItiNerarieS for Agroecology using cooperative Robots) that will be submitted in 2022.

Defi

  • Inria-Cerema ROAD-AI (E. Malis, P. Martinet, P. Rives). The aim of this défi is to invent the asset maintenance of infrastructures as it could be operated in the coming years, offering a significant qualitative leap compared to traditional methods. Data collection is at the heart of the integrated management of road infrastructure and engineering structures and could be simplified by deploying fleets of autonomous robots. Indeed, robots are becoming an essential tool in a wide range of applications; among them, data acquisition has attracted increasing interest due to the emergence of a new category of robotic vehicles capable of performing demanding tasks in harsh environments without human supervision. The kick-off took place in July 2021.

Tight collaboration

Due to the secondment of Philippe Martinet, we have a strong collaboration with the ARMEN team concerning multi-robot control, autonomous navigation, deep learning, and visual servoing. Some topics fall outside the scope of the ACENTAURI team, but they ended two years ago and only the valorization remains. They are mentioned below:

  • In 5, we study the full dynamic modeling and control of a flying architecture called a flying parallel robot (FPR). This architecture, which can be seen as a parallel robot whose actuators have been replaced by drones, offers novel possibilities for robotic and aerial manipulation. The FPR concept has several advantages: all possible DoF of the end-effector can be controlled, and, by sharing the efforts over several drones without additional embedded motors, the payload capability is enhanced. A decoupling property of the dynamic model has been established and exploited for the design of a cascade controller handling the underactuation of the FPR.
  • In 2, we investigate the use of variable stiffness springs (VSS) in parallel configuration with the motors. These springs store energy during the braking phase instead of dissipating it; the energy is then released to actuate the robot in the next displacement phase. This design approach is combined with a motion generator that seeks trajectories optimized for input torque reduction (and thus energy consumption), by solving a boundary value problem (BVP) based on the robot dynamics.

9.4 Regional initiatives

  • PUV@SophiaTech (P. Martinet, P. Rives) is a regional project funded by the PACA Region, the State and the CASA in the framework of SophiaTech 2.0. PUV@SophiaTech concerns the setup of three different platforms: PLATON (Telecommunication & IoT), UBIQUARIUM (Data Analysis & Complex Software) and Autonomous Vehicle. ACENTAURI is in charge of the autonomous vehicle platform. Globally, we have acquired two ZOE electric vehicles, many sensors (LIDARs, cameras, RTK GPS), and V2X communication components. One ZOE has been fully robotized to become autonomous, and the other one will be instrumented for data collection.
  • DriveInSophia (E. Malis, P. Martinet, P. Rives) is an Inria ADT project set up in 2021. The aim is to collect datasets in Sophia-Antipolis. This project is in direct relation with the ANNAPOLIS ANR project.

10 Dissemination

Participants: Ezio Malis, Philippe Martinet, Patrick Rives.

10.1 Promoting scientific activities

10.1.1 Scientific events: organisation

General chair, scientific chair

  • IROS21 workshop on Perception and Navigation for Autonomous Robotics in Unstructured and Dynamic Environments, September 27th 2021 (D. Wang, C. Laugier, P. Martinet, Y. Yue)

Member of the organizing committees

  • IROS21 workshop on Perception and Navigation for Autonomous Robotics in Unstructured and Dynamic Environments, September 27th 2021 (D. Wang, C. Laugier, P. Martinet, Y. Yue)

10.1.2 Scientific events: selection

Member of the conference program committees

  • ITSC21: Associate Editor (P. Martinet)
  • IROS21: Associate Editor (P. Martinet)
  • PSIVT21 : Regional Chair (P. Martinet)
  • ROBOVIS21 : Program Committee Member (E. Malis)

Reviewer

  • ICRA22: Reviewer of 5 papers (P. Martinet)

10.1.3 Journal

Member of the editorial boards

  • RA-L: Associate Editor in the area “Vision and Sensor-Based Control” (E. Malis)

Reviewer - reviewing activities

  • IEEE T-RO: Reviewer of 1 paper (E. Malis)
  • IEEE TAES: Reviewer of 1 paper (E. Malis)

10.1.4 Leadership within the scientific community

  • Corresponding Co-chair of the RAS TC on AGV & ITS (P. Martinet)

10.1.5 Scientific expertise

Project expertise

  • H2020-ICT-2019-2 mid-term expertise (P. Martinet)
  • HORIZON-CL4-2021-DIGITAL-EMERGING-01 project expertise (P. Martinet)
  • ANR CES22 Expert (P. Martinet)
  • ANR CES33 Expert (E. Malis)
  • ANR ASTRID21 project expertise (P. Martinet)

10.1.6 Research administration

  • Coordinator of the ANR project MOBI-DEEP (P. Martinet)
  • Coordinator of the ADT DriveInSophia (P. Martinet)
  • Scientific Committee Member of the French National Robotic Network - GdR Robotique (P. Martinet)

10.2 Teaching - Supervision - Juries

The team hosted 2 Master's students.

10.2.1 Supervision

  • Ihab Mohamed, Coupling Deep Learning and Advanced Control in UAV Navigation, Univ Côte d'Azur, 1/12/2018 - 31/01/2021, PhD supervisor: P. Martinet, co-supervisor: G. Allibert
  • Maria Kabtoul (Defense on December 2nd 2021): Proactive Social navigation for autonomous vehicles among crowds, Univ Grenoble Alpes, 1/09/2018-30/10/2021, PhD supervisors: A. Spalanzani and P. Martinet
  • Luis Alberto Serafim Guardini (in progress), Autonomous car driving: use of dynamic probabilistic occupancy grids for contextualized planning of emergency trajectories with minimal criticality, Univ Grenoble Alpes, 1/10/2018 - 31/10/2021, PhD supervisor: A. Spalanzani, co-supervisors: P. Martinet and C. Laugier
  • Ziming Liu (in progress): Representation of the environment in autonomous driving applications, 1/12/2020-..., PhD supervisors: P. Martinet and E. Malis
  • Diego Navarro (in progress): Défi Inria-Cerema ROAD-AI, 1/12/2021-..., PhD supervisors: E. Malis and C. Fauchard, co-supervisors: N. Mitton, P. Martinet, R. Antoine (CEREMA)

10.2.2 Juries

  • Yassine AHMINE: "Localisation et cartographie basées sur le couplage télémètre laser et vision pour la navigation autonome d'un robot mobile" - Reviewer (E. Malis)
  • Kevin Chappellet: "Multimodal and multi-objectives control by quadratic programming for humanoid robot in industrial contexts" - Reviewer (P. Martinet)
  • Dimitri Leca: "Navigation autonome d'un robot agricole" - Reviewer (P. Martinet)
  • Best PhD Committee Member of the French National Robotic Network - GdR Robotique (P. Martinet)

10.3 Popularization

10.3.1 Internal or external Inria responsibilities

  • Member of DS4H COSP (P. Martinet)

10.3.2 Articles and contents

  • 7 R. Lenain, P. Martinet, Wheeled Robots, In: Ang M.H., Khatib O., Siciliano B. (eds) Encyclopedia of Robotics. Springer, Berlin, Heidelberg. Book Chapter, pp. 1-13, August 2021
  • 8 P. Long, P. Martinet, T. Padir, Collaborative Robotics for Deformable Object Manipulation with Use Cases from Food Processing Industry, in World Scientific Series in Advanced Manufacturing Manufacturing in the Era of 4th Industrial Revolution, A World Scientific Reference, Volume 2: Recent Advances in Industrial Robotics, Book Chapter, Chapter 10, Vol. 2, pp. 267-296, March 2021

11 Scientific production

11.1 Major publications

11.2 Publications of the year

International journals

International peer-reviewed conferences

  • 6 I. S. Mohamed, G. Allibert and P. Martinet. Sampling-Based MPC for Constrained Vision Based Control. IROS 2021 - IEEE/RSJ International Conference on Intelligent Robots and Systems, Prague, Czech Republic, September 2021.

Scientific book chapters

  • 7 R. Lenain and P. Martinet. Wheeled Robots. Encyclopedia of Robotics, Springer Berlin Heidelberg, August 2021, pp. 1-13.
  • 8 P. Long, P. Martinet and T. Padir. Collaborative Robotics for Deformable Object Manipulation with Use Cases from Food Processing Industry. Manufacturing in the Era of 4th Industrial Revolution, Vol. 2, World Scientific, March 2021, pp. 267-296.

Doctoral dissertations and habilitation theses

  • 9 M. Kabtoul. Proactive and social navigation of autonomous vehicles in shared spaces. PhD thesis, Université Grenoble Alpes, December 2021.

11.3 Cited publications

  • 10 A. Khalifa, O. Kermorgant, S. Dominguez and P. Martinet. Platooning of Car-like Vehicles in Urban Environments: Longitudinal Control Considering Actuator Dynamics, Time Delays, and Limited Communication Capabilities. IEEE Transactions on Control Systems Technology, December 2020.