Section: Scientific Foundations
Sensors and information processing
Participants: Fawzi Nashashibi, Yann Dumortier, André Ducrot, Gwenaëlle Toulminet, Olivier Garcia, Laurent Bouraoui, Paulo Lopes Resende.
Sensors and single-sensor information processing
The first step in the design of a control system is the choice of sensors and of the information to be extracted from them, either for driver assistance or for fully automated guided vehicles. We set aside proprioceptive sensors, which are already well integrated; they provide information on the host vehicle state, such as its velocity and steering angle. Sensor data processing serves several objectives. The following topics are applications validated or under development in our team:
localization of the vehicle with respect to the infrastructure, i.e. lateral positioning on the road, which can be obtained by means of vision (lane markings) or of magnetic, optical or radar devices;
detection and localization of the surrounding vehicles and determination of their behavior, which can be obtained by combining vision-, laser- or radar-based data processing;
detection of obstacles other than vehicles (pedestrians, animals, objects on the road, etc.), which requires multisensor fusion techniques;
simultaneous localization and mapping as well as mobile object tracking using a generic and robust laser based SLAMMOT algorithm.
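As a concrete illustration of the first item above, the vehicle's lateral offset with respect to a detected lane marking can be recovered by back-projecting the marking's image position onto the road surface. The sketch below is a minimal version, assuming a flat ground plane and a calibrated forward-looking pinhole camera; the function name and all intrinsic values are illustrative, not those of our actual system:

```python
# Hypothetical sketch: lateral positioning from a detected lane marking,
# assuming a flat road and a calibrated pinhole camera.

def lateral_offset(u_marking, z_lookahead, fx=800.0, cx=320.0):
    """Back-project a lane-marking pixel column onto the ground plane.

    u_marking   : image column (pixels) of the marking at the look-ahead row
    z_lookahead : longitudinal distance (m) corresponding to that image row
    fx, cx      : focal length and principal point (pixels), assumed values

    Returns the lateral distance (m) from the camera axis to the marking.
    """
    return (u_marking - cx) * z_lookahead / fx

# A marking seen 40 px right of the image centre, 10 m ahead:
lateral_offset(360.0, 10.0)  # -> 0.5 m to the right
```

In practice the look-ahead distance for a given image row follows from the camera height and pitch; here it is simply given as an input.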
Since INRIA is very involved in image processing, range imaging and multisensor fusion, IMARA emphasizes vision techniques, particularly stereo vision, in collaboration with MIT, LITIS (Rouen) and Mines ParisTech.
Disparity Map Estimation
Participants: Yann Dumortier, Laurent Bouraoui, André Ducrot, Fawzi Nashashibi, Gwenaëlle Toulminet.
In a quite innovative approach presented in last year's report, we developed the Fly Algorithm, an evolutionary optimization applied to stereo vision and mobile robotics. Although successfully applied to real-time pedestrian detection using a vehicle-mounted stereo head (see the LOVe project), this technique could not be used for other robotics applications such as scene modeling or visual SLAM. What is needed is a dense 3D representation of the environment, obtained with appropriate precision and at acceptable cost (computation time and resources).
Stereo vision is a reliable technique for obtaining a 3D scene representation from a pair of left and right images, and it is effective for various tasks in road environments. The central problem in stereo image processing is finding corresponding pixels in both images, a problem known as disparity estimation. Many autonomous vehicle navigation systems have adopted stereo vision techniques to construct disparity maps as a basic obstacle detection and avoidance mechanism.
We are working on a new approach for computing the disparity field by directly formulating the problem as a constrained optimization problem, in which a convex objective function is minimized under convex constraints. These constraints arise from prior knowledge and from the observed data. The minimization is carried out over the feasibility set, i.e. the intersection of the constraint sets. The construction of convex property sets is based on the various properties of the field to be estimated. In most stereo vision applications, the disparity map should be smooth in homogeneous areas while keeping sharp edges. This can be achieved with a suitable regularization constraint. We propose to use Total Variation as a regularization constraint, which avoids oscillations while preserving field discontinuities around object edges.
The algorithm we are developing to solve the disparity estimation problem has a block-iterative structure, which allows a wide range of constraints to be easily incorporated and can take advantage of parallel computing architectures. This efficient algorithm allowed us to combine the Total Variation constraint with additional convex constraints, so as to smooth homogeneous regions while preserving discontinuities.
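The block-iterative constrained scheme itself is ongoing work; as a loose illustration of the role of the Total Variation term, the sketch below refines an initial disparity map by gradient descent on a least-squares data term plus a smoothed TV penalty. This unconstrained formulation is a simplified stand-in for the constrained one described above, and all parameter values are illustrative:

```python
import numpy as np

def tv_refine(d0, lam=0.2, n_iter=300, step=0.1, eps=0.1):
    """Minimise 0.5*||d - d0||^2 + lam * TV_eps(d) by gradient descent.

    d0  : initial (noisy) disparity map, 2-D array
    lam : regularization weight (illustrative)
    eps : smoothing of the TV term, so its gradient is well defined
    """
    d = d0.astype(float).copy()
    for _ in range(n_iter):
        # Forward differences (replicated border => zero gradient there).
        gx = np.diff(d, axis=1, append=d[:, -1:])
        gy = np.diff(d, axis=0, append=d[-1:, :])
        mag = np.sqrt(gx ** 2 + gy ** 2 + eps)
        px, py = gx / mag, gy / mag
        # Discrete divergence of the normalized gradient field.
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        # Gradient of data term minus gradient of the smoothed TV term.
        d -= step * ((d - d0) - lam * div)
    return d
```

On a constant map the TV gradient vanishes and the map is left untouched; an isolated spike is damped, while the data term keeps the result anchored to the observations: exactly the "smooth homogeneous regions" behavior discussed above, in miniature.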
Multi-sensor data fusion
Participants: Fawzi Nashashibi, Yann Dumortier, André Ducrot, Olivier Garcia, Laurent Bouraoui, François Charlot.
Advanced Driver Assistance System (ADAS) and Cybercars applications are moving towards vehicle-infrastructure cooperation. In such scenarios, information from vehicle-based sensors, roadside sensors and a priori knowledge is combined, thanks to wireless communications, to build a probabilistic spatio-temporal model of the environment. Depending on the accuracy of this model, applications ranging from driver warning to fully autonomous driving can be performed.
IMARA has developed a framework for data acquisition, spatio-temporal localization and data sharing. The system is based on a methodology for integrating measurements from different sensors in a unique spatio-temporal frame provided by GPS receivers (WGS-84). Communicating entities, i.e. vehicles and roadside units, expose and share their knowledge in a database through network access. Experimental validation of the framework was performed by sharing and combining raw sensor and perception data to improve a local model of the environment. Communication between entities is based on WiFi ad-hoc networking using the Optimized Link State Routing (OLSR) protocol developed by the HIPERCOM research project at INRIA.
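Integrating a measurement into the shared frame essentially amounts to composing the sensor reading with the vehicle pose obtained from GPS. The sketch below is a minimal 2-D version with a planar pose and illustrative values; the actual framework works in the WGS-84 frame and also handles timestamps and uncertainty:

```python
import math

def to_global(pt_local, pose):
    """Transform a sensor point from the vehicle frame to the shared frame.

    pt_local : (x, y) in metres, x forward, y left of the vehicle
    pose     : (X, Y, heading) of the vehicle in the shared frame,
               heading in radians, 0 = along the shared X axis
    """
    x, y = pt_local
    X, Y, h = pose
    c, s = math.cos(h), math.sin(h)
    # 2-D rigid transform: rotate by the heading, then translate.
    return (X + c * x - s * y, Y + s * x + c * y)

# An obstacle 10 m ahead of a vehicle at (100, 50), heading 90 degrees:
to_global((10.0, 0.0), (100.0, 50.0, math.pi / 2))  # -> approx (100.0, 60.0)
```

Once every entity expresses its detections in this common frame, sharing them over the network reduces to exchanging coordinates and timestamps.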
The Collaborative Perception Framework (CPF) is a combined hardware/software approach that allows an entity to treat remote information as its own. Using this approach, a communicating entity can access another remote entity's software objects as if they were local, and a sensor object can use the sensor data of other entities as its own. Last year's developments produced the basic hardware components that ensure the proper functioning of the embedded architecture, including perception sensors, communication devices and processing tools. The final architecture relied on the SensorHub presented in last year's report. This year, we focused on the development of applications and demonstrators using this unique architecture. Thus, a canonical application was developed to demonstrate platooning using vehicle-to-vehicle communications to exchange the vehicles' absolute positions provided by their respective GPS receivers.
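The position-exchange idea behind this canonical platooning application can be sketched as a simple gap-keeping law: the follower computes the inter-vehicle gap from the two GPS positions received over V2V and adjusts its speed accordingly. The proportional law and its gains below are purely illustrative, not the controller actually deployed on the vehicles:

```python
import math

def follower_speed(leader_pos, follower_pos, leader_speed,
                   gap_ref=5.0, kp=0.5):
    """Longitudinal speed command (m/s) for a platoon follower.

    leader_pos, follower_pos : (x, y) absolute positions from GPS receivers
    leader_speed             : leader speed broadcast over V2V (m/s)
    gap_ref                  : desired inter-vehicle gap (m), illustrative
    kp                       : proportional gain on the gap error, illustrative
    """
    gap = math.hypot(leader_pos[0] - follower_pos[0],
                     leader_pos[1] - follower_pos[1])
    # Speed up when the gap is too large, fall back when it is too small.
    return leader_speed + kp * (gap - gap_ref)

follower_speed((20.0, 0.0), (10.0, 0.0), 3.0)  # gap 10 m -> 5.5 m/s
```

A real controller would add damping on the relative speed and saturate the command, but the essential point is that only positions and speeds need to be exchanged.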
This approach was presented at the ITS World Congress in the form of a cooperative driving demonstration with communicating vehicles. This demonstration was also the setting of an international collaboration involving our team, the robotics center of ENSMP and the SwRI (see Section 8.1). A similar demonstration was presented at the international workshop on “The automation for urban transport”, held in the French city of La Rochelle. There, three Cycabs showed platooning capabilities and demonstrated supervised, collision-free insertion at an intersection. The Intersection Collision Warning System (ICWS) application was built on top of CPF to warn a driver of a potential accident. It relies on precise spatio-temporal localization of entities and objects to compute the Time To Collision (TTC) variables, but also on a “Control Center” that collects the vehicles' positions and sends back appropriate instructions and speed profiles.
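In its simplest one-dimensional form, the TTC computation at the core of the ICWS reduces to dividing the current gap by the closing speed. The sketch below is this textbook simplification, not the full spatio-temporal computation used by the system:

```python
def time_to_collision(gap, v_rear, v_front):
    """Time To Collision along a lane, from shared positions and speeds.

    gap     : current inter-vehicle distance (m)
    v_rear  : speed of the following vehicle (m/s)
    v_front : speed of the leading vehicle (m/s)

    Returns the TTC in seconds, or infinity when the vehicles do not close.
    """
    closing = v_rear - v_front
    return gap / closing if closing > 0 else float('inf')

time_to_collision(20.0, 15.0, 10.0)  # -> 4.0 s
```

A warning would then be raised whenever the TTC drops below a safety threshold, which is where the Control Center's instructions and speed profiles come in.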
Finally, in a recent activity, we demonstrated platooning in a public showcase in the town of Montbéliard. Two Cycabs took part in a demonstration where vision-based and laser-based platooning capabilities, combined with dedicated controls, were shown.
Associated projects: Sharp, Icare, Complex.