Section: New Results
Extracting information from a video sensor network
Participants: Jean-Pierre Le Cadre, Adrien Ickowicz.
Recent trends lead us to consider networks of (video) sensors as a whole. Since these networks can be relatively large, specific problems arise: how can the data be used at the sensor level, how should the information collected there be represented, and how should it be fused? A first step consists in extracting spatio-temporal information from the video sensors. These sensors are generally uncalibrated and asynchronous, so only rather rough information is available; in the extreme case, it reduces to proximity and a binary indication of object motion (closing or not). We first considered using the estimated closest point of approach (CPA) times to estimate the parameters of the target trajectories. This study is relatively simple but has the great advantage of highlighting the basic requirements and limits of the approach. In a second step, we considered estimating the CPA times from a sequence of images.
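To illustrate the kind of estimation involved (a toy sketch of the general idea, not the authors' actual algorithm), consider a constant-velocity trajectory x(t) = p0 + v t and a sensor at s. The CPA time minimizes |x(t) - s|^2, giving t* = -(p0 - s).v / |v|^2, i.e. s.v - t* |v|^2 - p0.v = 0, which is linear in the unknowns (v, |v|^2, p0.v). A minimal sketch in Python/NumPy on synthetic data:

```python
import numpy as np

def estimate_velocity_from_cpa(sensors, cpa_times):
    """Recover the target velocity from sensor positions and CPA times.

    For x(t) = p0 + v*t, the CPA time at sensor s satisfies
        s.v - t* |v|^2 - p0.v = 0,
    which is linear in u = (vx, vy, |v|^2, p0.v).  We take the null-space
    direction of the homogeneous system via an SVD, then fix the unknown
    scale with the constraint |v|^2 = vx^2 + vy^2.
    """
    sensors = np.asarray(sensors, dtype=float)
    t = np.asarray(cpa_times, dtype=float)
    A = np.column_stack([sensors, -t, -np.ones(len(t))])
    _, _, Vt = np.linalg.svd(A)
    u = Vt[-1]                          # null-space direction, up to scale
    lam = u[2] / (u[0]**2 + u[1]**2)    # scale so that u[2] = |v|^2 holds
    return lam * u[:2]

# Synthetic scenario: four non-collinear sensors, known trajectory.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 3.0]])
p0, v = np.array([1.0, 2.0]), np.array([2.0, 1.0])
cpa_times = [-(p0 - s) @ v / (v @ v) for s in sensors]

v_est = estimate_velocity_from_cpa(sensors, cpa_times)
```

Note that p0 itself is only observable along v: shifting p0 orthogonally to v leaves every CPA time unchanged, which already illustrates the intrinsic limits mentioned above.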
The limits of the above approach are quite obvious: while it can exploit a temporal contrast, there is a strong need to exploit a spatio-temporal contrast at the (binary) sensor network level. Indeed, it has been shown that the separation problem we have to solve presents strong similarities with the optimization problems arising in the SVM context. The benefits of this approach are multiple: it is well adapted to (robust) tracking, and the combinatorial problems which plague multitarget tracking are fundamentally reduced. For the tracking step, particle filtering is the natural choice, since it can easily incorporate complex priors, non-linear measurements, and separation properties within a hierarchical context. Our contribution this year has been a first step towards multiple target tracking within this framework.
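To see why the separation problem resembles an SVM one, note that for a target at position x with velocity v, a sensor at s observes "closing" iff d/dt |x - s|^2 = 2 (x - s).v < 0, i.e. the binary label sign((s - x).v) is a linear classification of the sensor positions, with the normal of the separating hyperplane proportional to the target velocity. A toy sketch of this separation (our illustration, with a plain perceptron standing in for a full SVM solver):

```python
import numpy as np

def perceptron(points, labels, max_epochs=2000):
    """Find (w, b) with sign(w.p + b) == label for every point.

    The data below are linearly separable with a margin by construction,
    so the perceptron converges after finitely many updates.
    """
    w, b = np.zeros(points.shape[1]), 0.0
    for _ in range(max_epochs):
        updated = False
        for p, y in zip(points, labels):
            if y * (w @ p + b) <= 0:          # misclassified (or on the boundary)
                w, b = w + y * p, b + y
                updated = True
        if not updated:
            break
    return w, b

rng = np.random.default_rng(0)
x_true, v_true = np.array([3.0, 3.0]), np.array([1.0, 0.5])
sensors = rng.uniform(0.0, 10.0, size=(40, 2))

# Binary closing/receding labels induced by the moving target:
# sensor s reports closing (+1) iff (s - x).v > 0.
margins = (sensors - x_true) @ v_true
sensors = sensors[np.abs(margins) > 0.5]      # keep a margin for a clean toy case
labels = np.sign((sensors - x_true) @ v_true)

w, b = perceptron(sensors, labels)
preds = np.sign(sensors @ w + b)
```

The recovered normal w separates the "closing" sensors from the "receding" ones, exactly the kind of spatio-temporal contrast the text refers to; an SVM would additionally maximize the margin of this separation.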
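The particle-filtering step can be sketched in its simplest bootstrap form. The scenario below is our own toy setup (not the reported experiments): a single constant-velocity target is tracked from the binary closing/receding labels of a few sensors, with a sigmoid of (s - x).v as a smooth measurement likelihood, and multinomial resampling at every step.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy scenario: one target, nine binary closing/receding sensors on a grid.
sensors = np.array([[x, y] for x in (0.0, 10.0, 20.0) for y in (0.0, 5.0, 10.0)])
x_true = np.array([0.0, 0.0, 1.0, 0.5])        # state: (px, py, vx, vy)
dt, n_steps, n_particles = 1.0, 20, 500

def labels_of(state):
    """+1 where the sensor sees the target closing, -1 where receding."""
    pos, vel = state[:2], state[2:]
    return np.sign((sensors - pos) @ vel)

def likelihood(particles, labels, k=0.5):
    """Per-particle likelihood of the observed labels under a sigmoid model."""
    pos, vel = particles[:, :2], particles[:, 2:]
    # margins[p, s] = (sensors[s] - pos[p]) . vel[p]
    margins = np.einsum('sd,pd->ps', sensors, vel) - np.sum(pos * vel, axis=1,
                                                            keepdims=True)
    return np.prod(1.0 / (1.0 + np.exp(-k * labels * margins)), axis=1)

# Bootstrap particle filter: predict, weight, resample.
particles = x_true + rng.normal(0.0, [1.0, 1.0, 0.2, 0.2], size=(n_particles, 4))
state = x_true.copy()
for _ in range(n_steps):
    state[:2] += dt * state[2:]                          # true target moves
    particles[:, :2] += dt * particles[:, 2:]            # predict
    particles += rng.normal(0.0, [0.1, 0.1, 0.05, 0.05], size=particles.shape)
    w = likelihood(particles, labels_of(state))
    w /= w.sum()
    particles = particles[rng.choice(n_particles, size=n_particles, p=w)]

estimate = particles.mean(axis=0)
err = np.linalg.norm(estimate[:2] - state[:2])
```

The binary measurements only constrain the target along the separating hyperplanes, so the dynamics prior does much of the work across-track; a hierarchical version, as in the text, would add the separation structure on top of this basic filter.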