Section: Scientific Foundations
New forms of man-machine interaction based on perception
Surfaces are pervasive and play a predominant role in human perception of the environment. Augmenting surfaces with projected information provides an easy-to-use interaction modality that can be readily adopted for a variety of tasks. Projection is an ecological (non-intrusive) way of augmenting the environment. Ordinary objects such as walls, shelves, and cups may become physical supports for virtual functionalities [49]. The original functionality of the objects does not change; only their appearance does. An example of object enhancement is presented in [27], where users can interact with both physical and virtual ink on a projection-augmented whiteboard.
Combinations of a camera and a video projector on a steerable assembly [28] are increasingly used in augmented environment systems [48][51] as an inexpensive means of making projected images interactive. Steerable projectors [28][49] provide an attractive solution that overcomes the limited flexibility of standard fixed video projectors, which can create interaction spaces only by moving sub-windows within a small, fixed cone of projection [58].
The PRIMA group has constructed a new form of interaction device based on a Steerable Camera-Projector (SCP) assembly. This device allows experiments with multiple interactive surfaces in both meeting and office environments. The SCP pair, shown in figure 4, is a device with two mechanical degrees of freedom, pan and tilt, mounted in such a way that the projected beam overlaps with the camera view. This creates a powerful actuator-sensor pair enabling observation of user actions within the camera field of view. The approach has been validated by a number of research projects such as the DigitalDesk [59], the Magic Table [27] and the Tele-Graffiti application [54].
For user interaction, we are experimenting with interaction widgets that detect fingers dwelling over button-style UI elements, as shown to the right in figure 4.
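As a minimal sketch of how such dwell-based triggering can work (the `DwellButton` class and the fingertip detector it relies on are illustrative assumptions, not the toolkit's actual API):

```python
import time

class DwellButton:
    """Fires once when a fingertip stays inside the button region long enough."""

    def __init__(self, x, y, w, h, dwell_s=0.8, on_trigger=None):
        self.region = (x, y, w, h)      # button rectangle in screen coordinates
        self.dwell_s = dwell_s          # required hover time, in seconds
        self.on_trigger = on_trigger    # application callback
        self._enter_time = None
        self._fired = False

    def _contains(self, px, py):
        x, y, w, h = self.region
        return x <= px < x + w and y <= py < y + h

    def update(self, fingertip):
        """Call once per frame with the detected fingertip position, or None."""
        if fingertip is not None and self._contains(*fingertip):
            if self._enter_time is None:
                self._enter_time = time.monotonic()
            elif not self._fired and time.monotonic() - self._enter_time >= self.dwell_s:
                self._fired = True
                if self.on_trigger:
                    self.on_trigger()
        else:
            self._enter_time = None     # finger left the button: reset the timer
            self._fired = False
```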
Given the limited personnel available to pursue this area, we have concentrated our efforts on
- analysis of the mathematical foundations for projected interaction devices, and
- development of software toolkits that provide easy programming for a wide variety of interaction models.
An important challenge is real-time rectification of both the projected interaction patterns and the perceptual field in which actions are observed. When the projected workspace is fixed, it is possible to pre-calibrate the homographies that relate the projected pattern to the sensitive field. However, when the interaction surface is free to travel around the environment, these homographies must be re-computed in real time.
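Concretely, each such homography is a 3x3 matrix applied in homogeneous coordinates. The following minimal sketch (with illustrative matrix values, not actual calibration results) maps an observed camera point into screen coordinates:

```python
import numpy as np

def apply_homography(H, point):
    """Map a 2-D point through a 3x3 homography using homogeneous coordinates."""
    x, y = point
    q = H @ np.array([x, y, 1.0])
    return q[0] / q[2], q[1] / q[2]     # perspective division

# Illustrative camera-to-screen homography (real values come from calibration).
H_cam_to_screen = np.array([[1.20, 0.05, -40.0],
                            [0.02, 1.10, -25.0],
                            [1e-4, 2e-4,   1.0]])

print(apply_homography(H_cam_to_screen, (320, 240)))
```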
To provide real-time re-calibration, we have implemented a procedure that detects and tracks the boundaries of a rectangular screen, referred to as the "portable display screen" (PDS). The intersections of the four boundary lines provide the image locations of the observed corners of the PDS, which are then used to directly recalculate the transformation from camera to screen. Because the camera is rigidly mounted to the projector, the relation between the camera and the projector is also a homography. This homography is precalibrated using projected patterns as a calibration grid. The product of the homography from projector to camera and the homography from camera to screen gives the homography from projector to screen.
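The recalculation itself reduces to a four-point homography estimate followed by a matrix product. A sketch of this step, assuming the corner tracker described above and using OpenCV's `getPerspectiveTransform` (the placeholder value for the precalibrated projector-to-camera homography is for illustration only):

```python
import numpy as np
import cv2

def camera_to_screen(corners_in_image, screen_w, screen_h):
    """Homography from the camera image to PDS coordinates, from the four
    observed corners ordered top-left, top-right, bottom-right, bottom-left."""
    src = np.float32(corners_in_image)
    dst = np.float32([[0, 0], [screen_w, 0],
                      [screen_w, screen_h], [0, screen_h]])
    return cv2.getPerspectiveTransform(src, dst)

# Precalibrated once from projected patterns; identity is a stand-in here.
H_proj_to_cam = np.eye(3)

corners = [(102, 88), (517, 95), (508, 402), (95, 390)]   # tracked PDS corners
H_cam_to_screen = camera_to_screen(corners, 1024, 768)

# Composing the two mappings yields the projector-to-screen homography.
H_proj_to_screen = H_cam_to_screen @ H_proj_to_cam
```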
Evaluating the entire Hough space from scratch can be costly and can lead to errors. To provide fast, robust estimation, we track each peak in the Hough space using a robust tracking procedure based on a Kalman filter. The result is a fast, robust method for real-time estimation of the projections from camera and projector to display screen. This method was published at the first ProCams workshop [28] and is now often cited in the camera-projector community.
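A simplified sketch of the tracking idea, assuming OpenCV's standard Kalman filter. The published method evaluates the Hough transform only near each predicted peak; here a full transform followed by nearest-neighbour gating stands in for that optimisation:

```python
import numpy as np
import cv2

def make_line_tracker(rho0, theta0):
    """Constant-velocity Kalman filter for one Hough peak (rho, theta)."""
    kf = cv2.KalmanFilter(4, 2)            # state: rho, theta, d_rho, d_theta
    kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                    [0, 1, 0, 1],
                                    [0, 0, 1, 0],
                                    [0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = 1e-3 * np.eye(4, dtype=np.float32)
    kf.measurementNoiseCov = 1e-2 * np.eye(2, dtype=np.float32)
    kf.statePost = np.array([[rho0], [theta0], [0], [0]], np.float32)
    return kf

def track_edge(kf, edges):
    """Predict the peak, then correct with the nearest detected Hough line."""
    pred = kf.predict()[:2].ravel()        # predicted (rho, theta)
    lines = cv2.HoughLines(edges, 1, np.pi / 180, threshold=80)
    if lines is not None:
        # Gate: keep only the detection closest to the prediction.
        best = min(lines[:, 0, :],
                   key=lambda l: abs(l[0] - pred[0]) + abs(l[1] - pred[1]))
        kf.correct(np.float32(best).reshape(2, 1))
    return kf.statePost[:2].ravel()        # smoothed (rho, theta)
```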
In order to develop experiments with projected interaction widgets, we have recently developed a component-oriented programmer's toolkit for vision-based interactive systems, taking inspiration from [41]. In this toolkit, we separate the vision components for interaction from the functional core of the application. The implementation of the vision components draws on the VICs framework presented by Ye et al. in [60].
This toolkit approach to interactive system design seeks to minimize the difficulties related to the deployment of perceptual user interfaces by:
a) encapsulating vision components in isolated services,
b) requiring these services to meet specific usability requirements, and
c) limiting communication between the services and the interactive application to a minimum (see the sketch below).
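As an illustration of this separation (a hypothetical sketch in the spirit of the toolkit, not its actual API), each vision component can be wrapped as a service that consumes camera frames and reports only high-level events to the application:

```python
from abc import ABC, abstractmethod
from typing import Callable

class VisionService(ABC):
    """An isolated vision component: consumes frames, emits high-level events."""

    def __init__(self, emit: Callable[[str, dict], None]):
        self._emit = emit               # the narrow channel to the application

    @abstractmethod
    def process(self, frame) -> None:
        """Analyse one camera frame; call self._emit() for events of interest."""

class ButtonService(VisionService):
    """Reports only 'button_pressed' events; image data never leaves the service."""

    def __init__(self, emit, detect_fingertip, buttons):
        super().__init__(emit)
        self._detect = detect_fingertip           # e.g. a fingertip detector
        self._buttons = buttons                   # dwell widgets, as sketched earlier
        for name, button in buttons.items():
            button.on_trigger = lambda n=name: self._emit("button_pressed", {"id": n})

    def process(self, frame):
        tip = self._detect(frame)
        for button in self._buttons.values():
            button.update(tip)
```

Keeping the event vocabulary this small is what allows the functional core to remain independent of the underlying vision implementation.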
Our early demonstrations of the device were met with enthusiasm by our partners at the Xerox Research Centre Europe (XRCE), the University of Karlsruhe, IRST Trento, and France Telecom. In October 2003, we participated in a workshop on camera-projector systems organised by IBM Watson and Carnegie Mellon University. Since then, an enthusiastic community of researchers has formed around this subject, and devices and innovations are moving quickly from the laboratory to commercial applications.
Several important commercial opportunities have recently presented themselves. The start-up company HiLabs has recently been created to apply this technology to interactive advertising and information kiosks in public places.