Section: New Results
Multimodal interaction with objects in virtual worlds
This section presents our recent results on multimodal interaction, successively addressing brain-computer interaction, haptic interaction, augmented walking in virtual environments, and interactions within 3D virtual universes.
Brain-computer interaction in virtual reality
Participants: Anatole Lécuyer [contact], Bruno Arnaldi, Vincent Delannoy, Fabrice Lamarche, Yann Renard, Aurélien Van Langhenhove.
Brain-computer interfaces (BCI) are communication systems that enable users to send commands to a computer using brain activity alone. Cerebral activity is generally sensed with electroencephalography (EEG). We describe hereafter our recent results in the field of brain-computer interaction with virtual environments: (1) Novel signal processing techniques for EEG-based Brain-Computer Interfaces, and (2) Design and study of Brain-Computer Interaction with real and virtual environments.
Novel signal processing techniques for EEG-based Brain-Computer Interfaces
A first part of the BCI research conducted in the team is dedicated to signal processing and classification techniques applied to cerebral EEG data. We have introduced a novel trainable feature extraction algorithm for BCI which relies on inverse solutions. This algorithm, called FuRIA (Fuzzy Region of Interest Activity), automatically identifies, for a given subject, the relevant Regions Of Interest (ROI) and their associated frequency bands for the discrimination of mental states. The activity in these ROI and frequency bands can then be used as features for any classifier. This approach also introduces the concepts of fuzzy ROI and fuzzy frequency bands, which make more efficient use of the available information and thus increase classification performance.
The evaluation of the proposed method showed its efficiency in terms of classification accuracy: the results obtained were comparable to those of other existing efficient methods. The inverse solution, combined with the FuRIA training, appears to act as a spatial filter that removes background activity and noise not correlated with the targeted mental states, thereby focusing on relevant, subject-specific brain activity features. An additional advantage of FuRIA is the interpretability of the learnt and extracted features.
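To illustrate the general idea, here is a minimal sketch of FuRIA-style feature extraction, assuming the source activity has already been reconstructed with an inverse solution and that the fuzzy ROI and fuzzy frequency-band weights have been learnt beforehand; this is not the authors' implementation, only an illustration of how fuzzy weights combine into one feature per ROI.

```python
# Minimal sketch of FuRIA-style feature extraction (illustrative, not the actual implementation).
import numpy as np

def furia_features(source_power, roi_weights, band_weights):
    """Compute one feature per (fuzzy ROI, fuzzy frequency band) pair.

    source_power : array (n_sources, n_freqs)  power of each reconstructed source per frequency bin
    roi_weights  : array (n_rois, n_sources)   fuzzy membership of each source in each ROI
    band_weights : array (n_rois, n_freqs)     fuzzy membership of each frequency bin in the ROI's band
    """
    features = []
    for roi_w, band_w in zip(roi_weights, band_weights):
        # Sources contribute according to their ROI membership,
        # frequency bins according to their band membership.
        feat = roi_w @ source_power @ band_w
        features.append(feat)
    return np.asarray(features)  # fed to any classifier (e.g., LDA, SVM)
```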
This work has been published in the IEEE Transactions on Signal Processing [20].
Brain-Computer Interaction with real and virtual environments
A second part of our BCI research is dedicated to the improvement of BCI-based interaction with real and virtual environments.
A major limitation of BCI-based applications is the electrical sensitivity of EEG, which causes severe deterioration of the signals when the user is moving. This constrains current EEG-based BCI to be used only by seated, motionless subjects, hence limiting the use of BCI for applications such as video games. We have thus conducted a feasibility study on whether a BCI system, based for instance on the well-known P300 brain signal (among the various existing BCI paradigms, the "P300 evoked potential" makes use of a positive waveform appearing in the EEG signals about 300 ms after a rare stimulus expected by the user has occurred), could be used with a moving subject. We recorded EEG signals from 5 users in 3 conditions: sitting, standing and walking.
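For context, the classical first step of P300 processing is to extract and average short EEG epochs locked to the rare stimuli, which enhances the P300 peak relative to background activity and movement artifacts. The sketch below shows this standard step only; it is not the exact pipeline used in the study.

```python
# Illustrative sketch of standard P300 epoch extraction and averaging.
import numpy as np

def average_p300_epochs(eeg, stim_onsets, fs, pre=0.1, post=0.6):
    """eeg: array (n_channels, n_samples); stim_onsets: sample indices of rare stimuli; fs: sampling rate."""
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = []
    for onset in stim_onsets:
        epoch = eeg[:, onset - n_pre:onset + n_post]
        baseline = epoch[:, :n_pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)            # baseline correction
    # Averaging over repetitions enhances the P300 waveform (~300 ms post-stimulus)
    # relative to background EEG and motion-related noise.
    return np.mean(epochs, axis=0)
```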
This work has been published in the Advances in Computer Entertainment Technology conference [55].
Then, in a second study, we introduced a novel approach that can be used to model a wide range of interaction techniques for BCI based on P300 signals. Our model is based on Markov chains and can predict, for any P300-based technique, both the time required to perform an action and the number of flashes needed.
To test the validity of our model, we compared the predicted selection times and predicted numbers of flashes with real experimental data. We modelled three specific P300-based selection techniques that aim at selecting a target on a map. A preliminary experiment with three healthy participants showed that the experimental data closely match the performance predicted by our theoretical model for the three selection techniques (published in the ACM CHI conference [59]).
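As a hedged illustration of the underlying mathematics, an absorbing Markov chain yields the expected number of steps (e.g., flash groups) before a selection completes from its fundamental matrix. The transition probabilities below are made up for the example; the paper's actual model of each selection technique is more detailed.

```python
# Sketch: expected number of steps to absorption in a Markov chain model of P300 selection.
import numpy as np

def expected_steps(Q):
    """Q: transition matrix between transient states (selection not yet completed).
    Returns the expected number of steps to absorption from each transient state."""
    n = Q.shape[0]
    fundamental = np.linalg.inv(np.eye(n) - Q)   # N = (I - Q)^-1
    return fundamental @ np.ones(n)              # t = N * 1

# Example: one transient state, probability p of a correct detection per flash group.
p = 0.8
Q = np.array([[1.0 - p]])
print(expected_steps(Q))   # about 1/p flash groups on average
```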
Haptic interaction
Participants: Georges Dumont [contact], Anatole Lécuyer [contact], Maud Marchal [contact], Bruno Arnaldi, Zhan Gao, Loeiz Glondu, Loïc Tching.
We describe hereafter our recent results obtained in the field of haptic interaction with virtual environments. They mainly concern: (1) the concept of pseudo-haptic feedback, (2) haptic interaction at nanoscopic and microscopic scales, (3) spatialized haptic rendering, (4) a new coupling scheme for haptic rendering, and (5) haptic interaction for assembly/disassembly tasks.
Pseudo-haptic feedback
We have first conducted a survey of the main research results and applications concerning "pseudo-haptic feedback". Pseudo-haptic feedback is a technique meant to simulate haptic sensations in virtual environments using visual feedback and the properties of human visuo-haptic perception. It uses vision to distort haptic perception and verges on haptic illusions. Pseudo-haptic feedback has been used to simulate various haptic properties such as the stiffness of a virtual spring, the texture of an image, or the mass of a virtual object. Our survey describes several experiments in which these haptic properties were simulated, and assesses the definition and properties of pseudo-haptic feedback. It also describes several virtual reality applications in which pseudo-haptic feedback has been successfully implemented, such as a virtual environment for vocational training of milling machine operations, or a medical simulator for training in regional anesthesia procedures.
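A common way to obtain pseudo-haptic stiffness is to play with the control/display ratio: the visual displacement of the virtual spring is scaled down relative to the user's input, so the spring looks, and is perceived as, stiffer. The sketch below illustrates this idea with an assumed linear mapping and made-up gain values; it is not taken from the surveyed systems.

```python
# Minimal sketch of pseudo-haptic stiffness via a control/display ratio (illustrative mapping).
def displayed_compression(input_displacement, simulated_stiffness, reference_stiffness=1.0):
    # A stiffer virtual spring yields a smaller visual displacement for the same device
    # displacement, which biases the perceived stiffness upwards.
    cd_ratio = reference_stiffness / simulated_stiffness
    return input_displacement * cd_ratio

print(displayed_compression(10.0, simulated_stiffness=2.0))  # 5.0: the spring looks (and feels) stiffer
```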
This extensive survey has been published in the Presence journal [19].
Haptic interaction at nanoscopic and microscopic scales
We have developed a toolkit for path planning and manipulation of carbon nanotubes in virtual reality. The toolkit is equipped with visual and haptic feedback to assist the manipulation of nanotubes in the remote environment. It models the interactions between an Atomic Force Microscope tip and carbon nanotubes on a substrate surface, and generates optimal and safe manipulation paths. It also provides virtual guides for guidance and assistance during the nanotube manipulation tasks. These virtual guides are based on haptic and visual feedback and enable the operator to perform tasks with higher confidence and accuracy.
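A typical way to implement such a haptic virtual guide is a spring-like force that attracts the probe back toward the pre-computed manipulation path. The sketch below assumes a polyline path and an arbitrary stiffness gain; it is only an illustration of the principle, not the toolkit's actual code.

```python
# Illustrative sketch of a haptic virtual guide attracting the tip toward a planned path.
import numpy as np

def guide_force(tip_pos, path_points, stiffness=0.5):
    """Attract the tip (3D position) toward the closest point of the planned path."""
    path = np.asarray(path_points)
    dists = np.linalg.norm(path - tip_pos, axis=1)
    closest = path[np.argmin(dists)]
    return stiffness * (closest - tip_pos)   # force rendered on the haptic device
```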
This work has been published in the IEEE International Conference on Virtual Environments, Human-Computer Interfaces and Measurement Systems [50].
Spatialized Haptic Rendering
An impact on a manipulated object creates high-frequency propagating vibrations. These vibrations produce different transient patterns sensed by the hand depending on the impact position on the object. We have introduced a "Spatialized Haptic Rendering" technique to enhance 6DOF haptic manipulation of virtual objects with impact position information conveyed through vibrations. This rendering technique exploits our perceptual ability to determine the contact position from the vibrations generated by the impact. In particular, the different vibrations generated by a beam are used to convey the impact position information.
We have conducted two experiments in order to tune and evaluate our spatialized haptic rendering technique. The first experiment investigated the vibration parameters (amplitudes and frequencies) needed to enable an efficient discrimination of the force patterns used for spatialized haptic rendering. The second experiment was an evaluation of spatialized haptic rendering during 6DOF manipulation. Taken together, our results suggest that spatialized haptic rendering can be used to improve the haptic perception of impact position in complex 6DOF interactions.
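To make the principle concrete, one can generate a decaying sinusoidal force transient whose amplitude and frequency depend on the normalized impact position along the beam. The parameter mappings below are assumptions for illustration only, not the values tuned in our experiments.

```python
# Sketch of a position-dependent vibration transient for spatialized haptic rendering.
import numpy as np

def impact_transient(position, duration=0.05, rate=1000.0):
    """position in [0, 1]: normalized impact location along the beam."""
    t = np.arange(0.0, duration, 1.0 / rate)
    amplitude = 1.0 - 0.5 * position          # farther impacts rendered weaker (assumed mapping)
    frequency = 150.0 + 250.0 * position      # and with a higher dominant frequency (assumed mapping)
    decay = np.exp(-t / 0.01)
    return amplitude * decay * np.sin(2.0 * np.pi * frequency * t)  # force pattern sent to the device
```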
This work has been published in the IEEE International Conference on Virtual Reality [60].
Coupling scheme for haptic rendering of physically-based rigid bodies simulation
In the virtual reality context, the physical realism of the interaction between the user and the objects of the virtual world is particularly important when dealing with contact or collision between rigid objects, for example in assembly tasks. The use of haptic rendering significantly improves the degree of realism of virtual worlds. However, the high update rates required for smooth manipulation are often difficult to reach, in particular for rigid body simulations. Hence, we proposed a new coupling scheme based on a dynamic subset of the virtual world, a localized Haptic Sub-World, running at a higher frequency than the rest of the virtual world. This Sub-World, located around the virtual object manipulated by the user, is synchronized with the virtual world through a dynamic analysis of the interface between the two subsets. Using this coupling scheme in our software environment, we are able to achieve high-frequency haptic rendering using sophisticated simulation methods on virtual worlds with a large number of rigid bodies.
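The sketch below gives a schematic view of this two-rate idea: a small sub-world is stepped at haptic rate while the full world is stepped less often, and the two are synchronized through their interface. The class and method names are assumptions made for the example, not our actual software architecture.

```python
# Schematic sketch of a two-rate haptic coupling loop (names are illustrative assumptions).
HAPTIC_RATE = 1000   # Hz, sub-world around the manipulated object
WORLD_RATE = 100     # Hz, rest of the rigid-body world

def simulation_loop(sub_world, full_world, haptic_device, n_steps):
    ratio = HAPTIC_RATE // WORLD_RATE
    for step in range(n_steps):
        # High-frequency loop: only the bodies near the manipulated object.
        sub_world.step(1.0 / HAPTIC_RATE, haptic_device.read_pose())
        haptic_device.send_force(sub_world.coupling_force())
        if step % ratio == 0:
            # Low-frequency loop: the rest of the world, then exchange states
            # and interface forces between the two subsets.
            full_world.step(1.0 / WORLD_RATE)
            sub_world.synchronize_interface(full_world)
```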
This work has been published in the Sixth Eurographics Workshop in Virtual Reality, Interactions and Physical Simulations [52].
Haptic interaction for assembly/disassembly tasks
In the industrial context, applications that need haptic interfaces mainly consist of assembly and disassembly tasks used to validate virtual prototypes or to train operators. To assist users in performing the assembly of CAD objects, the fields of teleoperation and virtual reality provide methods derived from algorithmic assembly planning. The objective of such assembly planning is to determine sequences to assemble a product from its individual parts, ensuring that the moving objects do not collide with any other objects in the working environment. Nevertheless, in complex assembly simulations, the user can interact intuitively with the virtual CAD environment. It is then possible, by using virtual fixtures (abstract perceptual information added to the simulation), to assist the user while leaving him partial control of his movements.
We consider that the virtual scene can be decomposed into two areas. The first area is related to exploration of the virtual environment. It is composed of zones where no assembly is planned and where the user gets a classical haptic control of the object. In this area, the collision reactions are computed using the non-smooth dynamics algorithm we developed over the last few years [33]. The second area is related to the assembly tasks. To apply constraint-based guidance in this functional area, we qualify the task and associate both mechanical linkage(s) and virtual fixture(s). To create the constraints, we use topological information of the CAD objects to identify the assembly areas, and model the trajectory with mechanical linkages when possible. This phase is carried out by pre-processing and placing constraints for each zone of interest. Once the constraints and their associated geometric guides are set up, the simulation switches between two different modes of control, related to the two areas described above, as sketched below.
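The following sketch illustrates this mode switching under assumed data structures (a list of pre-processed assembly zones, each carrying its mechanical linkage, and a free-space solver for the exploration area); the names are hypothetical and only convey the control logic.

```python
# Sketch of the two-mode control: free haptic manipulation vs. constraint-guided assembly.
def control_step(object_pose, device_pose, assembly_zones, free_space_solver):
    zone = next((z for z in assembly_zones if z.contains(object_pose)), None)
    if zone is None:
        # Exploration area: classical haptic control, collision response
        # computed by the non-smooth dynamics solver.
        return free_space_solver.step(object_pose, device_pose)
    # Assembly area: constrain the motion to the mechanical linkage / virtual
    # fixture associated with the zone (e.g., a sliding or screwing trajectory).
    return zone.linkage.project(device_pose)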
A prototype has been developed in collaboration with the CEA-LIST within the Part@ge ANR project. This work is part of Loïc Tching's PhD thesis, which began in 2007 and is funded by Haption SA through a CIFRE contract.
Augmented walking in virtual environments
Participants: Anatole Lécuyer [contact], Maud Marchal [contact], Bruno Arnaldi, Gabriel Cirio, Tony Regia Corte, Sébastien Hillaire, Léo Terziman.
We describe hereafter our recent results obtained in the field of "augmented walking in virtual environments". Our first objective is here to better understand the properties of human perception and human locomotion when walking in virtual worlds. Then, our intention is to improve the multisensory rendering of human locomotion and human walk in virtual environments, making full use of both haptic and visual feedback. Last, we intend to design advanced interactive techniques and interaction metaphors to enhance, in a general manner, the navigation possibilities in VR systems. Our major results in this area concern : (1) gaze analysis and gaze prediction when turning in virtual environments, (2) the use of camera motions for virtual walking, and (3) a novel metaphor for infinite walking in virtual worlds.
Gaze prediction when turning in virtual environments
We have first analyzed the gaze behavior of participants navigating in virtual environments. We focused on first-person navigation, which involves forward and backward motion on a ground surface with turns towards the left or right.
We found that gaze behavior in virtual reality, with input devices like mice and keyboards, is similar to the one observed in real life. Participants anticipated turns as in real-life conditions, i.e., when they can actually move their body and head. We also found influences of visual occlusions and optic flow similar to those reported in the existing literature on real navigation.
Then, we proposed three simple gaze prediction models taking as input: (1) the motion of the user, as given by the rotation velocity of the camera around the yaw axis (considered here as the virtual heading direction), and/or (2) the optic flow on screen. These models were tested with data collected in various virtual environments. Results showed that these models can significantly improve the prediction of gaze position on screen, especially when turning in the virtual environment. The model based on the rotation velocity of the camera seems to be the best trade-off between simplicity and efficiency. We suggest that these models could be used in several interactive applications using the gaze point as input. They could also be used as a new top-down component in any existing visual attention model.
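For illustration, the simplest of the three models amounts to shifting the predicted gaze point horizontally in the direction of the camera's yaw rotation, since gaze anticipates turns. The gain and clamping values below are assumptions for the example, not the values fitted in the study.

```python
# Minimal sketch of gaze prediction from camera yaw velocity (illustrative gain and clamp).
def predict_gaze_x(yaw_velocity, gain=0.15, screen_half_width=0.5):
    """yaw_velocity in rad/s; returns a normalized horizontal offset from screen center."""
    offset = gain * yaw_velocity          # turning left/right shifts the predicted gaze accordingly
    return max(-screen_half_width, min(screen_half_width, offset))
```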
This study has been published in the ACM International Symposium on Virtual Reality Software and Technology [53].
Camera motions for enhanced perception of virtual walking
In first-person video games, visual simulations of human walking have been developed that use camera shifts to reproduce the motion of the eyes of a walking person, which results from the oscillations of the head produced at each step by natural walking.
We have conducted one experiment to evaluate the influence of such oscillating camera motions on the perception of traveled distances in virtual environments.
In the experiment, participants viewed visual projections of translations along straight paths. They were then asked to reproduce the traveled distance during a navigation phase using keyboard keys. Each participant had to complete the task (1) with a linear camera motion, and (2) with an oscillating camera motion that simulates the visual flow generated by natural human walking. Taken together, our results suggest that oscillating camera motions allow a more accurate distance reproduction for short traveled distances. As such, camera motions could be used for enhanced perception of virtual walking in various VR applications: video games, architectural or urban project reviews, full-scale military or industrial training, etc.
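As a sketch of what "oscillating camera motion" can mean in practice, the camera can be offset vertically at the step frequency and laterally over the full gait cycle. The amplitudes and frequencies below are illustrative values, not the parameters used in our experiment.

```python
# Sketch of an oscillating camera motion simulating the head movements of natural walking.
import math

def walking_camera_offset(t, step_frequency=2.0, vertical_amp=0.02, lateral_amp=0.01):
    """t in seconds; returns (lateral, vertical) offsets in meters added to the camera position."""
    vertical = vertical_amp * math.sin(2.0 * math.pi * step_frequency * t)   # one bounce per step
    lateral = lateral_amp * math.sin(math.pi * step_frequency * t)           # sway over the full gait cycle
    return lateral, vertical
```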
This study has been published in the IEEE International Conference on Virtual Reality [61].
Magic Barrier Tape: infinite walking in virtual environments
In most virtual reality simulations, the virtual world is larger than the real walking workspace, which is often bounded by the tracking area or the display devices. We have developed a novel interaction metaphor called the Magic Barrier Tape, which allows a user to navigate in a potentially infinite virtual scene while confined to a restricted walking workspace. The technique relies on the barrier tape metaphor and its implicit "do not cross" message by surrounding the walking workspace with a virtual barrier tape in the scene. The technique thus informs the user about the boundaries of his walking workspace, providing an environment safe from collisions and tracking problems. It uses a hybrid position/rate control mechanism: real walking inside the workspace, and rate-control navigation to move beyond the boundaries by "pushing" on the virtual barrier tape. It provides an easy, intuitive and safe way of navigating in a virtual scene, without breaking immersion.
Since it can be used in many different virtual reality systems, it is an interaction metaphor suitable for many different applications, from entertainment to training simulation scenarios.
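The sketch below illustrates the hybrid position/rate control principle under simplifying assumptions (a circular workspace centered on the tracking origin, an arbitrary rate gain): inside the workspace the viewpoint follows the tracked position one-to-one, and beyond the tape the penetration distance drives a rate-controlled translation of the virtual scene.

```python
# Sketch of the hybrid position/rate control behind the Magic Barrier Tape (assumed geometry and gains).
import numpy as np

def update_virtual_position(virtual_origin, tracked_pos, workspace_radius, rate_gain, dt):
    """tracked_pos: 2D position of the user relative to the workspace center."""
    dist = np.linalg.norm(tracked_pos)
    if dist <= workspace_radius:
        # Position control: 1:1 real walking inside the workspace.
        return virtual_origin
    # Rate control: velocity proportional to how far the user pushes beyond the tape.
    direction = tracked_pos / dist
    penetration = dist - workspace_radius
    return virtual_origin + rate_gain * penetration * direction * dt
```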
This study has been published in the ACM International Symposium on Virtual Reality Software and Technology [47].
Interactions within 3D virtual universes
Our work focuses on new formalisms for 3D interaction in virtual environments, to define what an interactive object is, what an interaction tool is, and how these two kinds of objects can communicate with each other. We also propose virtual reality patterns to combine navigation with interaction in immersive virtual environments.
Generic interaction tools for collaboration
Our goal is to propose software utilities that help implement new interaction metaphors for collaborative virtual environments. These software utilities rely on a generic interaction protocol that describes what kind of data an interaction tool needs to exchange with an interactive object in order to take control of it, and which should be generic enough to be deployed on different software integration platforms such as OpenMASK, Spin3D or Virtools.
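To give an idea of the kind of exchange such a protocol involves, the sketch below shows a hypothetical interface in which a tool negotiates control of an interactive object's parameters. The class and method names are purely illustrative and do not correspond to the actual OpenMASK, Spin3D or Virtools APIs.

```python
# Hypothetical sketch of a generic tool/object interaction protocol (names are assumptions).
class InteractiveObject:
    def controllable_parameters(self):
        """Declare which parameters (e.g., position, orientation) a tool may take over."""
        return {"position": "vec3", "orientation": "quat"}

    def request_control(self, tool, parameter):
        """Grant or refuse control of one parameter (allowing, e.g., co-manipulation by several tools)."""
        ...

class InteractionTool:
    def attach(self, obj: InteractiveObject):
        # A tool asks for control of every parameter it needs before manipulating the object.
        for name in obj.controllable_parameters():
            obj.request_control(self, name)
```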
We have published a state of the art on 3D collaborative interactions (paradigms and metaphors) [31], which led us to propose an interaction protocol able to fit the most commonly used 3D interactions [37]. This protocol has been used with a new collaborative interaction metaphor called the "3-hand manipulation technique" [38] and was demonstrated at the JVRC 2009 conference [39].
Extension to the Collada exchange scheme
We have proposed new extensions to the Collada exchange scheme in order to take into account the interactive and collaborative capabilities of virtual objects. Objects described using this scheme can be exploited with our interaction protocol within our OpenMASK VR framework.
The immersive virtual cabin (IVC)
The objective of the Immersive Virtual Cabin is to improve the user's immersion together with all his real tools, and thus to make the design and use of 3D interaction techniques easier, and to make it possible to use them in various contexts, either for different kinds of applications or with different kinds of physical input devices. We have developed a new 3D interaction metaphor called the "2D Pointer/3D Ray" [49] that can be carried by our IVC.