Team Bunraku

Section: New Results

Autonomous and expressive virtual humans

The overall objective of this topic is to create lively and believable inhabited virtual environments and to be able to interact with them. We subdivide the topic into three components exploring different scales, ranging from that of virtual environments and the global displacement of virtual humans down to the local scale of motion synthesis. Finally, we consider the interaction between real and virtual humans.

Autonomous Navigation and Virtual Environments

Participants : Fabrice Lamarche [ contact ] , Julien Pettré [ contact ] , Stéphane Donikian, Samuel Lemercier, Thomas Lopez, Jan Ondrej, Yijiang Zhang.

TopoPlan: a topological planner for 3D environments

Navigation inside virtual environments plays a key role in behavioral animation, as it is part of a large number of behaviors. Most often, virtual environments are delivered as 3D databases modelled by 3D designers. Populating such environments requires computing data structures, based on the environment geometry, that enable path planning and obstacle avoidance for virtual human navigation. The challenge is then to plan a path and adapt the humanoid motion to environmental constraints in real time.

TopoPlan is a model enabling real-time path planning inside complex 3D environments [18] . It analyses a 3D database in order to automatically extract an informed topology. It relies on an exact 3D subdivision enabling the computation of accurate spatial relations between cells. Starting from those spatial relations, the model automatically extracts a topological representation of the environment. This representation relies on the computation of continuous surfaces (named zones) composed of cells having similar properties (those properties are user defined and can be geometrical and/or semantic). Once zones are computed, their relations are identified and used to compute the final topology. The system is then able to automatically characterize continuous surfaces, stairs (even spiral stairs) or steps. Moreover, it computes bottlenecks on flat or uneven surfaces. Finally, thanks to the 3D subdivision, the ceiling is also identified. Once the subdivision and its topology are extracted, they can be used in real time to compute paths inside complex 3D environments. This model can be used to control a virtual character (animated by MKM) that plans a path inside a complex environment and adapts its motion to the ceiling geometry and the floor constraints.
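The grouping of adjacent cells with similar properties into zones can be sketched as a union-find pass over cell adjacencies. All names and data layouts below are illustrative, not TopoPlan's actual API:

```python
# Illustrative sketch of TopoPlan-style zone extraction: cells of a spatial
# subdivision are merged into "zones" when they are adjacent and share the
# same user-defined properties (e.g. flatness, semantic tag).
# Names and data layout are hypothetical, not TopoPlan's actual API.

def find(parent, i):
    # Path-compressing find for the union-find structure.
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i

def extract_zones(cells, adjacency, same_properties):
    """cells: list of cell property dicts; adjacency: list of (i, j) pairs;
    same_properties: predicate deciding if two cells belong to one zone."""
    parent = list(range(len(cells)))
    for i, j in adjacency:
        if same_properties(cells[i], cells[j]):
            parent[find(parent, i)] = find(parent, j)
    zones = {}
    for i in range(len(cells)):
        zones.setdefault(find(parent, i), []).append(i)
    return list(zones.values())

# Example: three flat cells and one stair cell; only the flat, connected
# cells merge into a single zone.
cells = [{"type": "flat"}, {"type": "flat"}, {"type": "stair"}, {"type": "flat"}]
adjacency = [(0, 1), (1, 2), (2, 3)]
zones = extract_zones(cells, adjacency, lambda a, b: a["type"] == b["type"])
```

Once zones exist, their adjacency relations (flat zone meets stair zone, etc.) give the topological graph used for planning.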

ToD & DyP : Topology Detection and Dynamic Planning

When automatically populating 3D geometric databases with virtual humanoids, modeling the navigation behavior is essential, since navigation is part of most exhibited behaviors. In many application fields, the need to manage navigation in dynamic environments arises (virtual worlds taking the laws of physics into account, digital plants in which step stools can be moved, etc.). This study focuses on the following issue: how to manage the navigation of virtual entities in dynamic environments where the topology may change at any time, i.e. where unpredictable accessibility changes can arise at runtime. Unlike current algorithms, movable items are not only considered as obstacles but can also help virtual entities in their navigation.

The proposed algorithm splits this problem into two complementary processes: ToD (Topology Detection) and DyP (Dynamic Planning) [69] . The aim of ToD is to continuously detect and update topological relations between moving objects, i.e. accessibility or obstruction. To compute accessibility relations, ToD relies on the navigation capabilities of a virtual human (climbing a step, jumping, etc.). The aim of DyP is to use the topology computed by ToD to maintain or compute a roadmap enabling accurate path planning inside the dynamic environment. Coupling ToD and DyP helps tackle planning inside dynamic environments at different granularities while precisely identifying the elements that require updating. This enhances global system performance by enabling local adaptations that are computed only when required.
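The local-update principle behind DyP can be sketched as a roadmap whose edges are invalidated or restored only near a moved object, instead of rebuilding the whole graph. This is a hypothetical simplification (a midpoint test stands in for a real intersection test; names are illustrative):

```python
# Hypothetical sketch of the DyP idea: a roadmap whose edges flip between
# passable and blocked when a movable object changes position, touching
# only the edges near the object. Not the actual ToD/DyP implementation.
import math

class DynamicRoadmap:
    def __init__(self, nodes, edges):
        self.nodes = nodes                     # node id -> (x, y)
        self.edges = {e: True for e in edges}  # (a, b) -> currently passable?

    def _edge_blocked(self, edge, obstacle, radius):
        # Cheap stand-in for a real segment/volume intersection test:
        # obstacle disc against the edge midpoint.
        (ax, ay), (bx, by) = self.nodes[edge[0]], self.nodes[edge[1]]
        mx, my = (ax + bx) / 2, (ay + by) / 2
        return math.hypot(mx - obstacle[0], my - obstacle[1]) < radius

    def object_moved(self, obstacle, radius):
        # Local update: only edge states change; the graph is not rebuilt.
        for edge in self.edges:
            self.edges[edge] = not self._edge_blocked(edge, obstacle, radius)

    def passable(self, a, b):
        return self.edges.get((a, b), False)

rm = DynamicRoadmap({0: (0, 0), 1: (2, 0), 2: (4, 0)},
                    [(0, 1), (1, 2)])
rm.object_moved((1, 0), 0.5)   # obstacle lands on the midpoint of edge (0, 1)
```

In the actual system, ToD would additionally report when a movable object *creates* accessibility (e.g. a step stool), adding edges rather than only removing them.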

Populating large environments with numerous virtual humans

The creation of lively and believable virtual worlds requires populating them with virtual humans. The number of virtual humans needed grows rapidly with the size of the virtual world, and their simulation and animation are time consuming even though their role may remain secondary. We proposed the Crowd Patches approach [63] to both ease the population design process and drastically reduce the computation needed for simulation and animation. The key idea of this approach is to build inhabited environments by assembling small portions of precomputed crowd simulation. Our main contribution is to break limitations in terms of memory requirements and size of virtual environments. This work was realized in collaboration with EPFL-VRlab.
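The assembly idea can be sketched as tiling the world with small, time-periodic precomputed animations, so that runtime cost reduces to replaying and offsetting local trajectories. This is a toy illustration of the principle, not the Crowd Patches data model:

```python
# A minimal sketch of the Crowd Patches idea: the world is tiled with small
# precomputed, time-periodic animation patches, so populating N cells at
# runtime reduces to replaying local trajectories. Illustrative only.

class Patch:
    def __init__(self, period, trajectories):
        # trajectories: functions t -> (x, y) local to the patch,
        # assumed periodic over `period` seconds.
        self.period = period
        self.trajectories = trajectories

    def agents_at(self, t):
        t = t % self.period          # replay the precomputed loop
        return [traj(t) for traj in self.trajectories]

def world_agents(patch_grid, patch_size, t):
    """Assemble global positions by offsetting each patch's local output."""
    agents = []
    for (gx, gy), patch in patch_grid.items():
        for (x, y) in patch.agents_at(t):
            agents.append((gx * patch_size + x, gy * patch_size + y))
    return agents

# Two copies of the same 10 s patch tiled side by side: memory holds one
# patch, but the population covers both cells.
walk = Patch(10.0, [lambda t: (t, 0.0)])
grid = {(0, 0): walk, (1, 0): walk}
positions = world_agents(grid, patch_size=10.0, t=12.0)
```

The real approach additionally constrains trajectories at patch borders so that adjacent patches connect seamlessly.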

Motion Synthesis

Participants : Richard Kulpa [ contact ] , Fabrice Lamarche [ contact ] , Franck Multon [ contact ] , Julien Pettré, Ludovic Hoyet.

Reactive Motion control for virtual characters

The goal of behavioral animation is to automate the process of populating a virtual environment with autonomous virtual humans. Applications in interactive environments are numerous, such as virtual reality, games, crowd simulations or digital plant simulations. Such applications raise several issues in terms of motion control:

  1. The credibility of the virtual human behavior often relates to the naturalness of its movements.

  2. Traditional long-term motion planning approaches cannot be used, since they suffer from scalability issues in the context of interactive applications.

  3. The dynamics of the world require fast adaptation of the behavior and hence fast adaptation of the humanoid's posture to environmental constraints.

To tackle those problems, we are working on motion control processes [18] that are compatible with unpredictability, i.e. that do not require long-term prediction, and that can easily be combined with higher-level processes, i.e. behaviors issued by a decisional model.

These control processes provide three main features: (i) posture adaptation for avoiding ceiling constraints in a reactive way, (ii) smart environment-adaptive footprint generation, and (iii) automated extraction and combination of captured motion data through a high-level motion control process.

The design of those processes relies on the properties of the MKM motion model (morphology-independent motion representation, real-time motion retargeting) and exploits the quality of the TopoPlan environment representation to automatically adapt motions and postures to the virtual human's morphology, to environmental constraints and, in the case of process (iii), to high-level commands. All those processes have a low response time, making them compatible with interactivity constraints, and are fully automatic, i.e. they do not require additional user control.
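The reactive ceiling-avoidance feature can be illustrated by a frame-by-frame clamp of the character's root height against the local clearance returned by the environment representation. This mirrors the idea in spirit only; it is not the MKM/TopoPlan implementation, and all parameters are assumptions:

```python
# Illustrative sketch of reactive posture adaptation under ceiling
# constraints: the character's root is lowered just enough to keep its head
# under the local floor-to-ceiling clearance. Hypothetical values and names.

def adapt_root_height(standing_root, head_offset, clearance, margin=0.05):
    """Return the root height to use this frame.

    standing_root: root height when fully standing (m)
    head_offset:   head height above the root (m)
    clearance:     floor-to-ceiling distance at the current cell (m)
    margin:        safety gap kept below the ceiling (m)
    """
    max_root = clearance - head_offset - margin
    # Crouch only when needed; never rise above the standing posture.
    return min(standing_root, max(max_root, 0.0))

# A character with a 1.0 m root and 0.8 m head offset crouches under a
# 1.5 m ceiling, and stands normally under a 3.0 m one.
crouched = adapt_root_height(1.0, 0.8, 1.5)
standing = adapt_root_height(1.0, 0.8, 3.0)
```

Because the clamp depends only on the current cell's clearance, it needs no long-term prediction, matching the reactivity constraint stated above.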

Realistic interactions between virtual humans

Realistic navigation requires virtual humans to react to the presence of other virtual humans walking nearby. When two virtual humans have crossing trajectories, they need to adapt their motion in order to avoid a collision. To achieve realistic collision avoidance, we experimentally studied real humans' behavior in such situations. We acquired a large amount of experimental data and elaborated a new model based on our observations [72] . Moreover, we were able to calibrate the parameters of our model and validate the results by directly confronting real and synthetic trajectories.
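The anticipation principle underlying such models (predict the closest approach under current velocities, and adapt only when it falls below a comfort distance) can be sketched as follows. This is a toy illustration, not the calibrated model of [72], and the thresholds are assumptions:

```python
# Toy velocity-adaptation sketch of pairwise collision avoidance: a walker
# predicts the closest approach with another walker under constant
# velocities and, if it falls below a comfort distance, slows down.
# Illustrates the anticipation principle only; not the model of [72].
import math

def closest_approach(p1, v1, p2, v2):
    """Time and distance of minimum separation under constant velocities."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    dvx, dvy = v2[0] - v1[0], v2[1] - v1[1]
    dv2 = dvx * dvx + dvy * dvy
    t = 0.0 if dv2 == 0 else max(0.0, -(dx * dvx + dy * dvy) / dv2)
    return t, math.hypot(dx + dvx * t, dy + dvy * t)

def adapt_speed(p1, v1, p2, v2, comfort=0.8, slow=0.7):
    t, d = closest_approach(p1, v1, p2, v2)
    if d < comfort:                  # a future collision is anticipated
        return (v1[0] * slow, v1[1] * slow)
    return v1

# Two walkers on perpendicular crossing paths: the first one yields.
v = adapt_speed((0.0, 0.0), (1.0, 0.0), (5.0, -5.0), (0.0, 1.0))
```

Calibrating parameters such as the comfort distance against recorded human trajectories is precisely the role of the experimental data mentioned above.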

Dynamics in humanoid motions

Purely motion-capture-based techniques do not guarantee the physical correctness of synthetic motions. It is however possible to re-process the resulting motions so that they respect the basic laws of physics. Extending previous results that used pose adjustments to enforce physical effects such as body weight, we have considered, this year, the issues related to external forces applied to the character [22] .

We have worked in collaboration with LAAS/CNRS (Locanthrope ANR project) in order to clarify how the ZMP (Zero Moment Point) behaves in human walking and in unbalanced motions (leading to a fall). In the literature, the ZMP is generally constrained to lie inside the base of support. The experiments enabled us to estimate the actual influence of the choice of model (inverted pendulum, particles or connected rigid bodies) on the computation of the ZMP [17] . We also studied the behavior of this point compared to other indexes used in biomechanics, such as the Center of Pressure (COP) and the extrapolated Center of Mass (xCOM). Preliminary results tend to show that the xCOM offers better results than the ZMP does [54] . We now wish to validate this approach on a wide set of subjects and to design a motion controller based on this index for virtual humans.
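For the simplest of the models above, the single particle at constant height (linear inverted pendulum), the two indexes have standard closed forms, sketched here for one horizontal axis:

```python
# Standard simplified-model formulas for the indexes discussed above, for a
# single particle (linear inverted pendulum): the ZMP shifts the center of
# mass by a term proportional to its horizontal acceleration, while Hof's
# extrapolated center of mass (xCOM) shifts it by a term proportional to
# its velocity. One horizontal axis only; a sketch, not the study's code.
import math

G = 9.81  # gravity (m/s^2)

def zmp_x(x_com, ddx_com, z_com):
    """ZMP under the linear inverted pendulum assumption (constant height)."""
    return x_com - (z_com / G) * ddx_com

def xcom_x(x_com, dx_com, leg_length):
    """Extrapolated center of mass: xCOM = x + dx / omega0."""
    omega0 = math.sqrt(G / leg_length)
    return x_com + dx_com / omega0

# In static stance (zero velocity and acceleration) both indexes coincide
# with the center-of-mass position.
static_zmp = zmp_x(0.1, 0.0, 0.9)
static_xcom = xcom_x(0.1, 0.0, 0.9)
```

With richer models (multiple particles or connected rigid bodies), the acceleration term is summed over segments, which is where the model choice studied in [17] changes the computed ZMP.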

All this work was carried out in collaboration with Taku Komura, Edinburgh University, who co-supervises the PhD-thesis of Ludovic Hoyet.

Virtual reality to analyze interaction between humans

Participants : Franck Multon [ contact ] , Julien Bilavarn, Bruno Arnaldi, Stéphane Donikian, Richard Kulpa.

Understanding interaction between humans is very challenging because it involves many complex phenomena, including perception, decision-making, cognition and social behaviors. Consequently, defining a protocol for studying a subset of those phenomena in real situations is very complex. Using VR to standardize experimental situations is a very promising approach: experimenters can accurately control the simulated environment, contrary to the real world. However, the main problem is: how to ensure that people behave as they do in the real world when they are immersed in a simulated environment?

In the past, in collaboration with M2S (University Rennes 2), we worked on the interaction between two opponents in handball. We designed a framework to animate virtual throwers in a reality center and to analyze the gestures of real goalkeepers whose objective was to intercept the corresponding virtual balls. The main advantage of this situation is that the goalkeeper has to anticipate the trajectory of the ball from the opponent's gestures; otherwise he would not have enough time to intercept it [30] , [29] .

This work has been extended to the study of deceptive movements in rugby. Combining perceptual analysis based on the use of cutoffs with biomechanical analysis, we extracted important kinematic information that could explain the differences between experts and novices. Indeed, thanks to the cutoffs, it is possible to determine how early each of these two levels of practice can perceive the correct final direction of the opponent. This information is then correlated with kinematic parameters of the player [11] .

Finally, we conducted experiments to determine whether the dynamic reaction of subjects avoiding projectiles is the same in a real situation as when immersed with an HMD. First results confirmed that it is.

We have also begun an early study to determine whether sports training in virtual reality can be useful in real situations. The goal is to define training tools that coaches can use to train athletes in repetitive motions such as katas in karate.

This work will continue to involve specialists in sports sciences (M2S of University Rennes 2) and neuroscientists (Queen's University of Belfast).

