
Section: New Results

Garment Simulation

Garment simulation may seem to be a research domain completely unrelated to the other activities of MIRAGES. In fact, they are intimately related. As stated previously, human body modeling and tracking are the core of our research activity. The people we have to track are usually dressed, so that what we actually track is more a garment than the human body itself. Since we use model-based approaches, we need a garment model at our disposal; as garment modeling is a very complex domain, it became a research topic in itself within MIRAGES. Research on cloth animation and computer-generated garments is a field of major interest in both the Computer Graphics and Textile research communities. The most common approach is to model cloth objects using mass-spring systems and to solve Newton's equations governing the system motion with implicit integration methods. Unfortunately, the results usually lack realism. Our main effort was therefore to improve realism, which is necessary both for use in the textile industry and for precise 3D human body tracking.
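As a minimal illustration of this standard approach (a hypothetical toy, not the MIRAGES simulator), a one-dimensional chain of masses connected by linear springs can be integrated with implicit Euler by solving one linear system (M − h²K) v_new = M v + h f(x) per step:

```python
import numpy as np

# Toy 1D mass-spring chain with implicit Euler (illustrative only).
# All parameter values below are arbitrary choices for the sketch.
n, h, k, m = 5, 0.05, 100.0, 1.0    # masses, time step, stiffness, mass
x = np.linspace(0.0, 1.0, n)        # vertical positions; rest length = spacing
rest = x[1] - x[0]
v = np.zeros(n)
grav = -9.81

# Constant stiffness (force Jacobian) matrix K = df/dx for linear springs
K = np.zeros((n, n))
for i in range(n - 1):
    K[i, i] -= k; K[i + 1, i + 1] -= k
    K[i, i + 1] += k; K[i + 1, i] += k

M = m * np.eye(n)

def forces(x):
    """Spring forces plus gravity at positions x."""
    f = np.full(n, m * grav)
    for i in range(n - 1):
        s = k * (x[i + 1] - x[i] - rest)
        f[i] += s; f[i + 1] -= s
    return f

for _ in range(100):
    A = M - h * h * K               # implicit Euler system matrix
    b = M @ v + h * forces(x)
    v = np.linalg.solve(A, b)
    v[0] = 0.0                      # pin the first mass
    x = x + h * v

print(x)                            # chain sags under gravity, stays stable
```

Implicit integration remains stable at time steps where explicit Euler would diverge for stiff springs, which is why it is the common choice for cloth.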

Development of a virtual try-on system

Participants : Le Thanh Tung, André Gagalowicz.

We want to develop a complete virtual try-on system covering the whole chain from design through tailoring to the try-on and customization of virtual garments, with as few interactions required from the user as possible.

We have built a first prototype of a virtual try-on system, named TUNGA, whose architecture is shown in figure 2. The system is composed of three main modules: (1) a 2D module (designed for stylists); (2) a 3D module (designed for clients); (3) a garment server.

Figure 2. Scheme of the TUNGA system

The first module provides an easy working environment for tailors, with a total of about thirty main functions. We have improved its interface and adapted it to the wishes and recommendations of Nadine Corrado, our partner in the Simulvet project.

We have also introduced a first multi-layer garment prototype: the 2D working space is now organized in separate levels corresponding to the layers of the garment. The modelist can create as many levels as he/she wants, since the number of levels is not limited in our system.

The 3D module and the garment server have been modified accordingly in order to cope with the new structure of the multi-layer garment. However, we do not yet have mechanical friction parameters between the materials of the different layers, so the interactions between these levels are treated only geometrically.

In the future, we intend to continue our collaboration with Nadine Corrado in order to improve our system. We will also improve the 3D simulation module so as to obtain realistic results in a reasonable simulation time.

Improvement of the collision technique

Participants : Weiran Yuan, Yujun Chen, André Gagalowicz.

Collision is one of the most essential problems in cloth simulation. Improper collision handling creates unrealistic simulations and can drive the simulation into unstable states. We improved the implicit contact handling for deformable objects presented by Otaduy [Miguel09] and combined buckling with the collision handling in the dynamic system solver. This method can handle complex collision and self-collision situations.

Implicit collision handling

The basic concept behind implicit contact handling is the non-penetration constraint. It can be described briefly as follows.

The set of object configurations q free of contact is bounded by a constraint manifold G in a high-dimensional configuration space. Collision detection locally samples this constraint manifold. More specifically, grouping all contact points in one vector p, the free space delimited by the constraint manifold G can be approximated by a set of algebraic inequalities g(p) ≥ 0.

In order to enforce non-penetration at the end of the time step, we can formulate the constraints implicitly. Specifically, we propose a semi-implicit formulation of contact constraints linearized as:

g(p) = g0 + (∂g/∂p) (p − p0) ≥ 0    (1)

with the rows of the Jacobian ∂g/∂p formed by the contact normal n at the time of impact, and g0 = g(p0).
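As a small numerical illustration (ours, not project code) of the linearized constraint (1), the signed gap of a contact point reduces to a dot product with the contact normal:

```python
import numpy as np

# Linearized non-penetration constraint: g(p) ≈ g0 + n · (p - p0) >= 0,
# where the Jacobian row is the contact normal n at the time of impact.

def linearized_gap(p, p0, n, g0):
    """Signed gap of contact point p; negative means the constraint is violated."""
    return g0 + np.dot(n, p - p0)

n = np.array([0.0, 1.0, 0.0])       # contact normal (pointing up)
p0 = np.array([0.0, 0.0, 0.0])      # contact point at the time of impact
g0 = 0.0

print(linearized_gap(np.array([0.0, 0.1, 0.0]), p0, n, g0))    # positive: separated
print(linearized_gap(np.array([0.0, -0.2, 0.0]), p0, n, g0))   # negative: penetrating
```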

The solution to our constrained dynamics problem alone does not guarantee a penetration-free state at the end of a time step. There are two possible reasons: the linearization of the contact constraints, and the fact that the collision response induced by some constraints may in turn violate other constraints that were not yet accounted for. We use a time-stepping algorithm based upon constraint manifold refinement that will guarantee a collision-free state.

Given the initial state (q0, v0) (where v is the velocity), and using one Newton iteration of implicit Euler for the numerical integration, we add the buckling operation before the CMR (Constraint Manifold Refinement; see details in [Miguel09]) procedure, which ensures a collision-free status of the buckling masses. The full algorithm per time step proceeds as follows:

(1) Linearize and discretize the dynamics: compute (A, G, b) at (q0, v0), where A is the matrix used in the system solver Av = b.

(2) Solve for the unconstrained velocities v* in Av* = b.

(3) Execute tolerance-based collision detection at q0 and initialize the constraint set g with the newly found constraints g*.

(4) Loop CMR while g* ≠ ∅ (at most 5 iterations are sufficient experimentally):

4.1 Compute the collision response δv based on (q0, v*, g).

4.2 Compute the tentative v = v* + δv and q = q0 + δt G v.

4.3 Find a buckling point bp* on the new position q, compute the destination position bp as described in the buckling model of the MIRAGES project, and update q using bp.

4.4 Execute continuous collision detection between q0 and q.

4.5 Augment the constraint set g with the newly found constraints g*.

(5) Combine the buckling operation with the system: add a small-distance perturbation to the system constraints.

With this method, we can implicitly handle the collision with the buckling effect.
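The structure of the per-time-step loop can be reproduced on a deliberately tiny example (our toy, not the algorithm of [Miguel09]): a single particle falling onto a floor at y = 0, where the constraint set g(q) = q ≥ 0 is only activated once continuous detection finds a violation, mirroring steps (3)-(4.5):

```python
# Toy constraint-manifold-refinement loop for one particle and a floor.
# Step size, gravity and iteration cap are arbitrary illustration values.

def step(q0, v0, h=0.1, grav=-10.0, max_cmr=5):
    v_star = v0 + h * grav              # (2) unconstrained velocity
    active = q0 <= 0.0                  # (3) tolerance-based detection at q0
    v = v_star
    q = q0 + h * v                      # tentative end-of-step position
    for _ in range(max_cmr):            # (4) CMR loop
        if active:
            v = max(v, -q0 / h)         # (4.1) response: velocity that stops at the floor
            q = q0 + h * v              # (4.2) tentative position
        if q < 0.0 and not active:      # (4.4) continuous detection between q0 and q
            active = True               # (4.5) augment the constraint set
        else:
            break
    return q, v

q, v = 0.05, 0.0
for _ in range(10):
    q, v = step(q, v)
print(q, v)                             # the particle rests on the floor, q >= 0 guaranteed
```

The loop re-solves until no new constraints appear, which is the essence of the refinement: the end-of-step state is collision-free even though the first unconstrained solve was not.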

Collision repairing method

We implemented a robust collision repairing method inside our simulation software using the intersection contour minimization technique presented by Volino [VMT00]. The reason for the use of collision repairing is that the initial input for the cloth simulation may already intersect the human body model.

This method is based upon minimizing edge-polygon intersections. In the following, we assume that the surfaces are described as polygonal meshes and that the polygons are flat. In this context, regular surface collisions are typically detected as proximities between vertices and polygons, and sometimes between edges. These collisions indicate that the surfaces are close enough to be considered as interacting. Surface intersections, on the other hand, are typically detected as intersections between edges and polygons. Their presence denotes that the surfaces interpenetrate in a way that is usually not physically plausible, and we present a scheme to resolve this situation. The surface intersection contour is actually described as two lines, drawn identically on the two intersecting surfaces. The idea of the resolution scheme is fairly simple: we define a collision response that induces a relative displacement between intersecting edges and polygons so as to reduce the length of the intersection contour, ultimately leading to the disappearance of the surface intersection.
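The crossings that make up the intersection contour are found with an edge-triangle test; a standard choice (our illustration here uses Möller-Trumbore, not necessarily the test of [VMT00]) is:

```python
import numpy as np

# Segment-triangle intersection (Möller-Trumbore), the building block for
# finding the edge-polygon crossings that form an intersection contour.

def segment_hits_triangle(p, q, a, b, c, eps=1e-9):
    """True if segment p->q crosses triangle (a, b, c)."""
    d = q - p
    e1, e2 = b - a, c - a
    h = np.cross(d, e2)
    det = np.dot(e1, h)
    if abs(det) < eps:                 # segment parallel to the triangle plane
        return False
    f = 1.0 / det
    s = p - a
    u = f * np.dot(s, h)               # first barycentric coordinate
    if u < 0.0 or u > 1.0:
        return False
    qv = np.cross(s, e1)
    v = f * np.dot(d, qv)              # second barycentric coordinate
    if v < 0.0 or u + v > 1.0:
        return False
    t = f * np.dot(e2, qv)             # parameter along the segment
    return 0.0 <= t <= 1.0

a, b, c = np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.])
print(segment_hits_triangle(np.array([.2, .2, -1.]), np.array([.2, .2, 1.]), a, b, c))
print(segment_hits_triangle(np.array([2., 2., -1.]), np.array([2., 2., 1.]), a, b, c))
```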

We integrated this method into the solver so that our simulator can repair intersections already present in the initial state.


Figures 3 and 4 show results obtained with the collision handling method described above. One experiment is a cloth-to-cylinder collision for the cloth-body test; the other is a twisting cylinder of cloth for the self-collision test. These two experiments contain complex collision and self-collision situations, which our buckling and implicit collision handling method handles very well.

Figure 3. Results for collision handling: cloth and cylinder model
Figure 4. Results for collision handling: twisting self-collision

Human body deformation and reconstruction

Participants : Thibault Luginbühl, André Gagalowicz.

Human body 3D scanners can now produce large point clouds lying on the surface of the scanned person with high precision. However, because of occlusions, these point clouds always present holes. Furthermore, they are not well suited to higher-level processing such as animation. In order to obtain a manifold surface of genus 0 from the point cloud, we propose to use a generic model. This model has valid geometry and topology; by adapting it to the data we obtain a specific model usable for advanced processing. We first present a registration process that fits the generic model to the data. In order to improve the quality of the results, we need more control over the deformation of the model; we therefore present a deformation model that adds such control during the registration process.

Registration process

Our registration process is divided into two steps. First the generic model is roughly deformed to get closer to the data by matching automatically detected feature points. Then we attract the points of the surface of the generic model to the closest corresponding data points, with constraints to keep the regularity of the mesh and to fill in the missing parts.

The regularity of the generic model is preserved by using the Laplacian coordinates of the mesh. Laplacian encoding of the geometry is widely used for surface deformation and compression (see [38] or [20]).

The discrete Laplace-Beltrami operator of a function u can easily be expressed at each point pi of a mesh using only the values at the first-ring neighborhood vertices N(pi).

Δu(pi) = Σ_{pj ∈ N(pi)} w_ij (u(pj) − u(pi))

Thus, the operator can be represented as a sparse matrix L. The discretization of the operator (the definition of the w_ij) has already been studied in several works [23]. The simplest solution consists in averaging the values of the first-ring neighborhood. Other weightings have been proposed, such as the cotangent weights or the mean value coordinates.
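The simplest discretization mentioned above (uniform "umbrella" weights w_ij = 1/|N(pi)|) can be assembled as follows; this is an illustrative sketch using a dense matrix, whereas a sparse matrix would be used on a real mesh:

```python
import numpy as np

# Uniform-weight discrete Laplacian: (L u)_i is the mean of (u_j - u_i)
# over the first-ring neighbors j of vertex i.

def uniform_laplacian(n_vertices, edges):
    neighbors = [[] for _ in range(n_vertices)]
    for i, j in edges:
        neighbors[i].append(j)
        neighbors[j].append(i)
    L = np.zeros((n_vertices, n_vertices))
    for i, ns in enumerate(neighbors):
        w = 1.0 / len(ns)              # uniform weight per neighbor
        for j in ns:
            L[i, j] = w
        L[i, i] = -1.0
    return L

# Tiny closed "mesh": a triangle with 3 vertices and 3 edges
L = uniform_laplacian(3, [(0, 1), (1, 2), (2, 0)])
u = np.array([1.0, 2.0, 3.0])
print(L @ u)                           # mean neighbor difference at each vertex
```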

Applying the Laplace-Beltrami operator to the coordinates of the mesh yields the so-called Laplacian coordinates. If X is the vector of all the x coordinates of the points, then the x Laplacian coordinates δx are defined by:

L X = δx

In order to register the generic model to the data, we use these Laplacian coordinates and add constraints to the system with the feature points. Feature points are detected automatically by the scanner software; the corresponding points are also defined on the generic model. We add this information to the linear system and build the new coordinates by inverting the matrix (least-squares solution). This roughly approximates the target position and allows us to work more locally afterwards.
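The constrained least-squares solve can be sketched on a toy one-dimensional "mesh" (a path of four vertices; sizes, weights and targets here are arbitrary illustration values, not our actual registration data): the Laplacian rows are stacked with weighted positional constraints at the feature points and solved in the least-squares sense.

```python
import numpy as np

# Least-squares reconstruction: minimize ||L X - delta||^2 subject to
# soft positional constraints X[idx] ~ targets with weight w.

def reconstruct(L, delta, idx, targets, w=10.0):
    n = L.shape[0]
    C = np.zeros((len(idx), n))
    for r, i in enumerate(idx):
        C[r, i] = w                    # weighted constraint row
    A = np.vstack([L, C])
    b = np.concatenate([delta, w * np.asarray(targets)])
    return np.linalg.lstsq(A, b, rcond=None)[0]

# Uniform Laplacian of a 4-vertex path
L = np.array([[-1.0, 1.0, 0.0, 0.0],
              [0.5, -1.0, 0.5, 0.0],
              [0.0, 0.5, -1.0, 0.5],
              [0.0, 0.0, 1.0, -1.0]])
X0 = np.array([0.0, 1.0, 2.0, 3.0])
delta = L @ X0                         # Laplacian coordinates of the rest shape
X = reconstruct(L, delta, idx=[0, 3], targets=[0.0, 6.0])
print(X)                               # endpoints pinned, interior spread smoothly
```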

To adjust the shape locally, we use an iterative method. At each step we compute a displacement vector for each point of the generic model, so that each data point attracts the closest point on the surface. We add a regularity constraint: the Laplacian coordinates should not differ too much from the original ones. The weights on these constraints are chosen according to the distance between the point on the generic model and the closest data point: if the point is far away, we put more weight on the conservation of the Laplacian coordinates, so that parts where the data were occluded are filled in with the information of the generic model (see figure 5 for a result of this registration process).

Figure 5. The registration process: from left to right, the generic model, the data with missing parts due to occlusions, our reconstruction.
Deformation Model

The previous registration system has important limitations. For example, the pose of the generic model should not be too far from the pose of the specific model, because Laplacian coordinates are not invariant under rotations. Besides, unrealistic shapes can appear where no data were available (the feet here, for instance, were only translated whereas they should have been slightly rotated). In order to gain more control over the deformation, we developed a deformation model that takes rotations into account at handle regions. We use an approach similar to [43] to guide the deformation.

The main reason for this choice is that the deformation can be computed very fast thanks to the Laplacian matrix: once the handle regions are chosen, the factorization can be computed and the deformation realized at interactive rates, because only the back substitutions have to be performed.
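This factor-once, solve-many pattern can be sketched as follows (a toy system, assuming SciPy is available; the regularized normal matrix L^T L + εI is one common way to make the system positive definite, not necessarily the exact formulation used here):

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Factor the Laplacian normal matrix once; each subsequent deformation
# only costs the cheap forward/back substitutions.

L = np.array([[-1.0, 1.0, 0.0],
              [0.5, -1.0, 0.5],
              [0.0, 1.0, -1.0]])
A = L.T @ L + 1e-3 * np.eye(3)        # regularized to be positive definite
factor = cho_factor(A)                # done once, when handles are chosen

for rhs in (np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])):
    x = cho_solve(factor, rhs)        # per-frame: back substitution only
    assert np.allclose(A @ x, rhs)
print("ok")
```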

Handle regions are defined by a set of points of the mesh and a rigid local transform for those points. We use these points in the same manner as the feature points of the previous section: their positions are added as constraints in the Laplacian system. But we also use the rotation information. Since the coordinates are not invariant under rotations, we transform them before the reconstruction. To find a local transformation at each point, we again use the pre-factorized Laplacian matrix to compute an interpolation of the quaternions defined at each handle. The interpolation is computed by solving Ls = 0 with the constrained values at the handle points. This results in a harmonic function interpolating the quaternions' coordinates between the handles. Even though this solution does not yield a rigid transform at each point, it gives visually pleasing deformations, which is enough for our rough approximation.
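A toy version of this harmonic interpolation (our sketch, on a 5-vertex path with handles at the two ends rather than a real mesh) solves L s = 0 per quaternion coordinate with the handle values fixed, then renormalizes per vertex since the result is not rigid:

```python
import numpy as np

# Harmonic interpolation of handle quaternions: solve L s = 0 for each
# coordinate with fixed values at the handle vertices.

def harmonic_interp(L, fixed_idx, fixed_vals):
    n = L.shape[0]
    free = [i for i in range(n) if i not in fixed_idx]
    s = np.zeros(n)
    s[fixed_idx] = fixed_vals
    # Restrict L s = 0 to the free rows: L_ff s_f = -L_fc s_c
    Lff = L[np.ix_(free, free)]
    rhs = -L[np.ix_(free, fixed_idx)] @ np.asarray(fixed_vals)
    s[free] = np.linalg.solve(Lff, rhs)
    return s

# Uniform Laplacian of a 5-vertex path
n = 5
L = np.zeros((n, n))
for i in range(n):
    ns = [j for j in (i - 1, i + 1) if 0 <= j < n]
    L[i, i] = -1.0
    for j in ns:
        L[i, j] = 1.0 / len(ns)

qa = np.array([1.0, 0.0, 0.0, 0.0])                            # identity
qb = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4), 0.0, 0.0])  # 90° about x
Q = np.stack([harmonic_interp(L, [0, n - 1], [qa[k], qb[k]]) for k in range(4)], axis=1)
Q /= np.linalg.norm(Q, axis=1, keepdims=True)                  # renormalize: not rigid
print(Q[2])    # middle vertex: rotation halfway between the two handles
```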

The deformation model was implemented in order to test its limitations. We chose to define handles as slices located near real articulations: shoulder, elbow, wrist, knee, ankle, etc. (see figure 6).

Figure 6. The deformation model: we used planes to slice the model and build the handle regions, which are colored here.

One of the limitations was the lack of hierarchical control between the handles: we obtained a good but unintuitive deformation of the surface. We therefore added the possibility of defining a hierarchy between the handle regions (see figure 7). Finally, we have a model that computes intuitive deformations as if there were a skeleton, but without rigging or skinning. The regions used to control the mesh can be freely defined by the user.

Figure 7. Effect of the hierarchy on the deformation. We applied a rotation to the leg and a translation to the elbow. The transformation is the same in the two images: the left image has no hierarchy, the right one has an intuitive hierarchy for human body deformations.

Further improvements are possible. We see two different options depending on the targeted application. For animation purposes, we would need more control over the local shape during a movement (for example, muscle bulging). On the other hand, if we want to automatically fit this model to a completely different pose, improvements have to be made to keep coherency when handles undergo very large deformations.

