Physics simulation for character hair

Hmm, right.

Aaah, controlJoint() is defined in Actor! Good point.

Ok. Thanks for the clarification.

The algorithm needs not only the transform of the parent joint, but also the global transform, because it must know the local direction of the gravity vector (which is defined in global coordinates). The parent joint’s transform is needed for enforcing the bone length constraint.

The simulation is based on treating the joints (including the terminator) as the endpoints of rigid rods. One endpoint of the whole hair chain (or tree) is considered fixed. The other end (the “leaf” end) is free. All points except the fixed one are subjected to a Newtonian physics simulation under some constraints.
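In case it helps to see it concretely, the constraint part could be handled with a simple projection step after integration. This is just a sketch of one common way to do it (a position-based projection; not necessarily what the final version will use, and points, parentIndex and restLength are placeholder names):

from panda3d.core import Vec3

def enforceBoneLengths(points, parentIndex, restLength):
    """After the integration step, pull each free point back onto a sphere
    of radius restLength[i] around its parent point (point 0 stays fixed)."""
    for i in range(1, len(points)):
        parent = points[parentIndex[i]]
        offset = points[i] - parent
        dist = offset.length()
        if dist > 1e-6:
            # rescale the offset so the rod keeps its rest length
            points[i] = parent + offset * (restLength[i] / dist)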

The physics simulation updates the positions of these points at each frame, and the joint transforms are updated by converting this position data into orientation data, using lookAt(). The conversion runs from the root of the tree toward the leaf level. The position difference between two successive points in the simulation basically gives the +y axis of the joint. The initial version tracked only the y axis and used a hack for locking the joint’s roll, but in the final version I intend to also track an auxiliary axis (z) representing the local “up” vector, which genuinely produces a unique orientation. The neutral orientation of a joint can be encoded as a position (with points representing the “y” and “z” axes) in its parent joint’s coordinate system.
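If it helps, the conversion step could look roughly like this using a scratch NodePath (Panda’s lookAt() aims the +y axis at a target and takes an up hint for the roll; orientationFromPoints and its arguments are made-up names for illustration):

from panda3d.core import NodePath, Point3, Vec3

def orientationFromPoints(jointPos, childPos, upHint=Vec3.up()):
    """Return a quaternion whose +y axis points from jointPos toward childPos,
    with the roll fixed by the given up hint (the auxiliary z axis)."""
    scratch = NodePath("orientationScratch")
    scratch.setPos(jointPos)
    scratch.lookAt(Point3(childPos), upHint)  # lookAt aims the node's +y axis
    return scratch.getQuat()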

I’ve been thinking a bit about what you said. You’re right that the scene graph sounds expensive, but on the other hand, it seems to me that it could be the correct abstraction for coordinate system conversion between the local Cartesian frames (as in coordinate frames in physics) of arbitrary objects (joints, nodes and the scene root).
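For example, converting a point between the local frames of two arbitrary nodes is a one-liner with the scene graph (jointA_NP and nodeB_NP are just placeholder NodePaths here):

from panda3d.core import Point3

# a point given in jointA_NP's local frame, re-expressed in nodeB_NP's frame
p_in_B = nodeB_NP.getRelativePoint(jointA_NP, Point3(0.0, 0.5, 0.0))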

As I think I might have mentioned, I’m planning to split the motion into two parts: run the simulation (which has its own data structures and is in principle completely independent of the joints) in the local coordinate system of the hair root joint (to which all the first segments of the hair chains are attached), and then account for the motion of this hair root joint (w.r.t. the global scene coordinates) by introducing fictitious forces. The fictitious forces formally convert the moving coordinate frame into a stationary one, where the usual equations of motion apply.
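For reference, the fictitious terms are the standard ones for a frame with linear acceleration a0, angular velocity omega and angular acceleration alpha. A sketch, assuming all vectors are expressed in the moving hair-root frame (the names are placeholders):

from panda3d.core import Vec3

def fictitiousAccel(a0, omega, alpha, r, v):
    """Fictitious acceleration felt by a point at position r with velocity v
    (both in the moving frame), when the frame has linear acceleration a0,
    angular velocity omega and angular acceleration alpha."""
    linear      = -a0                           # frame's linear acceleration
    euler       = -alpha.cross(r)               # from the changing rotation rate
    coriolis    = omega.cross(v) * -2.0         # Coriolis term
    centrifugal = -omega.cross(omega.cross(r))  # centrifugal term
    return linear + euler + coriolis + centrifugal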

Tracking the motion of the hair root joint directly in global coordinates automatically accounts for the combined effect of any rigid-body motion of the character and any animations that move (or rotate) the head. The trick is to notice that although the character deforms, the head itself can be treated as a rigid body.

The gravity vector must be converted from global coordinates to hair root joint coordinates so that it can be applied in the simulation. And in order to compute the fictitious forces, the linear acceleration, the rotation axis, and the rotation speed (angular velocity) of the hair root joint must be determined in the global scene coordinates.
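The gravity conversion itself is just a relative-vector query; for example (assuming hairRootNode is the exposed joint’s NodePath, as in the code below, and using 9.81 m/s² purely as an example value):

from panda3d.core import Vec3

# gravity is defined in global (render) coordinates; re-express it in the
# hair root joint's local frame so the simulation can apply it directly
gravityGlobal = Vec3(0.0, 0.0, -9.81)
gravityLocal = hairRootNode.getRelativeVector(render, gravityGlobal)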

The acceleration can be computed by backward-differencing the position to produce velocity information, and then backward-differencing this velocity to get the acceleration. During the first two frames (as in rendered frames in 3D graphics) this of course produces nonsense, because the previous position and velocity are not yet initialized, but it is easy to catch this special case and just pretend that the acceleration is zero at the start. Games always render many more than three frames :stuck_out_tongue: so this is not a problem in practice.
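A sketch of that backward differencing (prevPos and prevVel are placeholder attributes stored between frames; dt would come from globalClock.getDt()):

from panda3d.core import Vec3

def updateLinearAcceleration(self, pos, dt):
    """Estimate acceleration by backward-differencing the position twice.
    Returns zero during the first frames, when there is no history yet."""
    accel = Vec3(0)
    if dt > 0.0 and self.prevPos is not None:
        vel = (pos - self.prevPos) / dt
        if self.prevVel is not None:
            accel = (vel - self.prevVel) / dt
        self.prevVel = vel
    self.prevPos = pos
    return accel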

As for the rotation axis and angular velocity, I made some (Python-based) experiments comparing the orientation at successive frames (as in rendered frame) using Panda’s quaternion system and the scene graph, and I think I got the necessary information. Basically:

# setup
from math import radians

from panda3d.core import PandaNode, NodePath

hairRootNode = XXX  # this is an exposed joint (e.g. a NodePath from Actor.exposeJoint())

prevNode = PandaNode("HairRootPreviousTransformStorage")
prevNP = NodePath(prevNode)
prevNP.reparentTo(render)

self.initDone = False

# ...more code goes here...

def normalizeAngleDeg(angle):
    """Normalize an angle (given in degrees) to [-180, 180)."""
    result = angle
    while result <= -180.0:
        result += 360.0
    while result > 180.0:
        result -= 360.0
    return result

def rotationTask(self, task):
    # initialize prev if not initialized yet
    if not self.initDone:
        prevNP.setTransform( hairRootNode.getTransform( other=render ) )
        self.initDone = True
        return task.cont

    # difference in orientation w.r.t. previous frame
    Q = hairRootNode.getQuat( other=prevNP )

    # axis of rotation (vector)
    r_local = Q.getAxisNormalized()

    # rotation increment (degrees), effectively theta = omega*dt
    # without normalization, we may get e.g. 357 degrees per frame, whereas we want the equivalent -3.
    theta = normalizeAngleDeg( Q.getAngle() )

    # convert to scene global coordinates
    r_global = render.getRelativeVector( other=hairRootNode, vec=r_local )
    x0 = hairRootNode.getPos( other=render )

    # for debug visualization, it is possible to use something like this (given the appropriate definitions):
    scaleMult = 10.0  # exaggerate for easier visibility
    vertex3 = GeomVertexWriter(vdata3, 'vertex')
    halfvec = (r_global / 2.0) * radians(theta) * scaleMult  # theta converted from degrees to radians
    vertex3.setData3f( x0 - halfvec )
    vertex3.setData3f( x0 + halfvec )

    # at the end of the update task, update prev:
    prevNP.setTransform( hairRootNode.getTransform( other=render ) )

    return task.cont

taskMgr.add(self.rotationTask, 'MyRotationTask', priority=10000, sort=0)

The variables x0, r_global, and theta encode all the necessary information about the motion of the head in global coordinates.
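Inside the same task, these can then be turned into the angular velocity needed for the fictitious forces (and, like the gravity vector, re-expressed in the hair root’s local frame if that is where the simulation runs). A sketch:

from math import radians
from panda3d.core import Vec3

dt = globalClock.getDt()
if dt > 0.0:
    # angular velocity vector in global coordinates (radians per second)
    omega_global = r_global * (radians(theta) / dt)
else:
    omega_global = Vec3(0)

# converted into the hair root joint's frame, same as the gravity vector
omega_local = hairRootNode.getRelativeVector(render, omega_global)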

This should always work, because any rotation in 3D space can be expressed as a single rotation (by Euler’s rotation theorem), and thus regardless of the specific rigid-body motion, the axis/angle representation always exists. (Strictly speaking, one point must be held fixed in the rigid-body motion for Euler’s theorem to hold, but this is abstracted away, because when we look only at the orientation, any linear motion of the origin is effectively discarded. Thus the origin can be considered a fixed point.)

This sounds elegant. But as mentioned, the algorithm needs more than just the parent joint. If it is possible to get the whole chain of parents this way (all the way to the scene root), it could work… but I’m not sure if that would be elegant anymore, or if it’s better just to use the scene graph.