Hardware Skinning

EDIT: Now also supported in the automatic shader generator. To enable it, you need these settings:

hardware-animated-vertices true
basic-shaders-only false

You also need to call setShaderAuto() on the actor in question.
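If you prefer to do this from code rather than a Config.prc file, a rough sketch using loadPrcFileData (the standard Panda3D way to set prc variables at runtime; it must run before ShowBase is constructed so the settings take effect) would be:

```python
# Sketch: enabling hardware skinning via the shader generator from code.
# Run loadPrcFileData before constructing ShowBase.
from panda3d.core import loadPrcFileData
loadPrcFileData("", "hardware-animated-vertices true")
loadPrcFileData("", "basic-shaders-only false")

from direct.showbase.ShowBase import ShowBase
from direct.actor.Actor import Actor

base = ShowBase()
actor = Actor("panda", {"walk": "panda-walk"})
actor.reparentTo(base.render)
actor.setShaderAuto()  # let the shader generator handle the skinning
```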

The Cg and shader generator support will be in 1.10.0 and above. I’ve backported the GLSL p3d_TransformTable support to 1.9.1 by user request.

Hi all,

I just checked in a change today that makes it more straightforward to implement skeleton animation in a shader using the existing Panda3D animation system.

It’s implemented via a new shader input p3d_TransformTable, and a flag you can set on ShaderAttrib that indicates that Panda should not attempt to animate the vertices on the CPU (though I may change my mind about the specifics of this interface later).

It should be trivial to extend this to cover Cg and the shader generator, which I plan to do soon.

I plan on making a similar interface for implementing morph targets as well.

If you don’t have a recent build, these were made today:
buildbot.panda3d.org/downloads/4 … 1cceb4df8/

This brief sample program shows a panda animated by a GLSL shader, next to another panda animated by Panda’s CPU animation system for reference.

from direct.showbase.ShowBase import ShowBase
from direct.task import Task
from direct.actor.Actor import Actor

from panda3d.core import Shader, ShaderAttrib

shader = Shader.make(Shader.SL_GLSL, """#version 130

in vec4 p3d_Vertex;
in vec4 p3d_Color;
in vec2 p3d_MultiTexCoord0;

in vec4 transform_weight;
in uvec4 transform_index;

uniform mat4 p3d_ModelViewProjectionMatrix;

uniform mat4 p3d_TransformTable[100];

out vec4 color;
out vec2 texcoord;

void main() {
  mat4 matrix = p3d_TransformTable[transform_index.x] * transform_weight.x
              + p3d_TransformTable[transform_index.y] * transform_weight.y
              + p3d_TransformTable[transform_index.z] * transform_weight.z
              + p3d_TransformTable[transform_index.w] * transform_weight.w;

  gl_Position = p3d_ModelViewProjectionMatrix * matrix * p3d_Vertex;
  color = p3d_Color;
  texcoord = p3d_MultiTexCoord0;
}
""", """
#version 130
in vec4 color;
in vec2 texcoord;

uniform sampler2D p3d_Texture0;

void main() {
  gl_FragColor = color * texture(p3d_Texture0, texcoord);
}
""")
 
class MyApp(ShowBase):
    def __init__(self):
        ShowBase.__init__(self)

        model = "panda"
        anim = "panda-walk"
        scale = 1.0
        distance = 6.0

        # Load the panda model.
        self.pandaActor = Actor(model, {"walk": anim})
        self.pandaActor.setScale(scale)
        self.pandaActor.reparentTo(self.render)
        self.pandaActor.loop("walk")

        # Load the shader to perform the skinning.
        # Also tell Panda that the shader will do the skinning, so
        # that it won't transform the vertices on the CPU.
        attr = ShaderAttrib.make(shader)
        attr = attr.setFlag(ShaderAttrib.F_hardware_skinning, True)
        self.pandaActor.setAttrib(attr)

        # Create a CPU-transformed panda, for reference.
        self.pandaActor2 = Actor(model, {"walk": anim})
        self.pandaActor2.setScale(scale)
        self.pandaActor2.setPos(distance, 0, 0)
        self.pandaActor2.reparentTo(self.render)
        self.pandaActor2.loop("walk")

app = MyApp()
app.trackball.node().setPos(0, 50, -5)
app.run()

Cool! This only takes the 4 most-weighted bones into account though, right?

The four most-weighted bones for each vertex. This is enough for most cases. It can be extended to 8 if necessary with a second column, but that’s probably not necessary.
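For illustration, an 8-influence version might look like the sketch below. Note that the second-column attribute names (transform_weight2 / transform_index2) are hypothetical, since the post above only says a second column could be added, without naming it:

```glsl
// Hypothetical 8-influence skinning via a second weight/index column.
// transform_weight2 / transform_index2 are assumed names, not a
// documented Panda3D interface.
in vec4 transform_weight;
in uvec4 transform_index;
in vec4 transform_weight2;
in uvec4 transform_index2;

uniform mat4 p3d_TransformTable[100];

mat4 skin_matrix() {
  mat4 m = p3d_TransformTable[transform_index.x] * transform_weight.x
         + p3d_TransformTable[transform_index.y] * transform_weight.y
         + p3d_TransformTable[transform_index.z] * transform_weight.z
         + p3d_TransformTable[transform_index.w] * transform_weight.w;
  m += p3d_TransformTable[transform_index2.x] * transform_weight2.x
     + p3d_TransformTable[transform_index2.y] * transform_weight2.y
     + p3d_TransformTable[transform_index2.z] * transform_weight2.z
     + p3d_TransformTable[transform_index2.w] * transform_weight2.w;
  return m;
}
```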

I think there’s one more step needed here - the normal has to be transformed. I’m not sure how; by the inverse transpose of the upper 3x3 part of the animation matrix constructed in the shader?

Yes, that’s tricky. If you know you have only a uniform scale on the bones, you can do this:

normal = mat3(matrix) * normal;

If there are joints that are stretched (i.e. have a scale that is not the same in x, y, z), then that won’t work and it’ll have to be something like:

normal = transpose(inverse(mat3(matrix))) * normal;

…however, that would be quite slow. We could alternatively pass two matrix tables to the shader, one with the inverse transpose 3x3 matrices for the normals, but that also sounds like trouble.

One “solution” is just to assert that you shouldn’t have non-uniform scales applied to bones, although it does sound useful to be able to do this for bone stretching and character customisation.

(This does not apply to binormals and tangents, of course.)
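A quick way to see why the inverse transpose is needed: under a non-uniform scale, transforming the normal with the matrix itself breaks its perpendicularity to the surface, while the inverse transpose preserves it. A minimal pure-Python check (using diagonal matrices, so the inverse transpose is trivial to write down):

```python
# Demonstrates why normals need the inverse transpose under non-uniform scale.
# Pure-Python 3x3 helpers; the matrices are diagonal for simplicity.

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# A bone matrix with a non-uniform scale (stretched 2x along x).
M = [[2.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]

# Inverse transpose of M (trivial for a diagonal matrix).
M_inv_T = [[0.5, 0.0, 0.0],
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]]

normal = [1.0, 1.0, 0.0]     # perpendicular to the tangent below
tangent = [1.0, -1.0, 0.0]

t2 = mat_vec(M, tangent)             # tangents transform with M directly

naive = mat_vec(M, normal)           # wrong: no longer perpendicular
correct = mat_vec(M_inv_T, normal)   # right: perpendicularity preserved

print(dot(naive, t2))    # 3.0 -- the naive normal is skewed
print(dot(correct, t2))  # 0.0
```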

Now also supported under Cg and under the shader generator. You can now let Panda automatically enable hardware skinning whenever the shader generator is enabled, by enabling these Config.prc variables:

hardware-animated-vertices true
basic-shaders-only false

Also call setShaderAuto() on your actors, of course.

Morphs will still be applied on the CPU.

Dear all -

Does anyone know if/how it would be possible to use

mat4 matrix = p3d_TransformTable[transform_index.x] * transform_weight.x
            + p3d_TransformTable[transform_index.y] * transform_weight.y
            + p3d_TransformTable[transform_index.z] * transform_weight.z
            + p3d_TransformTable[transform_index.w] * transform_weight.w;

in conjunction with the hardware instancing index (gl_InstanceID)?

I would like to use HW skinning not only for one instance, but for several of them based upon HW instancing.

Thanks !

Yes, it will work, but all instances will play the same frame of the animation.

Many thanks rdb for your reply.

However, I am currently struggling to understand how to mix HW instancing and skinning in a VS.

Let’s say a (simplified) HW instancing VS shader section made as below:
[this is actually how my HW instancing works, with some culling to cope with the OmniBoundingVolume() applied to the model]

mat4 transform = transpose(shader_transformmatrix[gl_InstanceID]);
gl_Position = p3d_ModelViewProjectionMatrix * (p3d_Vertex * transform);

where shader_transformmatrix is a uniform filled through a task with the instanced nodepath transform matrices (nodepath.getMat()).

How should I modify the HW skinning matrix calculation below:

mat4 matrix = p3d_TransformTable[transform_index.x] * transform_weight.x
              + p3d_TransformTable[transform_index.y] * transform_weight.y
              + p3d_TransformTable[transform_index.z] * transform_weight.z
              + p3d_TransformTable[transform_index.w] * transform_weight.w;

with any additional ‘transform’ matrix based upon gl_InstanceID? (I am probably wrong/not clear in my understanding of how HW skinning works…)

Thanks again for your help!

Just wondering, why aren’t you doing the transform directly, e.g.

gl_Position = p3d_ViewProjectionMatrix * transform * p3d_Vertex;

As rdb already mentioned, if you use instancing, all models will play the same animation. To support different animations, Panda would have to support that first. To combine instancing and hardware skinning, I guess you want something like this:

gl_Position = p3d_ViewProjectionMatrix * transform * matrix * p3d_Vertex;

where matrix is the skinning matrix computed from the transform table.

Keep in mind that if you use p3d_ModelViewProjectionMatrix, this also includes the base model’s transform, rotation, and scale. Most likely you don’t want that if you pass a custom transform, so you should use p3d_ViewProjectionMatrix instead.

Many thanks Tobias for your reply!

Sure, that was my understanding. I was not thinking about that feature you described, but if Panda can support it one day, it could be great.

Thanks for this. I was not sure that it was the right direction, but you confirmed it. After a series of tests (the models were not displayed correctly or not at all), I managed to make it work. The above line should be written like this:

gl_Position = p3d_ViewProjectionMatrix * (matrix * p3d_Vertex * transform);

The order of the matrix multiplication and the brackets are mandatory to make this code work. As a side note, the same order should be applied when calculating instanced normals / eye direction.
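Putting the pieces of this thread together, a combined vertex shader might look like the sketch below. The shader_transformmatrix uniform and its transposed per-instance matrices are taken from the earlier snippets; the array size of 256 is an assumption, and gl_InstanceID requires GLSL 1.40 (or the draw-instanced extension on 1.30):

```glsl
#version 140
// Sketch: hardware skinning combined with hardware instancing,
// following the working multiplication order above.

in vec4 p3d_Vertex;
in vec4 transform_weight;
in uvec4 transform_index;

uniform mat4 p3d_ViewProjectionMatrix;
uniform mat4 p3d_TransformTable[100];
uniform mat4 shader_transformmatrix[256];  // per-instance matrices from Python

void main() {
  // Skinning matrix from the four most-weighted bones.
  mat4 matrix = p3d_TransformTable[transform_index.x] * transform_weight.x
              + p3d_TransformTable[transform_index.y] * transform_weight.y
              + p3d_TransformTable[transform_index.z] * transform_weight.z
              + p3d_TransformTable[transform_index.w] * transform_weight.w;

  // Per-instance transform, transposed as in the snippets above.
  mat4 transform = transpose(shader_transformmatrix[gl_InstanceID]);

  gl_Position = p3d_ViewProjectionMatrix * (matrix * p3d_Vertex * transform);
}
```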

My mistake, and it makes perfect sense. I corrected my code; thanks for pointing this out.

Thanks again!

Hm, how are you passing your transform? Usually you will be doing transform * p3d_Vertex, instead of p3d_Vertex * transform … Have you tried setting “gl-coordinate-system default” ?

I use a task to fill a PTA_LMatrix4f() (self.shader_data). I apply culling and for every visible nodepath (np):

self.shader_data.pushBack(UnalignedLMatrix4f())
self.shader_data[t] = UnalignedLMatrix4f(np.getMat())  # t: index of this visible instance

I then need to transpose() the transform in the VS before using it in the calculation:

mat4 transform = transpose(shader_transformmatrix[gl_InstanceID]);

I don’t use the ‘gl-coordinate-system default’ but I’ll give it a try.
EDIT: I made a few tests and haven’t noticed any change.

Thank you!
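The transpose() in the snippet above comes down to row-vector vs column-vector conventions: for any matrix M and vector v, multiplying a column vector by M gives the same components as multiplying a row vector by the transpose of M. A small pure-Python sketch of that identity:

```python
# Multiplying a column vector by M gives the same components as
# multiplying a row vector by M transposed.

def mat_vec(m, v):   # column-vector convention: M * v
    return [sum(m[i][j] * v[j] for j in range(4)) for i in range(4)]

def vec_mat(v, m):   # row-vector convention: v * M
    return [sum(v[i] * m[i][j] for i in range(4)) for j in range(4)]

def transpose(m):
    return [[m[j][i] for j in range(4)] for i in range(4)]

# An arbitrary transform: a shear in x plus a translation in z.
M = [[1, 2, 0, 0],
     [0, 1, 0, 0],
     [0, 0, 1, 5],
     [0, 0, 0, 1]]
v = [1.0, 2.0, 3.0, 1.0]

print(mat_vec(M, v))             # [5.0, 2.0, 8.0, 1.0]
print(vec_mat(v, transpose(M)))  # [5.0, 2.0, 8.0, 1.0] -- identical
```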

If this is the calculation you use for instancing:

gl_Position = p3d_ModelViewProjectionMatrix * (p3d_Vertex * transform);

And this is the calculation we use for animating a model:

gl_Position = p3d_ModelViewProjectionMatrix * matrix * p3d_Vertex;

Then this would be the calculation to use for instancing the animated model:

gl_Position = p3d_ModelViewProjectionMatrix * ((matrix * p3d_Vertex) * transform);

Thanks rdb.

For my understanding: wouldn’t it be better to use p3d_ViewProjectionMatrix (as per tobspr’s suggestion), provided ‘transform’ already contains the model’s transform matrix?

Thanks!

Sure.