So I’ve observed some interesting and confusing behavior when dealing with shaders and multiple cameras. I’m going to try to explain it as best I can and see if this is expected behavior.
We have render with a standard camera whose scene is set to it. Reparented to render are several debug-visual parts that have transforms on them. Under one of those debug-visual parts is a node holding most of the physics and visual geometry; a second camera, used as our shadow cam, is reparented to that node, and the shadow camera’s scene is set to that node as well.
To reduce the geom node count of the scene, I set up a special shader that takes physics NodePaths as inputs and applies their transforms to parts of the visual geometry. This works great, since I have five physics objects per visual geom. I used the trans_physicsNode#_to_model inputs to get each transform for a part of the model.
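For anyone following along, a trans_A_to_B shader input is just the composed matrix that maps points from A’s coordinate space into B’s. Here is a minimal sketch of that math with plain numpy (the matrices and values are hypothetical stand-ins for the nodes’ net transforms, and I’m using the column-vector convention for clarity, not Panda3D’s row-vector one):

```python
import numpy as np

def translate(x, y, z):
    """4x4 column-vector translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Hypothetical net (root-to-node) transforms, standing in for
# physicsNode.getNetTransform() and model.getNetTransform().
net_physics = translate(5.0, 0.0, 0.0)  # physics body at x=5 in root space
net_model   = translate(2.0, 0.0, 0.0)  # model node at x=2 in root space

# trans_physicsNode_to_model: compose "physics -> root" with
# "root -> model" (the inverse of the model's net transform).
physics_to_model = np.linalg.inv(net_model) @ net_physics

origin = np.array([0.0, 0.0, 0.0, 1.0])
print(physics_to_model @ origin)  # physics origin lands at x=3 in model space
```

The key point is that both net transforms are measured from some root, so the result depends on which node is treated as the root.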
A problem showed up when I looked at the shadows: they were not in the correct position, and through some experimentation I found they were missing the transform from render down to the node the shadow cam sits under.
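The offset I was seeing is consistent with the shadow pass treating its own scene root (the node under the debug-visual part) as “world,” so the render-to-node hop gets dropped. A small numpy sketch of that discrepancy, with hypothetical transform values:

```python
import numpy as np

def translate(x, y, z):
    """4x4 column-vector translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

# Hypothetical transform on the debug-visual part that the shadow
# cam's scene root hangs under.
render_to_node = translate(10.0, 0.0, 0.0)
# Model's transform relative to that node.
node_to_model = translate(1.0, 0.0, 0.0)

# Main camera (scene = render): "world" includes both hops.
world_main = render_to_node @ node_to_model
# Shadow camera (scene = the node): its "world" starts at the node,
# so the render_to_node transform is silently missing.
world_shadow = node_to_model

origin = np.array([0.0, 0.0, 0.0, 1.0])
print(world_main @ origin)    # model at x=11 as seen from render
print(world_shadow @ origin)  # model at x=1 as seen from the node
```

The difference between the two results is exactly the render-to-node transform, which matches the shadow misplacement I observed.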
My question is: Is this expected behavior? It seems like the shader’s notion of “world” space changes depending on which camera (and therefore which scene root) the shader is being run with.