Hi! I can see from the documentation that we have this function for all nodes:
getMat ()
Returns the transform matrix that has been applied to the referenced node
If I use this on base.camera, will I get the camera’s projection matrix as the result, or just the node’s transform matrix?
The reason I ask: after I grab the depth buffer, I’m trying to reconstruct per-pixel distance. I have the depth map rendered to a texture; inside a Cg shader, I would do roughly this for the reconstruction:
Use xform_point to transform a point, xform_vector to transform a vector, or operator * to multiply two matrices together.
However, in your case, you should be aware that there are functions that do all of this for you. lens->extrude_depth() takes a 3D point with components in the range -1 to 1, the third component being depth, and writes out the corresponding 3D point in the camera’s coordinate space.
Then, you can use render.get_relative_point(camera, point) to transform it into render’s coordinate space.
Example:
LPoint3 clip_point(0, 0, depth);
LPoint3 view_point;
if (lens->extrude_depth(clip_point, view_point)) {
  LPoint3 world_point = render.get_relative_point(camera, view_point);
} else {
  // Point was not inside the camera frustum
}
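For intuition, here is a pure-Python sketch of the kind of math extrude_depth performs, written for a symmetric GL-style perspective projection. The fov/near/far values and the “camera looks down -Z” convention are my assumptions for illustration, not Panda3D’s exact internals (Panda3D’s default convention is Y-forward):

```python
import math

def unproject(ndc_x, ndc_y, ndc_z, fov_deg=60.0, aspect=1.0, near=1.0, far=100.0):
    # Invert a symmetric GL-style perspective projection: map a point
    # in normalized device coordinates (-1..1 on every axis) back to
    # a view-space point (camera at origin, looking down -Z here).
    f = 1.0 / math.tan(math.radians(fov_deg) / 2.0)
    a = (far + near) / (near - far)
    b = 2.0 * far * near / (near - far)
    vz = -b / (ndc_z + a)           # solve the projection's z row for view-space z
    vx = -ndc_x * vz * aspect / f   # undo the perspective divide (w = -vz)
    vy = -ndc_y * vz / f
    return (vx, vy, vz)

# depth -1 lands on the near plane, depth +1 on the far plane:
print(unproject(0.0, 0.0, -1.0))
print(unproject(0.0, 0.0, 1.0))
```

The resulting view-space point would then still need the camera-to-world transform applied, which is what render.get_relative_point(camera, point) does in Panda3D.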
No matter what I do, I just cannot make it work, though I am almost there. I’ve tried a million different combinations, and I wonder what else I might be missing.
Basically, this is the function I’m posting, which takes (Texture, PNMImage, Point3, PerspectiveLens, Render, Camera):
First, I reconstruct the depth map along with everything else:
At this point, I write the depth values back into my PNMImage reference:
and compare them side by side: I get the same result as my depth texture, which I display using OnscreenImage.
So I am positive that all those mapX, mapY and mapZ values are in the proper -1…1 range, because the reconstructed image looks identical to the rendered depth texture I originally iterate through.
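For reference, the remapping I would expect for that -1…1 range looks roughly like this in plain Python. The 0…1 depth-storage convention and the texture size are assumptions on my part; note the Y flip, since PNMImage counts rows down from the top while film coordinates grow upward:

```python
def depth_to_ndc(d):
    # Depth textures typically store values in 0..1;
    # extrude_depth expects the third component in -1..1.
    return d * 2.0 - 1.0

def texel_to_film(px, py, width, height):
    # Map integer texel coordinates to -1..1 film coordinates,
    # sampling at texel centers. Rows count down from the top
    # of the image, so flip the Y axis.
    x = (px + 0.5) / width * 2.0 - 1.0
    y = 1.0 - (py + 0.5) / height * 2.0
    return (x, y)

print(depth_to_ndc(0.5))               # mid-range depth maps to 0.0
print(texel_to_film(0, 0, 256, 256))   # top-left texel: x < 0, y > 0
```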
Now comes what should be the easy part: just getting the world position:
I expect to see the color changing gradually from bottom to top, since here I try to output the world-position Z value coming from world_point2.get_z(), but the colors are distributed more like a Z-buffer, receding into depth.
I also tried all kinds of manual transformations, like this:
with effectively the same result. I also tried outputting the X/Y values, but no matter what, I cannot get that correct, gradual Z-position mapping.
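One possible culprit for a “looks like the z-buffer” gradient: z-buffer depth is nonlinear, so values bunch up near the far plane, and forgetting to invert that nonlinearity makes even a correctly-transformed output resemble the raw depth texture. A minimal sketch of the linearization step, assuming a GL-style projection and made-up near/far values:

```python
def linear_eye_depth(ndc_z, near=1.0, far=100.0):
    # Invert the projection matrix's z row: map nonlinear NDC depth
    # (-1..1) back to linear distance in front of the camera.
    return 2.0 * far * near / (far + near - ndc_z * (far - near))

print(linear_eye_depth(-1.0))  # near plane -> 1.0
print(linear_eye_depth(1.0))   # far plane  -> 100.0
print(linear_eye_depth(0.0))   # NDC midpoint is NOT halfway: ~1.98
```

Note that extrude_depth should already account for this, so if the output still looks nonlinear, it may be worth checking that raw stored depth isn’t being output somewhere instead of the reconstructed Z.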
Actually, I realized that no one will ever read such a large block of code, lol. I’ll take a different approach: assign custom Cg shaders to my cameras, and write all the get-world-position code on the GPU side.