Get camera inverse projection matrix?

Hi! I can see from the documentation that we have this function for all nodes:

getMat ()
Returns the transform matrix that has been applied to the referenced node

If I use this on base.camera, will I get the camera’s projection matrix as the result, or just the node’s transform matrix?

The reason is, after I get the depth buffer, I’m trying to reconstruct per-pixel distance. I have the depth map rendered to a texture; inside a CG shader, I would do the reconstruction like this:

float4 vPositionVS = mul(vProjectedPos, g_matInvProjection);

So right now I’m trying to find some equivalent of g_matInvProjection that I can use from inside my C++ code. Is there one?

Ah, never mind, it’s in the Lens class. I don’t know how I could have missed it for so long.
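
For the record, here’s roughly what I was after (a quick sketch, assuming camera is the camera NodePath):

// Sketch: pull the lens off the camera node, then ask it for the
// projection matrix and its inverse (the g_matInvProjection analogue).
Camera *cam_node = DCAST(Camera, camera.node());
Lens *lens = cam_node->get_lens();

const LMatrix4 &proj = lens->get_projection_mat();
const LMatrix4 &proj_inv = lens->get_projection_mat_inv();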

What is the ‘mul’ equivalent? Would this be the right one? (myCurrentLensMatrix->xform)

Use xform_point to transform a point, xform_vec to transform a vector, or operator * to multiply two matrices together.
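
For example (a quick sketch; mat and other are placeholder matrices):

LMatrix4 mat = lens->get_projection_mat_inv();
LMatrix4 other = lens->get_projection_mat();

// Transforms a point; the matrix’s translation component applies.
LPoint3 p = mat.xform_point(LPoint3(0.0f, 0.0f, 0.5f));

// Transforms a direction; the translation component is ignored.
LVector3 v = mat.xform_vec(LVector3(0.0f, 1.0f, 0.0f));

// Composes the two transforms.
LMatrix4 combined = mat * other;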

However, in your case, you should be aware that there are functions to do all of this. lens->extrude_depth() takes a 3D point in the range -1 to 1, with the third component being depth, and writes out the corresponding 3D point in the scene.

Then, you can use render.get_relative_point(camera, point) to transform it into render’s coordinate space.

Example:

LPoint3 clip_point(0, 0, depth);  // x, y and depth each in the -1..1 range
LPoint3 view_point;
if (lens->extrude_depth(clip_point, view_point)) {
  // Convert from the camera's coordinate space into render's space.
  LPoint3 world_point = render.get_relative_point(camera, view_point);
} else {
  // Point was not inside the camera frustum.
}

Thanks a lot, you really helped me. I was stuck here for several hours, heading, as I see now, in completely the wrong direction.

No matter what I do, I just cannot make it work, though I am almost there. I’ve tried a million different combinations, and I wonder what else I might be missing.

Basically, this is the function I’m working in; it receives (Texture, PNMImage, Point3, PerspectiveLens, Render, Camera).

First, I reconstruct the depth map along with everything else:
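
The real code is long, so here is just a compressed sketch of that part (depthTex stands in for my texture argument; mapX/mapY/mapZ are the values I mention below):

// Pull the rendered depth texture into a PNMImage, then remap every
// pixel coordinate and depth sample into the -1..1 clip range.
PNMImage depth_img(depthTex->get_x_size(), depthTex->get_y_size());
depthTex->store(depth_img);

for (int y = 0; y < depth_img.get_y_size(); ++y) {
  for (int x = 0; x < depth_img.get_x_size(); ++x) {
    float mapX = 2.0f * x / (depth_img.get_x_size() - 1) - 1.0f;
    float mapY = 2.0f * y / (depth_img.get_y_size() - 1) - 1.0f;
    float mapZ = 2.0f * depth_img.get_gray(x, y) - 1.0f;  // 0..1 -> -1..1
    LPoint3 clip_point(mapX, mapY, mapZ);
    // ...per-pixel work continues below...
  }
}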

At this point, I write the depth values back to my PNMImage reference:
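
Roughly like this (again just a sketch):

// Convert the clip-range depth back to 0..1 and write it into the
// canvas image so it can be compared against the rendered texture.
float gray = (mapZ + 1.0f) * 0.5f;
myMapCanvasImg->set_xel(x, y, gray);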

When I compare them side by side, I get the same result as my depth texture, which I display using OnscreenImage.

So I am positive that all those mapX, mapY and mapZ values are in the proper -1…1 range, because the reconstructed image looks identical to the rendered depth texture I originally iterate through.

Now for what should be the easy part, just getting the world position:

LPoint3 view_point;
if (myCurrentLens->extrude_depth(clip_point, view_point)) {
  // refval is the camera NodePath passed into the function.
  LPoint3 world_point2 = renderNode->get_relative_point(refval, view_point);
  float colorPixel = world_point2.get_z() * 10;

  myMapCanvasImg->set_blue_val(y, x, colorPixel);
  myMapCanvasImg->set_red_val(y, x, colorPixel);
  myMapCanvasImg->set_green_val(y, x, colorPixel);
}

And here is what I get all the time:


I expect to see the color gradually changing from bottom to top, since here I am trying to output the world-position Z value coming from world_point2.get_z(), but the colors are distributed more like a z-buffer, going into depth.

I also tried all kinds of manual transformations, like this:
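
For instance, one variant looked roughly like this (a sketch; I’m assuming refval is the camera NodePath and renderNode points at render):

// Undo the projection by hand, then compose the camera-to-render
// transform myself instead of calling get_relative_point.
LMatrix4 proj_inv = myCurrentLens->get_projection_mat_inv();
LPoint3 view_point = proj_inv.xform_point(clip_point);

LMatrix4 cam_to_world = refval.get_mat(*renderNode);
LPoint3 world_point2 = cam_to_world.xform_point(view_point);
float colorPixel = world_point2.get_z() * 10;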

With effectively the same result. I also tried outputting the X/Y values, but no matter what, I cannot get that correct gradual Z position mapping.

Actually, I realized that no one will ever read such a large chunk of code, lol. I’ll take a different approach: assign custom CG shaders to my cameras and write all the get-world-position code on the GPU side.