I am working on a robot simulator for our lab using Panda3D. I am simulating a laser scanner using the depth buffer, which has been amazingly straightforward to do. Before explaining my problems, here are the relevant portions of the code:
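For anyone following along, the depth-buffer-to-range step can be sketched in plain Python. This is not the poster's code; it just shows the usual inversion of the non-linear depth mapping, assuming the standard perspective projection with near/far clip planes (the 0.1/100.0 values below are made-up examples):

```python
def depth_to_distance(z, near, far):
    """Convert a normalized depth-buffer sample z in [0, 1] back to an
    eye-space distance, assuming the standard perspective depth mapping.
    The result is in the same units as near/far."""
    return (near * far) / (far - z * (far - near))

# A sample of 0.0 lies on the near plane, 1.0 on the far plane.
print(depth_to_distance(0.0, 0.1, 100.0))  # 0.1
print(depth_to_distance(1.0, 0.1, 100.0))  # 100.0
```

Because the mapping is so non-linear, most of the buffer's precision is concentrated near the camera, which is why the component width discussed below matters so much for a laser scanner.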
Panda3D seems to default to byte values on the depth buffer, which is not precise enough for this particular use. I've tried to specify this in the code with "self.depthmap.setComponentType(Texture.TFloat)" but, somehow, the texture component type reverts to byte.
Even with a real-valued depth texture, I suppose I'm going to run into problems when storing it in a PNMImage, because AFAIK it does not handle real values.
Do you have any suggestions on how to implement this?
Not sure about the first problem, but perhaps this helps with the second one: you can store PNG images as 16-bit rather than 8-bit. Still not float, but better than the default 8-bit.
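Whether 16-bit is "good enough" depends on the scanner's range. A quick back-of-the-envelope sketch (the 30 m span is a made-up example, not from this thread) of the worst-case round-trip error when quantizing a normalized depth value:

```python
def roundtrip_error(bits, span_m=30.0):
    """Worst-case error (in metres) after quantizing a linear range of
    span_m metres to an unsigned integer of the given bit width.
    The worst case is a value exactly between two quantization steps."""
    levels = (1 << bits) - 1
    return 0.5 * span_m / levels

print(roundtrip_error(8))   # ~0.059 m: ~6 cm error at 8 bits
print(roundtrip_error(16))  # ~0.00023 m: ~0.2 mm error at 16 bits
```

So over a 30 m range, 8 bits gives roughly 6 cm steps while 16 bits is down to fractions of a millimetre, which is why the 16-bit PNG path is attractive here.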
Thanks for your answer. I tried it; the problem is that calling 'self.depthmap.store(self.texels)' writes an 8-bit image regardless of the earlier settings. I wrote the following:
Hmm, the missing parentheses were not a typo.
2 << 16 - 1 (or 2 << 15, which is the same thing) should be the correct one: subtraction binds tighter than the shift, so it outputs 65536 here. If you add parentheses, (2 << 16) - 1 gives 131071, which is probably not what you want.
OK, bottom line here: Panda is supposed to automatically set the texture component width according to the number of bits in the depth buffer. Because of a couple of different bugs, it's getting it wrong, so even though you set your texture to a 16-bit component width, Panda is (helpfully) resetting it to 8 bits for you, believing you have only 8 bits in your depth buffer. It's mistaken, of course, but that doesn't help you here.
I’ll see about getting this fixed shortly. Sorry about that.
And, oh yeah, if you want to use true floating-point values, that's a bit more trouble too: as you have surmised, PNMImage doesn't support floats. Would 16-bit integers be good enough for your purposes?
I'm pretty sure that depth values are always 24-bit integers internally (assuming you're using an NVIDIA or ATI card). Of course, the component_type might have an effect when you read the values out into RAM.
treeform: Since there will be many laser scanners, each one casting hundreds of rays, I need to do ray casting as fast as possible. Hence, it is far more efficient to let the GPU do the hard work.
David: Yes, 16 bits should be enough. Regarding the bugs in writing the texture, I can take a shot at fixing them if you give me a pointer to where to look.
Josh: Do you think, then, that it is possible to somehow copy the 24-bit depth buffer into an 8-bit-per-channel color buffer? This seems like a nice alternative.
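The bit layout of that idea is straightforward even before worrying about how the copy happens on the GPU. A sketch (channel order with R as the most significant byte is an arbitrary choice here; in practice the packing side would live in a fragment shader):

```python
def pack24(value):
    """Spread a 24-bit integer across three 8-bit colour channels."""
    assert 0 <= value < 1 << 24
    r = (value >> 16) & 0xFF  # most significant byte
    g = (value >> 8) & 0xFF
    b = value & 0xFF          # least significant byte
    return r, g, b

def unpack24(r, g, b):
    """Reassemble the 24-bit value on the CPU side."""
    return (r << 16) | (g << 8) | b

depth = 0xABCDEF
print(pack24(depth))            # (0xAB, 0xCD, 0xEF)
print(hex(unpack24(*pack24(depth))))  # 0xabcdef
```

The round trip is lossless, so an ordinary 8-bit RGB buffer can carry the full 24-bit depth value; the cost is the extra shader pass to write it and the unpacking step when reading the image back.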