Logging visible objects

Hi everyone,

I’m using Panda3D for science: I’ll have study participants play a little game (essentially an extension of Roaming Ralph, where they run around picking up boxes). For evaluation, I want to log all visible objects at each frame. By “visible” I mean actually visible to the participant, i.e. if a box is behind some hill in my landscape, it should be marked as invisible. Do you have suggestions for how to do this?

  • Using base.camNode.isInView(box.getPos(base.cam)) gives me boxes hidden behind some occluder as well (see the sketch after this list).
  • Converting the 3D position of each box into rendered 2D coordinates has the same issue: occluded boxes get projected onto some 2D position as well.
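
For reference, a minimal sketch of that frustum-only check (assuming boxes is a list of box NodePaths):

    # Reports every box inside the view frustum, whether or not
    # the terrain occludes it.
    in_frustum = [box for box in boxes
                  if base.camNode.isInView(box.getPos(base.cam))]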

I guess I will have to use some collision ray, but I am not sure how to do this. Basically, I would need a ray going from the camera to each of my boxes, and then find out whether it hits the terrain first, right?
Is there a way to do this without adding a CollideMask to the whole terrain?

Thanks!

Edit: of course, this does not need to happen in real time per se (it will all be evaluated after the game has finished anyway). I.e. if I could log all objects that were visible on the previous frame or so, that would be equally appreciated!

Hi, welcome to the forums!

What you’re looking for is called “occlusion culling”. You might find it interesting to check the occlusion culling sample programs. Generally, though, this is a costly method requiring manual set-up of occluders, and I’m not sure if Panda provides a way to query the results of occlusion culling back.

Using a CollisionRay is certainly an approach; a minimal sketch follows below. As you noted, it requires collision geometry for the terrain, which may be expensive if that geometry is particularly complex or if many objects may be in the way. However, Roaming Ralph already uses collision geometry for its terrain, so in your case that cost is already being paid. A single ray isn’t very fine-grained, of course (it’d be hard to detect an object that only barely peeks around a corner).
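
Here is a minimal, untested sketch of the ray test. It assumes boxes is a list of box NodePaths, that the terrain’s collision geometry is visible to the default from-mask, and that the boxes themselves carry no collision geometry:

    from panda3d.core import (CollisionHandlerQueue, CollisionNode,
                              CollisionRay, CollisionTraverser, LVector3)

    traverser = CollisionTraverser()
    queue = CollisionHandlerQueue()

    ray = CollisionRay()
    ray_node = CollisionNode("visibility-ray")
    ray_node.addSolid(ray)
    ray_node.setIntoCollideMask(0)  # the ray is only ever a "from" object
    ray_np = base.camera.attachNewNode(ray_node)
    traverser.addCollider(ray_np, queue)

    def box_is_visible(box):
        # Aim the ray from the camera at the box and see what it hits first.
        ray.setOrigin(0, 0, 0)
        ray.setDirection(LVector3(box.getPos(base.camera)))
        traverser.traverse(render)
        if queue.getNumEntries() == 0:
            return True  # nothing in the way at all
        queue.sortEntries()  # nearest entry first
        hit_dist = queue.getEntry(0).getSurfacePoint(base.camera).length()
        # A nearest hit beyond the box means the terrain is behind it.
        return hit_dist >= box.getPos(base.camera).length()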

Another option might be to use a draw callback in conjunction with an occlusion query, a hardware feature that counts the number of pixels of a particular object that actually get rendered. I see that we don’t currently expose this feature to Python applications, though, regrettably. If you’re okay with using a development build of Panda, I’d be willing to look into exposing this feature to Python.
(It is important to keep in mind that occlusion queries will not work with transparent objects, i.e. if you have a texture containing an alpha cut-out, they may not yield the correct results for objects that are behind this cut-out.)

A fourth option, one I’ve implemented in the past for this kind of application, is to create a render-to-texture buffer, marked with RTMCopyRam, to which you render your scene. You give each object a flat, unique colour when rendering to this buffer, and when rendering is done, you look at the contents of this texture; based on the colours of the pixels you find, you can determine whether each object is in view or not. This might be prohibitively expensive, though, depending on the resolution at which you render. It could be acceptable if you are okay with rendering this view at a reduced resolution, and you could also save the images to disk for later processing. A rough sketch follows below.
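
A rough, untested sketch of that set-up, assuming boxes is a list of box NodePaths. Note that setting the colour directly, as done here, also affects the main view; rendering a separate scene (or using Camera.setTagStateKey) would keep the normal appearance, and nothing else in the scene should use these flat colours:

    from panda3d.core import PNMImage, Texture

    id_tex = Texture()
    # Low resolution keeps the RAM copy cheap; the final True requests
    # RTMCopyRam, so the image is copied to system memory each frame.
    id_buffer = base.win.makeTextureBuffer("id-buffer", 128, 128, id_tex, True)
    id_buffer.setSort(-10)  # render before the main window

    id_cam = base.makeCamera(id_buffer)
    id_cam.reparentTo(base.camera)  # follow the player's view

    for i, box in enumerate(boxes):
        box.setColor((i + 1) / 255.0, 0, 0, 1)  # unique red-channel value
        box.setLightOff()
        box.setTextureOff()

    def visible_boxes():
        if not id_tex.hasRamImage():
            return set()
        img = PNMImage()
        id_tex.store(img)  # decode the RAM copy into a PNMImage
        seen = set()
        for y in range(img.getYSize()):
            for x in range(img.getXSize()):
                val = img.getRedVal(x, y)  # 0..255
                if val > 0:
                    seen.add(val - 1)  # back to the box index
        return seen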

Finally, another approach that I just thought of is using a custom GLSL fragment shader on the object in question, in which you use imageAtomicAdd to add to a counter stored in a 1x1 texture with the F_r32i format. Then, at the end of the frame, you can use base.graphicsEngine.extractTextureData to copy the contents of the texture back to system RAM and read out the value that was written to it. This is fairly advanced stuff and limits you to OpenGL 4 level hardware, but it might be particularly efficient, it works in current versions of Panda, and it gives you precise control over when an object is considered “visible” (meaning it would work with alpha cut-outs). You can write to different pixels of the same texture for different objects, and get all the data in one go. A hedged sketch follows below.
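
A hedged, untested sketch of this idea, assuming box is the NodePath to instrument, and that the image binding works as in recent Panda versions when an iimage2D uniform is fed a texture via setShaderInput:

    import struct

    from panda3d.core import Shader, Texture

    counter_tex = Texture("fragment-counter")
    counter_tex.setup2dTexture(1, 1, Texture.T_int, Texture.F_r32i)
    counter_tex.setClearColor((0, 0, 0, 0))

    shader = Shader.make(Shader.SL_GLSL, """
    #version 430
    uniform mat4 p3d_ModelViewProjectionMatrix;
    in vec4 p3d_Vertex;
    void main() {
        gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
    }
    """, """
    #version 430
    // Force the depth test to run before this shader, so occluded
    // fragments are discarded and never counted.
    layout(early_fragment_tests) in;
    layout(r32i) uniform iimage2D counter;
    out vec4 p3d_FragColor;
    void main() {
        imageAtomicAdd(counter, ivec2(0, 0), 1);
        p3d_FragColor = vec4(1.0);
    }
    """)
    box.setShader(shader)
    box.setShaderInput("counter", counter_tex)

    def read_fragment_count():
        # Pull the texture contents back to system RAM and read the counter;
        # counter_tex.clearImage() could then reset it for the next frame.
        base.graphicsEngine.extractTextureData(counter_tex, base.win.getGsg())
        data = memoryview(counter_tex.getRamImage())
        return struct.unpack_from("i", data)[0]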

Note that both the occlusion query approach and the imageAtomicAdd approach require front-to-back sorting; i.e. they require objects closer to the camera to be rendered first, so that the depth test will fail for objects that are behind them. If you go with either of these approaches, it may be necessary to force the terrain to be rendered first and to use setBin to specify a front-to-back ordering. One possible set-up is sketched below.
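
One possible bin set-up (a sketch, assuming terrain and boxes are NodePaths and that the bin name "front-to-back" is otherwise unused):

    from panda3d.core import CullBinManager

    bin_mgr = CullBinManager.getGlobalPtr()
    # Lower sort values draw earlier; the built-in "opaque" bin has sort 20.
    bin_mgr.addBin("front-to-back", CullBinManager.BTFrontToBack, 25)

    # The built-in "background" bin (sort 10) draws well before the new bin.
    terrain.setBin("background", 0)

    # Within the new bin, the boxes are sorted nearest-first automatically.
    for box in boxes:
        box.setBin("front-to-back", 0)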

If you can use OpenGL 4.3, then the easiest approach is:

  1. Generate a unique ID for each object
  2. Render the scene normally, storing the object ID in a secondary buffer (e.g. an aux attachment)
  3. In a second render pass, run a fullscreen shader which takes the aux-attachment texture as input and writes the object ID to a buffer texture (i.e. buffer_texture[object_id] = 1)

You can then copy that buffer texture back to the CPU, and analyze which values were set to 1 to get the list of visible objects.
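
A partial, untested sketch of the buffer set-up this would need; the scene and fullscreen shaders, and the per-object ID inputs, are elided:

    from panda3d.core import (FrameBufferProperties, GeomEnums, GraphicsOutput,
                              GraphicsPipe, Texture, WindowProperties)

    fb_props = FrameBufferProperties()
    fb_props.setRgbColor(True)
    fb_props.setDepthBits(1)
    fb_props.setAuxRgba(1)  # request one auxiliary colour attachment

    win_props = WindowProperties.size(base.win.getXSize(), base.win.getYSize())
    buffer = base.graphicsEngine.makeOutput(
        base.pipe, "scene-buffer", -10, fb_props, win_props,
        GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)

    color_tex, aux_tex = Texture(), Texture()
    buffer.addRenderTexture(color_tex, GraphicsOutput.RTMBindOrCopy,
                            GraphicsOutput.RTPColor)
    buffer.addRenderTexture(aux_tex, GraphicsOutput.RTMBindOrCopy,
                            GraphicsOutput.RTPAuxRgba0)

    # Buffer texture the fullscreen pass writes into: one slot per object ID.
    flag_tex = Texture("visibility-flags")
    flag_tex.setupBufferTexture(1024, Texture.T_int, Texture.F_r32i,
                                GeomEnums.UHDynamic)
    # The scene shader writes the object ID to the aux attachment (an output
    # bound to the second draw buffer); the fullscreen pass reads aux_tex and
    # does imageStore(flags, object_id, ivec4(1)).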

Note that this approach has the advantage that you don’t need front-to-back rendering (which could break down very easily).

Cool, thanks for all these fast suggestions! :slight_smile:

In the meantime I implemented a collision-ray approach, but it is extremely slow. I’m not sure whether I chose a clumsy implementation or whether the approach itself is unsuitable… I’ve actually thought about logging the game state in detail, then replaying the game later and doing the costly evaluations at that point.

The render-to-texture buffer sounds interesting. I tried saving screenshots on every frame for fun, and found that that’s too slow :wink: Is there hope that something else will be faster?

I looked into how to actually do this; the manual is not very encouraging, though (“If you need this level of control, you need to use a lower-level API. The documentation for the lower-level API is not currently written.”) :wink:

But my boxes have unique flat colours already - is there maybe a function that just writes the latest rendering buffer to file?

[…if not, I will read your other suggestions again and read up a little :slight_smile:]

One more question: as I said, I implemented an approach using a collision ray. In my implementation, I iterate over all boxes, and for each box I set the ray direction towards the box, run the CollisionTraverser, and evaluate the results. Would you expect a speedup if, instead, I had one collision ray and collision handler per box, all added to the same traverser, and thus traversed only once? (A sketch of what I mean is below.)
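
Roughly this set-up, in case it isn’t clear (a sketch, assuming boxes as before; the per-entry evaluation would be the same as in the single-ray version):

    from panda3d.core import (CollisionHandlerQueue, CollisionNode,
                              CollisionRay, CollisionTraverser, LVector3)

    traverser = CollisionTraverser()
    colliders = []
    for box in boxes:
        ray = CollisionRay()
        node = CollisionNode("vis-ray")
        node.addSolid(ray)
        node.setIntoCollideMask(0)
        queue = CollisionHandlerQueue()
        traverser.addCollider(base.camera.attachNewNode(node), queue)
        colliders.append((box, ray, queue))

    def traverse_once():
        # Re-aim every ray at its box, then do a single traversal for all.
        for box, ray, queue in colliders:
            ray.setOrigin(0, 0, 0)
            ray.setDirection(LVector3(box.getPos(base.camera)))
        traverser.traverse(render)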