Hi, welcome to the forums!
What you’re looking for is called “occlusion culling”. You might find it interesting to check the occlusion culling sample programs. Generally, though, this is a costly method requiring manual set-up of occluders, and I’m not sure if Panda provides a way to query the results of occlusion culling back.
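For reference, manually placing a single occluder looks something like this. This is a minimal sketch; the quad's corner coordinates are made up for illustration, and it assumes a running ShowBase app:

```python
from panda3d.core import OccluderNode, Point3

# A hand-placed occluder: a quad given by four corners in
# counter-clockwise order (as seen from its visible side).  Geometry
# that lies entirely behind it, from the camera's point of view, is
# culled before being sent to the GPU.
occluder = OccluderNode('wall-occluder')
occluder.setVertices(Point3(-10, 5, 0), Point3(10, 5, 0),
                     Point3(10, 5, 10), Point3(-10, 5, 10))
occluder_np = base.render.attachNewNode(occluder)

# Activate the occluder for everything under render.
base.render.setOccluder(occluder_np)
```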
Using a CollisionRay is certainly an approach. As you noted, it requires collision geometry for the terrain, which may be expensive if that geometry is particularly complex or if many objects can be in the way. However, Roaming Ralph already uses collision geometry for its terrain, so it's a workable starting point. A single ray isn't very fine-grained, of course: it would be hard to detect an object that only barely peeks around a corner.
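A rough sketch of such a ray test, assuming a running ShowBase app, collision geometry on the default collide mask, and that `target_np` is the NodePath being tested:

```python
from panda3d.core import (CollisionTraverser, CollisionHandlerQueue,
                          CollisionNode, CollisionRay, Vec3)

traverser = CollisionTraverser()
queue = CollisionHandlerQueue()

# A ray parented to the camera, re-aimed at the target for each test.
ray = CollisionRay()
ray_node = CollisionNode('visibility-ray')
ray_node.addSolid(ray)
ray_node.setIntoCollideMask(0)  # the ray itself should not be collided into
ray_np = base.camera.attachNewNode(ray_node)
traverser.addCollider(ray_np, queue)

def centre_is_visible(target_np):
    # Aim the ray from the camera at the target's origin.
    direction = Vec3(target_np.getPos(base.camera))
    direction.normalize()
    ray.setOrigin(0, 0, 0)
    ray.setDirection(direction)

    traverser.traverse(base.render)
    if queue.getNumEntries() == 0:
        return False
    queue.sortEntries()  # sort hits from nearest to furthest

    # Visible if the nearest thing the ray hits belongs to the target.
    hit = queue.getEntry(0).getIntoNodePath()
    return hit == target_np or target_np.isAncestorOf(hit)
```

As said, this only tests the object's centre point; casting a few extra rays at the corners of the object's bounding box makes it less likely to miss an object that is only partially visible.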
Another option might be to use a draw callback in conjunction with an occlusion query, a hardware feature that counts the number of pixels of a particular object that actually get drawn. Regrettably, I see that we don't currently expose this feature to Python applications. If you're okay with using a development build of Panda, I'd be willing to look into exposing it to Python.
(It is important to keep in mind that occlusion queries will not work with transparent objects; i.e., if you have a texture containing an alpha cut-out, they may not yield correct results for objects that are behind the cut-out.)
A fourth option, one I've implemented in the past for this kind of application, is to create a render-to-texture buffer, marked with RTMCopyRam, to which you render your scene. You then give each object a flat, unique colour when rendering to this buffer, and when rendering is done, you look at the contents of the texture: based on the colours of the pixels you find, you can determine whether an object is in view or not. This might be prohibitively expensive, though, depending on the resolution at which you render. It could be acceptable if you're okay with rendering this view at a reduced resolution, and you could even save the images to disk for later processing.
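A minimal sketch of this set-up, assuming a running ShowBase app and a reduced 128×128 buffer; `my_object` and the 'id-colour' tag key are placeholders:

```python
from panda3d.core import (Texture, ColorAttrib, TextureAttrib, LightAttrib,
                          RenderState, VBase4)

# Off-screen buffer; the final True requests RTMCopyRam, so the rendered
# image is copied back to system RAM at the end of every frame.
tex = Texture()
buf = base.win.makeTextureBuffer('id-buffer', 128, 128, tex, True)
buf.setClearColor(VBase4(0, 0, 0, 1))

# A second camera rendering into the buffer, following the main camera.
id_cam = base.makeCamera(buf)
id_cam.reparentTo(base.camera)

# Tag states: any node tagged 'id-colour' = 'red' is drawn flat red in
# this view only, with lighting and textures overridden off.
id_cam.node().setTagStateKey('id-colour')
flat_red = RenderState.make(ColorAttrib.makeFlat(VBase4(1, 0, 0, 1)),
                            TextureAttrib.makeOff(),
                            LightAttrib.makeAllOff(),
                            10)  # override priority
id_cam.node().setTagState('red', flat_red)
my_object.setTag('id-colour', 'red')  # my_object is your NodePath

def object_in_view():
    # Scan the RAM copy of the buffer for the object's ID colour.
    if not tex.hasRamImage():
        return False
    data = tex.getRamImageAs('RGB')
    for i in range(0, len(data), 3):
        if data[i] == 255 and data[i + 1] == 0 and data[i + 2] == 0:
            return True
    return False
```

With a few distinct colours you can track several objects in the same pass, and counting the matching pixels instead of stopping at the first also tells you roughly how much of the object is visible.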
Finally, another approach that I just thought of is to use a custom GLSL fragment shader on the object in question that calls imageAtomicAdd to increment a counter stored in a 1x1 texture with the FR32i format. Then, at the end of the frame, you can use base.graphicsEngine.extractTextureData to copy the contents of the texture back to system RAM and read out the value that was written to it. This is fairly advanced stuff and limits you to OpenGL 4-level hardware, but it might be particularly efficient, it works in current versions of Panda, and it gives you fine control over when an object counts as “visible” (meaning it can handle alpha cut-outs). You can also write to a different pixel of the same texture for each object, and get all the data in one go.
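A rough sketch of the idea, assuming a running ShowBase app and OpenGL 4.2-level hardware (needed for early_fragment_tests); resetting the counter between frames is omitted, as is proper shading of the object itself:

```python
import struct
from panda3d.core import Texture, Shader, LColor

# 1x1 integer texture acting as the pixel counter.
counter_tex = Texture('counter')
counter_tex.setup2dTexture(1, 1, Texture.T_int, Texture.F_r32i)
counter_tex.setClearColor(LColor(0, 0, 0, 0))

vert = """#version 430
uniform mat4 p3d_ModelViewProjectionMatrix;
in vec4 p3d_Vertex;
void main() {
    gl_Position = p3d_ModelViewProjectionMatrix * p3d_Vertex;
}
"""

frag = """#version 430
// Force the depth test to run *before* this shader; image stores
// otherwise disable the early depth test, and occluded fragments
// would be counted too.
layout(early_fragment_tests) in;
layout(r32i) uniform iimage2D counter;
out vec4 frag_color;
void main() {
    // One atomic increment per fragment that survives the depth test.
    // For alpha cut-outs, discard before this call.
    imageAtomicAdd(counter, ivec2(0, 0), 1);
    frag_color = vec4(1.0);  // the object renders flat white in this sketch
}
"""

shader = Shader.make(Shader.SL_GLSL, vert, frag)
my_object.setShader(shader)  # my_object is the NodePath being tracked
my_object.setShaderInput('counter', counter_tex)

def pixels_drawn():
    # Copy the texture from video memory back to system RAM...
    base.graphicsEngine.extractTextureData(counter_tex, base.win.getGsg())
    # ...and read out the single int32 it holds (assuming little-endian).
    return struct.unpack('<i', counter_tex.getRamImage().getData()[:4])[0]
```

To track several objects, give each one its own ivec2 coordinate into a larger counter texture, passed in as a per-object shader input.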
Note that both the occlusion query approach and the imageAtomicAdd approach require front-to-back sorting; that is, they require objects closer to the camera to be rendered first, so that the depth test will fail for objects behind them. If you go with either of these approaches, it may be necessary to force the terrain to be rendered first and to use setBin to specify a front-to-back ordering, as in the sketch below.
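Something along these lines, where `terrain` and `queried_objects` stand in for your own NodePaths:

```python
from panda3d.core import CullBinManager

bin_mgr = CullBinManager.getGlobalPtr()

# Draw the terrain before the default 'opaque' bin (sort 20), and draw
# the queried objects after it, sorted front to back so that nearer
# objects fill the depth buffer before the ones behind them.
bin_mgr.addBin('occluders', CullBinManager.BTStateSorted, 15)
bin_mgr.addBin('queried', CullBinManager.BTFrontToBack, 25)

terrain.setBin('occluders', 0)
for obj in queried_objects:
    obj.setBin('queried', 0)
```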