1.10: Offscreen buffer

I’ve been working on switching over to Panda 1.10, prompted by an issue that I encountered related to the “p3d_Color” shader input and the change in its behaviour between 1.9 and 1.10. As mentioned in another thread, I’ve been working on generating a local build, but I began with the downloadable SDK, and it’s with that version that I encountered the following issue, as I recall.

My game currently renders the level to an off-screen buffer, then presents this on the screen via a full-screen quad. This worked well under 1.9, but under 1.10 the view is blood-red, as though only the red channel were being rendered.

The issue doesn’t appear to affect objects that aren’t rendered via the off-screen buffer, such as inventory items. However, it may also be relevant that these are parented beneath aspect2d, I believe, and thus don’t share the same scene-root.

Something that may or may not be related–I wasn’t sure whether or not to make a separate thread for it: something seems to be wrong with culling, too. I haven’t quite managed to figure out what it’s doing, but I seem to be seeing things not culled that should be, and geometry within models incorrectly culled. Looking at it again, it looks almost as though either depth-testing or depth-writing is disabled, for some reason. o_0

(This is under Ubuntu 16.04, in case it’s relevant.)

A screenshot showing both issues. Note the right-hand corner of the central object being cut off, with the wall being rendered instead. (I think that it’s the object’s shadow that’s allowing us to see where it presumably would be.)


Could you post an apitrace file?

Sure! Hmm… Would it perhaps be more convenient if I posted a version that didn’t apply my various custom shaders? A quick experiment suggested that the bug still occurs in this case, and I imagine that it will result in rather fewer OpenGL calls.

[edit]
Actually, for the sake of expedience, let me simply do that. Additionally, this was recorded in a relatively minimal scene–while the UI, inventory, etc. are present, I hadn’t loaded an actual level just yet. Behind the UI, only a (now-red) skybox was visible, I believe.
python2.7.trace.zip (6.77 MB)

Sorry, in my mind I’d responded to this, but it turns out I hadn’t.

Unfortunately, I can’t replay the trace. Perhaps it relies on a framebuffer format my hardware doesn’t support. I just get a red window.

You’re using custom shaders, correct? If so, it would be good to remove terms from your shaders, one by one, to figure out which inputs may have changed in 1.10 and be causing the red tint.

Furthermore, please check all the places where you are using FrameBufferProperties. And if you are specifying setColorBits, try explicitly requesting bits for each channel using something like setRgbaBits(8, 8, 8, 8). It might be that we added support for getting a red-only framebuffer, and that the particular bits you are requesting are resulting in a red-only framebuffer.
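For example, a quick sketch of the idea (the exact bit counts are up to you):

```
from panda3d.core import FrameBufferProperties

fb_props = FrameBufferProperties()
# Rather than relying on setColorBits alone, which may now be satisfied by a
# red-only format, request bits for every channel explicitly:
fb_props.setRgbaBits(8, 8, 8, 8)
```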

(For what it’s worth, disabling custom shaders might not result in fewer OpenGL calls, since Panda needs to make individual calls to configure every aspect of the fixed-function pipeline. Having custom shaders, especially in combination with something like “gl-version 3 2” in Config.prc, should probably yield fewer calls.)
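(One way of setting that from code, if you’d rather not edit Config.prc directly, is via loadPrcFileData before ShowBase is constructed:

```
from panda3d.core import loadPrcFileData

# Equivalent to putting "gl-version 3 2" in Config.prc; this must run before
# ShowBase is instantiated.
loadPrcFileData("", "gl-version 3 2")
```
)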

Not a problem! :slight_smile:

That seems odd–although the presence of the colour red is at least consistent.

As to the rest, between this issue and the “rtdist” issue in 1.10 I’m more inclined to stick with 1.9.4, back-porting the vertex-colour behaviour. If it will help you with 1.10, I can perhaps reinstall that version and try some of the things that you suggest; if it’s just a quirk of the way that I’ve set things up, then it doesn’t seem worth further investigation at this point.

I did take a quick look at the code in which I set up the off-screen buffer. I see that I set the number of multisamples (since the main purpose of the off-screen buffer right now is to allow antialiasing that can be easily toggled), but nothing else. Perhaps under 1.10 it’s defaulting to red-only colour bits, and no depth bits?
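To illustrate, the set-up is roughly along these lines; this is paraphrased from memory, so the details may well differ from my actual code:

```
from panda3d.core import FrameBufferProperties, WindowProperties, GraphicsPipe

# Only the multisample count is requested; colour- and depth-bits are left
# at their defaults of zero.
fb_props = FrameBufferProperties()
fb_props.setMultisamples(4)

# ("base" here is the usual ShowBase global; texture binding, the camera and
# the full-screen quad are set up elsewhere.)
win_props = WindowProperties.size(800, 600)
scene_buffer = base.graphicsEngine.makeOutput(
    base.pipe, "scene buffer", -2,
    fb_props, win_props,
    GraphicsPipe.BFRefuseWindow,
    base.win.getGsg(), base.win)
```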

Nevertheless, thank you very much for looking into the matter, and for responding here! :slight_smile:

Yes, if you don’t request any color bits, it’s conceivable that you’ll get a red-only framebuffer. You are not guaranteed any bits unless you explicitly request them.

Indeed, looking at the trace file, I see a call to glTexImage2D with format GL_RED at resolution 800x600, which suggests that this is exactly the case. You should call setRgbaBits(8, 8, 8, 0) to be guaranteed that number of bits in each channel. (This call is also supported in 1.9.)

It may be good if Panda3D issued an appropriate warning in this situation, or if we at least published this caveat in a migration guide. I’ll look into this.

If you want to install multiple Panda versions side-by-side without much effort, you could use virtualenv and use “pip install --pre panda3d” to install Panda3D into the virtualenv. I’d certainly feel better knowing we haven’t uncovered a regression (or if we have, that it is fixed as soon as possible), but I don’t want to use up your time.

Ah, that’s interesting. Perhaps that also explains the broken depth-testing: no depth bits.

I’ll give the matter some thought tomorrow, I think–I’m a little tired right now. ^^;

That said, I might be rebuilding the SDK version of 1.9.4 anyway (I suspect that one of the back-porting changes may be causing an issue with a particular model), so it may be simpler to just install 1.10 and try out your recommendations between uninstalling the current build and installing the next.

Aha! It would seem that you were indeed correct! In fact, the lack of explicitly-defined frame-buffer properties seems to be the source of both problems, the blood-red view and the depth issues.

Specifically, this is what happened:

I uninstalled my build of Panda3D 1.9.4, then reinstalled 1.10. I ran my game with no changes, as a control run, and noted that the scene still showed up in red (as expected). I then located the code in which I set up my off-screen buffer, and explicitly requested both the colour bits that you provided above and depth bits. I ran the game again, and noted that the expected colour and depth-behaviour seemed to have been restored! (A quick test without the request for depth bits, but with colour bits, showed the expected colour, but with the depth issue once again in place.)
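In code terms, the change amounted to something like the following (simplified from my actual set-up):

```
from panda3d.core import FrameBufferProperties

fb_props = FrameBufferProperties()
fb_props.setMultisamples(4)       # as before, for the toggleable antialiasing
fb_props.setRgbaBits(8, 8, 8, 0)  # the explicit colour bits suggested above
fb_props.setDepthBits(24)         # an explicit depth request, fixing the depth issues
```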

So, it seems that it’s not likely a regression, but indeed perhaps something to mention in a migration guide!

(Another thing that might be worth mentioning in such a guide is that, when applying vertex colours in code, the colours may not take effect if one doesn’t call “np.setAttrib(ColorAttrib.makeVertex())”. This doesn’t seem to have been the case in 1.9.4, but does seem to be in 1.10, as I recall.)
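That is, something along these lines seems to be required under 1.10 where it wasn’t before:

```
from panda3d.core import ColorAttrib

# "np" here stands for whichever NodePath has had vertex colours written
# into its geometry in code.
np.setAttrib(ColorAttrib.makeVertex())
```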

Glad to hear the problem is solved. I think we should probably emit a warning message if you try to bind bitplanes that you didn’t request.

Yeah, I suppose a side effect of the new vertex colour handling is that you need to call make_vertex() for vertex colours to take effect. I thought that the .egg loader would automatically apply make_vertex() to models with vertex colours, though, so that it wouldn’t be an issue most of the time. Is this using custom-generated geometry?

Oh, sorry, I should perhaps have specified–this came up when adding vertex colours to loaded geometry that lacked them, as I recall.

For example, in one case I have a rope model, composed of several sections, each being a model loaded from an egg file. For most of my ropes this is fine, but in this one particular case I wanted to additionally have vertex colours shading from the bottom to the top, for use in a shader. So, using GeomVertexReader and GeomVertexWriter, I added them. In 1.9 this didn’t require a call to make_vertex, while in 1.10 it did, I believe.
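In simplified form, the colour-writing looks something like the following; note that this is just the gist, and my actual code differs in its details:

```
from panda3d.core import GeomVertexReader, GeomVertexWriter, ColorAttrib

def addHeightColours(np, bottomZ, topZ):
    # Assumes a single GeomNode whose first Geom already has a "color" column
    # in its vertex format; my real code handles this more carefully.
    vdata = np.node().modifyGeom(0).modifyVertexData()

    vertexReader = GeomVertexReader(vdata, "vertex")
    colourWriter = GeomVertexWriter(vdata, "color")

    while not vertexReader.isAtEnd():
        pos = vertexReader.getData3f()
        # A simple greyscale ramp, from 0 at the bottom of the section to 1
        # at the top, for the shader to make use of.
        t = (pos.z - bottomZ) / (topZ - bottomZ)
        colourWriter.setData4f(t, t, t, 1.0)

    # Under 1.10, this additional call seems to be required for the new
    # colours to take effect:
    np.setAttrib(ColorAttrib.makeVertex())
```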