GL error 1282 when using depth buffer

Hi guys,

I ran into a strange OpenGL error today:

:display:gsg:glgsg(error): at 3364 of panda/src/glstuff/glGraphicsStateGuardian_src.cxx : GL error 1282

using the head of the CVS tree. This is the code that reproduces it:

from direct.directbase import DirectStart
from pandac.PandaModules import *

props = FrameBufferProperties()
# props.setBlahBits makes no difference
# Create an offscreen buffer that shares the main window's GSG.
output = base.graphicsEngine.makeOutput(
         base.pipe, "offscreen buffer", -2,
         props, WindowProperties.size(512, 512),
         GraphicsPipe.BFRefuseWindow,
         base.win.getGsg(), base.win)

# Bind the buffer's depth plane to a texture.
depthmap = Texture()
output.addRenderTexture(depthmap, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPDepth)

run()

I tried it with 1.5.3 as well, but that just gives me a hard segfault.
My graphics card is a GeForce 8600, using the proprietary NVIDIA drivers, on Linux. Grepping through glxinfo tells me that depth textures are definitely supported.

Any help would be appreciated. I'd spam the source with report_my_gl_errors calls, but I can't get Panda3D to compile right now.

pro-rsoft

I’ve narrowed the error down to these lines:

  if (new_image) {
    // We have to create a new image.
    GLP(CopyTexImage2D)(target, 0, internal_format, xo, yo, w, h, 0);

in the function framebuffer_copy_to_texture of glstuff/glGraphicsStateGuardian_src.cxx.

However, I have no clue what to do next. I've never done any OpenGL programming in C++, so I have no idea what this means.

PS. GL error 1282 seems to mean "invalid operation", which of course doesn't help me at all.

All I could find in the OpenGL specification was:

If depth component data is required and no depth buffer is present, the error INVALID OPERATION is generated.

Could this be the issue? How can I determine if there is indeed no depth buffer present?

Also, I noticed the error goes away if I use GraphicsOutput.RTMNone, but that just leaves me with a black depth texture.

Sorry to be such a bother, but this is really annoying me. I kind of can't live without a depth buffer. :)

Hmm, is it possible that it failed to create a depth buffer when it created the offscreen buffer? Does it work if you make this call on the main window instead of on the offscreen buffer? If so, how about simply putting:

prefer-parasite-buffer 1

in your Config.prc file?
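
(If editing Config.prc is inconvenient, the same setting can also be applied at runtime; a minimal sketch, which has to run before the offscreen buffer is created:)

from pandac.PandaModules import loadPrcFileData
# Same effect as the Config.prc line above, applied at runtime.
loadPrcFileData("", "prefer-parasite-buffer 1")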

In general, buffer.getFbProperties() is supposed to return an object that describes the state of the buffer, including whether or not it has an associated depth buffer. This doesn't work properly in 100% of the situations, though; some of Panda's windowing code does a poor job of managing the framebuffer state.
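
(For example, a quick check along these lines, using the names from the snippet above:)

# Query the framebuffer properties the offscreen buffer actually got;
# getDepthBits() is 0 if no depth buffer was allocated.
fbprops = output.getFbProperties()
print(fbprops)
if fbprops.getDepthBits() == 0:
    print("this buffer has no depth buffer")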

David

I was getting this error when I was rendering but not using the offscreen buffer (not tied to a texture).

Thanks for your replies.

I did try making the call on the main window, but that didn't change anything.
I already tried prefer-parasite-buffer, but that didn't change anything either.
This is what output.getFbProperties() prints:

depth_bits=24 color_bits=24 alpha_bits=8 accum_bits=64 back_buffers=1 force_hardware=1

@treeform, this error is not specific to this function call; it means "invalid operation", which can be thrown by basically any GL function.

I just upgraded to the latest bleeding-edge NVIDIA driver, and I still get the same error, but now it is followed by ten "invalid value" errors:

:display:gsg:glgsg(error): at 3375 of ../glstuff/glGraphicsStateGuardian_src.cxx : invalid operation
:display:gsg:glgsg(error): at 3380 of ../glstuff/glGraphicsStateGuardian_src.cxx : invalid value
:display:gsg:glgsg(error): at 3380 of ../glstuff/glGraphicsStateGuardian_src.cxx : invalid value
(same error several more times)
:display:gsg:glgsg(error): at 3380 of ../glstuff/glGraphicsStateGuardian_src.cxx : invalid value
:display(error): Deactivating glxGraphicsStateGuardian.

Here’s the interesting piece of code:

  if (new_image) {
    // We have to create a new image.
    GLP(CopyTexImage2D)(target, 0, internal_format, xo, yo, w, h, 0);
  } else {
    // We can overlay the existing image.
    GLP(CopyTexSubImage2D)(target, 0, 0, 0, xo, yo, w, h);
  }

The first time, the error is in the CopyTexImage2D line, while all the other errors are about the CopyTexSubImage2D call.
I found out that “xo” and “yo” are both 0, and “w” and “h” are both 512.

Well, I’m pretty sure the “invalid value” here is the target; there’s something it doesn’t like about either the framebuffer or the texture.

I haven’t done any work at all on the depth-texture support for Panda, so I know little about it. This was Josh’s baby. It will take me a while before I can get a chance to research it, but I can get to it eventually.

I do vaguely remember some problems with the whole design of setting the texture mode to depth-texture. This conflicted with existing code that was trying to match the texture mode to the framebuffer mode. I don’t know how Josh resolved this problem; maybe there are still problems in there.

David

Okay. Well, it would be great if this got fixed, but it's no longer such a high priority, since I found a way around it: I was able to fake a 32-bit depth buffer quite easily with a secondary shader on the main camera. It gives pretty good results.
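
(Roughly, that kind of workaround looks like this; this is only a sketch, and "depth.sha" stands for a small Cg shader, supplied by you, that writes the fragment's depth into its colour output:)

# Render the scene a second time into an ordinary colour texture,
# forcing everything through a depth-writing shader.
# NodePath, Shader, etc. come from "from pandac.PandaModules import *".
depthbuf = base.win.makeTextureBuffer("fakedepth", 512, 512)
depthcam = base.makeCamera(depthbuf, lens=base.camLens)
state = NodePath("depth-state")
state.setShader(Shader.load("depth.sha"))  # hypothetical shader file
depthcam.node().setInitialState(state.getState())
depthmap = depthbuf.getTexture()  # holds the faked depth values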

Thanks a lot for your help!

I was also thinking of doing the depth buffer, for a different reason: if I write the depth buffer with a shader, then I can write depth values from a texture into the fake depth buffer, for cooler shadowing and AO effects.

I’ve hunted it down.

Apparently, someone defined Texture::F_depth_stencil and Texture::F_depth_component to be the same value (both 1).
It was not by accident. The glGraphicsStateGuardian checks for F_depth_stencil everywhere and does not even contain F_depth_component.
Here’s how the glGSG does it:

  • In the constructor, check whether we have support for GL_EXT_packed_depth_stencil.
  • When translating F_depth_stencil (and thus also F_depth_component) to GL, if we have packed_depth_stencil support, make it GL_DEPTH_STENCIL; if not, make it GL_DEPTH_COMPONENT (paraphrased in the sketch below).
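
(Paraphrasing that logic in Python, purely as a sketch; the real code is C++ inside glGraphicsStateGuardian_src.cxx:)

def translate_depth_format(tex_format, has_packed_depth_stencil):
    # F_depth_component currently has the same value as F_depth_stencil,
    # so both formats end up taking this branch.
    if tex_format == "F_depth_stencil":
        if has_packed_depth_stencil:
            return "GL_DEPTH_STENCIL"
        return "GL_DEPTH_COMPONENT"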

Apparently, my card does support packed_depth_stencil, but I don't need it. There is probably some other bug related to that, but in my case I just need the depth component.

I have no idea why it was done this way. If there's no particular reason to force the format to be depth_stencil, could I go ahead, set the value of F_depth_component to something different from 1, and add code for it in glGraphicsStateGuardian?

(EDIT: Ooh, it's trickier than that. GraphicsOutput automatically assigns it to depth_stencil; why is that?)

PS. What about a better error-reporting system for these things? E.g. a parameter like report_my_gl_errors("Something must have gone wrong with copying a texture!"), so that users aren't left so much in the dark.

Hmm. This is stuff Josh had been working on. I suspect the reason he did it this way is that most modern cards actually implement depth and stencil buffers packed into the same 32-bit buffer, so packed_depth_stencil is a direct translation of that buffer and therefore very efficient. On the other hand, depth_component requires de-interleaving the depth buffer from the stencil buffer and is therefore relatively expensive. I suspect Josh was thinking in terms of making the fast operation work well, and wasn’t thinking too much about the slower operation.
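
(For illustration: with the usual DEPTH24_STENCIL8 packing, each 32-bit value carries the depth in its upper 24 bits and the stencil in its lower 8, so extracting a pure depth image means shifting and masking every single pixel. A sketch of that unpacking:)

def unpack_depth_stencil(packed):
    # Split one packed 24/8 depth-stencil value: depth lives in the upper
    # 24 bits, stencil in the lower 8.  A de-interleaving copy has to do
    # the equivalent of this for every pixel in the buffer.
    depth = (packed >> 8) / float(0xFFFFFF)  # normalized to [0, 1]
    stencil = packed & 0xFF
    return depth, stencil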

If you’d like to experiment with fixing it by separating out these two different modes, please feel free.

Adding a context to report_my_gl_errors() is not a bad idea, though it would have to be added in quite a few places. Note also that if GLU can be obtained on your system, then it will be used (in get_error_string()) to report a more verbose error message, which might be more informative than just a simple error number. Since you’re seeing only the error number, it must be that libGLU could not be found or opened for some reason.

David

Hmm, if packed_depth_stencil really is so efficient, there must be some bug regarding it. On the other hand, if the user actually requests an FDepthComponent format he should also get one. The user can use FDepthStencil himself if he so pleases.

What default should GraphicsOutput set it to when RTPDepth is requested? Should I leave it as FDepthStencil and require the user to set the format manually if he really wishes to use FDepthComponent?
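
("Setting the format manually" would look roughly like this from Python; a sketch, assuming GraphicsOutput doesn't overwrite the format afterwards:)

# Explicitly request a depth-only texture instead of relying on the
# default format the buffer picks for RTPDepth.
depthmap = Texture()
depthmap.setFormat(Texture.FDepthComponent)
output.addRenderTexture(depthmap, GraphicsOutput.RTMBindOrCopy,
                        GraphicsOutput.RTPDepth)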

Do you also happen to know whether the depth component is stored as T_float or as T_unsigned_byte? FDepthStencil stores it as T_float.

And actually get_error_string() works fine here, but “invalid value” or “invalid operation” still isn’t very clear. If we make the context string an optional parameter, it wouldn’t break anything though.

A default setting of FDepthStencil sounds reasonable. FDepthComponent should probably be TFloat, to be consistent with old SGI conventions.

I concur with all of your points.

David

OK, if I change the value of F_depth_component, that will change the values of the rest of the enum entries too. Won't that break anything that writes the format to a file?

Also, the same goes for RTP_depth and RTP_depth_stencil: they are both 1. Can I add support for that in GraphicsOutput and GraphicsBuffer too?

Move F_depth_component to the end of the list, so it won’t affect the others.

Sure, I don’t see why we shouldn’t have both RTP_depth and RTP_depth_stencil.

David

I have a similar error when I use shadows (Panda's built-in shadow shader): ":display:gsg:glgsg(error): c:\buildslave\release_sdk_win32\build\panda3d\panda\src\glstuff\glGraphicsBuffer_src.cxx, line 911: GL error 1282". Shadows work, but this error still bothers me. What does it mean?