Devel: T_INT, F_R32i confirmed to work?

First: Sorry for being all over this place. :slight_smile:

So, I can’t give you much in the way of working example code, but I can copy out the relevant tidbits.

Buffer creation:


def createBuffer(w,h):
	b = base.win.makeTextureBuffer("",w,h)
	b.setClearColor(Vec4(0, 0, 0, 0.0))
	b.setClearColorActive(False)
	b.clearDeleteFlag()
	return b


def configTexture(b,ttype=None, tformat=None):
	t = b.getTexture()
	t.setMinfilter(Texture.FTNearest)
	t.setMagfilter(Texture.FTNearest)
	if (ttype != None): t.setComponentType(ttype)
	if (tformat != None): t.setFormat(tformat)

	return t

ObjectBuffer = createBuffer(1920/4,1025/4)
ObjectTexture = configTexture(ObjectBuffer,Texture.T_int,Texture.F_r32i)

Nothing fancy. This seems to work fine as it is with multiple floating-point textures and a regular one where I don’t set a format or component type. The T_int/F_r32i combination, though, gives me nothing.

The vertex shader is the one from the FullScreenTriangle and is irrelevant for this case.

First Fragment Shader:

#version 130

out int gl_FragColor;

void main()
{
    gl_FragColor = 1;
}

In case you’ve noticed: gl_FragColor is deprecated. The quickest way to get rid of the super-annoying, horrible and constant nagging about it (I put up with it for a long time, can you tell? :laughing: ) was to simply declare gl_FragColor myself. :mrgreen:

Second Fragment Shader:

#version 130

uniform isampler2D Objects;
out vec4 gl_FragColor;

void main()
{
    int object = texelFetch(Objects,ivec2(gl_FragCoord.xy * 4.0),0).r;

    gl_FragColor = vec4( vec3(object), 1.0) ;
}

This yields nothing. Null. Black.
“show-buffers” is set to True and likewise shows black.

And to be clear: the value is zero. My first thought was “1 is close to 0, so d’uh, it’s dark”, but no…

I could pass the texture as floats, but that’s really just a band-aid.
I can use both the memory and the bandwidth for better things.

What am I doing wrong?

Firstly, you cannot declare gl_FragColor yourself. You will have to name it differently. Anything will do, but I suggest p3d_FragColor.
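
For instance, the first shader with nothing changed but the output name:

#version 130

// Any non-reserved name will do; p3d_FragColor is just the suggestion above.
out int p3d_FragColor;

void main()
{
    p3d_FragColor = 1;
}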

Secondly, I don’t think it will work to change the format of a texture after it is bound as render-to-texture. When doing render-to-texture, anything that affects the format should be set up-front, before adding the texture to the buffer.

However, Panda actually overwrites the format of the texture to whatever has been specified in the FrameBufferProperties. So getting an r32 texture requires passing a FrameBufferProperties on which you called setRgbaBits(32, 0, 0, 0).
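
To illustrate, a rough, untested sketch of what such a buffer request could look like (the function name, sort value and buffer name are just placeholders); note that this still yields a floating-point r32 format rather than an integer one:

from panda3d.core import FrameBufferProperties, WindowProperties, GraphicsPipe, GraphicsOutput, Texture

def makeR32Buffer(w, h):
    # Ask for a single 32-bit colour channel; setFloatColor requests a
    # floating-point format (there is no equivalent request for integer).
    fbp = FrameBufferProperties()
    fbp.setRgbaBits(32, 0, 0, 0)
    fbp.setFloatColor(True)

    buf = base.graphicsEngine.makeOutput(
        base.pipe, "r32buffer", -2, fbp, WindowProperties.size(w, h),
        GraphicsPipe.BFRefuseWindow, base.win.getGsg(), base.win)

    # Attach a fresh texture; its format is derived from the buffer's
    # properties rather than patched afterwards.
    tex = Texture()
    buf.addRenderTexture(tex, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPColor)
    return buf, tex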

Now, unfortunately, it seems that Panda doesn’t have a way to request that the texture format be an integer format. I suppose that nobody’s ever asked for it before. This may be fixed in the future, either via the planned new render-to-texture system or via an additional flag on FrameBufferProperties. In the meantime, you could use the GLSL function that packs an integer into a floating-point variable.
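
For example, something along these lines (an untested sketch of mine); it needs #version 330 or GL_ARB_shader_bit_encoding, a genuine 32-bit floating-point colour target, and blending disabled:

#version 330

out vec4 p3d_FragColor;

void main()
{
    // Store the integer's raw bit pattern in the red channel; the reading
    // shader recovers it with floatBitsToInt() on the sampled value. Note
    // that very small IDs map to denormalised floats, which some hardware
    // flushes to zero.
    int objectId = 1;
    p3d_FragColor = vec4(intBitsToFloat(objectId), 0.0, 0.0, 1.0);
}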

That’s … weird.

I declare gl_FragColor in several shaders, exactly like this, and it works without problems. The driver stopped nagging and just keeps doing what it should. Is it written down anywhere that we’re not allowed to do that?

Then … well … I have several floating-point textures I set up exactly like the above. When I remove the Texture.TFloat from one of them, I get exactly the results I’d expect if it weren’t floating-point anymore… so this works too. I’m actually pretty sure I collected the above from various sources, and they also just work… and the texture above IS a single-channel texture, as I’ve specified. Trying to use anything but “int” gives me a driver error.

edit: Ah, I realized I mistook ComponentType for Format… hm… okay.

And regarding support for integer textures … well, there’s the T_int format in the manuals, so …

I’m really confused about what’s going on now, never mind why something so essential is said to be unsupported because no one ever asked for it. No offense intended; I’m just confused.

Are there any possible approaches I could take?

Do note that I haven’t tried rendering to integer framebuffers before, so I could be wrong. You could use apitrace to inspect the OpenGL state and verify your findings.

Yes, the GLSL specification. The 1.30 spec reserves all identifiers starting with “gl_”; they may not be declared in a shader as either a variable or a function.

Your shader compiler may happen to be lenient enough to allow it (NVIDIA’s compiler, in particular, is extremely lenient). It may fail to compile under a different driver.

Indeed, if I use the Khronos reference compiler, I do get an error:

$ glslangValidator glsl.frag 
glsl.frag
ERROR: 0:3: 'gl_FragColor' : identifiers starting with "gl_" are reserved 
ERROR: 1 compilation errors.  No code generated.

Please note that T_int is the ComponentType. It indicates how the data is stored in memory, not what type of values the texture produces when sampled.

The F_r32i format is what indicates that it’s sampled as an integer format, as opposed to F_r32 in combination with T_int, which would be signed normalised floating-point data, according to the OpenGL rules.
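
As a standalone illustration of the difference (a manually set-up texture, not render-to-texture; the sizes are arbitrary):

from panda3d.core import Texture

tex = Texture("objects")

# F_r32i with T_int components: sampled through an isampler2D as integers.
tex.setup2dTexture(480, 270, Texture.T_int, Texture.F_r32i)

# F_r32 with the same T_int components would instead be treated as
# normalised data and sampled through a regular sampler2D as floats:
# tex.setup2dTexture(480, 270, Texture.T_int, Texture.F_r32)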

Well, you could just go for a normalised format and divide by 255 (or 2**32-1 if you choose F_r32) when writing the output. That would yield the same result in the texture data. You would have to use a regular sampler to read the values again, of course. There are the GLSL packing functions for packing an integer into a floating-point channel, but that seems irrelevant to your example.
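
For example, a writing shader along these lines (again just a sketch); the reading shader would then use a regular sampler2D and recover the value with something like int(texelFetch(Objects, ivec2(gl_FragCoord.xy * 4.0), 0).r * 255.0 + 0.5):

#version 130

out vec4 p3d_FragColor;

void main()
{
    // Object ID in the range 0..255, stored as a normalised colour value.
    int objectId = 1;
    p3d_FragColor = vec4(vec3(float(objectId) / 255.0), 1.0);
}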