[SOLVED] Problems with Firefly Demo w/ ATI HD series card

Hello all,

I am new to these forums and to Panda. I am evaluating Panda for a game I am currently producing, where we plan to use deferred rendering, but I am having a problem with the Firefly demo. Everything works correctly and the fireflies are created on the screen, but when the light hits an object it isn't drawn properly. It is hard to explain exactly what it is doing, so I took a screenshot, which is below. If anyone can tell me what is going on and suggest how to fix it, I would appreciate it. For reference, I am using an ATI Radeon HD 3450 video card with the latest drivers from ATI.

Wow, I love your shader effect!

On a general note, the Firefly demo does not run well everywhere because it relies on hardware features that are incompatible with a variety of graphics cards; I'd say it is normal for it not to work. So unless you need deferred shading in your game, it is nothing to worry about, and you can use the rest of the engine.
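If you want to check what your card actually supports before blaming the demo, Panda exposes capability queries on the graphics state guardian (GSG). Here is a minimal sketch of that kind of check, assuming the usual ShowBase setup:

from direct.showbase.ShowBase import ShowBase

base = ShowBase()
gsg = base.win.getGsg()
# The deferred pipeline needs shader support and depth textures
if not gsg.getSupportsBasicShaders():
    print("This card does not support shaders")
if not gsg.getSupportsDepthTexture():
    print("This card does not support depth textures")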

Is it just something with Panda? I have been using Ogre 3D, and MRT works just fine there with deferred rendering; it is just running slowly at the moment with what I am testing. I am checking out Panda to see if I can switch over from Ogre, because I only just found out about Panda and it seems a lot more polished than Ogre, with some nice features.

Hey guys, just wanted to let everyone know I finally figured out the problem with the deferred renderer demo: it seems the newer ATI video cards don't like shaders accessing the depth buffer directly. I am not sure what is causing this, but I believe it may have something to do with changes the HD series cards made to their HyperZ and related technologies.

What leads me to believe this is that my work desktop, with its ATI Radeon Xpress 1100, runs the demo just fine, reading from the depth buffer directly in the light.sha shader. The same thing, however, is broken on the ATI Radeon HD 3450 in my desktop at home, and I read on ATI's site about some of the changes they made in the HD 2000 series and later cards. This is just a guess, though.
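For context, the pattern that fails is roughly the following: the same depth texture is bound to the light buffer as its depth target and also handed to the shader to sample. This is an illustrative sketch using the tutorial's variable names, not the exact demo code:

# The depth texture is bound to the light buffer with RTMBindOrCopy...
self.lightbuffer.addRenderTexture(self.texDepth,
     GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPDepth)
# ...and that very same texture is then sampled by light.sha, which the
# HD series cards appear to reject
tempnode.setShaderInput("texdepth", self.texDepth)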

For the fix, I created another texture for the depth buffer and attached it to the light buffer using GraphicsOutput.RTMCopyTexture. The texture and addRenderTexture calls now look as follows:

# Render target textures; texDepthShader holds the copy of the
# depth buffer that the light shader will read from
self.texDepth = Texture()
self.texDepthShader = Texture()
self.texAlbedo = Texture()
self.texNormal = Texture()
self.texFinal = Texture()

# Model buffer (G-buffer pass): depth, albedo, and normals
self.modelbuffer.addRenderTexture(self.texDepth,
     GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPDepth)
self.modelbuffer.addRenderTexture(self.texAlbedo,
     GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPColor)
self.modelbuffer.addRenderTexture(self.texNormal,
     GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPAuxRgba0)

# Light buffer: keep the shared depth buffer for depth testing, but also
# copy it into texDepthShader (RTMCopyTexture) so the shader never samples
# a depth buffer that is currently bound for rendering
self.lightbuffer.addRenderTexture(self.texDepth,
     GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPDepth)
self.lightbuffer.addRenderTexture(self.texDepthShader,
     GraphicsOutput.RTMCopyTexture, GraphicsOutput.RTPDepth)
self.lightbuffer.addRenderTexture(self.texFinal,
     GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPColor)

After doing this, I had to change one line of the shader inputs for the light.sha shader so that it uses the new texture copy:

tempnode = NodePath(PandaNode("temp node"))
tempnode.setShader(Shader.load(MYDIR + "/light.sha"))
tempnode.setShaderInput("texnormal", self.texNormal)
tempnode.setShaderInput("texalbedo", self.texAlbedo)
# Changed line: sample the copied depth texture instead of the depth
# buffer that is bound to the light buffer
tempnode.setShaderInput("texdepth", self.texDepthShader)
tempnode.setShaderInput("proj", Vec4(proj_x, proj_y, proj_z, proj_w))
# Additive blending so each light accumulates into the final buffer
tempnode.setAttrib(ColorBlendAttrib.make(ColorBlendAttrib.MAdd,
     ColorBlendAttrib.OOne, ColorBlendAttrib.OOne))
tempnode.setAttrib(CullFaceAttrib.make(CullFaceAttrib.MCullCounterClockwise))
tempnode.setAttrib(DepthTestAttrib.make(RenderAttrib.MGreaterEqual))
tempnode.setAttrib(DepthWriteAttrib.make(DepthWriteAttrib.MOff))
self.lightcam.node().setInitialState(tempnode.getState())

Once I made these changes, everything started working properly. The only downside to this workaround is that it uses more video memory, since it has to keep a copy of the depth buffer for the shader to read from. One thing to note, though, is that I didn't see any loss in framerate from the workaround: I was still getting 22 FPS with around 300 lights in a 4-MRT setup, using the Firefly tutorial code as a basis.
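If anyone wants to compare framerates on their own card, Panda's built-in frame rate meter is the quickest way to watch for a regression; this one-liner assumes the usual ShowBase base object:

# Display Panda3D's built-in frame rate meter in the corner of the window
base.setFrameRateMeter(True)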

Of course, this may not be the best way to fix this, so if anyone finds a better way, please let me know; I am still pretty new to OpenGL graphics programming and Panda3D.