Excellent job, Zhao!
Just a few things:
I get this flickering effect too (Linux).
In the EagleView function there is a typo in the else branch:
Original:
else:
    base.cam.node().setActive(0)
    camEagle.node().setActive(1)
Modified:
else:
    base.cam.node().setActive(1)
    camEagle.node().setActive(0)
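For clarity, here is the corrected toggle pattern as a minimal pure-Python sketch of the state logic (the class name is my own, and the actual Panda3D calls are shown in comments):

```python
class EagleViewToggle:
    """Tracks which camera is active; exactly one at a time."""

    def __init__(self):
        self.eagle_active = False  # start on the main camera

    def toggle(self):
        self.eagle_active = not self.eagle_active
        main_on = 0 if self.eagle_active else 1
        eagle_on = 1 if self.eagle_active else 0
        # In Panda3D this maps to:
        #   base.cam.node().setActive(main_on)
        #   camEagle.node().setActive(eagle_on)
        return main_on, eagle_on
```

Pressing the key once gives (0, 1) (eagle view on), pressing it again gives (1, 0), so the two cameras are never both active.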
Another thing: the lighting creation is totally useless, as we're not using it anywhere in the shader, so you can comment it out (all shader calculations use "tdir"):
# Create some lighting
ambientLight = AmbientLight("ambientLight")
ambientLight.setColor(Vec4(.3, .3, .3, 1))
directionalLight = DirectionalLight("directionalLight")
directionalLight.setDirection(Vec3(-5, -5, -5))
directionalLight.setColor(Vec4(.8, .8, 1, 1))
directionalLight.setSpecularColor(Vec4(1, 1, 1, 1))
render.setLight(render.attachNewNode(ambientLight))
render.setLight(render.attachNewNode(directionalLight))
Now, a question: I wanted to show the depth buffers on screen, but they appear all blank. I guess that happens because I'm not sending them the returned shader values. How would I do that? (If you didn't understand my question: I'd like to show the buffer for all cameras, just as when you press "v" in the Tut-Shadow-Mapping-Advanced.py included in the shadows demo of Panda3D.)
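(A side note on the blank look: raw depth values are non-linear and cluster near 1.0 for most of the scene, so a depth texture often renders almost white even when it is being written correctly. A standard trick — not from the tutorial — is to linearize the depth before viewing it. A sketch, assuming a [0, 1] depth range from the usual perspective projection:)

```python
def linearize_depth(d, near, far):
    """Invert the D3D-style [0, 1] perspective depth mapping so the
    values span [near, far] linearly instead of bunching up at 1.0."""
    return (near * far) / (far - d * (far - near))

# Endpoints: d = 0 maps back to the near plane, d = 1 to the far plane.
```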
What I'm currently using:
First I create this buffer (because I don't know how to access the current ones):
def createBuffer(self):
    # Create the offscreen buffer.
    winprops = WindowProperties.size(1024, 1024)
    props = FrameBufferProperties()
    props.setRgbColor(1)
    props.setAlphaBits(1)
    props.setDepthBits(1)
    self.LBuffer = base.graphicsEngine.makeOutput(
        base.pipe, "offscreen buffer", -2,
        props, winprops,
        GraphicsPipe.BFRefuseWindow,
        base.win.getGsg(), base.win)
    if self.LBuffer is None:
        self.t = addTitle("Shadow Problem: Video driver cannot create an offscreen buffer.")
        return
    Lcolormap = Texture()
    self.LBuffer.addRenderTexture(Lcolormap, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPColor)
    Ldepthmap = Texture()
    self.LBuffer.addRenderTexture(Ldepthmap, GraphicsOutput.RTMBindOrCopy, GraphicsOutput.RTPDepthStencil)
    if base.win.getGsg().getSupportsShadowFilter():
        Ldepthmap.setMinfilter(Texture.FTShadow)
        Ldepthmap.setMagfilter(Texture.FTShadow)
    return self.LBuffer
Then I create a camera with that buffer in the GUI part and set it to look at my character:
camHere = w.createBuffer()
global camEagle2
camEagle2 = base.makeCamera(camHere, sort=4, camName='Eagle Buffer')
camEagle2.setPos(0, 0, 100)
camEagle2.lookAt(w.character)
camEagle2.node().setActive(1)
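One thing to watch in the snippet above: lookAt uses the node's current position, so it should run after setPos, not before. A pure-vector sketch of what the orientation computation depends on (the helper name is mine, not a Panda3D call):

```python
def look_dir(cam_pos, target):
    """Normalized direction from the camera position to the target --
    the vector lookAt aligns the camera's forward axis with."""
    vec = tuple(t - c for c, t in zip(cam_pos, target))
    length = sum(v * v for v in vec) ** 0.5
    return tuple(v / length for v in vec)

# A camera at (0, 0, 100) aimed at the origin looks straight down:
# look_dir((0, 0, 100), (0, 0, 0)) -> (0.0, 0.0, -1.0)
```

If the position is set afterwards instead, the camera keeps the orientation computed from its old position.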
Then, when I enable the buffer view on screen with:
self.accept("v", base.bufferViewer.toggleEnable)
I see exactly what the eagle camera views, as expected. But I have no idea what to do now to make it receive the depth buffer from the shader. In Panda3D's examples, when I comment out:
self.LCam.node().setInitialState(lci.getState())
I lose the buffer view, so I guess I need to use it somehow in my code, but I didn't really understand what it does.
In fact, what I really want is a way to store all three depth-buffer results from the shader in a texture that is then displayed on screen and used in later calculations.
So, does anyone have an idea of how to do that?
Thanks, and sorry for the long text.