Cg Tutorial Part 11


Cg Tutorial Part 11: Per pixel lighting with one point light

#Lesson 11
 
"""
Nothing new here. It is the same as 9.py.
"""

 
import sys
import math
 
import direct.directbase.DirectStart
from direct.interval.LerpInterval import LerpFunc
from panda3d.core import PointLight
 
base.setBackgroundColor(0.0, 0.0, 0.3)
base.disableMouse()
 
base.camLens.setNearFar(1.0, 50.0)
base.camLens.setFov(45.0)
 
camera.setPos(0.0, -20.0, 10.0)
camera.lookAt(0.0, 0.0, 0.0)
 
root = render.attachNewNode("Root")
 
light = render.attachNewNode("Light")
modelLight = loader.loadModel("misc/Pointlight.egg.pz")
modelLight.reparentTo(light)
 
"""
After reading 10.sha, replace cube.egg with cube-smooth.egg once more. Because
we now have per-pixel lighting, you see a big visual difference when a light
only covers a small portion of a face.
The smaller the faces are, the smaller the difference between per-pixel and
per-vertex lighting. But with large triangles, which may cover your whole
screen, you can see a great difference between these two lighting methods.
"""

modelCube = loader.loadModel("cube.egg")
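# As described in the comment above, you could compare against the smooth-shaded
# cube by loading cube-smooth.egg instead (as in the earlier lessons):
# modelCube = loader.loadModel("cube-smooth.egg")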
 
cubes = []
for x in [-3.0, 0.0, 3.0]:
    cube = modelCube.copyTo(root)
    cube.setPos(x, 0.0, 0.0)
    cubes += [ cube ]
 
shader = loader.loadShader("lesson11.sha")
for cube in cubes:
    cube.setShader(shader)
    cube.setShaderInput("light", light)
 
base.accept("escape", sys.exit)
base.accept("o", base.oobe)
 
def animate(t):
    radius = 4.3
    angle = math.radians(t)
    x = math.cos(angle) * radius
    y = math.sin(angle) * radius
    z = math.sin(angle) * radius
    light.setPos(x, y, z)
 
def intervalStartPauseResume(i):
    if i.isStopped():
        i.start()
    elif i.isPaused():
        i.resume()
    else:
        i.pause()
 
interval = LerpFunc(animate, 10.0, 0.0, 360.0)
 
base.accept("i", intervalStartPauseResume, [interval])
 
def move(x, y, z):
    root.setX(root.getX() + x)
    root.setY(root.getY() + y)
    root.setZ(root.getZ() + z)
 
base.accept("d", move, [1.0, 0.0, 0.0])
base.accept("a", move, [-1.0, 0.0, 0.0])
base.accept("w", move, [0.0, 1.0, 0.0])
base.accept("s", move, [0.0, -1.0, 0.0])
base.accept("e", move, [0.0, 0.0, 1.0])
base.accept("q", move, [0.0, 0.0, -1.0])
 
run()
//Cg
/* lesson11.sha */

/*
Apart from the fact that we have moved most of the code from the vertex shader
to the fragment shader, there is nothing new here. I would like to talk about a
general problem concerning the normalization of normals (not of direction
vectors). As already explained, one reason for normalizing is that the dot
product only yields useful results for lighting equations if the vectors are
normalized.
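Written out: dot(n, l) = |n| * |l| * cos(angle). Only when both vectors have a
length of 1.0 is the dot product equal to the cosine of the angle between them,
which is exactly the brightness term we compute below.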

We start with our first problem: we sometimes need to do our calculations in
world space. It is possible that a cube is rotated, so we need to rotate its
normals too. The most obvious, but wrong, idea is to transform the normals with
the modelview matrix, just like the vertices. Open figures.svg or figures.png
once more. In figure 10-1 we have a face that consists of two vertices, and
both vertices have a normal. Suppose you apply a non-uniform setScale to your
model; non-uniform means that you do not scale the model by the same factor in
every direction. In the second picture of figure 10-1 we have a modelview
matrix that scales our model in only one direction. If you now apply that same
matrix to the normals, they end up like the red normals. Even if you normalize
them, they point in the wrong direction. If you apply a proper matrix to the
normals, they look like those in the third image of figure 10-1.
There are two ways out. Either never apply non-uniform scaling to your models
(if you can avoid setScale entirely, you may not even need the call to
normalize when you use your modelview matrix), or build a correct normal
matrix. It would take too much math to explain here how to build one yourself;
search for the term gl_NormalMatrix. The OpenGL Shading Language GLSL has a
matrix that solves exactly this problem, and it is not too much magic to
recreate it in Panda3D once you understand the math behind it (a small sketch
follows after this comment). The main reason I have added this chapter is to
warn you: it is easy to introduce subtle errors into your shader if you forget
this (even with a non-uniform scaling the normals in the figure are not
completely wrong, only slightly off).

Now to our second problem. If you look closely at the vertex shader and the
fragment shader, you may notice that we normalize the normal twice. When you
look at figure 10-2, you see two blue normals in the first image. The leftmost
normal is not normalized, while the rightmost one is. If the GPU now passes
these normals from the vertex shader to the fragment shader, the normals are
linearly interpolated. But even after normalizing them in the fragment shader,
you can see that they still face the wrong direction. If we first normalize
both blue normals, the result may look like the third image: the interpolated
normals are not normalized, but once we normalize them, they point in the
intended direction.
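A small worked example (the numbers are only chosen for illustration): take the
2D normals a = (1, 0) and b = (0, 4). Interpolating halfway gives (0.5, 2.0),
which even after normalizing points about 76 degrees away from a. If we
normalize first, so that b becomes (0, 1), the halfway value is (0.5, 0.5),
which is the 45 degree direction exactly in between. Normalizing in the
fragment shader fixes the length, but it cannot repair a direction that is
already wrong.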

Whenever you design your own shaders, try to draw the different cases on a
sheet of paper. Think of some worst cases and do the interpolation yourself. A
simple vector drawing program may also be helpful: if you scale your drawing,
most vector drawing programs scale it linearly, just like a setScale scales
your model.
*/
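
/*
A small sketch of the normal-matrix idea mentioned in the comment above. It is
not part of lesson 11 itself, and it assumes Panda3D's automatically supplied
trans_x_to_y and tpose_x_to_y shader inputs: tpose_world_to_model is the
transpose of the world-to-model matrix, which is exactly the inverse transpose
of the model-to-world matrix, i.e. a normal matrix that also survives a
non-uniform setScale. A fragment shader built on top of this would then use
wspos_light instead of mspos_light.
*/
void vshader_worldspace_sketch(
    uniform float4x4 mat_modelproj,
    uniform float4x4 trans_model_to_world,
    uniform float4x4 tpose_world_to_model,
    in float4 vtx_position : POSITION,
    in float3 vtx_normal : NORMAL,
    in float4 vtx_color : COLOR,
    out float4 l_color : COLOR,
    out float3 l_worldposition : TEXCOORD0,
    out float3 l_worldnormal : TEXCOORD1,
    out float4 l_position : POSITION)
{
    l_position = mul(mat_modelproj, vtx_position);

    /* Positions are transformed with the plain model-to-world matrix. */
    l_worldposition = mul(trans_model_to_world, vtx_position).xyz;

    /* Normals are transformed with the inverse transpose of that matrix and
       renormalized, so that a non-uniform scale does not bend them. */
    l_worldnormal = normalize(mul((float3x3)tpose_world_to_model, vtx_normal));

    l_color = vtx_color;
}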

/*
Because we now do our lighting calculation in the fragment shader, the vertex
shader has to pass the vertex position and the vertex normal along to it.
*/
void vshader(
    uniform float4x4 mat_modelproj,
    in float4 vtx_position : POSITION,
    in float3 vtx_normal : NORMAL,
    in float4 vtx_color : COLOR,
    out float4 l_color : COLOR,
    out float3 l_myposition : TEXCOORD0,
    out float3 l_mynormal : TEXCOORD1,
    out float4 l_position : POSITION)
{
    l_position = mul(mat_modelproj, vtx_position);

    l_myposition = vtx_position.xyz;

    /*
    DIRTY
    As already written, only remove the normalize in the following line if the
    normals in your egg file are already normalized.
    Without any modification, all normals in cube.egg have an exact length of
    1.0. Try to modify the six face normals in the cube.egg file and assign
    them different lengths: as long as normalize is called, you should not see
    any visual difference. Then remove the call to normalize and compare.
    */
    l_mynormal = normalize(vtx_normal);

    l_color = vtx_color;
}

/*
We do the same calculations here as in the vertex shader of the previous
sample. You may ask: "We have the same equations here and only moved some code
around?" We are a bit lazy here. The calculation of the direction vector
introduces no nonlinearity, so it is possible to move that calculation entirely
to the vertex shader (try it on your own; I have not done it here because it
makes things more complex). The problem is the dot function with its cosine. If
we calculated the cosine in the vertex shader, let the GPU interpolate it and
only used it in the fragment shader, the result would not be exactly the same
cosine as if we had calculated it in the fragment shader.
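A concrete example: assume the light direction is the same at both vertices of
an edge, at one vertex the normal is parallel to it (cosine 1.0) and at the
other it is perpendicular (cosine 0.0). Linearly interpolating the cosine gives
0.5 halfway in between, while interpolating the normal, renormalizing it and
only then taking the dot product gives cos(45 degrees), about 0.71. This
difference is why the dot product stays in the fragment shader.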

If you are able to move only the direction calculation to the vertex shader
without introducing any visual artifacts, there is probably not much more I can
teach you about Cg itself. Congratulations if your attempt was successful.
*/
void fshader(
    uniform float4 mspos_light,
    in float3 l_myposition : TEXCOORD0,
    in float3 l_mynormal : TEXCOORD1,
    in float4 l_color : COLOR,
    out float4 o_color : COLOR)
{
    float3 lightposition = mspos_light.xyz;
    float3 modelposition = l_myposition;
    float3 normal = normalize(l_mynormal);
    float3 direction = normalize(lightposition - modelposition);
    float brightness = saturate(dot(normal, direction));

    o_color = l_color * brightness;
}

/*
Some notes about normalizing normals. If you create a sphere, e.g. in Blender,
with a radius of 1.0, then all vertices should have coordinates as if they lay
on a perfect sphere, even though the whole sphere is only an approximation.
Because the sphere has a radius of 1.0, the position vector of each vertex
should have a length of exactly 1.0. The same is true for the normals: on a
smooth sphere every normal should point exactly outwards and have a length of
1.0. If you think about it, you come to the conclusion that a vertex and its
normal should have equal coordinates. But if I export such a sphere from
Blender, there are small differences between the normal and vertex coordinates.
The next problem is that the egg file is human readable, so the numbers are
stored in decimal, while the FPU internally stores a float in binary, so we
lose some precision there. What I want to say is: it is sometimes not a bad
idea to normalize your normals even if you think they should already be
normalized (only in cases like this unmodified example; at other times you
throw away accuracy if you normalize a vector).

Here is a small hint if you were brave enough to implement your own vertex
shader that passes around the direction instead of the light position and the
model position. In this special case you should not normalize the direction
vector in the vertex shader. Try to draw two 2D direction vectors on paper, one
four times longer than the other. First interpolate between these two vectors
without normalization, then with normalization; you will see that the direction
that lies in between is not the same in both cases. (A minimal sketch of this
variant follows at the end of this file.)
This vertex shader has one drawback worth noting. If you have eight lights, you
need to pass around eight direction vectors, but every GPU has its own limit on
the number of TEXCOORDs that may be used. It is a tradeoff between speed and
GPU requirements.

You may start to see that the greatest problem in writing shaders is not Cg
itself. The problem is that even simple shaders often need a lot of brain power
to craft them more or less correctly.
*/
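
/*
Not part of lesson 11 either: a minimal sketch of the exercise described above,
where the (unnormalized!) direction vector is computed per vertex and
interpolated, while the cosine is still computed per pixel. To actually try it,
these two functions would have to replace vshader and fshader (or live in their
own shader file). It assumes that mspos_light can be requested by the vertex
shader just like any other uniform input.
*/
void vshader_direction_sketch(
    uniform float4x4 mat_modelproj,
    uniform float4 mspos_light,
    in float4 vtx_position : POSITION,
    in float3 vtx_normal : NORMAL,
    in float4 vtx_color : COLOR,
    out float4 l_color : COLOR,
    out float3 l_direction : TEXCOORD0,
    out float3 l_mynormal : TEXCOORD1,
    out float4 l_position : POSITION)
{
    l_position = mul(mat_modelproj, vtx_position);

    /* Deliberately NOT normalized; the interpolated direction would otherwise
       bend, as explained in the hint above. */
    l_direction = mspos_light.xyz - vtx_position.xyz;

    l_mynormal = vtx_normal;
    l_color = vtx_color;
}

void fshader_direction_sketch(
    in float3 l_direction : TEXCOORD0,
    in float3 l_mynormal : TEXCOORD1,
    in float4 l_color : COLOR,
    out float4 o_color : COLOR)
{
    /* Normalize only after the interpolation. */
    float3 normal = normalize(l_mynormal);
    float3 direction = normalize(l_direction);
    float brightness = saturate(dot(normal, direction));

    o_color = l_color * brightness;
}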