About how models are transformed to view space

This may go unnoticed by most people, but it can make a big difference in some cases. Let me explain.
Currently, models are transformed to view space in this sequence:
rotate the model by the camera's matrix, take the resulting position, then subtract the camera's position from it.
What I would suggest is: subtract the camera's position from the model's position first, then rotate it by the camera's matrix.
What's the difference between these two sequences?
Say a camera is at position (0, 100000, 0), a model is at (1, 100005, 0), and the camera has an HPR of (0, 1, 2), so the model is in view.
In the current way of transforming, the model position value 100005 is rotated by the camera matrix. Because the matrix is stored as floating-point values in the computer, the rotation angles are imprecise to some degree no matter what, and the position value 100005 magnifies that angular error and produces an imprecise position (the rounding error of a rotation grows roughly in proportion to the magnitude of the vector being rotated).
Finally, this imprecise position has the camera position subtracted from it. If the camera rotates by 0.001 degree the next frame, the model position gets a new error, which can land on the other side (positive vs. negative value). With a normal FOV (say 60 degrees), you can see the model jump about the screen or even out of the screen.
Now let's do it the other way: first subtract the camera position 100000 from the position value 100005, giving 5, then rotate that by the camera matrix. Again, the rotation introduces error, but this time the angular error is only magnified by the value 5. See the difference?
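To illustrate, here is a rough sketch with plain NumPy (not actual Panda3D code; I use a heading-only rotation instead of a full HPR matrix, and I treat the current order as rotating both positions before subtracting, which is effectively what the translation column of an inverse camera matrix does):

```python
import numpy as np

def heading_matrix(degrees, dtype=np.float32):
    # Rotation about the up axis only; real HPR uses all three angles,
    # but one axis is enough to show how the error scales.
    a = np.radians(degrees)
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]], dtype=dtype)

cam_pos   = np.array([0.0, 100000.0, 0.0], dtype=np.float32)
model_pos = np.array([1.0, 100005.0, 0.0], dtype=np.float32)

def view_pos_current(heading):
    # Current order: rotate the big world-space values, then subtract.
    m = heading_matrix(heading)
    return m @ model_pos - m @ cam_pos

def view_pos_suggested(heading):
    # Suggested order: subtract first, so only a value around 5 is rotated.
    m = heading_matrix(heading)
    return m @ (model_pos - cam_pos)

# Turn the camera by 0.001 degree and see how much the view-space position
# changes; the true motion of a point ~5 units away is only ~0.0001 units.
for f in (view_pos_current, view_pos_suggested):
    print(f.__name__, f(1.001) - f(1.0))
```

With single-precision values, the first order jumps by something on the order of 0.01 units for a model about 5 units away, which at a 60-degree FOV is roughly a tenth of a degree, i.e. a few pixels per frame; the subtract-first order changes by about 0.0001 units, which is just the real motion from the rotation.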
A value of 100000 is quite big for a single-precision float, so when the model or camera moves you can still see the model jumping on screen; that can be improved by storing positions as double-precision floats. But that is another source of error, not the one this topic is about.
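As a side check, you can print how coarse single-precision positions get at those magnitudes:

```python
import numpy as np

# Gap to the next representable float32 value at a few magnitudes.
for x in (100.0, 100000.0, 1000000.0):
    print(x, np.spacing(np.float32(x)))
# ~0.0000076 at 100, ~0.0078 at 100000, ~0.0625 at 1000000:
# at position 1000000 a single-float position can only change in steps
# of about 0.06 units, which is the "jumping while moving" error above.
```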
With the current way of transforming models into view space, even if the matrix is stored as double-precision floats instead of single, the error becomes noticeable at some point: if not at position 100000, then at 1000000, I bet.
Now let's check the suggested way of transforming: won't the error also become noticeable at some point, if not at position 1000000, then at 10000000?
No, because a camera has a far clip, which means a model far away from the camera is not seen, so any model in view is close to the camera, say 100000 units at most (if the app is simulating something on Earth). The rotation error would be magnified by 100000 at most, not 1000000 or 10000000, so this source of error does not grow without bound. Whether the positions and matrix values are stored as single or double precision, the usable range of the scene is increased by a large degree.
For example, with the suggested way of transforming, with single-precision positions and single-precision matrix values, a model 5 units away from the camera would not be seen jumping when the camera rotates, even if both are at position 1000000. When the camera or model moves, the model will still be seen jumping, but that can be fixed by using double-precision position values.
On the other hand, with the current way of transforming, when the camera (at position 1000000) rotates, the model 5 units away will be jumping (even if the matrix is stored as double-precision floats, I bet).

The usual way to get around this is to create a bunch of nodes, one for each “sector”. E.g. you have a sector node at (0, 10000), to which the scene model is parented, and whenever you enter that sector, you reparent the camera to it.

This has the simple effect that Panda is only dealing with very small relative transformations when figuring out where the object is relative to the camera, even if the transformation is very large relative to render.
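In code that might look roughly like this (a minimal sketch; the node name, positions and model path are just placeholders):

```python
from direct.showbase.ShowBase import ShowBase

base = ShowBase()

# One node per "sector"; the sector sits at big world coordinates,
# but everything parented to it keeps small local coordinates.
sector = base.render.attachNewNode("sector-a")
sector.setPos(0, 10000, 0)

# Scene geometry for this area is parented to the sector node.
model = base.loader.loadModel("models/environment")  # placeholder model
model.reparentTo(sector)
model.setPos(1, 5, 0)

# When the player enters this sector, reparent the camera to it, keeping
# its net transform, so camera-relative math only ever sees small numbers.
base.camera.wrtReparentTo(sector)

base.run()
```

wrtReparentTo keeps the camera's net transform while changing its parent, so you don't have to recompute its position by hand when switching sectors.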