compressed-textures requirements?

Does compressed-textures rely on any optional 3rd-party libraries? I’ve compiled Panda myself, and turning this option on doesn’t seem to improve performance at all.

Thanks

compressed-textures isn’t a performance issue, per se, in the sense that it won’t improve your frame rate at all, except in the specific case that your frame rate is suffering because you’ve exceeded your available texture memory.

All compressed-textures does is make your textures use less space in texture memory. You can see this difference in the PStats graphs (click on the “graphics memory” graph), but if you weren’t exceeding your available texture memory before, it’s not going to help you. If anything, it might hurt you a tiny bit.
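For what it’s worth, enabling it is just a Config.prc setting; something like this should do it (the want-pstats line is only needed if you want to watch the graphics memory graph):

compressed-textures 1
want-pstats 1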

David

I notice the reverse effect: a speed-up using compressed textures. I did not notice it in my game because I already mipmap all my textures, but if the textures are not mipmapped, the compression forces them to be. Mipmapped textures render much faster than their default counterparts, so I get big speed-ups :)

Unfortunately, I cannot confirm for sure that texture memory is being exceeded; I’m running on Mac OS X, and compiling gtk-stats has proved to be a pain. However, I highly suspect this is the case. My overall framerate is high according to the meter (it’s always above 150 fps), but I get noticeable choppiness every so often. I’m guessing this is because a few (1 or 2) frames render slowly, and that the time is spent in texture transfer. Setting “texture-scale 0.5” eliminates the choppiness, which would seem to support this theory.

That being said, it seems setting compressed-textures on doesn’t do anything in my environment. I picked a random texture, and getCompression() returns false. The texture is of type FRgb and base.win.getGsg().getSupportsCompressedTextureFormat(Texture.CMDxt1) returns true. Any ideas on what may be going on?

Thanks again.

David, is there an equivalent var for DX of this :

?

You can always run text-stats; it’s a little less useful than the interactive gtk-stats, but still better than nothing, and it includes all of the same information. Do something like this:

text-stats >& output.log

Or, if you are using bash instead of tcsh, do:

text-stats >output.log 2>&1

Also, if you have access to a Windows box (borrow someone’s laptop?), you can run pstats on that box and point your OSX machine to it. Kind of handy to have a pair of side-by-side computers for this purpose.
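If you do go that route, the easy way to point the OSX machine at the Windows box is via Config.prc; something like this (the address here is just an example, substitute the Windows machine’s actual IP):

want-pstats 1
pstats-host 192.168.1.2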

But anyway. tex.getCompression() isn’t returning false; it’s returning 0, which is Texture.CMDefault. But this only reports what you set; it doesn’t report what’s actually being done on the graphics card. If you really want to ask whether it’s been compressed on the graphics card, it’s convoluted, but you can do this:

# Force the texture onto the graphics card and get its prepared TextureContext.
tc = base.win.getGsg().getPreparedObjects().prepareTextureNow(tex, base.win.getGsg())
# Report the size of the texture as it is actually stored on the card.
print tc.getDataSizeBytes()

This will report a smaller number if texture compression is in effect.
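Incidentally, if you’d rather force compression on a particular texture instead of globally, I believe you can set it on the Texture object itself (assuming Texture is imported, e.g. from pandac.PandaModules):

tex.setCompression(Texture.CMOn)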

This probably isn’t your problem, though. In fact, turning on texture compression will likely increase the duration of the chugs you get from loading textures, because the textures get compressed on-the-fly as they are loaded. But if you’re experiencing chugs at all the first time textures are visible, it’s probably because you didn’t do:

render.prepareScene(base.win.getGsg())

to preload all of the textures under render.

No, I don’t think so, sorry.

This isn’t always true. It sometimes is, depending on the performance characteristics of your particular graphics card, and of the nature of your scene. But there are also cases in which mipmaps render more slowly. It depends on where the bottlenecks happen to be: enabling mipmaps means more work in one part of the graphics pipeline, in return for less work in another part.

I don’t think enabling compressed textures has anything to do with enabling mipmapping. It’s certainly possible to have unmipmapped compressed textures.
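For example, something like this (untested, and again assuming Texture is imported) gives you compression without mipmapping:

tex.setCompression(Texture.CMOn)      # request compression for this texture
tex.setMinfilter(Texture.FTLinear)    # plain linear filtering, no mipmaps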

David

Looks like compression is indeed working; the sizes differ with the option on and off. But… the choppiness is still present. I was already calling prepareScene… does it matter where this is called? I’m calling it after loading models/textures, but before starting the main loop.

text-stats does show “texture transfer” as a bottleneck in several frames. After I spin around 360 degrees (the game is first-person), the choppiness disappears. If I don’t spin around, the choppiness persists during navigation. Is it possible that prepareScene is not doing what it’s supposed to? Presumably spinning around forces it to load all textures, even if they are loaded on demand, which is why this eliminates further choppiness? Or is something else going on?

I have another texture-related question as well, apologies for so many. As I turn, I notice a lot of flicker (in the form of horizontal lines moving up and down) on nearby textures. This is independent of framerate and the other issue, and is especially visible on more detailed textures. Is this just an artifact of the graphics engine? Are there any guidelines for minimizing this effect?

Many thanks.

Since you observe choppiness the first time you spin around, it means that your textures weren’t loaded until you looked at them. This is on-demand loading, and it means that your prepareScene() call didn’t work.

It follows that your scene must not yet have been populated at the time you called prepareScene(). What prepareScene() does is recursively walk the scene graph from the node you specified, looking for texture references. It immediately forces any textures it finds onto the graphics card. This causes a chug up front, of course, but then it means that when you look around the scene, all of the textures are already loaded, so there are no more chugs.

In order for this to work, you have to have everything you will be looking at already attached as a child (or indirect child) of the node you called prepareScene() on, for instance, render. If you call prepareScene() before you attach your scene to render, nothing’s going to happen.
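In other words, the order matters. Roughly (a sketch, with a made-up model name):

model = loader.loadModel('environment.egg')  # load and attach everything first
model.reparentTo(render)
# ... attach the rest of your scene ...
render.prepareScene(base.win.getGsg())       # now force all textures onto the card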

This is called tearing, and is an artifact of rendering faster than your monitor’s video sync rate. By default, Panda asks the video graphics driver to hold each frame until the monitor is ready to display it (this is controlled by the sync-video Config.prc variable). Unfortunately, the Mac graphics drivers seem to universally ignore this request, and render frames as fast as they can without respect to the sync rate. So you always get this tearing. Short answer: it’s an artifact of the Mac graphics drivers. Complain to Apple about it.

You might be able to minimize the effect by putting:

clock-mode limited
clock-frame-rate 75

in your Config.prc file. This will limit your frame rate to no faster than 75 fps, which is a likely guess for what your monitor’s refresh rate is. It still won’t be synchronized with your monitor’s refresh rate, but at least your chance of happening to render a frame in the middle of video sync will be somewhat reduced.

David

Apparently vsync is available on Mac OS X, but not turned on by default. The suggestions in the following post got rid of the problem: idevgames.com/forum/showthread.php?t=14509. It might be worth integrating this directly into a future version of Panda.

Ahh, nice research. Thanks!

David