Although the pace of Panda3D development slowed through 2021, a lot of great progress was still made. Countless bug fixes and minor improvements culminated in the release of three new Panda3D versions; those changes won’t be repeated here, as they are already highlighted in the blog posts for the respective releases: 1.10.9, 1.10.10, and 1.10.11. Instead, this post will highlight a few developments that are intended for the upcoming major release, 1.11.0. There is still some work left before this release is finished, but it is getting ever closer.
New Shader Pipeline
The new shader pipeline was already mentioned in the 2020 update, and as it is pivotal for providing good support for mobile platforms, the web, and arm64-based systems, and for generally improving the portability of shaders, this effort is considered a cornerstone of the 1.11 release. Development of this pipeline is proceeding on the shaderpipeline branch, and it is expected to be ready to merge soon, with only some minor tasks left to do.
For all the benefits of the new system, the main detriment is that it can take longer for shaders to load and compile. This has now been mitigated by leveraging the same caching system as is used for loading models and compressing textures. The first time a shader is loaded and compiled, the compiled shader is written back to the cache (in the form of a .sho or .smo file, depending on the type of shader), so that the next time a particular shader is loaded, Panda3D uses the cached version instead. When distributing a game, all of the shaders can be pre-compiled into .sho and .smo files, which are then shipped instead of the original GLSL or Cg source files. The compiler front-end can now even be excluded from the distributed game entirely, if it is not necessary to compile shaders on the fly.
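The write-back caching scheme can be sketched in plain Python. This is not Panda3D's actual implementation, only a minimal illustration of the idea, with a hypothetical compile_fn standing in for the shader compiler:

```python
import hashlib
import os

def load_compiled_shader(cache_dir, source, compile_fn, ext=".sho"):
    """Return the compiled form of `source`, compiling it only on a
    cache miss and writing the result back for subsequent runs."""
    digest = hashlib.sha1(source.encode("utf-8")).hexdigest()
    path = os.path.join(cache_dir, digest + ext)

    if os.path.exists(path):
        # Cache hit: reuse the previously compiled shader.
        with open(path, "rb") as f:
            return f.read()

    # Cache miss: compile, then write the result back to the cache.
    compiled = compile_fn(source)
    with open(path, "wb") as f:
        f.write(compiled)
    return compiled
```

On the second and subsequent loads of the same source, compile_fn is never invoked, which is what makes pre-populating the cache before distribution worthwhile.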
However, this didn’t solve the problem for shaders that are generated and compiled on the fly based on states in the scene graph. Compilation latency is an especially significant problem for generated shaders, as this can cause lag when objects come into view for the first time. To address this, a second cache has been introduced, specifically for generated shaders. This cache contains a map of all the generated shaders that is loaded at the beginning of the application and written out when it is closed. This cache makes the shader generator blazingly fast (even significantly faster than with the previous system!), except for the first time the shaders are used. This first-time-only cost can be hidden using a loading screen that only appears the first time a game is launched, but it is also possible for a developer to pre-generate this cache and then ship it with the compiled game, so that the shaders never have to be compiled on the end-user’s computer.
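The load-at-startup, write-at-shutdown behaviour of the generated-shader cache can be sketched as follows. The class and method names here are hypothetical; this is an illustration of the mechanism, not Panda3D's API:

```python
import json
import os

class GeneratedShaderCache:
    """Sketch of a persistent map of generated shaders, read once at
    start-up and written out again on shutdown."""

    def __init__(self, path):
        self.path = path
        self.compile_count = 0
        if os.path.exists(path):
            # Load the whole map at the beginning of the application.
            with open(path) as f:
                self.shaders = json.load(f)
        else:
            self.shaders = {}

    def get_shader(self, state_key):
        # Only the very first use of a given render state pays the
        # compilation cost; every later lookup is a dictionary hit.
        if state_key not in self.shaders:
            self.compile_count += 1
            self.shaders[state_key] = "compiled-for:" + state_key
        return self.shaders[state_key]

    def save(self):
        # Called when the application closes (or when pre-generating
        # the cache to ship alongside the compiled game).
        with open(self.path, "w") as f:
            json.dump(self.shaders, f)
```

A second run that loads the saved file never compiles anything, which is why shipping a pre-generated cache removes the first-launch cost entirely.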
Building for Android
What was quite possibly the largest chunk of work remaining for the Android port to become viable (aside from the aforementioned shader pipeline) has now been finished: the distribution system. Using only a single command, the distribution system can now build apps that can be directly uploaded to the Google Play Store! There is no compilation step needed, as precompiled builds of Panda3D can be used instead. If you are feeling adventurous, you can try it today using the instructions in this forum thread.
A word of caution, though: this is still very experimental and should not be used for production. There is still one major stability bug related to the handling of configuration changes; without the new shader pipeline, shader support is limited; and multi-touch support is still unfinished. However, it is starting to look quite likely that these issues can be addressed before the 1.11.0 release.
Speaking of multi-touch support, more changes have been made on the multitouch branch to better support this mode of input in Panda3D. For example, it is now possible to interact with DirectGUI elements using touch inputs, such as to pan around a DirectScrolledFrame by dragging with a finger. Support for touch events on Linux has also been added by implementing the XInput2 API. Implementation of the new “pointer events” proposal is also nearly complete; these make it possible to handle touch and mouse input using the same events. After that is finished, and the system has been tested, this effort will be merged into the master branch.
Mounting ZIP Files
It is now possible to mount ZIP files to the virtual file system, in the same way as could already be done with multifiles. This change was primarily added for the Android port, to support loading assets directly from .apk files (which are really just ZIP files under the hood), but it can be used in other situations as well, where .zip files may be preferred over .mf files.
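Conceptually, mounting a ZIP file makes its entries readable through ordinary virtual paths. A rough stdlib sketch of that idea, using Python's zipfile module rather than Panda3D's VirtualFileSystem API:

```python
import zipfile

class ZipMount:
    """Toy read-only 'mount' of a ZIP archive at a path prefix."""

    def __init__(self, zip_source, mount_point):
        # zip_source may be a filename or a file-like object.
        self.archive = zipfile.ZipFile(zip_source)
        self.mount_point = mount_point.rstrip("/") + "/"

    def read(self, vfs_path):
        # Translate a virtual path such as /assets/model.egg into the
        # archive-relative name model.egg, then read it from the archive.
        if not vfs_path.startswith(self.mount_point):
            raise FileNotFoundError(vfs_path)
        name = vfs_path[len(self.mount_point):]
        return self.archive.read(name)
```

Panda3D handles the path translation and archive reading for you once the archive is mounted; the sketch only shows what “mounting” means in principle.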
Asset Loading on the Web
The experimental WebGL port (on the webgl-port branch) has received some changes as well. Notably, support for mounting web URLs to the virtual file system using a VirtualFileMountHTTP has been added. As was already possible on the desktop, it is now possible to load assets directly from a web server, without requiring preloading or extra download steps. For example, if your app is hosted on https://rdb.name/, then calling loader.loadTexture("/example.jpg") will automatically perform an HTTP request for https://rdb.name/example.jpg and load the downloaded file.
This is particularly useful in conjunction with on-demand texture loading, in which case the model is first rendered using a very low-resolution version of the texture that is included with the model’s .bam file while the high-resolution version is downloaded and decoded in the background when the model comes into view. Another likely use case is to reduce the initial load time by bundling together the assets into .mf (or .zip!) files, such as one for each level or zone, which can be loaded and mounted to the VFS as they are needed.
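The pattern behind on-demand loading can be sketched generically: render with the bundled low-resolution texture until the full-resolution one arrives from the network. This is a simplified illustration with hypothetical names, not Panda3D's texture-streaming code:

```python
class StreamedTexture:
    """Starts as a low-res placeholder; swaps in the full-resolution
    image once a background download delivers it."""

    def __init__(self, low_res):
        self.current = low_res
        self.requested = False

    def on_visible(self, start_download):
        # Kick off the background download the first time the model
        # carrying this texture comes into view; ignore later calls.
        if not self.requested:
            self.requested = True
            start_download(self._on_downloaded)

    def _on_downloaded(self, high_res):
        # Swap the placeholder for the real texture.
        self.current = high_res
```

The important property is that rendering never blocks: `current` is always valid, and the swap happens whenever the download completes.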
If you are feeling adventurous and want to try out the WebGL port, there is now a set of notes and instructions available on this page. Keep in mind that the process is not yet nearly as polished as for the Android port and will require compiling Panda3D from source as well as writing some C code.
Coroutines and Async Support
The new features relating to coroutines and asynchronous programming have been greatly appreciated, and more feature requests relating to them have come in. Many improvements have been made, including the ability to await Sequence intervals, to pass arbitrary Python objects as the result of a task, to have gather() automatically schedule coroutines with the task manager, and to use a bare yield statement to “skip” a frame, and more.
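The bare-yield behaviour can be illustrated with a toy scheduler that steps generator-based tasks once per frame. This stands in for Panda3D's task manager, whose actual API differs:

```python
def run_tasks(tasks, frames):
    """Advance each generator-based task one step per frame; a bare
    `yield` therefore suspends the task until the next frame."""
    tasks = list(tasks)
    for _ in range(frames):
        still_running = []
        for task in tasks:
            try:
                next(task)           # resume the task for this frame
                still_running.append(task)
            except StopIteration:
                pass                 # task finished
        tasks = still_running
    return tasks

def blink(log):
    log.append("on")
    yield                            # skip a frame
    log.append("off")
```

Each `yield` costs exactly one frame, which is what makes it handy for spreading work or waiting for the next render pass.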
One particularly notable change is that FSM classes are now able to define state changes as async methods. This makes it easy to handle an annoying “gotcha” when implementing game state transitions: let’s say you want to fade in a menu when the player presses the Esc key, and fade it out when pressing Esc again. This is easy to do by having an event handler switch to a Pause state, using a LerpColorScaleInterval in the enterPause method to fade in the menu, and having the exitPause method start an interval to fade it out again. This will work, but a problem occurs if the player presses Esc again while the fade-in interval is still playing: you will end up with two competing intervals both trying to modify the menu’s alpha value!
To fix this, you would need to keep track of any current fade-in interval and stop it when pressing Esc again, but this means you are adding extra state variables and need to add more code to keep track of them. Alternatively, you would need to unbind the Esc key and re-bind it when the interval is done, requiring you to put the interval into a Sequence with a Func interval that re-binds the key.
We can now do better: by defining the state transition as a coroutine, it is possible to await the interval inside the enter function. Now, the FSM knows that the transition isn’t done yet. If you try to request a new state while the previous transition is not yet done, the FSM will patiently wait for the old transition to be finished before switching to the new state (though it is still possible to force a state transition if necessary). This makes it possible to solve this problem without adding extra state and while keeping the code clean and readable, for example:
```python
class Game(FSM):
    async def enterPause(self):
        # Show the menu, and wait for fade-in to be done...
        menu.unstash()
        await menu.colorScaleInterval(1, (1, 1, 1, 1))

        # Then allow returning to the previous state
        self.accept('escape', self.request, [self.oldState])

    async def exitPause(self):
        # Unbind the escape key
        self.ignore('escape')

        # Wait for fade-out to be done...
        await menu.colorScaleInterval(1, (1, 1, 1, 0))

        # Remove the menu from the scene graph
        menu.stash()

        # Re-bind escape to entering the pause menu
        self.accept('escape', self.request, ['Pause'])
```
Depth Bias
A new “depth bias” attribute has been added, replacing the now-deprecated “depth offset” attribute. Instead of a single offset parameter, the new attribute provides a separate slope-based factor and constant factor, as well as a new parameter to clamp the bias to a maximum value, if supported by the driver. The parameters are now also specified as floating-point values instead of integers. All these changes allow more accurate fine-tuning of the depth bias for reducing shadowing artifacts in a scene.
If you are updating your code, please note that the sign has been inverted, matching OpenGL and Vulkan conventions for this parameter. That is, a depth offset of 1 is equivalent to a depth bias of (-1.0, -1.0, 0.0). Also note that the depth range settings, which were also part of DepthOffsetAttrib, have not been carried over to the new DepthBiasAttrib. These parameters are now specified on the DisplayRegion instead.
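The sign flip is mechanical; a small helper like this hypothetical one (not part of Panda3D) captures the mapping from a legacy offset to the new three-component bias:

```python
def depth_offset_to_bias(offset):
    """Convert a legacy DepthOffsetAttrib offset (an integer) into the
    equivalent DepthBiasAttrib parameters: (slope factor, constant
    factor, clamp).  The sign is inverted to match OpenGL and Vulkan
    conventions, and no clamping is applied (clamp = 0.0)."""
    return (-float(offset), -float(offset), 0.0)
```

For example, the legacy depth offset of 1 mentioned above maps to (-1.0, -1.0, 0.0).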