Posts Tagged ‘rendering’
Sunday, November 11th, 2018 by fireclaw
Two weeks ago, the semi-annual PyWeek challenge was held. In case you are not familiar with PyWeek, it’s a challenge to create a game within only seven days. The contestants are free to use any engine as long as the game is written mostly in Python. Before the challenge begins, contestants vote on a theme in order to inspire and challenge the entrants’ creativity. The theme for the October challenge was “Flow”.
In this 26th PyWeek, two contestants have used Panda3D for their submissions. We’d like to highlight some of the challenges they faced and show how Panda3D helped them to finish their games in such a short time. They both used development builds of the upcoming Panda3D 1.10, which made for a great opportunity to test the new features, and the new deployment system in particular. Being able to quickly install Panda3D using pip has also been a very welcome feature for PyWeek, since installing Panda3D has been a stumbling block for PyWeek users in the past.
This entry was created by our community member wezu. It is an original take on the dungeon crawler game using only three keyboard keys, where the possible moves are represented with a flowchart diagram at the bottom of the screen.
One feature of the Panda3D engine that helped wezu get this game done was intervals. These are functions that manipulate values—such as an object’s position and rotation—over a given period of time, without the need to use tasks to manage those updates manually. Using intervals, he was able to quickly develop convincing walking animations with head bob and tilting when turning.
The game’s impressive graphical style was achieved thanks to a shader framework wezu developed, which is publicly available on GitHub. It was made as a drop-in replacement for the built-in Cg-based ShaderGenerator, implementing various modern effects using GLSL 1.30. This proved to be a bit of a challenge in testing: while GLSL 1.30 shaders are typically usable on most platforms, OpenGL support has always lagged behind on macOS, and has in fact even been deprecated by Apple, as discussed in a previous blog post. This forced him to implement a special “potato mode” for low-end and macOS systems that doesn’t utilize these shaders. In this mode, the game is still playable, just without the fancy graphics.
Working on an improved shader pipeline that avoids some of the driver-specific compatibility issues is high on the agenda for the Panda3D developers. This will help shader-based games work reliably on multiple platforms without requiring extensive testing and per-driver tweaking of the shaders.
Chart of Flowrock won a silver rating in the Production category and ranked sixth overall among the individual entries.
Our second PyWeek entry was created by our main engine developer rdb. This game is a kind of strategic puzzle-building game in which you must build an energy grid to supply cities with power and enable them to grow. The energy demand of the cities grows over time, so you have to carefully plan your network so that your wires don’t overheat. For this game rdb decided to take a less graphics-focused approach, instead going for a minimalistic art style with flat colors, low-poly models and simple lighting. The game employed numpy for calculating the currents through the nodes of the energy grid.
As the power lines were one of the most important elements of the game, some attention to detail was given to how they are rendered. The LineSegs class made it easy to draw nice catenaries between the pylons, and Panda3D made the math rather easy to ensure that the wires connected up properly to the pylons regardless of their orientation: dummy points were added to the pylon models indicating where the wires should attach, so it was only necessary to use getRelativePoint() to determine the correct two points to draw the arc between.
As the player interacts with the game primarily using the mouse, extra attention was given to ensure a good user interface so that it is clear to the player what to do. Panda3D’s text nodes and billboard effects made it particularly easy to add labels in the 3D scene to clearly communicate where the user needs to click and what will happen when they click it, and intervals made it easy to give the important labels a nice animation to draw the eye. The free Font Awesome icon font was used to easily render crisp icons.
The sound and music had to be pushed off to the last few hours of the final day, but fortunately loading and playing spatialized sounds in Panda3D takes only a few lines of code. Having sound effects can really help with the user experience like this by giving direct feedback to the player’s actions. Smoothly adjusting the play rate of the music depending on the game speed was a nice touch appreciated by the reviewers, which took only a few minutes to implement using Panda3D.
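The post doesn’t include the game’s code, but smoothly tracking the game speed with the music’s play rate can be as simple as nudging the rate toward a target each frame. The function below is a hedged sketch: “music” stands for any object with AudioSound’s getPlayRate/setPlayRate interface, and the smoothing factor is an illustrative choice:

```python
# Nudge the music's play rate toward the game-speed multiplier; called
# once per frame (e.g. from a task), this produces a smooth glide
# rather than an abrupt jump.
def updateMusicRate(music, gameSpeed, smoothing=0.1):
    current = music.getPlayRate()
    music.setPlayRate(current + (gameSpeed - current) * smoothing)
```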
Let There Be Light won gold ratings in the Fun and Production categories, a silver rating in the Innovation category, and was the overall winner of PyWeek 26 among the individual entries.
Despite the distraction of PyWeek, development of the engine has not remained idle. Plenty of bugfixes and stability improvements bring the engine ever closer towards the upcoming 1.10 release. We’ll give you a small selection of the many changes that have happened.
If you have ever tried loading models in a model format other than Panda3D’s own bam or egg formats, you may have already come across the Assimp loader, which has been in the Panda3D source for a while but was considered experimental. This uses the Assimp library, which supports most of the 3D formats in common use. This importer is now enabled by default and can be used in recent development builds.
On the graphical side, there were a few fixes and enhancements too. A big change was made in the rendering system to resolve some inconsistencies in how colors and materials were applied in the absence of lighting. Normally, materials are not visible unless a light is applied to the model, but in some cases (such as in the presence of a color scale), the material colors could show up anyway. It was also not always clear when vertex colors would show up and when they would be suppressed by the material. These behaviors have now been made consistent between the different rendering back-ends and the shader generator. If these changes affect your game, just let us know and we can help you resolve the problems.
As of 1.9.0, Panda3D has a GLSL preprocessor which, among other things, allows #include statements to include other files within the current shader. This is crucial for games that rely heavily on shaders, as functions often need to be shared between different shaders or simply separated for better code organisation. It has recently seen some improvements, such as the ability to use it with procedurally generated shaders that are passed into Shader.make(), some optimizations to reduce the preprocessed size, and some parsing fixes.
There have also been some improvements on the Windows front. It is now possible to change the fixed-size attributes of already-opened windows, and build times have improved when compiling with Eigen support on Windows.
The deploy-ng system has received a new feature: when deploying a game, it can now use an optimized version of the Panda3D binaries, stripped of debug information and safety checks. These optimized wheels are available on our pip mirror under a special +opt tag. This allows deployed games to be smaller and faster, while you still get the benefit of all the debug features and safety checks in the SDK during development.
Sunday, August 26th, 2018 by fireclaw
Despite the vacation period, the developers have not remained idle in July. Here is an update on some of the new developments.
While the internal collision system provides many useful tests between various collision solids, the CollisionTube solid (representing a capsule shape) was only really useful as an “into” collision shape. Many of you have requested more tests so that it can also be used as a “from” shape, since many see it as a good fit for character bodies. Earlier, we had already added tube-into-plane and tube-into-sphere tests. We have now extended this with tube-into-tube and tube-into-box tests, and added a line-into-box test to complete the suite of CollisionBox tests.
For those who are using Bullet physics instead of the internal collision system, we have also extended the ability to convert collision solids from the Panda3D representation to the Bullet representation to include CollisionTube and CollisionPlane as well. These solids can now be easily converted to a BulletCapsuleShape and BulletPlaneShape, respectively. This way you can add these shapes directly to your .egg models and load them into your application without needing custom code to convert them to Bullet shapes.
Depth buffer precision
As most Panda3D programmers will know, two important variables to define when configuring a camera in any game are the “near” and “far” distances. These determine the range of the depth buffer; objects at the near distance have a value in the depth buffer of 0.0, whereas objects at the far plane have a value of 1.0. As such, they also determine the drawing range: objects that fall outside this range cannot be rendered. This is fundamental to how perspective rendering works in graphics APIs.
As it happens, because of the way the projection matrix is defined, it is actually possible to set the “far” distance to infinity; Panda3D added support for this a while ago. Because of the reciprocal relationship between the distance to the camera and the generated depth value, the near distance is far more critical to the depth precision than the far distance. If it is too low, then objects in the distance will start to flicker as the differences in depth values between different objects become zero; the video card can no longer tell their respective distances apart and gets confused about which surface to render in front of the other. This is commonly known as “Z-fighting”.
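A back-of-the-envelope calculation illustrates why the near distance dominates. With a standard projection mapping the near plane to 0.0 and the far plane to 1.0, the depth value for an eye-space distance z is d = f(z − n) / (z(f − n)); the numbers below are illustrative:

```python
# Standard window-space depth for eye-space distance z, with near plane
# n mapping to 0.0 and far plane f mapping to 1.0.
def depth_value(z, n, f):
    return (f / (f - n)) * (1.0 - n / z)

n, f = 0.1, 10000.0

# Two surfaces 1 unit apart in the far distance...
far_gap = depth_value(9001.0, n, f) - depth_value(9000.0, n, f)

# ...versus two surfaces 1 unit apart near the camera.
near_gap = depth_value(2.0, n, f) - depth_value(1.0, n, f)

# The near pair is separated by a vastly larger slice of the depth
# range; in a 24-bit depth buffer the far pair may round to the same
# value, which is what produces Z-fighting.
```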
This is a real problem in games that require a very large drawing distance, while still needing to render objects close to the camera. There are a few ways to deal with this.
One way people usually try to resolve this is by increasing the precision of the depth buffer. Instead of the default 24 bits of depth precision, we can request a floating-point depth buffer, which has 32 bits of depth precision. However, since 32-bit floating-point numbers still have a 24-bit mantissa, this does not actually improve the precision by that much. Furthermore, due to the exponential nature of floating-point numbers, most precision is actually concentrated near 0.0, whereas we actually need precision in the distance.
As it turns out, there is a really easy way to solve this: just invert the depth range! By setting the near distance to infinity, and the far distance to our desired near distance, we get an inverted depth range whereby a value of 1.0 is close to the camera and 0.0 is infinitely far away. This turns out to radically improve the precision of the depth buffer, as further explained by this NVIDIA article, since the exponential precision curve of the floating-point numbers now complements the inverse precision curve of the depth buffer. We also need to flip the depth comparison function so that objects behind other objects won’t be drawn in front of them.
There is one snag, though. While the technique above works quite well in DirectX and Vulkan, where the depth is defined to range from 0.0 to 1.0, OpenGL actually uses a depth range of -1.0 to 1.0. Since floating-point numbers are most precise near 0.0, this puts all our precision uselessly in the middle of the depth range.
This is not very helpful, since we want to improve depth precision in the distance. Fortunately, the OpenGL authors have remedied this in OpenGL 4.5 (and with the GL_ARB_clip_control extension for earlier versions), where it is possible to configure OpenGL to use a depth range of 0.0 to 1.0. This is accomplished by setting the gl-depth-zero-to-one configuration variable to true. There are plans to make this the default Panda3D convention in order to improve the precision of projection matrix calculations inside Panda3D as well.
All the functionality needed to accomplish this is now available in the development builds. If you wish to play with this technique, check out this forum thread to see what you need to do.
Double precision vertices in shaders
For those who need the greatest level of numerical precision in their simulations, it has been possible to compile Panda3D with double-precision support. This makes Panda3D perform all transformation calculations with 64-bit precision instead of the default 32-bit precision, at a slight performance cost. However, by default, all the vertex information of the models is still uploaded as 32-bit single-precision numbers, since only recent video cards natively support operations on 64-bit numbers. By setting the vertices-float64 variable, the vertex information is uploaded to the GPU in double precision.
This worked well for the fixed-function pipeline, but was not supported when using shaders, or when using an OpenGL 3.2+ core-only profile. This has now been remedied: it is possible to use double-precision vertex inputs in your shaders, and Panda3D will happily support this in the default shaders when vertices-float64 is set.
The system we use to provide Python bindings for Panda3D’s C++ codebase now has limited support for exposing C++11 enum classes to Python 2 as well by emulating support for Python 3 enums. This enables Panda3D developers (and any other users of Interrogate) to use C++11 enum classes in order to better wrap enumerations in the Panda3D API.
We have continued to improve the thread safety of the engine in order to make it easier to use the multi-threaded rendering pipeline. Mutex locks have been added to the X11 window code, which enables certain window calls to be safely made from the App thread. Furthermore, a bug was fixed that caused a crash when taking a screenshot from a thread other than the draw thread.
Sunday, June 24th, 2018 by rdb
With the work on the new input system and the new deployment system coming to a close, it is high time we shift gears and focus our efforts on bundling all this into a shiny new release. So with an eye toward a final release of Panda3D 1.10, most of the work in May has centered around improving the engine’s stability and cleaning up the codebase.
As such, many bugs and regressions have been fixed, too numerous to name here. I’m particularly proud to declare the multithreaded render pipeline significantly more stable than it was in 1.9. We have also begun to make better use of compiler warnings and code-checking tools, which has led us to find bugs in the code that we did not even know existed!
We announced two months ago that we were switching the minimum version of the Microsoft Visual C++ compiler from 2010 to 2015. No objections to this have come in, so this move has been fully implemented in the past month. This has cleared the way for us to make use of C++11 to the fullest extent, allowing us to write more robust code and spend less of our time writing compiler-specific code or maintaining our own threading library, which ultimately results in a better engine for you.
Behind the scenes, many design discussions have been taking place regarding our plans for the Panda3D release that will follow 1.10. In particular, I’d like to highlight a proposed new abstraction for describing multi-pass rendering that has begun to take shape.
Multi-pass rendering is a technique to render a scene in multiple ways before compositing it back together into a single rendered image. The current way to do this in Panda3D hinges on the idea of a “graphics buffer” being similar to a regular on-screen window, except of course that it does not appear on screen. At the time this feature was added, this matched the abstractions of the underlying graphics APIs quite well. However, it is overly cumbersome to set up for some of the most common use cases, such as adding a simple post-processing effect to the final rendered image. More recent additions like FilterManager and the RenderPipeline’s RenderTarget system have made this much easier, but these are high-level abstractions that simply wrap around the same underlying low-level C++ API, which still does not have an ideal level of control over the rendering pipeline.
That last point is particularly relevant in our efforts to provide the most optimal level of support for Oculus Rift and for the Vulkan rendering API. For reasons that go beyond the scope of this post, implementing these in the most optimal fashion will require Panda3D to have more complete knowledge of how all the graphics buffers in the application fit together to produce the final render result, which the current API makes difficult.
To remedy this, the proposed approach is to let the application simply describe all the rendering passes up-front in a high-level manner. You would graph out how the scene is rendered by connecting the inputs and outputs of all the filters and shaders that should affect it, similar to Blender’s compositing nodes. You would no longer need to worry about setting up all the low-level buffers, attachments, cameras and display regions. This would all be handled under the hood, enabling Panda3D to optimize the setup to make better use of hardware resources. We could even create a file format to allow storing such a “render blueprint” in a file, so something like loading and enabling a deferred rendering pipeline would be possible in only a few lines of code!
This is still in the early design stages, so we will talk about these things in more detail as we continue to iron out the design. If you have ideas of your own to contribute, please feel free to share them with us!
In the meantime, we will continue to work towards a final release of 1.10. And this is the time when you can shine! If you wish to help, you are encouraged to check out a development build of Panda3D from the download page (or installed via our custom pip index) and try it with your projects. If you encounter an issue, please go to the issue tracker on GitHub and let us know!
Tuesday, February 13th, 2018 by fireclaw
The new year has brought with it new developments to the Panda3D engine, some of which we would like to present to you today. This is however by no means a comprehensive listing of the improvements we’re working on. Stay tuned for more posts, as we’ve got some exciting plans for 2018 ahead!
RenderPipeline light system
The light manager of Tobias Springer’s excellent RenderPipeline project has made its way into the Panda3D codebase. This is a light system designed to be used in conjunction with the GPU light culling and deferred rendering methods provided by the RenderPipeline, and is implemented in C++ for optimal performance. Now that it is included with Panda3D, it is even easier to use the RenderPipeline in your projects as it is no longer necessary to compile any C++ modules to do so—you now simply put the Python module into your project and follow the usual steps from there on out.
This feature is mainly useful in a RenderPipeline setup, but we are continuing to work on bringing the built-in lighting system more closely in line with the RenderPipeline lights. Examples of this are additional light types, such as sphere and rectangle area lights, and the ability to set a light’s color temperature.
Input device support
The input-overhaul branch has received plenty of changes again and is day by day getting closer to a state where it can be merged into the master branch. The latest improvements include more devices being supported as well as overall improved handling and mapping of devices’ buttons and axes, such as for joysticks. For Windows users, there is also a new input manager available based on the Windows raw input system, which is used for devices that are not supported under the existing XInput implementation. Panda3D automatically chooses the right implementation to use for a device, so this all happens seamlessly to the developer.
Support for 3D mice has also been added. This is a class of devices that allow movement in six degrees of freedom (thrice as many as a regular mouse), and is particularly used by 3D artists for intuitively navigating a camera around a model or through a scene. This may be of particular use for the various CAD programs that are built around Panda3D.
Android support is actively being worked on and great strides have already been made in this area. Stay tuned for the next post, in which we will have some exciting announcements to make on this front!
Monday, January 22nd, 2018 by fireclaw
Welcome, everyone, to our heavily delayed December post. Even though we’re quite late with this one due to lack of time, we wish you all a happy new year! Much has happened during the winter holidays, so read on to see what’s new.
What happened in the last month
The work on the input overhaul branch is almost complete; it needs some more polish to finalize the API before we merge it into the development trunk. In addition, we have started to add a mapping table for known devices so that they work as expected. For other devices, the mapping is provided by the device driver, with the help of some heuristics to detect the device type. The input overhaul is still in heavy development and API changes will occur, but for those who are interested in testing it, sample applications are available, and some manual entries with more or less accurate instructions have been created and will be finalized as soon as the API is stable.
Some long-standing bugs with the multithreaded pipeline were finally resolved. These issues caused deadlocks that occurred whenever geometry data was being manipulated on another thread, or when shadows were enabled using the automatic shader generator; as such, they were a significant barrier that prevented the multithreaded pipeline from being useful for some applications. However, more testing is needed before we can be completely confident that all the threading issues are resolved.
On macOS, it is now possible to get an offscreen rendering buffer without opening a window. This lets you render to a buffer on a headless Mac server, which can be useful for plenty of things. Aside from scientific simulations where no immediate output is necessary or even desirable, another example is sending a frame rendered by Panda3D over a socket or network to display it elsewhere. This technique is used in the BlenderPanda project to render a Panda3D frame into a Blender viewport and thereby get a live display of how a model will look when used with the engine.
Looking into the crystal ball
In the coming months some of the newly developed features (input-overhaul, deploy-ng) will be polished off and merged into the master branch of panda3d. More work is also planned on the introduction of a new physically-based material model as well as support for the glTF 2.0 format. Stay tuned for more updates!
Monday, November 20th, 2017 by fireclaw
Much has happened in Panda3D development for the upcoming 1.10 version. To bring you up-to-date with the latest developments, we will summarize some of the new changes here. Also, to further keep you informed about new and upcoming features, we’ll start a regular blog post series highlighting new developments.
Aside from a lot of optimization changes to improve various parts of Panda’s performance, as well as numerous bugfixes to improve stability and compatibility, there were some larger changes as well.
The first thing we’d like to highlight is the ability for Python users to install Panda3D via the pip package manager. No more fiddling with platform dependent installers—it takes only a single command to install the right version of Panda3D for your platform and Python version:
pip install panda3d
As a bonus feature, this allows you to install Panda into a virtualenv environment, allowing you to try out the latest development version in isolation without fear of contaminating your existing setup.
Furthermore, Panda3D has been updated to be compatible with the latest Python 3 versions. This includes interoperability with the pathlib module and the Python 3.6 path protocol, as well as fixes for the upcoming Python 3.7.
The Shader Generator
If you are using the shader generator in your application, you may significantly benefit from upgrading to 1.10. It has been overhauled to address a major performance concern for applications with complex scenes containing a large amount of render states, which could cause lag due to an excessive amount of shaders being generated.
Some new features have been added as well, such as support for hardware skinning and multiple normal maps.
Text rendering updates
The text rendering subsystem has been improved significantly. Panda’s text assembler used to perform well mainly for smaller texts, whereas frequently updating large blocks of text could cause considerable lag. The improved text assembler code, however, is up to 75 times as fast, making assembling large swaths of text a non-issue.
A comparison with HarfBuzz disabled and enabled. Of note are the spacing between the A and V and the “fi” ligature. The Arabic text doesn’t render correctly at all without HarfBuzz.
Furthermore, the HarfBuzz library can now be utilized to implement text shaping, which not only enables support for ligatures and correct kerning but also allows us to better support languages with more complex shaping requirements, such as Arabic. This includes support for right-to-left text rendering, with automatic language detection enabled by default. Although bidirectional text is not yet fully supported, you can explicitly switch or re-detect direction for specific text segments using embedded TextProperties.
If Panda3D has been compiled with HarfBuzz support, it can be enabled using the text-use-harfbuzz variable. Otherwise, more basic kerning support can be enabled using text-kerning true, although many fonts will only kern correctly with HarfBuzz enabled.
Panda3D now directly supports the Opus audio codec, a high-quality open standard designed to efficiently encode both speech and other audio. This is implemented via the opusfile library, so that it doesn’t require pulling in the heavier and more restrictively licensed FFmpeg libraries.
The FFmpeg plug-in now also supports loading video files with an embedded alpha channel, such as is possible with WebM files encoded with the VP8 codec. However, FFmpeg offers both a preferred native VP8 implementation and a decoder based on libvpx, and defaults to the native one. If you wish to play VP8 videos with an alpha channel, you should therefore set the ffmpeg-prefer-libvpx configuration variable to true, forcing FFmpeg to use the libvpx implementation.
We’d also like to highlight ongoing work outside the main Panda3D development branch. These things have been developed for Panda3D and will be merged into the main branch when they have reached maturity. But until then, they can be checked out from their respective branches on GitHub.
First off, significant progress has been made on a new deployment system thanks to invaluable contributions by the community. The project is tentatively named “deploy-ng” and intends to make it easier and more reliable to package and distribute your finished application; as such, it stands to replace the current deployment system entirely.
This new deployment system builds upon the existing Python setuptools, adding an extra plug-in to easily package your Panda3D applications. It is already quite usable, but still needs some love and testing before it is production-ready.
A significant amount of work has been done on the effort to support two new graphics back-ends. The first of these is the WebGL back-end, happening on the webgl-port branch. This allows us to run Panda3D applications in the browser without requiring the use of a browser plug-in. The bulk of the work on the renderer itself has already been done, but there remains work to be done to make it easier to package up a Panda application for the web. Check out the proof-of-concept demos or the online editor demo if you’re curious about the possibilities.
On the vulkan branch, a prototype renderer for the new Vulkan graphics API has materialized as well. Like OpenGL, Vulkan is a cross-platform open graphics standard developed by Khronos. Unlike OpenGL, however, Vulkan offers a more low-level interface to the underlying graphics hardware, enabling a reduction in driver overhead for CPU-bound applications. Before you get too excited though, it’s not yet capable of running much more than a few of the sample programs. There is a lot more work to be done before it will reach feature-parity with or performance benefits over the OpenGL renderer, and it is unlikely to be a priority for the next release.
Behind the curtains there also is work going on to support glTF 2.0. This is a new JSON-based model transmission format recently standardized by the Khronos Group, the consortium that is also responsible for OpenGL, and plug-ins are already available to export it from various content creation tools. Importantly, glTF 2.0 defines a modern standard for physically-based materials, and as such is considered a milestone in the development of a physically-based rendering pipeline in Panda3D.
Gamepad support is something that many in the community have been asking about for a long time. The input framework is receiving a significant overhaul to allow us to support game controllers, while also laying the groundwork for exposing commercial virtual reality hardware using a straightforward API. This work is happening on the input-overhaul branch and will be merged into the master branch soon.
That’s all for now, but keep an eye open for upcoming blog posts with all new and interesting updates in the coming months. In the meantime we encourage you to try the latest version for yourself and let us know how it works for you.