CommonFilters - some new filters, and the future

Thanks for the comments! Some responses below.

Maybe I should explain what I was trying to achieve. :slight_smile:

The idea was that it should be easy to learn to use the CommonFilters system by reading the API documentation. At least I have learned a lot about Panda by searching the API docs.

If all the modules are placed in the same directory as CommonFilters itself, there will be lots of modules in the same place, and finding the interesting one becomes difficult.

I agree that flexibility is desirable.

Adding it where? In their local copy of the Panda source tree?

Hmm, this would make it easier to contribute new filters to Panda, which is nice.

Yes, that is part of the solution. But there are two separate issues here:

First is where to store the sort values. If I understood correctly, we seem to agree that this information belongs in the Filter subclasses.

Secondly, there are some filter combinations that cannot be applied in a single pass. BlurSharpen combined with anything else is one such combination: the blur will not see the processing applied by the other filters during the same pass.

Yes, something like that.

Thanks for asking (I’m sometimes very informal about terminology). By “stage of pipeline”, I meant a render pass.

But that doesn’t capture the idea strictly, either. From the viewpoint of the pipeline, the important thing to look at is the set of input textures needed by each filter.

Filters that share the same input textures (down to what should be in the pixels), and respect previous modifications to o_color in their fshader code, can work in the same pass. I think it’s a potentially important performance optimization to let them do so, so that enabling lots of filters does not necessarily imply lots of render passes.

Some filters may have internal render passes (such as blur), but to the pipeline this is irrelevant. Blur works, in a sense, as a single unit that takes in a colour texture, and outputs a blurred version. The input colour texture is the input to that pass in the pipeline where the blur filter has been set.

If the aim is to blur everything that is on the screen, the blur filter must come at a later render pass in the pipeline, so that it can use the postprocessed image as its input.

My proposal was that the core synthesizes code for a single “pipeline render pass”, so that the pipeline setup can occur in a higher layer (creating several, differently configured instances of the core).
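
To illustrate, here is a minimal, hypothetical sketch of the idea; none of these class or method names exist in Panda3D, they are made up for this post. Each filter contributes a Cg snippet that updates o_color, and the core concatenates the snippets belonging to one pipeline pass into a single fshader. A higher layer would then create one such core instance per pipeline pass.

# Hypothetical sketch only: FilterSnippet and CompositorCore are made-up
# names, not existing Panda3D classes.

class FilterSnippet:
    def __init__(self, sort, body):
        self.sort = sort   # ordering within this pipeline pass
        self.body = body   # Cg code that reads and updates o_color

class CompositorCore:
    """Synthesizes one compositing fshader for one pipeline render pass."""
    def __init__(self):
        self.snippets = []

    def add(self, snippet):
        self.snippets.append(snippet)

    def makeFshader(self):
        lines = ["void fshader(float2 l_texcoord : TEXCOORD0,",
                 "             uniform sampler2D k_txcolor,",
                 "             out float4 o_color : COLOR)",
                 "{",
                 "    o_color = tex2D(k_txcolor, l_texcoord);"]
        # Each snippet must respect the o_color left by earlier snippets.
        for s in sorted(self.snippets, key=lambda s: s.sort):
            lines.append(s.body)
        lines.append("}")
        return "\n".join(lines)

# Example: two filters simple enough to share the same pass.
core = CompositorCore()
core.add(FilterSnippet(sort=1, body="    o_color.rgb = 1.0 - o_color.rgb;"))  # invert
core.add(FilterSnippet(sort=2, body="    o_color.rgb *= 0.5;"))               # darken
print(core.makeFshader())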

Yes, we can change the name to something sensible :slight_smile:

Any internal stages (passes such as blur-x and blur-y) are indeed meant to be handled by each subclass of Filter.

About sharing passes in general, I agree. That is the reason to have a code generator that combines applicable filters to a single pass in the pipeline.

About blur and bloom specifically, I think they belong to different passes, because the effects they reproduce happen at different stages in the image-forming process.

I would like to set up the ordering of the filters as follows (a rough sketch in code follows the list):

  • full-scene antialiasing (if added later)
  • CartoonInk, to simulate a completely drawn cel
  • optical effects in the scene itself (local reflection (if added later), ambient occlusion, volumetric lighting in that order)
  • optical effects in the lens system (bloom, lens flare)
  • film or detector effects (tinting, desaturation, colour inversion)
  • computer-based postprocessing (blur)
  • display device (scanlines)
  • debug helpers (ViewGlow)
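
As a rough illustration only (the numbers are arbitrary, and the actual values would live in each Filter subclass, as discussed above), this ordering could translate into per-filter sort values along these lines:

# Illustrative sort values reflecting the category order above;
# neither the names nor the numbers are part of any existing API.
FILTER_SORT = {
    "MSAA":                0,   # full-scene antialiasing (if added later)
    "CartoonInk":         10,
    "LocalReflection":    20,   # if added later
    "AmbientOcclusion":   21,
    "VolumetricLighting": 22,
    "Bloom":              30,   # lens system
    "LensFlare":          31,
    "Tint":               40,   # film or detector
    "Desaturation":       41,
    "ColorInversion":     42,
    "BlurSharpen":        50,   # computer-based postprocessing
    "Scanlines":          60,   # display device
    "ViewGlow":           70,   # debug helpers
}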

Keep in mind that e.g. chromatic aberration in the lens should occur regardless of whether the result is recorded on colour or monochrome film.

Also note that these categories might not be exhaustive, might not correspond directly to render passes, and in some cases it can be unclear which category a given filter belongs to. For example, I tend to think of blur as a computer-generated postprocessing effect (requiring a complete “photograph” as input), but it could also represent the camera being out of focus, in which case it would come earlier in the pipeline (but definitely after CartoonInk and scene optical effects). I’m not sure what to do about such cases.

(Bloom, likewise, may be considered as a lens effect (the isotropic component of glare), or as a detector effect (CCD saturation). Maybe it is more appropriate to think of it as a lens effect.)

Finally, note that currently, only lens flare supports chromatic aberration. I think I’ll add full-screen chromatic aberration and vignetting to my to-do list, to approach a system that can simulate lens imperfections.

There are two use cases I’m thinking of.

First is daisy-chaining custom filters with CommonFilters. People sometimes use FilterManager to set up custom shaders, but the problem is that if you do that, it is not easy to apply CommonFilters on top of the result (or conversely, to apply your own shaders on top of what is produced by CommonFilters). When you apply either of these, you lose the camera, and can no longer easily set up the other one to continue where the first left off.

For a thought experiment, consider the original lens flare code by ninth (attached in Lens flare postprocess filter), and how you would go about applying CommonFilters to the same scene either before or after the lens flare. If I haven’t missed anything, currently it is not trivial to do this.

The second case is a scene with two render buffers doing different things, which are both postprocessed using CommonFilters, then rendered onto a quad (using a custom shader to combine them), and then the final quad is postprocessed using CommonFilters. There is a code example in my experiment on VolumetricLighting with cartoon-shaded objects: [Sample program] God rays with cartoon-shaded objects which probably explains better what I mean.

The thing is that at least in 1.8.1, setting up the combine step is overly complicated:

from panda3d.core import NodePath, CardMaker, Texture

# Separate 2D scene with its own camera for the combining step.
quadscene = NodePath("filter-quad-scene")
quadcamera = base.makeCamera2d(base.win, sort=7)
quadcamera.reparentTo(quadscene)
# Fullscreen quad that displays finaltex.
cm = CardMaker("filter-quad-card")
cm.setFrameFullscreenQuad()
self.quadNodePath = NodePath(cm.generate())
finaltex = Texture()
self.quadNodePath.setTexture(finaltex)
self.quadNodePath.reparentTo(quadcamera)

…when compared to the case where the original scene render does not need any postprocessing:

from direct.filter import FilterManager
from panda3d.core import Texture

# With a camera available, one call sets up the render-into-quad.
manager = FilterManager.FilterManager(base.win, base.cam)
scenetex = Texture()
self.quadNodePath = manager.renderSceneInto(colortex=scenetex)

If you have a camera, it is just one line to call FilterManager to set up the render-into-quad, but if you don’t (because CommonFilters took it), you need to do more API acrobatics to create one and set up the render-into-quad manually.

EDIT: Also, FilterManager (or CommonFilters, when it calls FilterManager internally) then goes on to make the manually created quad and camera obsolete, creating another quad and another camera. It would be nice to avoid this unnecessary duplication. I don’t know whether it affects performance, but at least avoiding it would make for a cleaner design.

Then, in both cases, we set up the combining shader:

from panda3d.core import Shader

# SHADER_ADDITIVE_BLEND holds the combining shader's source
# (defined elsewhere in the sample).
self.quadNodePath.setShader(Shader.make(SHADER_ADDITIVE_BLEND))
self.quadNodePath.setShaderInput("txcolor", scenetex)
self.quadNodePath.setShaderInput("txvl", vltex)
self.quadNodePath.setShaderInput("strength", 1.0)

and finally postprocess:

self.finalfilters = CommonFilters(base.win, quadcamera)
self.finalfilters.setBlurSharpen()  # or whatever

though here, now that I think of it, I’m not sure how to get the quad camera in the case where FilterManager internally creates it.

In summary, what I’m trying to say is that I think these kinds of use cases need to be more convenient to set up :slight_smile:

Maybe.

The difficulty with that approach is that the user needs to understand the internals of CommonFilters in order to set up the pipeline pass number and the sort-within-pass priority correctly, so that CommonFilters inserts the shader at the desired step in the process. In particular, the user must know which pipeline pass the shader can be inserted into (so that it won’t erase postprocessing by other filters; consider the blur case).

In addition, the user-defined shader must then observe the limitation that, within the same pipeline pass, each fshader snippet must respect any previous changes to o_color. I think it is error-prone to require that of arbitrary user code, and in particular it makes it harder to just experiment with shaders copied from the internet.
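
For example (illustrative only; k_tint is a made-up shader input), a standalone fshader body copied from somewhere typically starts over from the input texture, which silently discards whatever the earlier snippets in the same pass have computed, whereas a pass-sharing snippet must build on the existing o_color:

# Illustrative only; k_tint is a made-up shader input.
# A typical standalone shader starts over from the input texture...
standalone_body  = "    o_color = tex2D(k_txcolor, l_texcoord) * k_tint;"
# ...whereas a pass-sharing snippet must build on the previous o_color:
cooperative_body = "    o_color = o_color * k_tint;"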

Also, the user then needs to conform to the Filter API. If the user wants to contribute to CommonFilters, that is the way to go. But for quick experiments and custom in-house shaders, I think FilterManager and daisy-chaining would be much easier to use, as then any valid shader can be used and there are no special conventions or APIs to follow.
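
For comparison, a quick custom-shader experiment with FilterManager currently looks roughly like this (the shader filename is just a placeholder):

from direct.filter import FilterManager
from panda3d.core import Texture, Shader

manager = FilterManager.FilterManager(base.win, base.cam)
scenetex = Texture()
quad = manager.renderSceneInto(colortex=scenetex)
# Any valid shader works; no snippet conventions or Filter API to follow.
quad.setShader(Shader.load("my_experiment.sha"))
quad.setShaderInput("txcolor", scenetex)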

Maybe :wink:

As mentioned above, I was speaking of a render pass (but with the caveats mentioned).

For the code belonging to the different filters in the compositing shader, I used the term “snippet”, as I didn’t have anything better in mind :slight_smile:

Good point.

That is another way to do it. May be cleaner.

Does this bring overhead? Or does the compiler inline them?

Also - while I’m not planning to go that route now - Cg is no longer being maintained, so is it ok to continue using it, or should we switch completely to GLSL at some point?

That’s one way of applying the default.

But how likely is the default to be wrong, i.e. do we need to take this case into account?

EDIT: Aaaa! Now I think I understand. If the default is wrong, then override this default filter stage somehow? E.g. sort=-1 would mean the output colour initialization stage; if the user provides a stage with that sort value, it is used, and otherwise the default one is used.
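
Continuing the made-up FilterSnippet sketch from earlier in this post, that could look something like this:

# Hypothetical, building on the FilterSnippet sketch above.
# sort=-1 denotes the o_color initialization stage.
DEFAULT_INIT = FilterSnippet(
    sort=-1, body="    o_color = tex2D(k_txcolor, l_texcoord);")

def collectSnippets(userSnippets):
    snippets = list(userSnippets)
    # Use a user-provided initialization stage if there is one,
    # otherwise fall back to the default.
    if not any(s.sort == -1 for s in snippets):
        snippets.append(DEFAULT_INIT)
    return sorted(snippets, key=lambda s: s.sort)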

Ok.

Ok. I’ll put together an example.