CommonFilters - some new filters, and the future

Hi all,

I’ve been extending my copy of CommonFilters and will be submitting it for consideration for 1.9.0 soon.

Current new functionality

  • Lens flare based on the algorithm by John Chapman. The credit goes to ninth (Lens flare postprocess filter); I’ve just cleaned up the code a bit and made it work together with CommonFilters.
  • Desaturation (monochrome) filter with perceptual weighting and configurable strength, tinting and hue bandpass. Original.
  • Scanlines (CRT TV) filter, also original. Both static and dynamic modes, with configurable strength, choice of which field to process, and scanline thickness (in pixels).

Screenshots below in separate posts.

I’ve also rearranged the sequence in the main filter as follows: CartoonInk > AmbientOcclusion > VolumetricLighting > Bloom > LensFlare > Desaturation > Inverted > BlurSharpen > Scanlines > ViewGlow.

The idea is to imitate the physical process that produces the picture. First we have effects that correspond to optical processes in the scene itself (AO, VL), then effects corresponding to lens imperfections (bloom, LF), then effects corresponding to the behaviour of the film or detector (desat, inv), and finally computer-generated effects that need the complete “photograph” as input (blur, scanlines). The debug filter ViewGlow goes last.

Note that with this ordering, e.g. Desaturation takes into account the chromatic aberration in LensFlare; the wavelength-based aberration occurs in the lens regardless of whether the result is recorded on colour or monochrome film.

In case the scene being rendered is a cartoon, CartoonInk goes first to imitate a completely drawn cel (including the ink outlines) that is then used as input for computer-generated effects. This imitates the production process of anime.

The code changes are unfortunately not completely orthogonal to those made in the cartoon shader improvements patch, so I fear that needs to be processed first. I can, however, provide the current code for review purposes.

The future of CommonFilters?

I think there are still several filters that are “common” enough - in the sense of generally applicable - that it would be interesting to include them in CommonFilters (probably after 1.9.0 is released). Specifically, I’m thinking of:

  • Screen-space local reflections (SSLR). A Panda implementation has already been made by ninth (Screen Space Local Reflections v2), so pending his permission, I could add this one next.
  • Fast approximate antialiasing (FXAA). In my opinion, fast fullscreen antialiasing to remove jagged edges would be a killer feature to have.

As for FXAA, there have already been some attempts to create this shader for Panda. At least two versions are floating around on the forums (one written in Cg, the other in GLSL), but I’m not sure how to obtain the necessary permissions. From a quick look at the code, the existing implementations seem heavily based on, if not selectively copied and pasted from, NVIDIA’s original by Timothy Lottes, and the original header file says “all rights reserved”. On the other hand, it’s publicly available in NVIDIA’s SDK, as is the whitepaper documenting the algorithm. There is another version of the code at Geeks3D (based on the initial version of the algorithm), but it looks very similar, and I could not find any information on its license. I think I need someone more experienced to help get the legal matters right - once that’s done, I can do the necessary coding :slight_smile:

Another important issue is the architecture of CommonFilters. As the comment at the start of the file says (already in 1.8.1), it’s monolithic and clunky. I’ve been playing around with the idea that instead of one Monolithic Shader of Doom ™, we could have several “stages” forming a postprocessing pipeline. Multipassing is in any case required to apply blur correctly (i.e. so that it sees the output of the other postprocessing filters) - this is something that’s bugging me, so I’d like to fix it one way or another.

On the other hand, a pipeline is bureaucratic to set up, and reduces performance for those filters that could be applied in a single pass using a monolithic shader. It could be designed to create one monolithic shader per stage, based on some kind of priority system when the shaders are registered to the pipeline, but that’s starting to sound pretty complicated - I’m not sure whether it would solve problems or just create more.

One should of course keep in mind the performance aspect - most of the effects in CommonFilters can be applied in a single pass using a monolithic shader, so maybe that part of the design should be kept after all.

For example, even though SSAO uses intermediate stages, the result only modulates the output, so e.g. cartoon outlines will remain. Volumetric lighting is additive, so it preserves whatever has been added by the other filters. (The normal use case is to render it in a separate occlusion pass anyway, but its placement in the filter sequence may matter if someone uses it as a kind of additive radial blur, as in Problem with volumetric lighting.)

Bloom doesn’t see cartoon outlines, but that doesn’t matter much - the bloom map is half resolution anyway, and the look of the bloom effect is such that it doesn’t produce visible artifacts even if it gets blended onto the ink. The same applies to the lens flare.

The most obvious offender is blur, since at higher blend strengths (which are useful e.g. when a game is paused and a menu opened) it will erase the processing from the other postprocessing filters.

If the monolithic design is kept, I think blur should be special-cased so that when blur is enabled, the rest of the effects render into an interQuad, and then only blur is applied to the finalQuad, using the interQuad as input. Otherwise (when blur is disabled) all effects render onto the finalQuad, as in the current version.
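To make that concrete, here is a rough sketch of the special case, assuming FilterManager’s renderQuadInto() is used for the intermediate quad (the helper function, its arguments and the shader objects are made up for illustration, not existing code):

from panda3d.core import Texture

def setupCompositing(manager, finalquad, scenetex,
                     allFiltersShader, otherFiltersShader, blurShader,
                     blurEnabled):
    # Hypothetical helper; the shaders and the final quad are assumed to exist already.
    if blurEnabled:
        # Composite everything except blur into an intermediate texture...
        intertex = Texture()
        interquad = manager.renderQuadInto(colortex=intertex)
        interquad.setShader(otherFiltersShader)
        interquad.setShaderInput("txcolor", scenetex)
        # ...then the final quad applies only blur, reading that result as its input.
        finalquad.setShader(blurShader)
        finalquad.setShaderInput("txcolor", intertex)
    else:
        # Current behaviour: one monolithic shader renders straight onto the final quad.
        finalquad.setShader(allFiltersShader)
        finalquad.setShaderInput("txcolor", scenetex)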

As for the clunkiness, I think it would be useful to define each effect in a separate Python module that provides code generators for the vshader and fshader. This would remove the need to have all the shader code pasted at the top of CommonFilters.py.

The reason to prefer Python modules over .sha files is, of course, the “Radeon curse” - Cg support on AMD cards is pretty much limited to the arbvp1 and arbfp1 profiles, which do not support e.g. variable-length loops. Hence, to have compile-time configurable loop lengths, we need a shader generator that hardcodes the configuration parameter when the shader is created. This solution is already in use in CommonFilters, but the architecture would become much cleaner if it were split into several modules.
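As a minimal illustration of the kind of generator meant here (the filter and parameter names are made up; the point is only that the loop length gets baked into the Cg source at synthesis time):

def synthesizeRadialBlurFshader(numsamples):
    # Unroll the loop in Python, since arbfp1 cannot handle variable-length loops.
    samples = ""
    for i in range(numsamples):
        samples += "  color += tex2D(k_txcolor, l_texcoord.xy + %d.0 * k_step.xy);\n" % i
    return """
void fshader(float2 l_texcoord : TEXCOORD0,
             uniform sampler2D k_txcolor : TEXUNIT0,
             uniform float4 k_step,
             out float4 o_color : COLOR)
{
  float4 color = float4(0.0, 0.0, 0.0, 0.0);
%s  o_color = color / %d.0;
}
""" % (samples, numsamples)

The result would be concatenated with the synthesized vshader and handed to Shader.make(); changing numsamples then simply means regenerating the text and reassigning the shader.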

One further thing that is currently bugging me is that CommonFilters, and FilterManager, only make it possible to capture a window (or buffer) that has a camera. For maximum flexibility considering daisy-chaining of filters, it would be nice if FilterManager could capture a texture buffer. This can be done manually (as I did for the god rays with cartoon shading [Sample program] God rays with cartoon-shaded objects), but this is the sort of thing that really, really needs a convenience function.

The last idea would play especially well with user-side pipelining. Several instances of CommonFilters could be created to process the same scene (later instances using the output from the previous stage as their input), and the user would control which effects to enable at each stage. This would also allow mixing custom filters (added using FilterManager) with CommonFilters in the same pipeline. At least to me this sounds clean and simple.
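None of this exists yet, but as a purely hypothetical sketch of what such user-side pipelining might look like (the texture keyword argument and getOutputTexture() are invented for illustration; setScanlines() is one of the new filters from this thread):

# Stage 1: cartoon-related processing on the actual scene.
stage1 = CommonFilters(base.win, base.cam)
stage1.setCartoonInk()
stage1.setBloom()

# Stage 2 (hypothetical API): capture stage 1's output texture and apply
# the "later" effects on top of the already-postprocessed image.
stage2 = CommonFilters(base.win, texture=stage1.getOutputTexture())
stage2.setBlurSharpen(amount=0.0)
stage2.setScanlines()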

Thoughts anyone?

Screenshot from upcoming lens flare basic tutorial:


With the changes to CommonFilters, the minimal code needed to enable a lens flare is now:

self.filters = CommonFilters(base.win, base.cam)
self.filters.setLensFlare()

Optionally, it is possible to set parameters:

self.filters.setLensFlare(numsamples=self.numsamples,
                          dispersal=self.dispersal,
                          halo_width=self.halo_width,
                          chroma_distort=(self.chroma_distort_r,
                                          self.chroma_distort_g,
                                          self.chroma_distort_b),
                          threshold=self.threshold)

This also includes the parameters that were compile-time constants in ninth’s original code. They can be changed at any time by another call to setLensFlare(). When a compile-time parameter (which is any parameter except threshold) changes, a shader recompile is triggered automatically.

I think ninth’s original code could be included as an “advanced” tutorial.

Desaturation (monochrome) filter.

Original unprocessed scene is shown at the end of my post on god rays with cartoon shading: [Sample program] God rays with cartoon-shaded objects

Luma is computed using the perceptual weightings from ITU-R Rec. 709 (colloquially HDTV).






Desaturation (monochrome) filter continued.

Perhaps the most interesting feature of the desaturation filter is the hue bandpass. An arbitrary reference RGB colour is given to the filter, and it extracts the HSL hue from this colour. Then, when the filter is running, it computes the HSL hue of each pixel, and compares it to the reference hue. If it is “close enough” (the falloff is configurable), the filter weakens the desaturation for that pixel.
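For illustration, the per-pixel logic is roughly the following (a CPU-side Python sketch with a made-up linear falloff; the real work happens in the fragment shader and its falloff shape may differ):

import colorsys

def desaturationWeight(pixelRGB, referenceRGB, bandwidth):
    # Returns 1.0 for full desaturation, approaching 0.0 near the reference hue.
    refHue = colorsys.rgb_to_hls(*referenceRGB)[0]
    pixHue = colorsys.rgb_to_hls(*pixelRGB)[0]
    # Hue distance on the colour wheel; hues are in [0, 1).
    d = abs(pixHue - refHue)
    d = min(d, 1.0 - d)
    # Linear falloff inside the band; outside it, desaturate fully.
    return min(d / bandwidth, 1.0)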

The result is that it is possible to keep, for example, only the reds, while desaturating the rest:


In this filter all other parameters are genuine runtime parameters (enabling realtime fades), but enabling or disabling the hue bandpass invokes a shader recompile. (The reference colour can be changed without invoking a recompile.)

Finally, the desaturation filter with tinting combines with the scanlines filter to produce an old computer monitor look:


The scanline thickness is configurable (in pixels, integer), as is the choice of whether to keep the top or bottom field. The field that is processed (not kept) is darkened by a configurable factor, which generally looks much better than making it completely black.
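In pseudo-Python, the per-pixel decision amounts to something like this (parameter names invented; the shader works on texture coordinates rather than integer rows, but the idea is the same):

def scanlineFactor(pixelY, thickness, keepTopField, darken):
    # Group rows into bands of "thickness" pixels; alternating bands form the two fields.
    inTopField = (pixelY // thickness) % 2 == 0
    kept = inTopField if keepTopField else not inTopField
    return 1.0 if kept else darken   # multiply the pixel colour by this factor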

That’s all folks!

Nice work!

Perhaps it would be a good idea to modularise CommonFilters a bit more at this point, like creating a Filter class that filters can inherit from and giving at least the more complicated filters their own file. That might make it easier for people to add their own filters as well.

The code is now posted in the bug tracker:

bugs.launchpad.net/panda3d/+bug/1374393

Thanks!

Yes. I agree.

I’ll have to think about the interface and get back to this.

Is there still time before 1.9.0? The actual changes won’t be difficult, but designing a future-proof interface may take a few days.

That’s fine. Panda 1.9 won’t be released within the next week.

Ok.

What to do about the code review of the cartoon shader patch? If I modularize CommonFilters at this point, by far the easiest and most error-free approach would be to include all changes into one monolithic patch - but that violates the general good-programming-practice guideline of “one commit, one feature”.

On the other hand, separating the modularization, cartoon shading improvements and these new filters into three separate patches would offer no real practical benefit. These changes are not orthogonal anyway, so separating them wouldn’t help even in the unlikely case that someone wants to backport only a subset of the new functionality into some old version.

What’s your opinion?

Separate patches is the way to go, that makes it far easier to review the individual changes. It’s OK if one patch depends on the other.

Ok.

In the meantime, could you check this one first?

bugs.launchpad.net/panda3d/+bug/1214782

It’s a very small patch that only fixes the bug that prevented double-threshold light ramps from working in 1.8.1. It would be ideal to base the new features on a source tree that has this fixed.

After the bugfix, my plan of action:

  • Separate the getTexCoordSemantic() change from the rest
  • Modularize CommonFilters (refactor only; no functional changes at this point)
  • Add new cartoon shader (includes changes to both shader generator and CommonFilters)
  • Add these new filters to CommonFilters (maybe one by one?)

Thanks for checking the bug and checking in the patch.

The getTexCoordSemantic() change is now separated and posted in the bug tracker:

bugs.launchpad.net/panda3d/+bug/1374594

Next up, the actual refactoring step once I figure out the design for the filter interface.

Great! Feel free to bring up a proposal and I’ll give you my comments on it.

I just checked in a solution for the texcoord clutter. :slight_smile:

Here’s a proposal. There are some details I haven’t worked out yet, but it’s about 90% complete, and I think comments would be useful at this point.

Filters inherit from a Filter class, which defines the API that CommonFilters talks to. Each subclass describes a particular type of filter (CartoonInkFilter, BlurSharpenFilter, …). These can be implemented in their own Python modules. Very short and simple filters, for which a separate module would be overkill, can be collected into one MiscFilters module.

The new modules will be placed in a subdirectory to avoid confusing the user, as CommonFilters and FilterManager will remain the only classes meant for public use. For example, the filter modules could reside in direct.filter.internal, direct.filter.impl or some such.

A pipeline will be added, in order to correctly apply filters that need the postprocessed (or post-postprocessed etc.) scene as input*. Currently BlurSharpen is the only one that needs this, but that will likely change in the future.

(* Strictly speaking, the critical property is whether the filter needs lookups in its input colour texture at locations other than the pixel being processed.)

To implement the pipeline, the control logic of CommonFilters itself will be split into two modules. The first is CommonFiltersCore, which performs the task of the low-level logic in the current CommonFilters, providing shader synthesis for a single stage of the pipeline. The synthesis will be implemented in a modular manner, querying the configured Filters. (Details further below.)

The second module is a backward-compatible CommonFilters, which provides the user API (the high-level part of the current CommonFilters) and takes care of creating the necessary stages and assigning the configured filters to them. That is, the user configures all filters in a monolithic manner (the same way as in the current version), and CommonFilters then creates the necessary CommonFiltersCore instances and distributes the relevant parts of the configuration to each of them. This keeps the core logic simple, as it does not need to know about the pipeline.

To support multiple stages, CommonFilters and FilterManager will be extended to capture buffers, in addition to the current mode of operation, where they capture a window with a camera.

Adding the buffer capture feature has a desirable side effect: the user will be able to pipe together CommonFilters (the high-level object) instances with custom FilterManager filters. For example, the scene may first be processed by some CommonFilters, then by some custom filters, and finally more CommonFilters. This gives an extremely flexible modular design also from the user’s perspective, making CommonFilters and the custom filter mechanism complement each other (instead of being alternatives, as in the current version).

Pipeline architecture (a wiring sketch follows the list):

  • The pipeline consists of stages. Roughly speaking, a stage is an ordered collection of filters that can be applied in one pass.
  • Stages are represented in the high-level CommonFilters class by CommonFiltersCore objects kept in an ordered list.
  • Each stage has an input color texture. Depth and aux textures are always taken from their initial source. (It would be possible to support processing these, too, by allowing the fshader to output multiple textures. Currently it’s not needed.)
  • Each stage has an output color texture.
  • The input to the first stage is the input scene or texture that was provided by the user to CommonFilters.
  • For subsequent stages, the pipeline connects the output of stage k to the input of stage k+1.
  • The output from CommonFilters is the output of the last stage.
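In code, the wiring between stages amounts to roughly this (FilterStage here stands for what is called CommonFiltersCore above; the method names are placeholders, not final API):

def connectPipeline(stages, sceneColortex):
    # Chain the stages: the output of stage k becomes the input of stage k+1.
    currentInput = sceneColortex
    for stage in stages:
        outputTex = stage.makeOutputTexture()   # placeholder
        stage.setInputColortex(currentInput)    # placeholder
        stage.setOutputColortex(outputTex)      # placeholder
        currentInput = outputTex
    # The output of the whole pipeline is the output of the last stage.
    return currentInput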

Each filter derived from the Filter class (a rough base-class skeleton follows the list):

  • must provide a list of names for internal textures, if it needs any. CommonFiltersCore will use this to manage the actual texture objects.
  • must define which textures (color, depth, aux, any internals) it needs as input in the compositing fshader, and for which of those it needs texpix.
  • must declare any custom parameters and their types (the k_variables that can be configured at runtime using setShaderInput()). These are appended by CommonFiltersCore to the parameter list of the compositing fshader.
  • must declare a sort value, which determines the filter’s placement within the pipeline stage it is placed in. This determines the placement of the fshader code snippet within the compositing fshader. (Note that filters that do not require internal intermediate stages, or texture lookups other than the pixel being processed, can be implemented using only an fshader code snippet.)
  • must provide a function that, given a FilterConfig for this particular type of filter, synthesizes its code snippet for the compositing fshader. This function compiles in the given values of compile-time parameters. The fragment shader code must respect any previous modifications to o_color to allow the filter to work together with others in the same compositing fshader.
  • must provide a function that compares oldconfig and newconfig, and returns whether a shader recompile is needed. (Only each filter type itself knows which of the parameters are runtime and which are compile-time.)
  • must provide a function to apply values from newconfig to runtime parameters. This is called at the end of reconfigure() in CommonFiltersCore.
  • may optionally implement a function to set up or reconfigure any internal processing stages. This includes synthesizing shaders for the internal intermediate textures. (The default implementation is blank.)
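To make the interface concrete, a rough skeleton of the base class might look like this (all names are tentative):

class Filter(object):
    sort = 0          # placement within the compositing fshader of its stage
    stage = "main"    # which pipeline stage this filter belongs to (see below)

    def getInternalTextureNames(self):
        # Names of internal textures this filter needs, if any.
        return []

    def getNeededTextures(self):
        # Which textures (color, depth, aux, internals) the compositing
        # fshader needs, and for which of them texpix is needed.
        return {"color": True}

    def getCustomParameters(self):
        # Declarations of the runtime-settable k_ shader inputs.
        return []

    def synthesizeFshaderSnippet(self, config):
        # Return the fshader code snippet with compile-time parameters baked in.
        # Must respect any previous modifications to o_color.
        raise NotImplementedError

    def needsCompile(self, oldconfig, newconfig):
        # Return True if the change requires regenerating the shader.
        return oldconfig != newconfig

    def applyRuntimeParameters(self, newconfig, quad):
        # Push runtime parameter values as shader inputs onto the stage's quad.
        pass

    def setupInternalStages(self, manager):
        # Optional: create internal render-to-texture passes (e.g. blur-x/y).
        pass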

One detail I haven’t decided on is the initial value of o_color before the first filter in a stage is applied. There is a design choice here: either always make the compositing fshader initialize o_color = tex2D( k_txcolor, l_texcoord.xy ), or alternatively require exactly one of the filters registered to a stage (in the high-level CommonFilters) to declare itself “first”, in which case that filter must write the initial value of o_color in its fshader snippet. The latter approach is more general (allowing a better choice when the default is wrong), but the former is simpler and often sufficient.

Another open question is where to declare which filter belongs to which stage in the high-level logic. The simplest possibility is to identify stages by names, a valid list of which would be provided in the high-level CommonFilters. This would allow the information to reside in the Filter subclasses themselves. CommonFilters itself would only need to be updated if a new stage is required. The stage information would be spread out across the individual Filters, which can be considered as both an advantage (everything related to a particular type of filter in the same place) and as a drawback (hard to get the big picture about filter ordering, because that requires checking each Filter subclass module and keeping notes).

The other possibility I have thought of so far is to hard-code the stage for each known subclass of Filter in the high-level CommonFilters. This would keep that information in one place (making it easy to understand the overall ordering), but this solution requires updating CommonFilters whenever a new type of filter is added. Also arguably, the stage information is something that logically belongs inside each type of filter.

HalfPixelShift is a special case, which does not conform to this filter model. It could be implemented as a half-pixel shift option to CommonFiltersCore. Enabling this would cause CommonFiltersCore to emit the code for HalfPixelShift in the compositing vshader. It would be enabled for the first stage only (in the high-level CommonFilters).

So, that’s the current plan. Comments welcome.

Regarding the buffer capture, so far I’ve gotten CommonFilters to initialize from a buffer by manually creating a buffer and setting up a quad and a camera for it, but I’m not sure whether this is the right way to do it. Probably not - it seems FilterManager already internally creates a quad and a camera. To prevent duplication, it needs to be able to read a render-into-texture input buffer (with color, aux, depth textures) directly.

I’d appreciate any pointers as to the correct approach to do this :slight_smile:

These are some good ideas. I have some comments.

This makes sense.

I don’t see why. That seems unnecessarily restrictive. People should be able to create their own instances of the individual Filter classes, inherit from them, etc. Part of the flexibility that this overhaul would offer is that it would allow people to customise CommonFilters with their own filters by creating their own Filter class and adding it.

I imagine that CommonFilters can store a list of filters with a sort value each that would determine in which order they get applied in the final compositing stage…

I imagine methods like setVolumetricLighting() to become simple stubs that call something like addFilter(VolumetricLightingFilter(*args, **kwargs)).
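For example, something along these lines (addFilter() being the hypothetical new method):

def setVolumetricLighting(self, *args, **kwargs):
    # Backward-compatible stub forwarding to the new pipeline API.
    return self.addFilter(VolumetricLightingFilter(*args, **kwargs))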

Makes sense. By a “single stage”, do you mean a part of the final compositing filter, or an individual render pass (like the blur-x and blur-y passes of the blur filter)? These are two separate concepts, though I suppose they don’t strictly need to be.

Ah, so CommonFiltersCore represents a single filter pass? (Can we call it something like FilterPass or FilterStage then?) Can we let the Filter class set up and manage its stages rather than CommonFilters? It does sound like something that should be managed by each filter, although I suppose some filters might share a filter pass (i.e. blur and bloom may share an initial blur pass). Hmm.

I don’t understand what you mean by “capturing a buffer”, could you please explain that? You can already use FilterManager with a buffer, if that’s what you meant, but I don’t quite understand the necessity of that.

Could the user achieve the same thing by subclassing Filter and adding this Filter to the same CommonFilters object?

Then I think that FilterStage would be a far more representative term, don’t you think? :wink:

One thing I don’t quite understand - is a stage a render pass by itself, or a stage in the final compositing shader?

Not all stages need an input color texture. SSAO, for instance, does not.

I think FilterConfig is obsoleted by the new Filter design, since each Filter can just take all of its properties in the constructor via keyword arguments and have property setters that invalidate the shader when modified. Depending on the property, each setter of a particular Filter could either update a shader input or mark the shader as needing to be regenerated.

I think that each filter could possibly be a Cg function with the arguments it needs passed to it for better organisation.

You could have a filter stage that’s added by default with a negative sort value with its only purpose being to set o_color, which is always applied first.

I agree that this probably belongs in the individual Filter classes.

I think HalfPixelShift should be a global setting in CommonFilters and not a filter at all.

I think at this point it would help to hack up some pseudo-code that shows how the systems work together, perhaps with an example filter, while skipping over the details. It would give a good overview and help me understand your design better.

Thanks for the comments! Some responses below.

Maybe I should explain what I was trying to achieve. :slight_smile:

The idea was that it should be easy to learn to use the CommonFilters system by reading the API documentation. At least I have learned a lot about Panda by searching the API docs.

If all the modules are placed in the same directory as CommonFilters itself, there will be lots of modules in the same place, and finding the interesting one becomes difficult.

I agree that flexibility is desirable.

Adding it where? In their local copy of the Panda source tree?

Hmm, this would make it easier to contribute new filters to Panda, which is nice.

Yes, that is part of the solution. But there are two separate issues here:

First is where to store the sort values. If I understood correctly, we seem to agree that this information belongs in the Filter subclasses.

Secondly, there are some filter combinations that cannot be applied in a single pass. BlurSharpen and anything else is one such combination - the blur will not see the processing from the other filters applied during the same pass.

Yes, something like that.

Thanks for asking (I’m sometimes very informal about terminology). By “stage of pipeline”, I meant a render pass.

But that doesn’t capture the idea exactly, either. From the viewpoint of the pipeline, the important thing to look at is the set of input textures needed by each filter.

Filters that share the same input textures (down to what should be in the pixels), and respect previous modifications to o_color in their fshader code, can work in the same pass. I think it’s a potentially important performance optimization to let them do so, so that enabling lots of filters does not necessarily imply lots of render passes.

Some filters may have internal render passes (such as blur), but to the pipeline this is irrelevant. Blur works, in a sense, as a single unit that takes in a colour texture, and outputs a blurred version. The input colour texture is the input to that pass in the pipeline where the blur filter has been set.

If the aim is to blur everything that is on the screen, the blur filter must come at a later render pass in the pipeline, so that it can use the postprocessed image as its input.

My proposal was that the core synthesizes code for a single “pipeline render pass”, so that the pipeline setup can occur in a higher layer (creating several, differently configured instances of the core).

Yes, we can change the name to something sensible :slight_smile:

Any internal stages (passes) (e.g. blur-x and blur-y) are indeed meant to be handled by each subclass of Filter.

About sharing passes in general, I agree. That is the reason to have a code generator that combines applicable filters to a single pass in the pipeline.

About blur and bloom specifically, I think they belong to different passes, because the effects they reproduce happen at different stages in the image-forming process.

I would like to set up the ordering of the filters as follows:

  • full-scene antialiasing (if added later)
  • CartoonInk, to simulate a completely drawn cel
  • optical effects in the scene itself (local reflection (if added later), ambient occlusion, volumetric lighting in that order)
  • optical effects in the lens system (bloom, lens flare)
  • film or detector effects (tinting, desaturation, colour inversion)
  • computer-based postprocessing (blur)
  • display device (scanlines)
  • debug helpers (ViewGlow)

Keep in mind that e.g. chromatic aberration in the lens should occur regardless of whether the result is recorded on colour or monochrome film.

Also note that these categories might not be exhaustive, might not correspond directly to render passes, and in some cases it can be unclear which category a given filter belongs to. For example, I tend to think of blur as a computer-generated postprocessing effect (requiring a complete “photograph” as input), but it could also represent the camera being out of focus, in which case it would come earlier in the pipeline (but definitely after CartoonInk and scene optical effects). I’m not sure what to do about such cases.

(Bloom, likewise, may be considered as a lens effect (the isotropic component of glare), or as a detector effect (CCD saturation). Maybe it is more appropriate to think of it as a lens effect.)

Finally, note that currently, only lens flare supports chromatic aberration. I think I’ll add full-screen chromatic aberration and vignetting to my to-do list, to approach a system that can simulate lens imperfections.

There are two use cases I’m thinking of.

First is daisy-chaining custom filters with CommonFilters. People sometimes use FilterManager to set up custom shaders, but the problem is that if you do that, it is not easy to apply CommonFilters on top of the result (or conversely, to apply your own shaders on top of what is produced by CommonFilters). When you apply either of these, you lose the camera, and can no longer easily set up the other one to continue where the first left off.

For a thought experiment, consider the original lens flare code by ninth (attached in Lens flare postprocess filter), and how you would go about applying CommonFilters to the same scene either before or after the lens flare. If I haven’t missed anything, currently it is not trivial to do this.

The second case is a scene with two render buffers doing different things, which are both postprocessed using CommonFilters, then rendered onto a quad (using a custom shader to combine them), and then the final quad is postprocessed using CommonFilters. There is a code example in my experiment on VolumetricLighting with cartoon-shaded objects: [Sample program] God rays with cartoon-shaded objects which probably explains better what I mean.

The thing is that at least in 1.8.1, setting up the combine step is overly complicated:

from panda3d.core import NodePath, CardMaker, Texture

# Manually build a fullscreen quad with its own 2D camera to combine the buffers.
quadscene = NodePath("filter-quad-scene")
quadcamera = base.makeCamera2d(base.win, sort=7)
quadcamera.reparentTo(quadscene)
cm = CardMaker("filter-quad-card")
cm.setFrameFullscreenQuad()
self.quadNodePath = NodePath(cm.generate())
finaltex = Texture()
self.quadNodePath.setTexture(finaltex)
self.quadNodePath.reparentTo(quadcamera)

…when compared to the case where the original scene render does not need any postprocessing:

from direct.filter import FilterManager
manager = FilterManager.FilterManager(base.win, base.cam)
scenetex = Texture()
self.quadNodePath = manager.renderSceneInto(colortex=scenetex)

If you have a camera, it is just one line to call FilterManager to set up the render-into-quad, but if you don’t (because CommonFilters took it), you need to do more API acrobatics to create one and set up the render-into-quad manually.

EDIT: Also, then FilterManager (or CommonFilters when it calls FilterManager internally) goes on to obsolete the manually created quad and camera, creating another quad and another camera. It would be nice to avoid the unnecessary duplication. I don’t know if it affects performance, but at least it would make for a cleaner design.

Then, in both cases, we set up the combining shader

self.quadNodePath.setShader(Shader.make(SHADER_ADDITIVE_BLEND))
self.quadNodePath.setShaderInput("txcolor", scenetex)
self.quadNodePath.setShaderInput("txvl", vltex)
self.quadNodePath.setShaderInput("strength", 1.0)

and finally postprocess

self.finalfilters = CommonFilters(base.win, quadcamera)
self.finalfilters.setBlurSharpen()  # or whatever

though here, now that I think of it, I’m not sure how to get the quad camera in the case where FilterManager internally creates it.

In summary, what I’m trying to say is that I think these kinds of use cases need to be more convenient to set up :slight_smile:

Maybe.

The difficulty in that approach is that the user needs to understand the internals of CommonFilters in order to set the pipeline pass number and the sort-within-pass priority correctly, so that CommonFilters inserts the shader at the desired step in the process. In particular, the user must know which pipeline pass the shader can be inserted into (so that it won’t erase postprocessing done by other filters; consider the blur case).

In addition, the user-defined shader must then respect the limitation that within the same pipeline pass, each fshader snippet must respect any previous changes to o_color. I think it is error-prone to require that of arbitrary user code, and especially, this makes it harder just to experiment with shaders copied from the internet.

Also, the user then needs to conform to the Filter API. If the user wants to contribute to CommonFilters, that is the way to go. But for quick experiments and custom in-house shaders, I think FilterManager and daisy-chaining would be much easier to use, as then any valid shader can be used and there are no special conventions or APIs to follow.

Maybe :wink:

As mentioned above, I was speaking of a render pass (but with the caveats mentioned).

For the code of the different filters in the compositing shader, I used the term “snippet”, as I didn’t have anything better in mind :slight_smile:

Good point.

That is another way to do it. May be cleaner.

Does this bring overhead? Or does the compiler inline them?

Also - while I’m not planning to go that route now - Cg is no longer being maintained, so is it ok to continue using it, or should we switch completely to GLSL at some point?

That’s one way of applying the default.

But how likely is the default to be wrong, i.e. do we need to take this case into account?

EDIT: Aaaa! Now I think I understand. If the default is wrong, then override this default filter stage somehow? E.g. sort=-1 means the output colour initialization stage; if the user provides a stage with that sort value, that one is used, otherwise the default one is.

Ok.

Ok. I’ll put together an example.

Here’s a more concrete proposal. It’s about 90% Python, with 10% pseudocode in comments.

It’s in one file for now to ease browsing - I’ll split it to modules in the actual implementation. I zipped the .py because the forum does not allow posting .py files.

Currently this contains a Filter interface, a couple of simple example filters trying to cover as much of Filter API use cases as possible, and a work-in-progress FilterStage.

FilterPipeline and CommonFilters are currently covered just by a few lines of comments.

Comments welcome.
filterinterface_proposal.zip (8.56 KB)

Wow, that’s quite a bit more than some simple pseudo-code. :stuck_out_tongue: Thanks.
It looks great to me! A few minor comments.

Instead of getNeededTextures, I would suggest a setup() method in which the Filter classes can call registerInputTexture() or something of the sort. The advantage of this is that we can later extend which things are stored about a texture input by adding keyword arguments to that method, without having to change the behaviour of all existing Filter implementations. It seems a bit cleaner as well. The same goes for getCustomParameters.
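For example (sketch only; these names follow the suggestion above and are not existing API):

class BloomFilter(Filter):
    def setup(self):
        # Declare inputs instead of returning lists from getNeededTextures().
        self.registerInputTexture("color")
        self.registerInputTexture("aux", needTexpix=False)
        self.registerCustomParameter("blend", "float4")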

getNeedGlow seems a bit specific. Can we instead store a bitmask of AuxBitplaneAttrib flags?

I’m not quite sure I understand this stage idea. Is the “stage” string one of a number of fixed built-in stages? Are the different stages hard-coded? Can you explain to me in simple terms what exact purpose the stage concept serves?

I’m not sure all of those methods need getters - it seems that some of them can simply remain public members, like sort and needs_compile. I think sort can be a member with a filter-specific default value that the user can change.

I think the strange inspection logic in setFilter has to go. We should keep it simple by either allowing someone to add a filter of a certain type more than once (even if that doesn’t make sense), or raising an error, or removing the old one entirely.

Just FYI, cmp= in sort() is deprecated and no longer supported in Python 3. Instead, you should do this:

self.filters.sort(key=lambda f: f.sort)

where Filter stores a self.sort value.

I think there is no reason to keep CommonFilters an old-style class. Perhaps CommonFilters should inherit from FilterPipeline?

I think more clearly when actually coding :stuck_out_tongue:

Thanks for the comments!

Ah, this indeed sounds more extensible. Let’s do that.

Yes, why not.

The other day, I was actually thinking that SSLR will need gloss map support from the main render stage, and this information needs to be somehow rendered from the material properties into a fullscreen texture… so, a general mechanism sounds good :slight_smile:

In this initial design, yes and yes, but the idea is that it is easy to add more (when coding new filters) if needed.

I’m not completely satisfied by this solution, but I haven’t yet figured out a better alternative which does not involve unnecessary bureaucracy at call time.

In short, the stage concept is a general solution to the problem of blur erasing the output of other postprocessing filters that are applied before it.

Observe that the simplest solution of applying blur first does not do what is desired, because then the scene itself will be blurred, but all postprocessing (e.g. cartoon ink) will remain sharp.

The expected result is that blur should apply to pretty much everything rendered before lens imperfections (or alternatively, to pretty much everything except scanlines, if blur is interpreted as a computer-based postprocess).

As for the why and how:

As you know, a fragment shader is basically an embarrassingly parallel computation kernel, i.e. it must run independently for each pixel (technically, fragment). All the threads get the same input texture, and they cannot communicate with each other while computing. The only way to pass information between pixels is to split the computation into several render passes, with each pass rendering the information to be communicated into an intermediate texture, which is then used as input in the next pass.

The problem is that with such a strictly local approach, some algorithms are inherently unable to play along with others - they absolutely require up-to-date information also from the neighbouring pixels.

Blur is a prime example of this. Blurring requires access to the colour of the neighbouring pixels as well as the pixel being processed, and this colour information must be fully up to date, to avoid erasing the output of other postprocessing algorithms that are being applied.

I’m not mathematically sure that blur is the only one that needs this, and also, several postprocessing algorithms (for example, the approximate depth-of-field postprocess described in http.developer.nvidia.com/GPUGem … _ch28.html) require blurring as a component anyway. Thus, a general solution seems appropriate.

The property, which determines whether another stage is needed, is the following: if a filter needs to access its input texture at locations other than the pixel being rendered, and it must preserve the output of previous postprocessing operations also at those locations, then it needs a new stage. This sounds a lot like blur, but dealing with mathematics has taught me to remain cautious about making such statements :slight_smile:

(For example, it could be that some algorithm needs to read the colour texture at the neighbouring pixels just to make decisions, instead of blurring that colour information into the current pixel.)

One more note about stages - I’m thinking of adding automatic stage consolidation, i.e. the pipeline would only create as many stages as are absolutely needed. For example, if blur is not enabled, there is usually no reason for the post-blur filters to have their own stage.
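Roughly like this (illustrative only; needsFreshInput() would report whether any filter in the stage samples its input colour texture away from the current pixel):

def consolidateStages(logicalStages):
    merged = []
    for stage in logicalStages:
        if not stage.filters:
            continue                      # drop stages with nothing enabled
        if merged and not stage.needsFreshInput():
            # Fold into the previous render pass; one compositing shader serves both.
            merged[-1].filters.extend(stage.filters)
        else:
            merged.append(stage)
    return merged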

More about this later.

Ok. May be cleaner.

On this note, I’ve played around with the idea of making the filter parameters into Python properties. This would have a couple of advantages.

First, we can get rid of boilerplate argument-reading code in the derived classes. The Filter base class constructor can automatically populate any properties (that are defined in the derived class) from kwargs, and raise an exception if the user is trying to set a parameter that does not exist for that filter (preventing typos). This requires only the standard Python convention that the derived class calls super(self.__class__, self).__init__(**kwargs) in its __init__.

Secondly, as a bonus, this allows for automatically extracting parameter names - by simply runtime-inspecting the available properties - and human-readable descriptions (from the property getter docstrings).
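A sketch of what I have in mind (inspect-based; the example subclass and its parameter are made up):

import inspect

class Filter(object):
    def __init__(self, **kwargs):
        # Accept only keyword arguments that correspond to properties
        # defined on the (derived) class; anything else is probably a typo.
        valid = dict(inspect.getmembers(type(self),
                                        lambda m: isinstance(m, property)))
        for name, value in kwargs.items():
            if name not in valid:
                raise TypeError("%s has no parameter '%s'"
                                % (type(self).__name__, name))
            setattr(self, name, value)

    @classmethod
    def describeParameters(cls):
        # Human-readable parameter list, taken from the property docstrings.
        return [(name, prop.fget.__doc__)
                for name, prop in inspect.getmembers(
                    cls, lambda m: isinstance(m, property))]

class ScanlinesFilter(Filter):
    _strength = 0.5

    @property
    def strength(self):
        """How strongly the darkened field is attenuated (0..1)."""
        return self._strength

    @strength.setter
    def strength(self, value):
        self._strength = value   # runtime parameter: no shader recompile needed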

That sounds good. Let’s do that.

Maybe stage should be user-changeable, too. (Referring to the fact that for some filters (e.g. blur), the interpretation of what the filter is trying to simulate affects which stage it should go into.)

Ok.

The only purpose here was to support the old API, which has monolithic setThisAndThatFilter() methods that are supposed to update the current configuration.

If this can be done in some smarter way, then I’m all for eliminating the strange inspection logic :slight_smile:

Ok. Personally I’m pretty particular about Python 2.x (because of line_profiler, which is essential for optimizing scientific computing code), but I agree that Panda shouldn’t be. :slight_smile:

I’ll change this to use the forward-compatible approach.

Maybe. This way, it could simply add a backward-compatible API on top of FilterPipeline, while all of the functionality of the new FilterPipeline API would remain directly accessible. That sounds nice.

I’ll have to think about this part in some more detail.

In the meantime while I’m working on the new CommonFilters architecture, here are screenshots from one more upcoming filter: lens distortion.

The filter supports barrel/pincushion distortion, chromatic aberration and vignetting. Optionally, the barrel/pincushion distortion can also radially blur the image to simulate a low-quality lens.




This filter will be available once the architecture changes are done.