CommonFilters - some new filters, and the future

Here’s a proposal. There are some details I haven’t worked out yet, but it’s about 90% complete, and I think comments would be useful at this point.

Filters inherit from a Filter class, which defines the API that CommonFilters talks to. Each subclass describes a particular type of filter (CartoonInkFilter, BlurSharpenFilter, …). These can be implemented in their own Python modules. Very short and simple filters, for which a separate module would be overkill, can be collected into one MiscFilters module.
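
As a rough illustration (all names and signatures here are placeholders, not a final API), the base class might look something like this:

```python
class Filter:
    """Base class for one postprocessing filter type (hypothetical sketch)."""

    # Sort value: placement of this filter's code snippet within the
    # compositing fshader of its pipeline stage (see the list further below).
    sort = 0

    def getInternalTextureNames(self):
        """Names of internal textures this filter needs, if any."""
        return []

    def synthesizeFragmentCode(self, config):
        """Return the fshader code snippet for the given FilterConfig."""
        raise NotImplementedError
```

A CartoonInkFilter or BlurSharpenFilter would then subclass this in its own module, while the trivial filters share MiscFilters.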

The new modules will be placed in a subdirectory to avoid confusing the user, as CommonFilters and FilterManager will remain the only classes meant for public use. For example, the filter modules could reside in direct.filter.internal, direct.filter.impl or some such.

A pipeline will be added, in order to correctly apply filters that need the postprocessed (or post-postprocessed etc.) scene as input*. Currently BlurSharpen is the only one that needs this, but that will likely change in the future.

(* Strictly speaking, the critical property is whether the filter needs lookups in its input colour texture at locations other than the pixel being processed.)

To implement the pipeline, the control logic of CommonFilters itself will be split into two modules. The first is CommonFiltersCore, which takes over the low-level logic of the current CommonFilters, providing shader synthesis for a single stage of the pipeline. The synthesis will be implemented in a modular manner, querying the configured Filters. (Details further below.)

The second module is a backward-compatible CommonFilters, which provides the user API (the high-level part of the current CommonFilters) and takes care of creating the necessary stages and assigning the configured filters to them. That is, the user configures all filters monolithically (in the same way as in the current version), and CommonFilters then creates the necessary CommonFiltersCore instances and distributes the relevant parts of the configuration to each of them. This keeps the core logic simple, as it does not need to know about the pipeline.

To support multiple stages, CommonFilters and FilterManager will be extended to capture buffers, in addition to the current mode of operation, where they capture a window with a camera.

Adding the buffer capture feature has a desirable side effect: the user will be able to pipe together CommonFilters (the high-level object) instances with custom FilterManager filters. For example, the scene may first be processed by some CommonFilters, then by some custom filters, and finally by more CommonFilters. This makes the design extremely flexible and modular from the user’s perspective as well, letting CommonFilters and the custom filter mechanism complement each other (instead of being alternatives, as in the current version).
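
As a usage sketch, the chaining might eventually look something like this; `getOutputBuffer()` and `fromBuffer()` are invented names for the proposed buffer-capture feature, not existing API:

```python
from direct.showbase.ShowBase import ShowBase
from direct.filter.CommonFilters import CommonFilters

base = ShowBase()

# First pass: standard filters capture the window as usual.
filters1 = CommonFilters(base.win, base.cam)
filters1.setBloom()

# Proposed: obtain the output buffer of the first pass and feed it onward,
# optionally through custom FilterManager-based filters in between.
# (These methods do not exist yet; names are for illustration only.)
#buf = filters1.getOutputBuffer()
#filters2 = CommonFilters.fromBuffer(buf)
#filters2.setCartoonInk()

base.run()
```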

Pipeline architecture:

  • The pipeline consists of stages. Roughly speaking, a stage is an ordered collection of filters that can be applied in one pass.
  • Stages are represented in the high-level CommonFilters class by CommonFiltersCore objects kept in an ordered list.
  • Each stage has an input color texture. Depth and aux textures are always taken from their initial source. (It would be possible to support processing these, too, by allowing the fshader to output multiple textures. Currently it’s not needed.)
  • Each stage has an output color texture.
  • The input to the first stage is the input scene or texture that was provided by the user to CommonFilters.
  • For subsequent stages, the pipeline connects the output of stage k to the input of stage k+1 (see the sketch after this list).
  • The output from CommonFilters is the output of the last stage.
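
In pseudocode, the high-level class could wire the stages together roughly as follows; CommonFiltersCore’s constructor and attributes are guesses standing in for the real interface:

```python
class CommonFiltersCore:
    """Stub standing in for the proposed per-stage core (illustration only)."""
    def __init__(self, input_color, depth, aux):
        self.input_color = input_color
        self.output_color = object()  # would be a newly allocated color texture
    def configure(self, filters):
        pass  # would synthesize the compositing shader for this stage

def rebuildPipeline(scene_color, depth, aux, stage_filter_lists):
    """Chain one CommonFiltersCore per stage; stage k's output feeds stage k+1."""
    stages = []
    current = scene_color  # input to the first stage
    for filters in stage_filter_lists:
        core = CommonFiltersCore(input_color=current, depth=depth, aux=aux)
        core.configure(filters)
        stages.append(core)
        current = core.output_color  # becomes the next stage's input
    return stages, current  # 'current' is the pipeline's final output
```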

Each filter derived from the Filter class (a concrete example follows the list):

  • must provide a list of names for internal textures, if it needs any. CommonFiltersCore will use this to manage the actual texture objects.
  • must define which textures (color, depth, aux, any internals) it needs as input in the compositing fshader, and for which of those it needs texpix.
  • must declare any custom parameters and their types (the k_variables that can be configured at runtime using setShaderInput()). These are appended by CommonFiltersCore to the parameter list of the compositing fshader.
  • must declare a sort value, which determines the placement of the filter’s fshader code snippet within the compositing fshader of its pipeline stage. (Note that filters that do not require internal intermediate stages, or texture lookups at locations other than the pixel being processed, can be implemented using only an fshader code snippet.)
  • must provide a function that, given a FilterConfig for this particular type of filter, synthesizes its code snippet for the compositing fshader. This function compiles in the given values of compile-time parameters. The fragment shader code must respect any previous modifications to o_color to allow the filter to work together with others in the same compositing fshader.
  • must provide a function that compares oldconfig and newconfig, and returns whether a shader recompile is needed. (Only each filter type itself knows which of the parameters are runtime and which are compile-time.)
  • must provide a function to apply values from newconfig to runtime parameters. This is called at the end of reconfigure() in CommonFiltersCore.
  • may optionally implement a function to set up or reconfigure any internal processing stages. This includes synthesizing shaders for the internal intermediate textures. (The default implementation is blank.)
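
To make these requirements concrete, here is a toy subclass of the Filter sketch from above, implementing only an fshader snippet (everything here is illustrative):

```python
class InvertColorsFilter(Filter):
    """Toy filter that inverts scene colors; needs no internal textures."""

    sort = 500  # placement within the stage's compositing fshader

    def getNeededTextures(self):
        return ["color"]  # input textures; none of them needs texpix

    def getCustomParameters(self):
        return []  # no runtime k_ parameters settable via setShaderInput()

    def synthesizeFragmentCode(self, config):
        # Operate on o_color as left by earlier filters in the same fshader,
        # so that this filter composes with them.
        return "  o_color.rgb = 1.0 - o_color.rgb;\n"

    def needsRecompile(self, oldconfig, newconfig):
        return False  # no compile-time parameters, so never recompile

    def applyRuntimeParameters(self, newconfig, quad):
        pass  # nothing to pass to setShaderInput()

    def setupInternalStages(self, manager):
        pass  # the default blank implementation would suffice here
```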

One detail I haven’t decided on is the initial value of o_color before the first filter in a stage is applied. There is a design choice here: either always make the compositing fshader initialize o_color = tex2D( k_txcolor, l_texcoord.xy ), or alternatively, require exactly one of the filters registered to a stage (in the high-level CommonFilters) to declare itself “first”, in which case that filter must write the initial value of o_color in its fshader snippet. The latter approach is more general (avoiding a redundant texture fetch when the first filter overwrites o_color anyway), but the former is simpler, and often sufficient.
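
In generated-code terms, the difference is roughly the following (a hypothetical helper; the Cg line uses the texture naming of the current CommonFilters):

```python
DEFAULT_INIT = "  o_color = tex2D(k_txcolor, l_texcoord.xy);\n"

def synthesizeStageBody(filters, first_filter=None):
    """Assemble the fshader body of one stage (illustration only)."""
    # Alternative 1: always start from the input color texture.
    # Alternative 2: a filter declared "first" writes o_color itself,
    # so the default initialization is skipped as redundant.
    code = DEFAULT_INIT if first_filter is None else ""
    for f in sorted(filters, key=lambda f: f.sort):
        code += f.synthesizeFragmentCode(f.config)
    return code
```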

Another open question is where to declare which filter belongs to which stage in the high-level logic. The simplest possibility is to identify stages by name, with the list of valid names provided in the high-level CommonFilters. The stage assignment could then reside in the Filter subclasses themselves, and CommonFilters itself would only need updating if a new stage is required. The stage information would be spread out across the individual Filters, which can be considered both an advantage (everything related to a particular filter type is in one place) and a drawback (it is hard to get the big picture of filter ordering, since that requires checking each Filter subclass module and keeping notes).

The other possibility I have thought of so far is to hard-code the stage for each known subclass of Filter in the high-level CommonFilters. This would keep the information in one place (making the overall ordering easy to see), but it would require updating CommonFilters whenever a new type of filter is added. Arguably, the stage information also logically belongs inside each filter type.
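
Under the name-based alternative, the declarations could look roughly like this (the stage names are invented for illustration):

```python
# In the high-level CommonFilters: the valid stage names, in pipeline order.
STAGE_NAMES = ["preBlur", "blur", "postBlur"]

# In each Filter subclass: which stage the filter belongs to.
class BlurSharpenFilter(Filter):
    stageName = "blur"     # needs the already-postprocessed scene as input

class CartoonInkFilter(Filter):
    stageName = "preBlur"  # pure per-pixel snippet, runs before the blur
```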

HalfPixelShift is a special case, which does not conform to this filter model. It could be implemented as a half-pixel shift option to CommonFiltersCore. Enabling this would cause CommonFiltersCore to emit the code for HalfPixelShift in the compositing vshader. It would be enabled for the first stage only (in the high-level CommonFilters).
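
As a sketch, the option could be threaded into the core’s vshader synthesis like this; the boilerplate is elided and the real shift expression would be lifted from the existing HalfPixelShift code:

```python
def synthesizeVertexShader(halfPixelShift):
    """Fragment of the core's vshader synthesis (illustration only)."""
    text = "// ... standard fullscreen-quad vshader boilerplate ...\n"
    if halfPixelShift:
        # Offset the texture coordinates by half a texel; the exact
        # expression would follow the current HalfPixelShift implementation.
        text += "  l_texcoord += texpix_txcolor.xy * 0.5;\n"
    return text
```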

So, that’s the current plan. Comments welcome.

Regarding the buffer capture: so far I’ve gotten CommonFilters to initialize from a buffer by manually creating the buffer and setting up a quad and a camera for it, but I’m not sure whether this is the right way to do it. Probably not - it seems FilterManager already creates a quad and a camera internally. To avoid duplicating them, FilterManager needs to be able to read a render-into-texture input buffer (with color, aux and depth textures) directly.

I’d appreciate any pointers as to the correct approach to do this :)