CommonFilters - some new filters, and the future


Postby Technologicat » Fri Sep 26, 2014 3:59 am

Hi all,

I've been extending my copy of CommonFilters and will be submitting it for consideration for 1.9.0 soon.


Current new functionality

  • Lens flare based on the algorithm by John Chapman. The credit goes to ninth (viewtopic.php?t=15231); I've just cleaned up the code a bit and made it work together with CommonFilters.
  • Desaturation (monochrome) filter with perceptual weighting and configurable strength, tinting and hue bandpass. Original.
  • Scanlines (CRT TV) filter, with both static and dynamic modes, configurable strength, choice of which field to process, and configurable scanline thickness (in pixels). Original.
Screenshots below in separate posts.

I've also rearranged the sequence in the main filter as follows: CartoonInk > AmbientOcclusion > VolumetricLighting > Bloom > LensFlare > Desaturation > Inverted > BlurSharpen > Scanlines > ViewGlow.

The idea is to imitate the physical process that produces the picture. First we have effects that correspond to optical processes in the scene itself (AO, VL), then effects corresponding to lens imperfections (bloom, LF), then effects corresponding to the behaviour of the film or detector (desat, inv), and finally computer-generated effects that need the complete "photograph" as input (blur, scanlines). The debug filter ViewGlow goes last.

Note that with this ordering, e.g. Desaturation takes into account the chromatic aberration in LensFlare; the wavelength-based aberration occurs in the lens regardless of whether the result is recorded on colour or monochrome film.

In case the scene being rendered is a cartoon, CartoonInk goes first to imitate a completely drawn cel (including the ink outlines) that is then used as input for computer-generated effects. This imitates the production process of anime.
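
To make the ordering concrete, here is a small sketch (illustrative names, not the actual CommonFilters internals) of the sequence as an ordered list, from which a sort value could be derived for each filter:

```python
# Illustrative sketch: the physically motivated filter ordering.
FILTER_ORDER = [
    "CartoonInk",          # drawn cel first, including the ink outlines
    "AmbientOcclusion",    # optical processes in the scene itself
    "VolumetricLighting",
    "Bloom",               # lens imperfections
    "LensFlare",
    "Desaturation",        # film/detector behaviour
    "Inverted",
    "BlurSharpen",         # computer-generated, needs the full "photograph"
    "Scanlines",
    "ViewGlow",            # debug filter goes last
]

def sort_value(filter_name):
    """Return the filter's position in the processing sequence."""
    return FILTER_ORDER.index(filter_name)
```
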

The code changes are unfortunately not completely orthogonal to those made in the cartoon shader improvements patch, so I fear that needs to be processed first. I can, however, provide the current code for review purposes.


The future of CommonFilters?

I think there are still several filters that are "common" enough - in the sense of generally applicable - that it would be interesting to include them into CommonFilters (probably after 1.9.0 is released). Specifically, I'm thinking of:

  • Screen-space local reflections (SSLR). A Panda implementation has already been made by ninth (viewtopic.php?f=8&t=15742), so pending his permission, I could add this one next.
  • Fast approximate antialiasing (FXAA). In my opinion, fast fullscreen antialiasing to remove jagged edges would be a killer feature to have.
As for FXAA, there have already been some attempts to create this shader for Panda. At least two versions are floating around in the forums (one written in Cg, the other in GLSL), but I'm not sure how to obtain the necessary permissions. From a quick look at the code, the existing implementations seem heavily based on, if not selectively copy-pasted from, NVIDIA's original by Timothy Lottes, and the original header file says "all rights reserved". On the other hand, it's publicly available in NVIDIA's SDK, as is the whitepaper documenting the algorithm. There is another version of the code at Geeks3D (based on the initial version of the algorithm), but that looks very similar, and I could not find any information on its license. I think I need someone more experienced to help get the legal matters right - once that's done, I can do the necessary coding :)


Another important issue is the architecture of CommonFilters. As the comment at the start of the file says (already in 1.8.1), it's monolithic and clunky. I've been playing around with the idea that instead of one Monolithic Shader of Doom (TM), we could have several "stages" that would form a postprocessing pipeline. Multipassing is in any case required to apply blur correctly (i.e. so that it sees the output of the other postprocessing filters) - this is something that's bugging me, so I'd like to fix it one way or another.

On the other hand, a pipeline is bureaucratic to set up, and reduces performance for those filters that could be applied in a single pass using a monolithic shader. It could be designed to create one monolithic shader per stage, based on some kind of priority system when the shaders are registered to the pipeline, but that's starting to sound pretty complicated - I'm not sure whether it would solve problems or just create more.

One should of course keep in mind the performance aspect - most of the effects in CommonFilters are such that they can be applied in a single pass using a monolithic shader, so maybe that part of the design should be kept after all.

For example, even though SSAO uses intermediate stages, the result only modulates the output, so e.g. cartoon outlines will remain. Volumetric lighting is additive, so it preserves whatever has been added by the other filters. (The normal use case is to render it in a separate occlusion pass anyway, but its placement in the filter sequence may matter if someone uses it as a type of additive radial blur, such as in viewtopic.php?t=13936.)

Bloom doesn't see cartoon outlines, but that doesn't matter much - the bloom map is half resolution anyway, and the look of the bloom effect is such that it doesn't produce visible artifacts even if it gets blended onto the ink. The same applies to the lens flare.

The most obvious offender is blur, since at higher blend strengths (which are useful e.g. when a game is paused and a menu opened) it will erase the processing from the other postprocessing filters.

If the monolithic design is kept, I think blur should be special-cased so that if blur is enabled, the rest of the effects render into an interQuad, and then only blur is applied to the finalQuad, using the interQuad as input. Otherwise (when blur disabled) all effects render onto finalQuad, like in the current version.
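
A rough sketch of that routing decision (interQuad/finalQuad as in the text; the helper function and its return format are hypothetical):

```python
def plan_render_targets(blur_enabled):
    """Sketch of the proposed special case for blur.

    When blur is enabled, all other effects render into an intermediate
    quad ("interQuad"), and only blur renders to the final quad, using
    the intermediate result as its input.  When blur is disabled, all
    effects render directly onto the final quad, as in the current
    version.  Returns (what, where) pairs in render order.
    """
    if blur_enabled:
        return [("other-effects", "interQuad"), ("blur", "finalQuad")]
    return [("all-effects", "finalQuad")]
```
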


As for the clunkiness, I think it would be useful to define each effect in a separate Python module that provides code generators for the vshader and fshader. This would remove the need to have all the shader code pasted at the top of CommonFilters.py.

The reason to prefer Python modules instead of .sha files is, of course, the "Radeon curse" - the support of AMD cards in Cg is pretty much limited to the arbvp1 and arbfp1 profiles, which do not support e.g. variable-length loops. Hence, to have compile-time configurable-length loops, we need a shader generator that hardcodes the configuration parameter when the shader is created. This solution is already in use in CommonFilters, but the architecture would become much cleaner if it was split into several modules.
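
As a sketch of what such a generator looks like (illustrative, not the actual CommonFilters code), here is a Python function that bakes a sample count into Cg source by unrolling the loop at generation time:

```python
def make_blur_fshader(numsamples):
    """Generate Cg fragment shader source with the sample count baked in.

    The arbfp1-class profiles have no variable-length loops, so the loop
    is unrolled at generation time; changing numsamples requires
    regenerating and recompiling the shader.
    """
    lines = [
        "void fshader(float2 l_texcoord : TEXCOORD0,",
        "             uniform float2 offset[%d]," % numsamples,
        "             uniform sampler2D k_txcolor,",
        "             out float4 o_color : COLOR)",
        "{",
        "  o_color = float4(0.0, 0.0, 0.0, 0.0);",
    ]
    for i in range(numsamples):  # unrolled here, at shader generation time
        lines.append("  o_color += tex2D(k_txcolor, l_texcoord + offset[%d]);" % i)
    lines.append("  o_color /= %d.0;" % numsamples)
    lines.append("}")
    return "\n".join(lines)
```
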


One further thing that is currently bugging me is that CommonFilters, and FilterManager, only make it possible to capture a window (or buffer) that has a camera. For maximum flexibility considering daisy-chaining of filters, it would be nice if FilterManager could capture a texture buffer. This can be done manually (as I did for the god rays with cartoon shading viewtopic.php?f=9&t=17230), but this is the sort of thing that really, really needs a convenience function.

The last idea would play especially well with user-side pipelining. Several instances of CommonFilters could be created to process the same scene (later instances using the output from the previous stage as their input), and the user would control which effects to enable at each stage. This would also allow mixing custom filters (added using FilterManager) with CommonFilters in the same pipeline. At least to me this sounds clean and simple.

Thoughts anyone?
Technologicat
 
Posts: 133
Joined: Tue Aug 20, 2013 11:48 pm

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 5:07 am

Screenshot from upcoming lens flare basic tutorial:

[Screenshot: Tut-LensFlare_py_screenshot_00001.jpg - Lens flare in CommonFilters.]

With the changes to CommonFilters, the minimal code needed to enable a lens flare is now:

Code:
self.filters = CommonFilters(base.win, base.cam)
self.filters.setLensFlare()

Optionally, it is possible to set parameters:

Code:
self.filters.setLensFlare(numsamples=self.numsamples, dispersal=self.dispersal,
                          halo_width=self.halo_width,
                          chroma_distort=(self.chroma_distort_r,
                                          self.chroma_distort_g,
                                          self.chroma_distort_b),
                          threshold=self.threshold)

This also includes parameters that were compile-time constants in ninth's original code. They can be changed at any time by another call to setLensFlare(). When a compile-time parameter (which is any parameter except threshold) changes, a shader recompile is invoked automatically.
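
The recompile decision could be sketched like this (parameter names are those of the setLensFlare() call above; the helper itself is hypothetical):

```python
# threshold is the only runtime parameter; everything else is baked
# into the shader when it is generated.
LENSFLARE_COMPILE_TIME = ("numsamples", "dispersal", "halo_width", "chroma_distort")

def needs_recompile(oldconfig, newconfig):
    """Return True if any compile-time parameter changed."""
    return any(oldconfig.get(name) != newconfig.get(name)
               for name in LENSFLARE_COMPILE_TIME)
```
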

I think ninth's original code could be included as an "advanced" tutorial.

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 5:17 am

Desaturation (monochrome) filter.

Original unprocessed scene is shown at the end of my post on god rays with cartoon shading: viewtopic.php?f=9&t=17230

Luma is computed using the perceptual weightings from ITU-R Rec. 709 (colloquially HDTV).
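
As a minimal sketch, the per-pixel weighting the fshader applies is:

```python
def luma709(r, g, b):
    """Perceptually weighted luma per ITU-R Rec. 709 (the HDTV weights).

    Green dominates because the eye is most sensitive to it; the
    weights sum to exactly 1.0.
    """
    return 0.2126 * r + 0.7152 * g + 0.0722 * b
```
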

[Screenshot: bw_.jpg - Basic desaturation into black and white.]

[Screenshot: muted_colors_.jpg - Partial desaturation (less than full strength), like turning down the colors on a TV.]

[Screenshot: sepia_.jpg - Panda's grandfather? Sepia-toned image using tinting.]

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 5:28 am

Desaturation (monochrome) filter continued.

Perhaps the most interesting feature of the desaturation filter is the hue bandpass. An arbitrary reference RGB colour is given to the filter, and it extracts the HSL hue from this colour. Then, when the filter is running, it computes the HSL hue of each pixel, and compares it to the reference hue. If it is "close enough" (the falloff is configurable), the filter weakens the desaturation for that pixel.
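
A Python sketch of the per-pixel logic (hypothetical names; HSV hue is used here for brevity where the filter itself uses HSL hue):

```python
import colorsys

def desat_strength(pixel_rgb, reference_rgb, strength, falloff):
    """Per-pixel desaturation strength with a hue bandpass (sketch).

    Computes the hue of the pixel, compares it to the hue extracted
    from the reference colour, and weakens the desaturation for pixels
    whose hue distance is within `falloff`, with a linear ramp.
    """
    hue_px = colorsys.rgb_to_hsv(*pixel_rgb)[0]
    hue_ref = colorsys.rgb_to_hsv(*reference_rgb)[0]
    d = abs(hue_px - hue_ref)
    d = min(d, 1.0 - d)                 # hue is cyclic, wrap around
    if d >= falloff:
        return strength                 # far from reference: full effect
    return strength * (d / falloff)     # close to reference: keep colour
```
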

The result is that it is possible to keep, for example, only the reds, while desaturating the rest:

[Screenshot: hue_bandpass_.jpg - Hue bandpass with reference RGB = (1,0,0).]

In this filter all other parameters are genuine runtime parameters (enabling realtime fades), but enabling or disabling the hue bandpass invokes a shader recompile. (The reference colour can be changed without invoking a recompile.)

Finally, the desaturation filter with tinting combines with the scanlines filter to produce an old computer monitor look:

[Screenshot: old_monitor_.jpg - Desaturation with green tint, and scanlines.]

The scanline thickness is configurable (in pixels, integer), as is the choice of whether to keep the top or bottom field. The field that is processed (not kept) is darkened by a configurable factor, which generally looks much better than making it completely black.
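
A sketch of the per-row logic (hypothetical function; the real work happens in the fshader):

```python
def scanline_factor(y, thickness, keep_top, darken):
    """Brightness factor for pixel row y (illustrative sketch).

    Rows alternate in bands of `thickness` pixels between the kept
    field (factor 1.0) and the processed field, which is darkened by
    the configurable factor `darken` rather than made completely black.
    """
    in_even_band = (y // thickness) % 2 == 0
    kept = in_even_band if keep_top else not in_even_band
    return 1.0 if kept else darken
```
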

That's all folks!

Re: CommonFilters - some new filters, and the future

Postby rdb » Fri Sep 26, 2014 5:45 am

Nice work!

Perhaps it would be a good idea to modularise CommonFilters a bit more at this point, like creating a Filter class that filters can inherit from and giving at least the more complicated filters their own file. That might make it easier for people to add their own filters as well.
rdb
 
Posts: 10145
Joined: Mon Dec 04, 2006 5:58 am
Location: Netherlands

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 6:00 am

The code is now posted in the bug tracker:

https://bugs.launchpad.net/panda3d/+bug/1374393

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 6:03 am

rdb wrote:Nice work!

Thanks!

rdb wrote:Perhaps it would be a good idea to modularise CommonFilters a bit more at this point, like creating a Filter class that filters can inherit from and giving at least the more complicated filters their own file. That might make it easier for people to add their own filters as well.

Yes. I agree.

I'll have to think about the interface and get back to this.

Is there still time before 1.9.0? The actual changes won't be difficult, but designing a future-proof interface may take a few days.

Re: CommonFilters - some new filters, and the future

Postby rdb » Fri Sep 26, 2014 6:08 am

That's fine. Panda 1.9 won't be released within the next week.

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 6:21 am

rdb wrote:That's fine. Panda 1.9 won't be released within the next week.

Ok.

What to do about the code review of the cartoon shader patch? If I modularize CommonFilters at this point, by far the easiest and most error-free approach would be to include all changes into one monolithic patch - but that violates the general good-programming-practice guideline of "one commit, one feature".

On the other hand, separating the modularization, cartoon shading improvements and these new filters into three separate patches would offer no real practical benefit. These changes are not orthogonal anyway, so separating them wouldn't help even in the unlikely case that someone wants to backport only a subset of the new functionality into some old version.

What's your opinion?

Re: CommonFilters - some new filters, and the future

Postby rdb » Fri Sep 26, 2014 6:52 am

Separate patches are the way to go; that makes it far easier to review the individual changes. It's OK if one patch depends on the other.

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 10:06 am

rdb wrote:Separate patches are the way to go; that makes it far easier to review the individual changes. It's OK if one patch depends on the other.

Ok.

In the meantime, could you check this one first?

https://bugs.launchpad.net/panda3d/+bug/1214782

It's a very small patch that only fixes the bug that prevented double-threshold light ramps from working in 1.8.1. It would be ideal to base the new features on a source tree that has this fixed.


After the bugfix, my plan of action:

  • Separate the getTexCoordSemantic() change from the rest
  • Modularize CommonFilters (refactor only; no functional changes at this point)
  • Add new cartoon shader (includes changes to both shader generator and CommonFilters)
  • Add these new filters to CommonFilters (maybe one by one?)

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Sep 26, 2014 1:51 pm

Thanks for checking the bug and checking in the patch.

The getTexCoordSemantic() change is now separated and posted in the bug tracker:

https://bugs.launchpad.net/panda3d/+bug/1374594

Next up, the actual refactoring step once I figure out the design for the filter interface.

Re: CommonFilters - some new filters, and the future

Postby rdb » Fri Sep 26, 2014 2:26 pm

Great! Feel free to bring up a proposal and I'll give you my comments on it.

I just checked in a solution for the texcoord clutter. :)

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Sun Sep 28, 2014 5:23 am

Here's a proposal. There are some details I haven't worked out yet, but it's about 90% complete, and I think comments would be useful at this point.


Filters inherit from a Filter class, which defines the API that CommonFilters talks to. Each subclass describes a particular type of filter (CartoonInkFilter, BlurSharpenFilter, ...). These can be implemented in their own Python modules. Very short and simple filters, for which a separate module would be overkill, can be collected into one MiscFilters module.

The new modules will be placed in a subdirectory to avoid confusing the user, as CommonFilters and FilterManager will remain the only classes meant for public use. For example, the filter modules could reside in direct.filter.internal, direct.filter.impl or some such.

A pipeline will be added, in order to correctly apply filters that need the postprocessed (or post-postprocessed etc.) scene as input*. Currently BlurSharpen is the only one that needs this, but that will likely change in the future.

(* Strictly speaking, the critical property is whether the filter needs lookups in its input colour texture at locations other than the pixel being processed.)

To implement the pipeline, the control logic of CommonFilters itself will be split into two modules. The first is CommonFiltersCore, which performs the task of the low-level logic in the current CommonFilters, providing shader synthesis for a single stage of the pipeline. The synthesis will be implemented in a modular manner, querying the configured Filters. (Details further below.)

The second module is a backward-compatible CommonFilters, which provides the user API (the high-level part of current CommonFilters), and takes care of creating the necessary stages and assigning the configured filters to them. That is, the user configures all filters in a monolithic manner (in the same way as in the current version), and then CommonFilters creates the necessary CommonFiltersCore instances and splits parts of the configuration to each of them in the appropriate manner. This allows keeping the core logic simple, as it does not need to know about the pipeline.

To support multiple stages, CommonFilters and FilterManager will be extended to capture buffers, in addition to the current mode of operation, where they capture a window with a camera.

Adding the buffer capture feature has a desirable side effect: the user will be able to pipe together CommonFilters (the high-level object) instances with custom FilterManager filters. For example, the scene may first be processed by some CommonFilters, then by some custom filters, and finally more CommonFilters. This gives an extremely flexible modular design also from the user's perspective, making CommonFilters and the custom filter mechanism complement each other (instead of being alternatives, as in the current version).

Pipeline architecture:

  • The pipeline consists of stages. Roughly speaking, a stage is an ordered collection of filters that can be applied in one pass.
  • Stages are represented in the high-level CommonFilters class by CommonFiltersCore objects kept in an ordered list.
  • Each stage has an input color texture. Depth and aux textures are always taken from their initial source. (It would be possible to support processing these, too, by allowing the fshader to output multiple textures. Currently it's not needed.)
  • Each stage has an output color texture.
  • The input to the first stage is the input scene or texture that was provided by the user to CommonFilters.
  • For subsequent stages, the pipeline connects the output of stage k to the input of stage k+1.
  • The output from CommonFilters is the output of the last stage.
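
The chaining rule in the bullets above can be sketched as:

```python
def run_pipeline(stages, scene_color):
    """Apply the pipeline connection rule (sketch).

    Each stage is modelled as a callable from input colour texture to
    output colour texture: the first stage receives the user-provided
    scene, stage k+1 receives the output of stage k, and the pipeline
    output is the output of the last stage.
    """
    tex = scene_color
    for stage in stages:
        tex = stage(tex)
    return tex
```
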

Each filter derived from the Filter class:

  • must provide a list of names for internal textures, if it needs any. CommonFiltersCore will use this to manage the actual texture objects.
  • must define which textures (color, depth, aux, any internals) it needs as input in the compositing fshader, and for which of those it needs texpix.
  • must declare any custom parameters and their types (the k_variables that can be configured at runtime using setShaderInput()). These are appended by CommonFiltersCore to the parameter list of the compositing fshader.
  • must declare a sort value, which determines the filter's placement within the pipeline stage it is placed in. This determines the placement of the fshader code snippet within the compositing fshader. (Note that filters that do not require internal intermediate stages, or texture lookups other than the pixel being processed, can be implemented using only an fshader code snippet.)
  • must provide a function that, given a FilterConfig for this particular type of filter, synthesizes its code snippet for the compositing fshader. This function compiles in the given values of compile-time parameters. The fragment shader code must respect any previous modifications to o_color to allow the filter to work together with others in the same compositing fshader.
  • must provide a function that compares oldconfig and newconfig, and returns whether a shader recompile is needed. (Only each filter type itself knows which of the parameters are runtime and which are compile-time.)
  • must provide a function to apply values from newconfig to runtime parameters. This is called at the end of reconfigure() in CommonFiltersCore.
  • may optionally implement a function to set up or reconfigure any internal processing stages. This includes synthesizing shaders for the internal intermediate textures. (The default implementation is blank.)
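
Putting the requirements above together, the base class might look roughly like this (all names hypothetical; the method signatures are a sketch, not a final API):

```python
class Filter(object):
    """Sketch of the proposed base class.

    Concrete filters (CartoonInkFilter, BlurSharpenFilter, ...)
    override these to describe their texture needs, parameters,
    ordering and shader code snippet.
    """
    sort = 0                     # placement within the compositing fshader
    internal_textures = ()       # names of internal textures, if any
    input_textures = ("color",)  # which of color/depth/aux/internals it reads
    custom_parameters = {}       # runtime k_ variables for setShaderInput()

    def make_fshader_snippet(self, config):
        """Return Cg code; must respect prior modifications to o_color."""
        raise NotImplementedError

    def needs_recompile(self, oldconfig, newconfig):
        """Only the filter itself knows which parameters are compile-time."""
        return False

    def apply_runtime_parameters(self, newconfig):
        """Push runtime parameter values; called from reconfigure()."""
        pass

    def setup_internal_stages(self, core):
        """Optional: set up or reconfigure internal intermediate stages."""
        pass
```
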
One detail I haven't decided on is the initial value of o_color before the first filter in a stage is applied. There is a design choice here: either always have the compositing fshader initialize o_color = tex2D( k_txcolor, l_texcoord.xy ), or alternatively, require exactly one of the filters registered to a stage (in the high-level CommonFilters) to declare itself "first", in which case that filter must write the initial value to o_color in its fshader snippet. The latter approach is more general (allowing optimality when the default is wrong), but the first one is simpler, and often sufficient.

Another open question is where to declare which filter belongs to which stage in the high-level logic. The simplest possibility is to identify stages by names, a valid list of which would be provided in the high-level CommonFilters. This would allow the information to reside in the Filter subclasses themselves. CommonFilters itself would only need to be updated if a new stage is required. The stage information would be spread out across the individual Filters, which can be considered as both an advantage (everything related to a particular type of filter in the same place) and as a drawback (hard to get the big picture about filter ordering, because that requires checking each Filter subclass module and keeping notes).

The other possibility I have thought of so far is to hard-code the stage for each known subclass of Filter in the high-level CommonFilters. This would keep that information in one place (making it easy to understand the overall ordering), but this solution requires updating CommonFilters whenever a new type of filter is added. Also arguably, the stage information is something that logically belongs inside each type of filter.

HalfPixelShift is a special case, which does not conform to this filter model. It could be implemented as a half-pixel shift option to CommonFiltersCore. Enabling this would cause CommonFiltersCore to emit the code for HalfPixelShift in the compositing vshader. It would be enabled for the first stage only (in the high-level CommonFilters).


So, that's the current plan. Comments welcome.

Regarding the buffer capture, so far I've gotten CommonFilters to initialize from a buffer by manually creating a buffer and setting up a quad and a camera for it, but I'm not sure whether this is the right way to do it. Probably not - it seems FilterManager already internally creates a quad and a camera. To prevent duplication, it needs to be able to read a render-into-texture input buffer (with color, aux, depth textures) directly.

I'd appreciate any pointers as to the correct approach to do this :)

Re: CommonFilters - some new filters, and the future

Postby rdb » Sun Sep 28, 2014 9:24 am

These are some good ideas. I have some comments.

Technologicat wrote:Filters inherit from a Filter class, which defines the API that CommonFilters talks to. Each subclass describes a particular type of filter (CartoonInkFilter, BlurSharpenFilter, ...). These can be implemented in their own Python modules. Very short and simple filters, for which a separate module would be overkill, can be collected into one MiscFilters module.

This makes sense.

Technologicat wrote:The new modules will be placed in a subdirectory to avoid confusing the user, as CommonFilters and FilterManager will remain the only classes meant for public use. For example, the filter modules could reside in direct.filter.internal, direct.filter.impl or some such.

I don't see why. That seems unnecessarily restrictive. People should be able to create their own instances of the individual Filter classes, inherit from them, etc. Part of the flexibility that this overhaul would offer is that it would allow people to customise CommonFilters with their own filters by creating their own Filter class and adding it.

Technologicat wrote:A pipeline will be added, in order to correctly apply filters that need the postprocessed (or post-postprocessed etc.) scene as input*. Currently BlurSharpen is the only one that needs this, but that will likely change in the future.

I imagine that CommonFilters can store a list of filters with a sort value each that would determine in which order they get applied in the final compositing stage.

I imagine methods like setVolumetricLighting() to become simple stubs that call something like addFilter(VolumetricLightingFilter(*args, **kwargs)).

Technologicat wrote:To implement the pipeline, the control logic of CommonFilters itself will be split into two modules. The first is CommonFiltersCore, which performs the task of the low-level logic in the current CommonFilters, providing shader synthesis for a single stage of the pipeline. The synthesis will be implemented in a modular manner, querying the configured Filters. (Details further below.)

Makes sense. By a "single stage", you mean a part of the final compositing filter, or an individual render pass (like the blur-x and blur-y passes of the blur filter)? These are two separate concepts, though I suppose they don't strictly need to be.

Technologicat wrote:The second module is a backward-compatible CommonFilters, which provides the user API (the high-level part of current CommonFilters), and takes care of creating the necessary stages and assigning the configured filters to them. That is, the user configures all filters in a monolithic manner (in the same way as in the current version), and then CommonFilters creates the necessary CommonFiltersCore instances and splits parts of the configuration to each of them in the appropriate manner. This allows keeping the core logic simple, as it does not need to know about the pipeline.

Ah, so CommonFiltersCore represents a single filter pass? (Can we call it something like FilterPass or FilterStage then?) Can we let the Filter class set up and manage its stages rather than the CommonFilters? It does sound like something that would be managed by each filter, although I suppose some filters might share a filter pass (i.e. blur and bloom may share an initial blur pass). Hmm.

Technologicat wrote:To support multiple stages, CommonFilters and FilterManager will be extended to capture buffers, in addition to the current mode of operation, where they capture a window with a camera.

I don't understand what you mean by "capturing a buffer", could you please explain that? You can already use FilterManager with a buffer, if that's what you meant, but I don't quite understand the necessity of that.

Technologicat wrote:Adding the buffer capture feature has a desirable side effect: the user will be able to pipe together CommonFilters (the high-level object) instances with custom FilterManager filters. For example, the scene may first be processed by some CommonFilters, then by some custom filters, and finally more CommonFilters. This gives an extremely flexible modular design also from the user's perspective, making CommonFilters and the custom filter mechanism complement each other (instead of being alternatives, as in the current version).

Could the user achieve the same thing by subclassing Filter and adding this Filter to the same CommonFilters object?

Technologicat wrote:
  • The pipeline consists of stages. Roughly speaking, a stage is an ordered collection of filters that can be applied in one pass.
  • Stages are represented in the high-level CommonFilters class by CommonFiltersCore objects kept in an ordered list.

Then I think that FilterStage would be a far more representative term, don't you think? ;-)

One thing I don't quite understand - is a stage a render pass by itself, or a stage in the final compositing shader?

Technologicat wrote:
  • Each stage has an input color texture. Depth and aux textures are always taken from their initial source. (It would be possible to support processing these, too, by allowing the fshader to output multiple textures. Currently it's not needed.)

Not all stages need an input color texture. SSAO, for instance, does not.

Technologicat wrote:
  • must provide a function that, given a FilterConfig for this particular type of filter, synthesizes its code snippet for the compositing fshader. This function compiles in the given values of compile-time parameters. The fragment shader code must respect any previous modifications to o_color to allow the filter to work together with others in the same compositing fshader.
  • must provide a function that compares oldconfig and newconfig, and returns whether a shader recompile is needed. (Only each filter type itself knows which of the parameters are runtime and which are compile-time.)
  • must provide a function to apply values from newconfig to runtime parameters. This is called at the end of reconfigure() in CommonFiltersCore.


I think FilterConfig is obsoleted by the new Filter design, since each Filter can just take all of the properties in the constructor via keyword arguments, and have properties or setters that invalidate the shader when they are modified depending on the property. Each setter of a particular Filter could either update a shader input or mark the shader as needing to be regenerated.
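
A minimal sketch of that setter idea (hypothetical class and property names): runtime properties just update state (in real code, a setShaderInput() call), while compile-time properties mark the generated shader as stale:

```python
class ExampleFilter(object):
    """Sketch: setters either update a shader input or mark the
    shader as needing regeneration, depending on the property."""

    def __init__(self):
        self.shader_dirty = True   # no shader generated yet
        self._strength = 1.0       # runtime parameter
        self._numsamples = 8       # compile-time parameter

    @property
    def strength(self):
        return self._strength

    @strength.setter
    def strength(self, value):
        # Runtime parameter: update the shader input, no recompile needed.
        self._strength = value

    @property
    def numsamples(self):
        return self._numsamples

    @numsamples.setter
    def numsamples(self, value):
        # Compile-time parameter: changing it invalidates the shader.
        if value != self._numsamples:
            self._numsamples = value
            self.shader_dirty = True
```
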

I think that each filter could possibly be a Cg function with the arguments it needs passed to it for better organisation.

Technologicat wrote:One detail I haven't decided on is the initial value of o_color before the first filter in a stage is applied. There is a design choice here: either always have the compositing fshader initialize o_color = tex2D( k_txcolor, l_texcoord.xy ), or alternatively, require exactly one of the filters registered to a stage (in the high-level CommonFilters) to declare itself "first", in which case that filter must write the initial value to o_color in its fshader snippet. The latter approach is more general (allowing optimality when the default is wrong), but the first one is simpler, and often sufficient.

You could have a filter stage that's added by default with a negative sort value with its only purpose being to set o_color, which is always applied first.

Technologicat wrote:The other possibility I have thought of so far is to hard-code the stage for each known subclass of Filter in the high-level CommonFilters. This would keep that information in one place (making it easy to understand the overall ordering), but this solution requires updating CommonFilters whenever a new type of filter is added. Also arguably, the stage information is something that logically belongs inside each type of filter.

I agree that this probably belongs in the individual Filter classes.

Technologicat wrote:HalfPixelShift is a special case, which does not conform to this filter model. It could be implemented as a half-pixel shift option to CommonFiltersCore. Enabling this would cause CommonFiltersCore to emit the code for HalfPixelShift in the compositing vshader. It would be enabled for the first stage only (in the high-level CommonFilters).

I think HalfPixelShift should be a global setting in CommonFilters and not a filter at all.

I think at this point it would help to hack up some pseudo-code that shows how the systems work together, perhaps with an example filter, while skipping over the details. It would give a good overview and help me to understand your design better.
rdb
 
Posts: 10145
Joined: Mon Dec 04, 2006 5:58 am
Location: Netherlands

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Sun Sep 28, 2014 11:27 am

rdb wrote:These are some good ideas. I have some comments.

Thanks for the comments! Some responses below.

rdb wrote:
Technologicat wrote:The new modules will be placed in a subdirectory to avoid confusing the user, as CommonFilters and FilterManager will remain the only classes meant for public use. For example, the filter modules could reside in direct.filter.internal, direct.filter.impl or some such.

I don't see why. That seems unnecessarily restrictive. People should be able to create their own instances of the individual Filter classes, inherit from them, etc.

Maybe I should explain what I was trying to achieve. :)

The idea was that it should be easy to learn to use the CommonFilters system by reading the API documentation. At least I have learned a lot about Panda by searching the API docs.

If all the modules are placed in the same directory as CommonFilters itself, there will be lots of modules in the same place, and finding the interesting one becomes difficult.

I agree that flexibility is desirable.

rdb wrote:Part of the flexibility that this overhaul would offer is that it would allow people to customise CommonFilters with their own filters by creating their own Filter class and adding it.

Adding it where? In their local copy of the Panda source tree?

Hmm, this would make it easier to contribute new filters to Panda, which is nice.

rdb wrote:
Technologicat wrote:A pipeline will be added, in order to correctly apply filters that need the postprocessed (or post-postprocessed etc.) scene as input*. Currently BlurSharpen is the only one that needs this, but that will likely change in the future.

I imagine that CommonFilters can store a list of filters with a sort value each that would determine in which order they get applied in the final compositing stage.

Yes, that is part of the solution. But there are two separate issues here:

First is where to store the sort values. If I understood correctly, we seem to agree that this information belongs in the Filter subclasses.

Secondly, there are some filter combinations that cannot be applied in a single pass. BlurSharpen combined with anything else is one such case - the blur will not see the processing from the other filters applied during the same pass.

rdb wrote:I imagine methods like setVolumetricLighting() to become simple stubs that call something like addFilter(VolumetricLightingFilter(*args, **kwargs)).

Yes, something like that.

rdb wrote:
Technologicat wrote:To implement the pipeline, the control logic of CommonFilters itself will be split into two modules. The first is CommonFiltersCore, which performs the task of the low-level logic in the current CommonFilters, providing shader synthetization for a single stage of the pipeline. The synthesis will be implemented in a modular manner, querying the configured Filters. (Details further below.)

Makes sense. By a "single stage", you mean a part of the final compositing filter, or an individual render pass (like the blur-x and blur-y passes of the blur filter)? These are two separate concepts, though I suppose they don't strictly need to be.

Thanks for asking (I'm sometimes very informal about terminology). By "stage of pipeline", I meant a render pass.

But that doesn't capture the idea strictly, either. From the viewpoint of the pipeline, the important thing to look at are the input textures needed by each filter.

Filters that share the same input textures (down to what should be in the pixels), and respect previous modifications to o_color in their fshader code, can work in the same pass. I think it's a potentially important performance optimization to let them do so, so that enabling lots of filters does not necessarily imply lots of render passes.
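The same-pass composition can be mimicked in plain Python (a toy sketch; the filter functions here are invented stand-ins, not shader code): each snippet receives the running colour value and must build on it rather than overwrite it.

```python
def desaturate(pixcolor):
    # toy stand-in for a desaturation snippet: average the channels,
    # building on whatever previous filters produced
    g = sum(pixcolor) / 3.0
    return (g, g, g)

def invert(pixcolor):
    # toy stand-in for a colour-inversion snippet
    return tuple(1.0 - c for c in pixcolor)

def run_pass(filters, pixcolor):
    # the compositing fshader applies each enabled filter in order,
    # threading the running colour value through all of them
    for f in filters:
        pixcolor = f(pixcolor)
    return pixcolor
```

Because each function consumes the previous one's output, any number of such filters can share a single render pass.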

Some filters may have internal render passes (such as blur), but to the pipeline this is irrelevant. Blur works, in a sense, as a single unit that takes in a colour texture, and outputs a blurred version. The input colour texture is the input to that pass in the pipeline where the blur filter has been set.

If the aim is to blur everything that is on the screen, the blur filter must come at a later render pass in the pipeline, so that it can use the postprocessed image as its input.

My proposal was that the core synthesizes code for a single "pipeline render pass", so that the pipeline setup can occur in a higher layer (creating several, differently configured instances of the core).


rdb wrote:
Technologicat wrote:The second module is a backward-compatible CommonFilters, which provides the user API (the high-level part of current CommonFilters), and takes care of creating the necessary stages and assigning the configured filters to them. That is, the user configures all filters in a monolithic manner (in the same way as in the current version), and then CommonFilters creates the necessary CommonFiltersCore instances and splits parts of the configuration to each of them in the appropriate manner. This allows keeping the core logic simple, as it does not need to know about the pipeline.

Ah, so CommonFiltersCore represents a single filter pass? (Can we call it something like FilterPass or FilterStage then?) Can we let the Filter class set up and manage its stages rather than CommonFilters? It does sound like something that would be managed by each filter, although I suppose some filters might share a filter pass (e.g. blur and bloom may share an initial blur pass). Hmm.

Yes, we can change the name to something sensible :)

Any internal stages (passes) (e.g. blur-x and blur-y) are indeed meant to be handled by each subclass of Filter.

About sharing passes in general, I agree. That is the reason to have a code generator that combines applicable filters into a single pass in the pipeline.

About blur and bloom specifically, I think they belong to different passes, because the effects they reproduce happen at different stages in the image-forming process.

I would like to set up the ordering of the filters as follows:

  • full-scene antialiasing (if added later)
  • CartoonInk, to simulate a completely drawn cel
  • optical effects in the scene itself (local reflection (if added later), ambient occlusion, volumetric lighting in that order)
  • optical effects in the lens system (bloom, lens flare)
  • film or detector effects (tinting, desaturation, colour inversion)
  • computer-based postprocessing (blur)
  • display device (scanlines)
  • debug helpers (ViewGlow)
Keep in mind that e.g. chromatic aberration in the lens should occur regardless of whether the result is recorded on colour or monochrome film.

Also note that these categories might not be exhaustive, might not correspond directly to render passes, and in some cases it can be unclear which category a given filter belongs to. For example, I tend to think of blur as a computer-generated postprocessing effect (requiring a complete "photograph" as input), but it could also represent the camera being out of focus, in which case it would come earlier in the pipeline (but definitely after CartoonInk and scene optical effects). I'm not sure what to do about such cases.

(Bloom, likewise, may be considered as a lens effect (the isotropic component of glare), or as a detector effect (CCD saturation). Maybe it is more appropriate to think of it as a lens effect.)

Finally, note that currently, only lens flare supports chromatic aberration. I think I'll add full-screen chromatic aberration and vignetting to my to-do list, to approach a system that can simulate lens imperfections.


rdb wrote:
Technologicat wrote:To support multiple stages, CommonFilters and FilterManager will be extended to capture buffers, in addition to the current mode of operation, where they capture a window with a camera.

I don't understand what you mean by "capturing a buffer", could you please explain that? You can already use FilterManager with a buffer, if that's what you meant, but I don't quite understand the necessity of that.

There are two use cases I'm thinking of.

First is daisy-chaining custom filters with CommonFilters. People sometimes use FilterManager to set up custom shaders, but the problem is that if you do that, it is not easy to apply CommonFilters on top of the result (or conversely, to apply your own shaders on top of what is produced by CommonFilters). When you apply either of these, you lose the camera, and can no longer easily set up the other to continue where the first left off.

For a thought experiment, consider the original lens flare code by ninth (attached in viewtopic.php?t=15231), and how you would go about applying CommonFilters to the same scene either before or after the lens flare. If I haven't missed anything, currently it is not trivial to do this.

The second case is a scene with two render buffers doing different things, which are both postprocessed using CommonFilters, then rendered onto a quad (using a custom shader to combine them), and then the final quad is postprocessed using CommonFilters. There is a code example in my experiment on VolumetricLighting with cartoon-shaded objects: viewtopic.php?f=9&t=17230, which probably explains better what I mean.

The thing is that at least in 1.8.1, setting up the combine step is overly complicated:

Code: Select all
# manually create a scene with a fullscreen quad and an extra 2D camera
quadscene = NodePath("filter-quad-scene")
quadcamera = base.makeCamera2d(base.win, sort=7)
quadcamera.reparentTo(quadscene)
# make the fullscreen card that will display the combined result
cm = CardMaker("filter-quad-card")
cm.setFrameFullscreenQuad()
self.quadNodePath = NodePath(cm.generate())
finaltex = Texture()
self.quadNodePath.setTexture(finaltex)
self.quadNodePath.reparentTo(quadcamera)

...when compared to the case where the original scene render does not need any postprocessing:

Code: Select all
from direct.filter import FilterManager
# when a camera is available, one call sets up the render-into-quad
manager = FilterManager.FilterManager(base.win, base.cam)
scenetex = Texture()
self.quadNodePath = manager.renderSceneInto(colortex=scenetex)

If you have a camera, it is just one line to call FilterManager to set up the render-into-quad, but if you don't (because CommonFilters took it), you need to do more API acrobatics to create one and set up the render-into-quad manually.

EDIT: Also, FilterManager (or CommonFilters, when it calls FilterManager internally) then makes the manually created quad and camera obsolete by creating another quad and another camera. It would be nice to avoid this unnecessary duplication. I don't know if it affects performance, but at least it would make for a cleaner design.

Then, in both cases, we set up the combining shader

Code: Select all
self.quadNodePath.setShader(Shader.make(SHADER_ADDITIVE_BLEND))
self.quadNodePath.setShaderInput("txcolor", scenetex)
self.quadNodePath.setShaderInput("txvl", vltex)
self.quadNodePath.setShaderInput("strength", 1.0)

and finally postprocess

Code: Select all
self.finalfilters = CommonFilters(base.win, quadcamera)
self.finalfilters.setBlurSharpen()  # or whatever

though here, now that I think of it, I'm not sure how to get the quad camera in the case where FilterManager internally creates it.

In summary, what I'm trying to say is that I think these kinds of use cases need to be more convenient to set up :)


rdb wrote:
Technologicat wrote:Adding the buffer capture feature has a desirable side effect: the user will be able to pipe together CommonFilters (the high-level object) instances with custom FilterManager filters. For example, the scene may first be processed by some CommonFilters, then by some custom filters, and finally more CommonFilters. This gives an extremely flexible modular design also from the user's perspective, making CommonFilters and the custom filter mechanism complement each other (instead of being alternatives, as in the current version).

Could the user achieve the same thing by subclassing Filter and adding this Filter to the same CommonFilters object?

Maybe.

The difficulty in that approach is that the user needs to understand the internals of CommonFilters in order to set up the pipeline pass number and the sort-within-pass priority correctly, so that CommonFilters inserts the shader at the desired step in the process. In particular, the user must know which pipeline pass the shader can be inserted into (so that it won't erase postprocessing done by other filters; consider the blur case).

In addition, the user-defined shader must then obey the convention that within the same pipeline pass, each fshader snippet respects any previous changes to o_color. I think it is error-prone to require that of arbitrary user code, and especially, it makes it harder to simply experiment with shaders copied from the internet.

Also, the user then needs to conform to the Filter API. If the user wants to contribute to CommonFilters, that is the way to go. But for quick experiments and custom in-house shaders, I think FilterManager and daisy-chaining would be much easier to use, as then any valid shader can be used and there are no special conventions or APIs to follow.


rdb wrote:
Technologicat wrote:
  • The pipeline consists of stages. Roughly speaking, a stage is an ordered collection of filters that can be applied in one pass.
  • Stages are represented in the high-level CommonFilters class by CommonFiltersCore objects kept in an ordered list.

Then I think that FilterStage would be a far more representative term, don't you think? ;-)

Maybe ;)

rdb wrote:One thing I don't quite understand - is a stage a render pass by itself, or a stage in the final compositing shader?

As mentioned above, I was speaking of a render pass (but with the caveats mentioned).

Of the code for different filters in the compositing shader, I used the term "snippet" as I didn't have anything better in my mind :)

rdb wrote:
Technologicat wrote:
  • Each stage has an input color texture. Depth and aux textures are always taken from their initial source. (It would be possible to support processing these, too, by allowing the fshader to output multiple textures. Currently it's not needed.)

Not all stages need an input color texture. SSAO, for instance, does not.

Good point.

rdb wrote:I think FilterConfig is obsoleted by the new Filter design, since each Filter can just take all of its properties in the constructor via keyword arguments, and have properties whose setters invalidate the shader when they are modified. Depending on the property, each setter of a particular Filter could either update a shader input or mark the shader as needing to be regenerated.

That is another way to do it. May be cleaner.

rdb wrote:I think that each filter could possibly be a Cg function with the arguments it needs passed to it for better organisation.

Does this bring overhead? Or does the compiler inline them?

Also - while I'm not planning to go that route now - Cg is no longer being maintained, so is it ok to continue using it, or should we switch completely to GLSL at some point?

rdb wrote:You could have a filter stage that's added by default with a negative sort value with its only purpose being to set o_color, which is always applied first.

That's one way of applying the default.

But how likely is the default to be wrong, i.e. do we need to take this case into account?

EDIT: Aaaa! Now I think I understand. If the default is wrong, then override this default filter stage somehow? E.g. sort=-1 means the output colour initialization stage, and if a stage with that sort value is provided by the user, that one is used, but if not, then the default one is used.

rdb wrote:
Technologicat wrote:HalfPixelShift is a special case, which does not conform to this filter model. It could be implemented as a half-pixel shift option to CommonFiltersCore. Enabling this would cause CommonFiltersCore to emit the code for HalfPixelShift in the compositing vshader. It would be enabled for the first stage only (in the high-level CommonFilters).

I think HalfPixelShift should be a global setting in CommonFilters and not a filter at all.

Ok.

rdb wrote:I think at this point it would help to hack up some pseudo-code that shows how the systems work together, perhaps with an example filter, while skipping over the details. It would give a good overview and help me to understand your design better.

Ok. I'll put together an example.
Technologicat
 
Posts: 133
Joined: Tue Aug 20, 2013 11:48 pm

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Tue Sep 30, 2014 3:10 am

Here's a more concrete proposal. It's about 90% Python, with 10% pseudocode in comments.

It's in one file for now to ease browsing - I'll split it to modules in the actual implementation. I zipped the .py because the forum does not allow posting .py files.

Currently this contains a Filter interface, a couple of simple example filters trying to cover as much of Filter API use cases as possible, and a work-in-progress FilterStage.

FilterPipeline and CommonFilters are currently covered just by a few lines of comments.

Comments welcome.
Attachments
filterinterface_proposal.zip
Proposal for new CommonFilters architecture.
(8.56 KiB) Downloaded 33 times
Technologicat
 
Posts: 133
Joined: Tue Aug 20, 2013 11:48 pm

Re: CommonFilters - some new filters, and the future

Postby rdb » Thu Oct 02, 2014 2:17 pm

Wow, that's quite a bit more than some simple pseudo-code. :P Thanks.
It looks great to me! A few minor comments.

Instead of getNeededTextures, I would suggest that there is instead a setup() method in which the Filter classes can call registerInputTexture() or something of the sort. The advantage of this is that we can later extend which things are stored about a texture input by adding keyword arguments to that method, without having to change the behaviour in all existing Filter implementations. It seems a bit cleaner as well. The same goes for getCustomParameters.

getNeedGlow seems a bit specific. Can we instead store a bitmask of AuxBitplaneAttrib flags?

I'm not quite sure I understand this stage idea. Is the "stage" string one of a number of fixed built-in stages? Are the different stages hard-coded? Can you explain to me in simple terms what exact purpose the stage concept serves?

I'm not sure if all of those methods need getters - it seems that some of them can simply remain public members, like sort and needs_compile. I think sort can be a member with a filter-specific default value, but that can be changed by the user.

I think the strange inspection logic in setFilter has to go. We should keep it simple by either allowing someone to add a filter of a certain type more than once (even if that doesn't make sense), or raising an error, or removing the old one entirely.

Just FYI, cmp= in sort() is deprecated and no longer supported in Python 3. Instead, you should do this:
Code: Select all
self.filters.sort(key=lambda f: f.sort)

where Filter stores a self.sort value.

I think there is no reason to keep CommonFilters an old-style class. Perhaps CommonFilters should inherit from FilterPipeline?
rdb
 
Posts: 10145
Joined: Mon Dec 04, 2006 5:58 am
Location: Netherlands

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Oct 03, 2014 2:30 am

rdb wrote:Wow, that's quite a bit more than some simple pseudo-code. :P Thanks.
It looks great to me! A few minor comments.

I think more clearly when actually coding :P

Thanks for the comments!

rdb wrote:Instead of getNeededTextures, I would suggest that there is instead a setup() method in which the Filter classes can call registerInputTexture() or something of the sort. The advantage of this is that we can later extend which things are stored about a texture input by adding keyword arguments to that method, without having to change the behaviour in all existing Filter implementations. It seems a bit cleaner as well. The same goes for getCustomParameters.

Ah, this indeed sounds more extensible. Let's do that.

rdb wrote:getNeedGlow seems a bit specific. Can we instead store a bitmask of AuxBitplaneAttrib flags?

Yes, why not.

The other day, I was actually thinking that SSLR will need gloss map support from the main render stage, and this information needs to be somehow rendered from the material properties into a fullscreen texture... so, a general mechanism sounds good :)

rdb wrote:I'm not quite sure I understand this stage idea. Is the "stage" string one of a number of fixed built-in stages? Are the different stages hard-coded?

In this initial design, yes and yes, but the idea is that it is easy to add more (when coding new filters) if needed.

I'm not completely satisfied by this solution, but I haven't yet figured out a better alternative which does not involve unnecessary bureaucracy at call time.

rdb wrote:Can you explain to me in simple terms what exact purpose the stage concept serves?

In short, the stage concept is a general solution to the problem of blur erasing the output of other postprocessing filters that are applied before it.

Observe that the simplest solution of applying blur first does not do what is desired, because then the scene itself will be blurred, but all postprocessing (e.g. cartoon ink) will remain sharp.

The expected result is that blur should apply to pretty much everything rendered before lens imperfections (or alternatively, to pretty much everything except scanlines, if blur is interpreted as a computer-based postprocess).


As for the why and how:

As you know, a fragment shader is basically an embarrassingly parallel computation kernel, i.e. it must run independently for each pixel (technically, fragment). All the threads get the same input texture, and they cannot communicate with each other while computing. The only way to pass information between pixels is to split the computation into several render passes, with each pass rendering the information to be communicated into an intermediate texture, which is then used as input in the next pass.

The problem is that with such a strictly local approach, some algorithms are inherently unable to play along with others - they absolutely require up-to-date information also from the neighbouring pixels.

Blur is a prime example of this. Blurring requires access to the colour of the neighbouring pixels as well as the pixel being processed, and this colour information must be fully up to date, to avoid erasing the output of other postprocessing algorithms that are being applied.
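As a toy illustration (pure Python, not shader code; box_blur_row is an invented helper): a blur kernel averages over a neighbourhood, so whatever postprocessing is missing from the neighbouring pixels is also missing from the blurred result.

```python
def box_blur_row(row, radius=1):
    # each output pixel is the average of the input pixels within
    # `radius` of it; this is why blur must see a *finished* input -
    # it samples neighbours, not just the pixel being rendered
    out = []
    for i in range(len(row)):
        lo = max(0, i - radius)
        hi = min(len(row), i + radius + 1)
        out.append(sum(row[lo:hi]) / float(hi - lo))
    return out
```

If a previous filter had, say, inked an outline into only some of those neighbouring pixels during the same pass, the blur would average in stale values and effectively erase that work.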


I'm not mathematically sure that blur is the only one that needs this, and also, several postprocessing algorithms (for example, the approximate depth-of-field postprocess described in http://http.developer.nvidia.com/GPUGem ... _ch28.html) require blurring as a component anyway. Thus, a general solution seems appropriate.

The property, which determines whether another stage is needed, is the following: if a filter needs to access its input texture at locations other than the pixel being rendered, and it must preserve the output of previous postprocessing operations also at those locations, then it needs a new stage. This sounds a lot like blur, but dealing with mathematics has taught me to remain cautious about making such statements :)

(For example, it could be that some algorithm needs to read the colour texture at the neighbouring pixels just to make decisions, instead of blurring that colour information into the current pixel.)


One more note about stages - I'm thinking of adding automatic stage consolidation, i.e. the pipeline would only create as many stages as are absolutely needed. For example, if blur is not enabled, there is usually no reason for the post-blur filters to have their own stage.

More about this later.
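A rough sketch of what such consolidation could look like (plain Python; filters are modelled as dicts, and needs_full_input is a hypothetical flag marking filters that must see the fully composited output of everything before them, e.g. blur):

```python
def consolidate(filters):
    """Group an ordered filter list into render passes.

    A new pass is started before any filter that must read its input
    texture at neighbouring pixels, since such a filter needs the
    fully composited result of all preceding filters.
    """
    passes = []
    current = []
    for f in filters:
        if f.get("needs_full_input") and current:
            passes.append(current)
            current = []
        current.append(f)
    if current:
        passes.append(current)
    return passes
```

With cartoon ink, blur and scanlines enabled this yields two passes (ink alone, then blur followed by scanlines); with blur disabled, everything collapses into a single pass.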


rdb wrote:I'm not sure if all of those methods need getters - it seems that some of them can simply remain public members, like sort and needs_compile.

Ok. May be cleaner.

On this note, I've played around with the idea of making the filter parameters into Python properties. This would have a couple of advantages.

First, we can get rid of boilerplate argument-reading code in the derived classes. The Filter base class constructor can automatically populate any properties (that are defined in the derived class) from kwargs, and raise an exception if the user is trying to set a parameter that does not exist for that filter (preventing typos). This requires only the standard Python convention that the derived class calls super(self.__class__, self).__init__(**kwargs) in its __init__.

Secondly, as a bonus, this allows for automatically extracting parameter names - by simply runtime-inspecting the available properties - and human-readable descriptions (from the property getter docstrings).
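A rough sketch of how such a base class could work (all names here are hypothetical, invented for illustration):

```python
class Filter(object):
    """Sketch: auto-populate properties from kwargs, reject typos."""
    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            # only names defined as properties on the (sub)class
            # are accepted as filter parameters
            if not isinstance(getattr(type(self), name, None), property):
                raise TypeError("%s has no parameter '%s'"
                                % (type(self).__name__, name))
            setattr(self, name, value)

    @classmethod
    def parameterNames(cls):
        # runtime inspection: the parameter names are simply
        # the properties defined on the class
        return sorted(n for n in dir(cls)
                      if isinstance(getattr(cls, n), property))


class ScanlinesFilter(Filter):
    def __init__(self, **kwargs):
        self._strength = 1.0
        Filter.__init__(self, **kwargs)

    @property
    def strength(self):
        """Strength of the scanline darkening effect."""
        return self._strength

    @strength.setter
    def strength(self, value):
        self._strength = value
```

A misspelled keyword argument raises immediately instead of being silently ignored, and parameterNames() (plus the property docstrings) gives the human-readable parameter listing for free.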


rdb wrote:I think sort can be a member with a filter-specific default value, but that can be changed by the user.

That sounds good. Let's do that.

Maybe stage should be user-changeable, too. (Referring here to the fact that for some filters (e.g. blur), the interpretation of what the filter is trying to simulate affects which stage it should go into.)


rdb wrote:I think the strange inspection logic in setFilter has to go. We should keep it simply by either allowing someone to add a filter of a certain type more than once (even if that doesn't make sense), or raising an error, or removing the old one entirely.

Ok.

The only purpose here was to support the old API, which has monolithic setThisAndThatFilter() methods that are supposed to update the current configuration.

If this can be done in some smarter way, then I'm all for eliminating the strange inspection logic :)

rdb wrote:Just FYI, cmp= in sort() is deprecated and no longer supported in Python 3. Instead, you should do this:
Code: Select all
self.filters.sort(key=lambda f: f.sort)

where Filter stores a self.sort value.

Ok. Personally I'm pretty particular about Python 2.x (because of line_profiler, which is essential for optimizing scientific computing code), but I agree that Panda shouldn't be. :)

I'll change this to use the forward-compatible approach.

rdb wrote:I think there is no reason to keep CommonFilters an old-style class. Perhaps CommonFilters should inherit from FilterPipeline?

Maybe. This way, it could simply add a backward-compatible API on top of FilterPipeline, while all of the functionality of the new FilterPipeline API would remain directly accessible. That sounds nice.

I'll have to think about this part in some more detail.
Technologicat
 
Posts: 133
Joined: Tue Aug 20, 2013 11:48 pm

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Sat Oct 04, 2014 7:40 am

In the meantime while I'm working on the new CommonFilters architecture, here are screenshots from one more upcoming filter: lens distortion.

The filter supports barrel/pincushion distortion, chromatic aberration and vignetting. Optionally, the barrel/pincushion distortion can also radially blur the image to simulate a low-quality lens.

Tut-LensDistortion_py_screenshot_00002.jpg
Lens distortion, basic generally usable settings. Look at the lower left corner to see the effect.

Tut-LensDistortion_py_screenshot_00004.jpg
Lens distortion, extreme (generally unusable) settings to show the effect clearly.

Tut-LensDistortion_py_screenshot_00003.jpg
Lens distortion, low-quality lens with fuzzy image near screen edges.


This filter will be available once the architecture changes are done.
Technologicat
 
Posts: 133
Joined: Tue Aug 20, 2013 11:48 pm

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Mon Oct 13, 2014 7:55 pm

EDIT: updated the attached code. Code generation and HalfPixelShift are now done.
EDIT2: fixed bug in code enabling HalfPixelShift and some erroneous comments. Attachment updated.
EDIT3: fixed some comments and asserts.
EDIT4: update task mechanism added; it is now a registrable for each individual Filter. ScanlinesFilter provides an example.
EDIT5: the attachment in this post is the last version before the module split; it is now obsolete. See the later post, including code that has been split into modules.

A first version of the CommonFilters re-architecture is almost complete.

I still need to split the code into modules and add imports, and port most of the existing filters (including my new inker) over to the new architecture, but the infrastructure should now be in place.

I expect to get to the testing phase in a day or two.

Some highlights:

  • Multi-passing with automatic render pass generation based on filter properties. Filters are assigned to logical stages (corresponding to steps in the simulated image-forming process), and the pipeline figures out dynamically how many render passes to create and which stages to assign to each. This allows e.g. blur to see cartoon outlines, opening up new possibilities.
  • Allows mixing filters provided with Panda and custom filters in the same pipeline, as long as the custom filters are coded to the new Filter API (which is the same API the internal filters use). The API aims to be as simple as possible. This also makes it easier to contribute new filters to Panda.
  • Filters may define internal render passes, allowing filters with internal multi-pass processing. (This is just to say that the new architecture keeps this feature!)
  • Highly automated. Create run-time and compile-time filter properties with one-liners (or nearly; most of the length comes from the docstring). Assign a value to a filter property at run-time, and the necessary magic happens for the new value to take effect, whether the property represents a shader input or something affecting code generation.
  • Runtime-inspectable; filters have methods to extract parameter names, or parameter names and their docstrings. You can also get the current generated shader source code from each FilterStage by just reading a property.
  • Object-oriented architecture using new-style Python objects. Using inheritance it is possible to create specialized versions of filters (to some degree).
  • Exception-based error handling with descriptive error messages to ease debugging.

Comments would be appreciated :)
Attachments
filterinterface.zip
Work-in progress re-architecting of CommonFilters.
(48.68 KiB) Downloaded 24 times
Last edited by Technologicat on Fri Oct 17, 2014 2:33 am, edited 5 times in total.
Technologicat
 
Posts: 133
Joined: Tue Aug 20, 2013 11:48 pm

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Wed Oct 15, 2014 5:27 pm

Code generation and HalfPixelShift done. Previous post edited to match the new version; the attachment contains the latest code.

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Oct 17, 2014 2:13 am

Success!

The code is now split into modules and it runs! :)

It writes shaders that look like this:

Code: Select all
//Cg
//
//Cg profile arbvp1 arbfp1

// FilterPipeline generated shader for render pass:
//   [LensFocus]
//
// Enabled filters (in this order):
//   StageInitializationFilter
//   BlurSharpenFilter

void vshader( float4 vtx_position : POSITION,
              out float4 l_position : POSITION,
              out float2 l_texcoord : TEXCOORD0,
              uniform float4x4 mat_modelproj )
{
    l_position = mul(mat_modelproj, vtx_position);
    l_texcoord = (vtx_position.xz * float2(0.5, 0.5)) + float2(0.5, 0.5);
}

// initialize pixcolor
float4 initializeFilterStage( uniform sampler2D k_txcolor,
                              float2 l_texcoord,
                              float4 pixcolor )
{
    pixcolor = tex2D(k_txcolor, l_texcoord.xy);
    return pixcolor;
}

// Blur/sharpen blend pass
float4 blurSharpenFilter( uniform sampler2D k_txblur1,
                          float2 l_texcoord,
                          uniform float k_blur_amount,
                          float4 pixcolor )
{
    pixcolor = lerp(tex2D(k_txblur1, l_texcoord.xy), pixcolor, k_blur_amount.x);
    return pixcolor;
}

void fshader( float2 l_texcoord : TEXCOORD0,
              uniform sampler2D k_txcolor,
              uniform sampler2D k_txblur1,
              uniform float k_blur_amount,
              out float4 o_color : COLOR )
{
    float4 pixcolor = float4(0.0, 0.0, 0.0, 0.0);
   
    // initialize pixcolor
    pixcolor = initializeFilterStage( k_txcolor,
                                      l_texcoord,
                                      pixcolor );

    // Blur/sharpen blend pass
    pixcolor = blurSharpenFilter( k_txblur1,
                                  l_texcoord,
                                  k_blur_amount,
                                  pixcolor );

    o_color = pixcolor;
}

The texcoord handler is based on the latest version in CVS, but it now handles texpad and texpix separately (to cover the case where HalfPixelShift is enabled for non-padded textures; in this case the vshader needs texpix but no texpad).

This source code was retrieved from the framework by:

Code: Select all
for stage in mypipeline.stages:
    print stage.shaderSourceCode


In the Panda spirit, you can ls() the FilterPipeline to print a description:

Code: Select all
FilterPipeline instance at 0x7f9c4a655f50: <active>, 1 render pass, 1 filter total
  Scene textures: ['color']
  Render pass 1/1:
    FilterStage instance '[LensFocus]' at 0x7f9c3a3cac50: <2 filters>
      Textures registered to compositing shader: ["blur1 (reg. by ['BlurSharpenFilter'])", "color (reg. by ['StageInitializationFilter'])"]
      Custom inputs registered to compositing shader: ["float k_blur_amount (reg. by ['BlurSharpenFilter'])"]
        StageInitializationFilter instance at 0x7f9c3a3cac90
            isMergeable: None
            sort: -1
            stageName: None
        BlurSharpenFilter instance at 0x7f9c3a3caad0; 2 internal render passes
          Internal textures: ['blur0', 'blur1']
            amount: 0.0
            isMergeable: False
            sort: 0
            stageName: LensFocus

(If it looks like the framework can't count, rest assured it can - the discrepancy in the filter count is because StageInitializationFilter is not a proper filter in the pipeline, but something that is inserted internally at the beginning of each stage. Hence the pipeline sees only one filter, while the stage sees two.)


The legacy API is a drop-in replacement for CommonFilters - the calling code for this test was:

Code: Select all
from CommonFilters190.CommonFilters import CommonFilters

self.filters = CommonFilters(base.win, base.cam)
filterok = self.filters.setBlurSharpen()

(The nonstandard path for the import is because these are experimental files that are not yet in the Panda tree. It will change to the usual "from direct.filter.CommonFilters import CommonFilters" once everything is done - so existing scripts shouldn't even notice that anything has changed.)


Now, I only need to port all the existing filters to this framework, and then I can send it in for review :)

Latest sources attached. There shouldn't be any more upcoming major changes to the framework itself. What will change is that I'll add more Filter modules and update the legacy API (CommonFilters) to support them.
Attachments
CommonFilters190_initial_working_version.zip
Re-architecting CommonFilters for 1.9.0.
(112.14 KiB) Downloaded 26 times

Re: CommonFilters - some new filters, and the future

Postby rdb » Fri Oct 17, 2014 6:00 am

Excellent work! I'll try to find some time for this soon; sorry that I've not been giving it as much attention as it deserves, I've been absolutely swamped. :(
rdb
 
Posts: 10145
Joined: Mon Dec 04, 2006 5:58 am
Location: Netherlands

Re: CommonFilters - some new filters, and the future

Postby Thaumaturge » Fri Oct 17, 2014 12:03 pm

Impressive! I'm no expert in shader-usage, nor with CommonFilters, but at a glance that looks both elegant and useful. ^_^
MWAHAHAHAHA!!!

*ahem*

Sorry.
Thaumaturge
 
Posts: 1366
Joined: Sat Jun 07, 2008 6:34 pm
Location: Cape Town, South Africa

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Fri Oct 17, 2014 12:06 pm

rdb wrote:Excellent work! I'll try to find some time for this soon; sorry that I've not been giving it as much attention as it deserves, I've been absolutely swamped. :(

I suppose that's par for the course when a large new release is coming up :)

In the meantime, I can proceed with porting the filters (both the old and the new ones). I'll post a new version on the weekend.

There are a couple of things on which I'd specifically like comments; since this subsystem is pretty big, it might be easier to spell them out here:

  • Legacy API support for the new filters, and for new options to old filters (such as CartoonInk)? Should the legacy API support them (so that legacy scripts need only minimal changes to enable the new filters), or should they remain exclusive to the new API? From the user's perspective the new API even simplifies the calling code, since it is now possible to change parameter values selectively instead of always passing the full parameter list; but switching over is more work for the user.
  • Should FilterManager be modified to support partial cleanups, or is the current solution (that almost always rebuilds the whole pipeline) actually simpler? I'd like to have a system that rebuilds only the changed parts - hierarchical change tracking is already there, so for most cases this would be simple if only FilterManager allowed it. (But there are exotic cases, e.g. changing VolumetricLighting's "source" parameter, or toggling the depth-enabled flag of the new CartoonInk. These affect which textures are required - and if the scene texture or aux bits requirements change, then it's off to a pipeline rebuild anyway, because FilterManager.renderSceneInto() must be re-applied with the changed options.)
  • I'm trying to keep this as hack-free as possible, but there are some borderline cases. I'd like to prioritize power and ease of use - but if you see something that looks like a hack and have an idea how to achieve the same thing cleanly, I'd like to know :)

By the way, I got rid of the inspection logic in setFilter() - the logic that applies recompiles only when necessary now resides in the property setters, where I think it belongs. FilterPipeline still has a setFilter() that either adds or reconfigures a filter (depending on whether it already exists in the pipeline), but its implementation is now much simpler.
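
As a rough illustration of the idea (hypothetical code; the names FilterProperty, markDirty and updateShaderInput are my inventions for this sketch, not the actual API):

```python
class FilterProperty(object):
    """Hypothetical descriptor: assigning to a filter property either
    updates a shader input directly, or marks the filter dirty so that
    the pipeline knows to regenerate and recompile the shader."""
    def __init__(self, name, affects_codegen=False):
        self.param = name
        self.attr = "_" + name
        self.affects_codegen = affects_codegen
    def __get__(self, obj, objtype=None):
        if obj is None:
            return self
        return getattr(obj, self.attr)
    def __set__(self, obj, value):
        old = getattr(obj, self.attr, None)
        setattr(obj, self.attr, value)
        if value == old:
            return  # no change; avoid needless work
        if self.affects_codegen:
            obj.markDirty()                           # recompile needed
        else:
            obj.updateShaderInput(self.param, value)  # cheap update

class ExampleFilter(object):
    # "amount" maps to a shader input; "mode" affects code generation
    amount = FilterProperty("amount")
    mode = FilterProperty("mode", affects_codegen=True)
    def __init__(self):
        self._amount, self._mode = 0.0, "default"
        self.dirty, self.inputs = 0, {}
    def markDirty(self):
        self.dirty += 1
    def updateShaderInput(self, name, value):
        self.inputs[name] = value
```

(Descriptors like this are one reason the framework requires new-style classes.)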

Ah, and for now, only one instance per filter type is supported in any given pipeline instance - supporting multiple instances of the same filter in the same pipeline is a bit tricky (details in the code comments and README; in short, this requires some kind of name mangling).

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Sat Oct 18, 2014 3:21 am

Thaumaturge wrote:Impressive! I'm no expert in shader-usage, nor with CommonFilters, but at a glance that looks both elegant and useful. ^_^

Thanks :)

I think the idea of the CommonFilters system was very good - having certain postprocessing operations available that can be simply switched on and configured, in any combination. I find it similar in spirit to the main shader generator: 99% of the time, there is no need to write custom shaders.

The aim of the new FilterPipeline framework is something similar for postprocessing filters. What it adds to the postprocessing system is maintainability (keeping code complexity in check as more filters are added) and extensibility (so that people in the community can write their own filters that plug in to the pipeline). Also, the automatic render pass generator significantly improves how well the different filters play together.

From a user perspective, the interesting part will be new filters. As the first step, I'll be adding the ones I've already coded, i.e. desaturation, scanlines, and lens distortion.

I've also been eyeing ninth's SSLR (local reflection) implementation, which is very cool (and he's ok with including it in Panda). It may require changes to other parts of Panda to supply a fullscreen gloss map texture to the postprocessing subsystem, but it should be possible to do. We already have SSAO, so SSLR would be a nice addition to support more high-end visual effects out of the box :)

I also bumped into an independent implementation of an early FXAA (Fast approximate antialiasing) version that was "feel free to use in your own projects" ( http://horde3d.org/wiki/index.php5?titl ... que_-_FXAA ), so I think I'll be adding that, too. The fshader is just a screenful of code, so it's very simple. It would be a nice alternative for smoothing both light/dark transitions and object edges in cartoon-shaded scenes, as FXAA is basically an intelligent anisotropic blur filter. It may also be useful as a very cheap filter for general fullscreen antialiasing on low-end hardware.

There's also SMAA (Enhanced subpixel morphological antialiasing), which is available under a free license, but its implementation is much more complex, and I haven't yet investigated whether it's possible to integrate into the new pipeline. See http://www.iryoku.com/smaa and https://github.com/iryoku/smaa

I'd also very much like to add a depth-of-field (DoF) filter. While no perfect solution is possible using current hardware, the algorithm explained in GPU Gems 3 is pretty good for a relatively cheap realtime filter. See the article, which also contains a nice overview of possible techniques and references to papers discussing them: http://http.developer.nvidia.com/GPUGem ... _ch28.html

Then, there is something that could be improved in the existing filters - for example, I think blur would look more natural using a Gaussian kernel. Also, it doesn't yet use the hardware's linear interpolation to save GPU cycles, so the current implementation is not optimally efficient. This was covered in a link posted earlier, http://rastergrid.com/blog/2010/09/effi ... -sampling/

And further, the blur kernel size could be made configurable, to adjust the radius of the effect. For a small blur, it is possible to sample 17 pixels using just five texture lookups in a single pass - and that's including the center pixel. A diagram of the stencil can be found in the DoF article linked above, in subsection 28.5.2 - it's pretty obvious after you've seen it once.
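
The offset/weight folding behind that linear-interpolation trick is easy to precompute on the Python side. A sketch (my own illustration of the technique from the rastergrid article, not code from the patch), assuming a symmetric kernel given as the centre weight followed by the one-sided tap weights:

```python
def fold_for_linear_sampling(weights):
    """Fold a discrete symmetric blur kernel (centre weight first, then
    the one-sided tap weights) into offsets/weights for hardware linear
    interpolation: each pair of adjacent discrete taps becomes a single
    texture lookup at a fractional offset between them."""
    offsets = [0.0]
    folded = [weights[0]]
    for i in range(1, len(weights) - 1, 2):
        w = weights[i] + weights[i + 1]
        # weighted average of the two integer tap positions
        offsets.append((i * weights[i] + (i + 1) * weights[i + 1]) / w)
        folded.append(w)
    return offsets, folded

# the 9-tap Gaussian from the rastergrid article collapses to 5 lookups
discrete = [0.2270270270, 0.1945945946, 0.1216216216,
            0.0540540541, 0.0162162162]
offsets, folded = fold_for_linear_sampling(discrete)
```

The fshader then samples at +/- each fractional offset (scaled by texpix) and multiplies by the folded weight, halving the number of texture lookups.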

So, there's still a lot of work to do :)

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Sat Oct 18, 2014 7:36 pm

Added most of the existing filters and fixed some bugs: notably the buffer creation order, and a corner case in the logical stage merging logic (the old version failed if all stages were non-mergeable). Bloom and AmbientOcclusion turned out to need some new features, too (the "paramproc" argument in Filter.makeRTParam(), and "internalOnly" scene textures, respectively).

New version attached.

Hopefully, I'll get CartoonInk (the old one) and VolumetricLighting done tomorrow; then this is ready for adding in the new filters. (Though I do need to comb through all the comments and docstrings to make sure everything is up to date.)
Attachments
CommonFilters190_more_filters_and_bugfixes.zip
Re-architecting CommonFilters for 1.9.0.
(136 KiB) Downloaded 27 times

Re: CommonFilters - some new filters, and the future

Postby redpanda » Sun Oct 19, 2014 2:39 am

Just wanted to post to thank you for doing this.
redpanda
 
Posts: 430
Joined: Wed Aug 03, 2011 6:34 am

Re: CommonFilters - some new filters, and the future

Postby Technologicat » Sun Oct 19, 2014 4:36 pm

Technologicat wrote:Hopefully, I'll get CartoonInk (the old one) and VolumetricLighting done tomorrow; then this is ready for adding in the new filters. (Though I do need to comb through all the comments and docstrings to make sure everything is up to date.)

Well, those filters are not in yet, but I did make the code generator a bit cleaner, eliminating some corner cases where unneeded variables were passed to the filter functions. (E.g. ScanlinesFilter needs only texpix for the colour texture, not the colour texture itself, beyond the current pixel already provided in the "pixcolor" variable; and StageInitializationFilter does not need the "pixcolor" argument, because it is the one that initializes pixcolor.)

Also, now the code handling the registerInputTexture() metadata should be easier to read. Updated version attached.


Finally, one question, mainly to rdb - how is the "source" parameter in VolumetricLighting (in CVS) supposed to work? I'm tempted to leave that parameter out, since if I understand the code in CVS correctly, its current implementation will only work if the referred texture happens to be one of those already passed to the compositing fshader for other reasons.

Also, related to this, look at line 170 of CommonFilters.py in CVS - I think it should be needtex.add(...), like the others, instead of needtex[...] = True?
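
For reference, the distinction matters because needtex is a set there, and dict-style assignment on a set raises a TypeError rather than adding the element:

```python
needtex = set(["color"])
needtex.add("depth")          # correct idiom for a set
try:
    needtex["aux"] = True     # dict-style assignment; fails on a set
except TypeError:
    pass                      # 'set' object does not support item assignment
```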

Providing a general mechanism for supplying any user-given texture to the compositing fshader is something I haven't even considered yet, since none of the other filters have happened to need that. Might be possible to do by adding more options to registerInputTexture(), if needed.

But I'm not sure if doing that would solve volumetric lighting. If I've understood correctly based on my reading of GPU Gems 3, there are only two ways to produce correct results from this algorithm - a) use a stencil buffer; or b) render a black copy of the scene (with only the light source stand-in sprites rendered bright), invoke VolumetricLighting on that, render another copy of the scene normally, and finally blend those additively to get the final result.

Input on this particular issue would be appreciated.
Attachments
CommonFilters190_codegen_cleanup.zip
Re-architecting CommonFilters for 1.9.0.
(140.1 KiB) Downloaded 33 times
