CommonFilters - some new filters, and the future

I think more clearly when actually coding :stuck_out_tongue:

Thanks for the comments!

Ah, this indeed sounds more extensible. Let’s do that.

Yes, why not.

The other day, I was actually thinking that SSLR will need gloss map support from the main render stage, and this information needs to be somehow rendered from the material properties into a fullscreen texture… so, a general mechanism sounds good :slight_smile:

In this initial design, yes and yes, but the idea is that it is easy to add more (when coding new filters) if needed.

I’m not completely satisfied by this solution, but I haven’t yet figured out a better alternative which does not involve unnecessary bureaucracy at call time.

In short, the stage concept is a general solution to the problem of blur erasing the output of other postprocessing filters that are applied before it.

Observe that the simplest solution of applying blur first does not do what is desired, because then the scene itself will be blurred, but all postprocessing (e.g. cartoon ink) will remain sharp.

The expected result is that blur should apply to pretty much everything rendered before lens imperfections (or alternatively, to pretty much everything except scanlines, if blur is interpreted as a computer-based postprocess).
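To make the intended ordering concrete, here is one possible stage layout (the stage grouping and filter names are purely illustrative, not the actual API):

```python
# Purely illustrative stage layout, in rendering order:
stages = [
    ["CartoonInk", "Bloom"],      # postprocessing on the raw scene
    ["Blur"],                     # must see the finished output of the above
    ["Vignetting", "Scanlines"],  # lens/display imperfections, stay sharp
]
```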

As for the why and how:

As you know, a fragment shader is basically an embarrassingly parallel computation kernel, i.e. it must run independently for each pixel (technically, fragment). All the threads get the same input texture, and they cannot communicate with each other while computing. The only way to pass information between pixels is to split the computation into several render passes, with each pass rendering the information to be communicated into an intermediate texture, which is then used as input in the next pass.
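For reference, this is what the multi-pass pattern looks like with Panda3D's FilterManager (inside a running ShowBase app; the shader filenames are placeholders for whatever each pass computes):

```python
# Two render passes communicating through an intermediate texture.
from direct.filter.FilterManager import FilterManager
from panda3d.core import Texture

manager = FilterManager(base.win, base.cam)
scene_tex = Texture()   # the rendered scene
inter_tex = Texture()   # information passed from pass 1 to pass 2

# Render the scene into scene_tex; final_quad will show the end result.
final_quad = manager.renderSceneInto(colortex=scene_tex)

# Pass 1: reads the scene, writes its result into inter_tex.
inter_quad = manager.renderQuadInto(colortex=inter_tex)
inter_quad.setShader(loader.loadShader("pass1.sha"))
inter_quad.setShaderInput("tex", scene_tex)

# Pass 2: reads inter_tex (now fully computed) and renders to the screen.
final_quad.setShader(loader.loadShader("pass2.sha"))
final_quad.setShaderInput("tex", inter_tex)
```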

The problem is that with such a strictly local approach, some algorithms are inherently unable to play along with others - they absolutely require up-to-date information from the neighbouring pixels as well.

Blur is a prime example. Blurring requires access to the colours of the neighbouring pixels as well as the pixel being processed, and this colour information must be fully up to date to avoid erasing the output of the other postprocessing algorithms being applied.
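As a concrete illustration, here is a 3x3 box blur sketched in numpy (the real filter of course runs in a shader). Note that it reads only the input image - which must already contain all previous filters' output - and writes every result into a separate output buffer, exactly the two-texture pattern described above:

```python
import numpy as np

def box_blur(img):
    """3x3 box blur; img is a (h, w) or (h, w, channels) float array."""
    out = np.zeros_like(img)
    h, w = img.shape[0], img.shape[1]
    for y in range(h):
        for x in range(w):
            y0, y1 = max(y - 1, 0), min(y + 2, h)
            x0, x1 = max(x - 1, 0), min(x + 2, w)
            # Each output pixel depends on a *neighbourhood* of the input,
            # so the input must be the finished result of the previous pass.
            out[y, x] = img[y0:y1, x0:x1].mean(axis=(0, 1))
    return out
```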

I’m not mathematically certain that blur is the only filter that needs this, and in any case, several postprocessing algorithms (for example, the approximate depth-of-field postprocess described in http.developer.nvidia.com/GPUGem … _ch28.html) require blurring as a component anyway. Thus, a general solution seems appropriate.

The property that determines whether another stage is needed is the following: if a filter needs to access its input texture at locations other than the pixel being rendered, and it must preserve the output of previous postprocessing operations also at those locations, then it needs a new stage. This sounds a lot like blur, but dealing with mathematics has taught me to remain cautious about making such statements :slight_smile:

(For example, it could be that some algorithm needs to read the colour texture at the neighbouring pixels just to make decisions, instead of blurring that colour information into the current pixel.)
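In code, each filter could simply declare this requirement itself; the attribute name here is hypothetical:

```python
class BlurFilter(Filter):
    # Samples the colour texture at neighbouring pixels, and those samples
    # must already contain the previous filters' output.
    needs_own_stage = True

class VignettingFilter(Filter):
    # Reads and writes only the pixel being rendered, so it can share
    # a stage with other filters.
    needs_own_stage = False
```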

One more note about stages - I’m thinking of adding automatic stage consolidation, i.e. the pipeline would only create as many stages as are absolutely needed. For example, if blur is not enabled, there is usually no reason for the post-blur filters to have their own stage.
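A sketch of what that consolidation could look like, reusing the hypothetical needs_own_stage flag from above:

```python
def build_stages(filters):
    """Group an ordered list of filters into render passes (stages).

    A filter that must see the finished output of everything before it
    (needs_own_stage) begins a new pass; disabled filters are skipped,
    so no pass is ever created on their account.  Illustrative only.
    """
    stages = []
    for f in filters:
        if not f.enabled:
            continue
        if not stages or f.needs_own_stage:
            stages.append([])   # start a new render pass
        stages[-1].append(f)
    return stages
```

With this, disabling blur automatically folds the post-blur filters into the preceding pass.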

More about this later.

Ok. May be cleaner.

On this note, I’ve played around with the idea of making the filter parameters into Python properties. This would have a couple of advantages.

First, we can get rid of boilerplate argument-reading code in the derived classes. The `Filter` base class constructor can automatically populate any properties (that are defined in the derived class) from kwargs, and raise an exception if the user tries to set a parameter that does not exist for that filter (preventing typos). This requires only the standard Python convention that the derived class calls `super(MyFilter, self).__init__(**kwargs)` in its `__init__`.

Secondly, as a bonus, this allows for automatically extracting parameter names - by simply runtime-inspecting the available properties - and human-readable descriptions (from the property getter docstrings).
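A minimal sketch of both points, with illustrative names (not the actual CommonFilters code):

```python
class Filter(object):
    def __init__(self, **kwargs):
        for name, value in kwargs.items():
            # Accept only names defined as properties on the derived
            # class; anything else is most likely a typo.
            if not isinstance(getattr(type(self), name, None), property):
                raise TypeError("%s has no parameter '%s'"
                                % (type(self).__name__, name))
            setattr(self, name, value)

    @classmethod
    def parameters(cls):
        """Map parameter names to their docstring descriptions."""
        return dict((name, getattr(cls, name).fget.__doc__)
                    for name in dir(cls)
                    if isinstance(getattr(cls, name), property))

class BlurFilter(Filter):
    def __init__(self, **kwargs):
        self._amount = 1.0                    # default, before kwargs
        super(BlurFilter, self).__init__(**kwargs)

    @property
    def amount(self):
        """Blur strength, from 0.0 (sharp) to 1.0 (fully blurred)."""
        return self._amount

    @amount.setter
    def amount(self, value):
        self._amount = value
```

With this, BlurFilter(amount=0.5) works as expected, BlurFilter(amont=0.5) raises immediately instead of being silently ignored, and BlurFilter.parameters() returns the name-to-description mapping.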

That sounds good. Let’s do that.

Maybe the stage should be user-changeable, too. (Referring here to the fact that for some filters (e.g. blur), the interpretation of what the filter is trying to simulate affects which stage it should go into.)

Ok.

The only purpose here was to support the old API, which has monolithic setThisAndThatFilter() methods that are supposed to update the current configuration.

If this can be done in some smarter way, then I’m all for eliminating the strange inspection logic :slight_smile:

Ok. Personally I’m pretty particular about Python 2.x (because of line_profiler, which is essential for optimizing scientific computing code), but I agree that Panda shouldn’t be. :slight_smile:

I’ll change this to use the forward-compatible approach.

Maybe. This way, it could simply add a backward-compatible API on top of FilterPipeline, while all of the functionality of the new FilterPipeline API would remain directly accessible. That sounds nice.
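For instance, the old class could reduce to a thin delegating layer; everything below except the legacy setCartoonInk() signature is an assumption about the new API:

```python
class CommonFilters(object):
    """The old monolithic API as a thin layer over the new pipeline."""

    def __init__(self, win, cam):
        # FilterPipeline and its constructor signature are assumed here.
        self.pipeline = FilterPipeline(win, cam)

    def setCartoonInk(self, separation=1):
        # Each legacy setXxx() just configures the corresponding
        # new-style filter object (addFilter/removeFilter are hypothetical).
        self.pipeline.addFilter(CartoonInkFilter(separation=separation))

    def delCartoonInk(self):
        self.pipeline.removeFilter(CartoonInkFilter)
```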

I’ll have to think about this part in some more detail.