CommonFilters - some new filters, and the future

Ah, this one I wasn’t aware of.

This filter looks very useful as a source of ideas. The blobs generated by bright light sources look especially attractive. The author of that filter makes a good point that the blobs are what make bokeh look distinctive - without them, it looks just like a regular blur. So maybe we should queue up a bloom preprocessor for DoF, too :slight_smile:

The code seems surprisingly simple. There must be some corner cases that the algorithm can’t handle correctly…?

Hmm. Looking at the first example render, there seems to be a slight “fringe” on top of the head, and on the “balcony” (or whatever it is - it’s out of focus :slight_smile: ) on the left. At the edge of the “balcony”, the yellow color bleeds where it shouldn’t, and at the top of the head, the blurring stops about two pixels before it reaches the head.

If I read the code correctly, it seems the blur size is approximated using the CoC (circle of confusion) at the fragment being rendered (which is easily available in a shader), instead of spreading each fragment by its own CoC (which would be physically correct). This might explain the artifacts. Bleeding of out-of-focus background objects onto in-focus midground objects (like at the balcony in the picture) is a typical issue of many realtime DoF algorithms.
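To illustrate what I mean - this is not the filter’s actual code, just a plain-Python 1D sketch with made-up pixel data - here is a gather blur whose kernel radius comes from the destination fragment’s CoC:

```python
# 1D sketch of a gather blur whose radius is taken from the
# DESTINATION pixel's CoC - the approximation discussed above.
def gather_blur(color, coc):
    n = len(color)
    out = []
    for i in range(n):
        r = coc[i]  # blur radius from the pixel being rendered
        lo, hi = max(0, i - r), min(n - 1, i + r)
        window = color[lo:hi + 1]
        out.append(sum(window) / len(window))
    return out

# A bright out-of-focus area (coc = 2) next to a dark in-focus object
# (coc = 0): the in-focus pixels never gather from the blurry side, so
# the blur stops abruptly at the boundary instead of spreading over it.
color = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
coc   = [2,   2,   2,   0,   0,   0]
print(gather_blur(color, coc))  # the coc = 0 pixels stay exactly 0.0
```

Note how the in-focus pixels are left completely untouched - consistent with blurring that “stops short” at a sharp edge.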

Of course, classically it is thought that performing the CoC-based spreading correctly would require a scatter type of computation, instead of a gather, which is what shaders do. To some extent, it is possible to emulate scatter as gather, but this is typically very inefficient.
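For what it’s worth, the usual trick for emulating the scatter as a gather (again just a sketch with made-up data, and a simplified acceptance test) is to have each destination pixel scan out to the maximum possible radius, and accept only those samples whose source CoC actually reaches this far:

```python
def scatter_as_gather(color, coc, max_radius):
    # Each destination pixel must scan max_radius neighbors regardless
    # of the local CoC - this is why the emulation is so inefficient.
    n = len(color)
    out = []
    for i in range(n):
        total, weight = 0.0, 0.0
        for j in range(max(0, i - max_radius), min(n - 1, i + max_radius) + 1):
            if abs(i - j) <= coc[j]:  # would SOURCE pixel j spread onto i?
                total += color[j]
                weight += 1.0
        out.append(total / weight if weight else color[i])
    return out

# Same data as in the previous sketch: now the bright out-of-focus
# pixels DO spread onto the in-focus object, because acceptance is
# decided by the source pixel's CoC, not the destination's.
print(scatter_as_gather([1.0, 1.0, 1.0, 0.0, 0.0, 0.0],
                        [2, 2, 2, 0, 0, 0], max_radius=2))
```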

[size=70]Scientific computing terminology. Roughly speaking, a scatter computation answers the question “where does this data go?” (with possibly multiple target locations updated by one data item), and a gather answers “what data goes here?” (from possibly multiple source locations). From a parallel computing viewpoint, scatter is a disaster, because it requires write locking to ensure data integrity (so that all updates to the same data item are recorded correctly).

(It is well known that as the number of tasks increases, locking of data structures quickly becomes a bottleneck. For proper scalability, lock-free approaches are required.)

Gather is efficient, because with the additional rule that the computation kernel (shader) cannot modify its input, read locking is not needed (no race conditions). Because each gather task writes only to its own target data item, write synchronization is not required, either, and all the gather tasks can proceed in parallel.

This, I think, is the underlying reason for using the gather model for shaders, aside from the other useful property that if one wants to exactly fill some pixels, it is best to approach the problem from the viewpoint of “what goes into this pixel?” (rather than “where does this data go?”, hoping that the data set happens to hit all the pixels).[/size]
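To make the terminology concrete, a toy contrast between the two access patterns (plain Python, made-up data):

```python
data = [10, 20, 30]
targets = [0, 0, 1]  # scatter mapping: where each data item goes

# Scatter: each task writes to a data-dependent location. Run in
# parallel, the two writes to slot 0 would race without a lock.
out_scatter = [0, 0]
for value, t in zip(data, targets):
    out_scatter[t] += value

# Gather: each output slot reads whatever maps to it, and writes only
# to itself, so all slots can be computed independently in parallel.
out_gather = [sum(v for v, t in zip(data, targets) if t == i)
              for i in range(len(out_scatter))]

assert out_scatter == out_gather == [30, 30]
```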

The main new idea in the approach of Kass et al. is precisely that they recast the CoC scatter problem in a new light (so to speak). The diffusion equation models the spreading of heat in a continuous medium. The heat conductivity coefficient (which may be a function of space coordinates) represents the local diffusivity at each point - which is a lot like the local CoC for each fragment.

The unknown quantity to be computed is the temperature field - or in this case, the pixel color (independently for R, G and B channels). Solving the diffusion equation exploits the physics/mathematics of heat diffusion to perform a scatter computation, while requiring only gather operations.
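To sketch the idea - this is not Kass et al.’s GPU implementation (they alternate implicit solves over rows and columns), just a plain-Python 1D toy with made-up data - one implicit diffusion step along a row of pixels reduces to a tridiagonal solve, and building each matrix row only reads the neighboring CoC values and colors, i.e. it is a pure gather:

```python
def thomas(a, b, c, d):
    """Solve a tridiagonal system (a: sub-, b: main-, c: super-diagonal)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def diffuse_row(color, coc, dt=1.0):
    """One implicit heat-diffusion step along a row of pixels, with the
    per-pixel CoC acting as the local heat conductivity. Where coc = 0
    (in focus), no color diffuses in or out."""
    n = len(color)
    a, b, c = [0.0] * n, [0.0] * n, [0.0] * n
    for i in range(n):
        # conductivities on the links to the left/right neighbors
        kl = 0.5 * (coc[i] + coc[i - 1]) if i > 0 else 0.0
        kr = 0.5 * (coc[i] + coc[i + 1]) if i < n - 1 else 0.0
        a[i], c[i] = -dt * kl, -dt * kr
        b[i] = 1.0 + dt * (kl + kr)
    return thomas(a, b, c, list(color))

# Blurry pixels on the left, an isolated in-focus pixel on the right:
# total "heat" (color) is conserved, and the in-focus pixel is untouched.
out = diffuse_row([1.0, 0.0, 0.0, 1.0], [1.0, 1.0, 0.0, 0.0])
```

A nice property of this toy scheme is that the matrix rows and columns each sum to one, so the total color along the row is conserved exactly - no energy is invented or lost by the blur.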

There have been other approaches to solve this, but at least according to Kass et al., there have always been limitations, either in computational efficiency, or in the algorithm’s ability to perform variable-width blur. (Another useful look at the history of different approaches to realtime DoF is given in the GPU Gems article I linked in a previous post.)

(The other problem in DoF is the translucency of thin objects in the out-of-focus foreground, which requires an extra camera.)

In conclusion, thanks for the link! I’ll see what I can do :slight_smile: