Three not-so-cute Anti-Aliasing Tricks

Anti-aliasing is an absolute must for any modern real-time 3D application. In particular, applications featuring lots of extremely thin objects such as ropes or pylons, which may easily shrink to barely one pixel in at least one direction when seen from a certain distance, will suffer from jagged edges, missing pixels and erratic edge flickering when proper anti-aliasing is lacking.

When multisample anti-aliasing first came up in real-time computer graphics, it was introduced into the major graphics APIs with solely the fixed-function pipeline in mind. With all objects being rendered straight to the back buffer before being blitted onto the screen, this new functionality worked splendidly: building multisample anti-aliasing into your own application was a matter of setting a bunch of states and flags. However, with the advent of shaders, when developers started to push rendered pixels back into the pipeline again and again, it also became the source of an entirely new set of problems.

The main failing of multisample anti-aliasing, as originally implemented and thus still present on piles of (DirectX-9-level) consumer hardware today, is the profound lack of access to information that is actually there, but unfortunately resides locked up on the GPU. Modern rendering techniques require access to this information, yet today's iterations of common graphics APIs are only slowly catching up with this need.

In fact, the only way to access multi-sampled pixel data on most current-generation hardware is by resolving multi-sampled render buffers into non-multi-sampled ones, thereby not only destroying valuable information on the origin of the data, but also generating new and entirely wrong data. Resolving means blending all the valuable sub-pixel data, the collection of which consumed both priceless time and memory, back into one data set per pixel, resulting in linearly interpolated values effectively matching none of the multiple polygons originally taking part in edge pixels. Depth and normal data in particular, where pixel colors do not represent screen colors but rather depths or vectors, suffer greatly. Quite often, their resolving yields results that not only destroy the effect of anti-aliasing, but also break the final image with unpredictable ugly artifacts on every other polygon edge.
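To illustrate in a minimal sketch (with made-up depth values) what resolving does to the depth of an edge pixel:

    // Hypothetical 4x multi-sampled edge pixel: two sub-samples on a
    // foreground object at depth 0.2, two on the background at depth 0.9.
    float4 vSubSampleDepths = float4(0.2f, 0.2f, 0.9f, 0.9f);
    // Resolving averages all sub-samples into a single per-pixel value.
    float fResolvedDepth = dot(vSubSampleDepths, float4(0.25f, 0.25f, 0.25f, 0.25f));
    // fResolvedDepth == 0.55: a depth at which neither surface exists, yet
    // it is all that any subsequent depth-based effect gets to see.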

The uneasy truth is that there simply is no general solution to overcome all of these problems on this kind of hardware. Innumerable modern rendering techniques and effects rely on correct screen-space depth and similar data, with each and every one of them calling for a specialized solution to overcome the failings of anti-aliasing. Fortunately, most of these effects can be grouped into categories sharing similar solutions, some of which I am going to outline in this article.

Depth-based Post-Processing Effects

This category includes effects such as depth of field, screen-space ambient occlusion, haze / fog and god rays. The first step to fixing the anti-aliasing problems in your particular effect is of course to make sure that you actually suffer from this kind of problem. Some very basic effects such as god rays and fog are likely to handle resolved depth values quite well, depending on your particular implementation as well as the effect's intensity. Subtle haze may not care whether a pixel on a resolved polygon edge actually belongs to the polygon in the front or in the back; the resolved depth will lie somewhere in between and might still yield fully acceptable results. Even more sophisticated effects such as screen-space ambient occlusion may, to a degree, work well without your tampering.

The second step, if you have found that you do indeed suffer from anti-aliasing artifacts (most easily determined by simply turning off anti-aliasing and comparing), is to gain an understanding of which part of your effect actually causes these problems. For example, let's take a closer look at how to fix an otherwise working depth of field effect with respect to anti-aliasing.

Fixing Depth of Field with respect to Anti-Aliasing

The problem with depth of field in conjunction with anti-aliasing is that the computation of the blur intensity of a given pixel most likely does not scale linearly with the pixel's depth value. This may not be instantly apparent, as the formula may very well look something like intensity = depth * dof_scale, yet in the end some kind of clamping is bound to occur somewhere, breaking linearity (if only by a limit to the blur radius). Inside bounds of approximate linearity, depth of field and anti-aliasing probably won't look too bad; however, as soon as you start focusing nearby objects on a remote background, anti-aliasing will not only vanish, but also generate ugly halos around the objects in focus.
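To make the role of clamping concrete, here is a minimal sketch of such a blur intensity function, in the shape of the computeBlurIntensity helper used in the listing further down; g_fFocalDepth and g_fDoFScale are illustrative parameters, not taken from the original implementation:

    // Hypothetical depth-of-field parameters (illustrative only):
    float g_fFocalDepth; // depth of the focal plane
    float g_fDoFScale;   // how fast blur grows with distance to the focal plane

    float2 computeBlurIntensity(float2 fDepths)
    {
        // Linear in the distance to the focal plane ...
        float2 fCoC = abs(fDepths - g_fFocalDepth) * g_fDoFScale;
        // ... but clamped to the maximum blur radius, breaking linearity.
        return saturate(fCoC);
    }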

So what actually happens is that when the multi-sampled depth buffer is resolved, the depth of the remote background and the depth of the objects in focus are averaged into one pixel value, weighted by their sub-pixel coverage. For edge pixels, this means that the resulting depth will lie somewhere in between the remote background and the objects in focus, and as the background is quite remote, this depth value will still be fairly remote. Therefore, during the depth of field pass, anti-aliased edge pixels will falsely be recognized as remote pixels lying way behind the focal plane, and their blurring will see to it that these already averaged (for anti-aliasing) pixels are averaged yet again, incorporating mostly background pixels, thus effectively revoking anti-aliasing.
At the same time, background pixels lying near objects in focus will falsely recognize anti-aliased edge pixels nearby in screen space as not-so-far away in depth-of-field space (both are quite remote and likely to be clamped to the same absolute-out-of-focus maximum blur as stated above). With these neighboring foreground edge pixels being weighted as out-of-focus background pixels, a halo occurs, surrounding the entire background-foreground edge of objects in focus.


I.1: Without anti-aliasing fix (FSAA turned on)

I.2: Anti-aliasing fix (FSAA turned on)

This understanding of the two problems as outlined above yields a rather simple solution: recognize wrong edge pixel depth values and correct them, taking into account their direct neighbor pixels. The trouble is that any pixel shader is only run once per pixel, so fixing the depth value effectively means deciding on one depth value and its corresponding blur intensity, either the one in the front or the one in the back. Bearing in mind what was already said, it is only logical that this depth value be the one closest to the focal plane, the aim being to recover anti-aliasing for objects in focus. In this way, accidental blurring of edge pixels is impossible, and the same goes for accidental incorporation of edge pixels into the blurred background.

The difficulty here is to restore the frontmost depth value from a given resolved anti-aliased edge pixel. As observed in the beginning, it is basically impossible to restore sub-pixel information once a render target is resolved, therefore there is no way of retrieving the actual frontmost depth value that contributed to the final blended pixel value. In the end, a simple but effective solution is to take the minimum of each pixel's depth and the depths of its four direct neighbors on both main axes, using this "corrected" value to bias a given pixel into focus, depending on the blur intensity resulting from this additional value.

    // Depth of the current pixel.
    float fDepth = tex2D(depthSampler, input.TexCoord).x;

    // Depths of the four direct neighbors on both main axes
    // (vNeighborTexCoords1/2 hold precomputed neighbor coordinates).
    float4 fNeighborDepths = float4(
        tex2D(depthSampler, vNeighborTexCoords1.zy).x,
        tex2D(depthSampler, vNeighborTexCoords1.xw).x,
        tex2D(depthSampler, vNeighborTexCoords2.zy).x,
        tex2D(depthSampler, vNeighborTexCoords2.xw).x );
    float2 fMinNeighborDepth2 = min(fNeighborDepths.xy, fNeighborDepths.zw);
    float fMinNeighborDepth = min(fMinNeighborDepth2.x, fMinNeighborDepth2.y);

    // Actual depth and "corrected" (frontmost) depth of the current pixel.
    float2 fDepth_MinDepth = float2(fDepth, min(fDepth, fMinNeighborDepth));

    // Blur intensities for both depths; bias the pixel towards focus
    // depending on how much in focus the corrected depth is.
    float2 fIntensity2 = computeBlurIntensity(fDepth_MinDepth);
    float fIntensity = lerp(fIntensity2.y, fIntensity2.x, fIntensity2.y);
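Note how the final lerp confines the correction to where it is needed: if even the corrected depth is fully out of focus (fIntensity2.y close to one), the pixel simply keeps its original blur intensity, whereas near the focal plane the biased-into-focus intensity takes over.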

This fixed intensity value is far from accurate; worse, it even influences pixels that are in no way edge pixels and would not suffer from any anti-aliasing artifacts at all. The fix is a compromise between accuracy and directed inaccuracy to make up for the profound lack of accurate information, and it might even give rise to other very subtle artifacts. Yet, especially when in motion, the resulting image quality is noticeably better than without the fix, making it a viable improvement.


I.3: Without anti-aliasing fix (FSAA turned on)

I.4: Very subtle artifacts resulting from the fix (FSAA turned on)


I.5: Without anti-aliasing fix

I.6: Anti-aliasing fix

Fixing artifacts caused by anti-aliasing by relating pixels to their direct neighbors is quite a common solution that, with a proper understanding of the problem you are facing, will help you improve the quality of your post-processing effects in most if not all cases.

Indexing Inferred full-screen Textures

This category relates to anything that is generated in a fashion that complies with what has become known as inferred rendering techniques. At first glance, the situation appears to be identical to the one examined in the previous section, and indeed, processing the so-called G-Buffer (that is, depth, normal and supplementary geometry data) bears the same problems discussed earlier. However, closer consideration reveals that the situation is actually quite different, to our advantage. The big upside of inferred rendering techniques as opposed to deferred rendering techniques is that we actually can determine the exact origin of at least one of the samples contributing to a multi-sampled pixel. This difference stems from the fact that after lighting / shadowing / processing your scene in an inferred fashion, the results are again mapped onto the real anti-aliased geometry. This provides the one big opportunity of comparing the actual sample value of the polygon currently rendered with the resolved pixel value in the G-Buffer, and thereafter taking pointed measures to adapt inferred texture indexing if and only if the pixel currently rendered is an edge pixel.


I.7: Without anti-aliasing fix (FSAA turned on)

I.8: Anti-aliasing fix (FSAA turned on)

So the idea here is to identify polygon edge pixels and to shift inferred texture look-up coordinates away from the edge towards the polygon center for these pixels only. Again, the resulting pixels will not be completely accurate; however, these measures come with the advantage that there may not even be a need for fixing the generation step of inferred textures indexed this way at all, as incorrect edge pixels will in fact never be indexed. Besides, this fix will indeed only ever affect edge pixels, thus overall image quality will generally not suffer; in the worst case, edge pixels just won't be cured.

    // Project the interpolated screen-space position into [0, 1] look-up coordinates.
    float2 vInferredTexCoord = input.ScreenSpacePos.xy / input.ScreenSpacePos.w;
    // Resolved G-Buffer depth only differs from the current polygon's depth on edge pixels.
    float fAntiAliasingDelta = abs( tex2D(depthSampler, vInferredTexCoord).x - input.ScreenSpacePos.z );
    // The delta's screen-space gradient points towards the edge; step one pixel the opposite way.
    float2 fAntiAliasingDeltaDD = float2(ddx(fAntiAliasingDelta), ddy(fAntiAliasingDelta));
    float2 vAntiAliasingTexOffset = sign(-fAntiAliasingDeltaDD) / g_fResolution;
    // Apply the shift only where a noticeable delta marks an actual edge pixel.
    vInferredTexCoord += vAntiAliasingTexOffset * saturate(4.f * fAntiAliasingDelta);
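The derivative trick works because ddx / ddy compare the depth delta between horizontally and vertically neighboring pixels: the delta grows towards the resolved edge and vanishes towards the polygon interior, so negating the sign of the gradient yields a one-pixel offset pointing away from the edge.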

As with the post-processing fix, this solution will certainly not fix all of your anti-aliased edges, but falsely lit / shaded pixels will be greatly reduced, improving the overall image quality and reducing flickering when in motion.


I.9: Without anti-aliasing fix

I.10: Anti-aliasing fix

Depth-sensitive integrational Effects

This category includes depth-sensitive blur, bilateral upsampling and similar effects that are often required to correctly blur or de-noise shadow / AO / indirect lighting data and the like. The trouble with these effects is that they spread the false pixel data resulting from resolved anti-aliased pixels beyond the tiny space occupied by such pixels, onto their direct or even indirect neighbors.

There are several possibilities to overcome this problem. The first and simplest solution is to perform some simple kind of edge detection, measuring depth deltas between pixels and providing a threshold to eliminate pixels lying beyond a certain range around the sampling center, as sketched below. In the case of bilateral upsampling, this is probably what you are doing anyway. In the case of depth-sensitive effects relying on depth continuity, fixing might be as simple as introducing an additional delta threshold.
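A minimal sketch of such depth-threshold tap rejection inside a blur kernel; the kernel set-up, depthSampler / colorSampler and g_fDepthThreshold are illustrative assumptions, not taken from a particular implementation:

    static const int KERNEL_SIZE = 9;
    float2 g_vKernelOffsets[KERNEL_SIZE]; // tap offsets in texture space
    float g_fKernelWeights[KERNEL_SIZE];  // matching blur weights
    float g_fDepthThreshold;              // maximum accepted depth delta

    float4 depthAwareBlur(float2 vTexCoord)
    {
        float fCenterDepth = tex2D(depthSampler, vTexCoord).x;
        float4 vSum = 0.f;
        float fWeightSum = 0.f;
        for (int i = 0; i < KERNEL_SIZE; ++i)
        {
            float2 vTapCoord = vTexCoord + g_vKernelOffsets[i];
            // Reject taps lying beyond the threshold around the sampling
            // center; resolved edge depths mostly fail this test and thus
            // stop spreading onto their neighbors.
            if (abs(tex2D(depthSampler, vTapCoord).x - fCenterDepth) < g_fDepthThreshold)
            {
                vSum += tex2D(colorSampler, vTapCoord) * g_fKernelWeights[i];
                fWeightSum += g_fKernelWeights[i];
            }
        }
        return vSum / max(fWeightSum, 1.e-5f); // guard against rejecting all taps
    }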

If your effect calls for a more sophisticated solution for detecting such and only such pixels that have been altered by the process of resolving, there is another way of identifying these pixels that involves an additional channel in your depth render target. Either you can make use of the shader model 3+ feature called centroid sampling, e.g. as described here, or, especially if you are using techniques such as variance shadow mapping that come with an additional channel of squared depth values, you can detect whether a given pixel suffered from resolving by comparing its depth value with its squared depth value. Because squaring is a non-linear operation, the resolved squared depth value will most likely differ from the square of the resolved depth value, making the difference a useful means of anti-aliasing edge detection.
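A minimal sketch of this detection, assuming the render target stores depth in its first channel and squared depth in its second (the epsilon is an illustrative choice):

    float2 vDepthSq = tex2D(depthSampler, vTexCoord).xy; // (depth, squared depth)
    // On interior pixels all sub-samples share one depth, so y == x * x.
    // On resolved edge pixels, the average of the squares exceeds the square
    // of the average by the sub-sample variance, so the difference is positive.
    float fResolveVariance = vDepthSq.y - vDepthSq.x * vDepthSq.x;
    bool bIsResolvedEdgePixel = fResolveVariance > 1.e-5f;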

Note: Images taken from our Breakpoint 2010 winning entry demo collaboration.
