An exploration of the technique's new relevance

On Anti-Aliasing

Most game developers get the anti-aliasing in their games wrong. Or, to be more specific, the way most of us do our gamma correction causes the anti-aliasing to produce incorrect results.

For games, anti-aliasing is a set of techniques used to represent high-resolution images on a low-resolution, pixel-grid display such as an LCD or CRT. Most anti-aliasing techniques are variations on Supersampling Anti-Aliasing (or SSAA). With SSAA the image is rendered to a high-resolution buffer and the pixels are then averaged to produce the lower resolution output image.
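As a sketch of the idea, here is a box-filter resolve over a tiny greyscale image, with plain Python lists standing in for a render target (illustrative only, not how a real resolve is implemented on GPU):

```python
def ssaa_resolve(hires, factor=2):
    """Average each factor-by-factor block of a high-resolution
    image down to one output pixel (a simple box filter)."""
    out = []
    for y in range(0, len(hires), factor):
        row = []
        for x in range(0, len(hires[0]), factor):
            block = [hires[y + dy][x + dx]
                     for dy in range(factor)
                     for dx in range(factor)]
            row.append(sum(block) / len(block))
        out.append(row)
    return out

# A diagonal edge rendered at 4x4 resolves to a 2x2 image whose edge
# pixels take intermediate coverage values: that is the anti-aliasing.
hires = [[0.0, 1.0, 1.0, 1.0],
         [0.0, 0.0, 1.0, 1.0],
         [0.0, 0.0, 0.0, 1.0],
         [0.0, 0.0, 0.0, 0.0]]
print(ssaa_resolve(hires))  # [[0.25, 1.0], [0.0, 0.25]]
```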

For consoles the high-resolution image is typically either double the width of the output (known as two times, or 2X) or double the width and double the height (four times, or 4X). Consoles commonly use an optimised version of SSAA called Multi-Sample Anti-Aliasing (MSAA), which minimises the number of times the pixel shader is run.

So where does gamma correction come into all this? Well, if this isn’t set up correctly, it will send your anti-aliasing (and other things) all screwy. And this is the bit that very few developers do correctly.

Gamma correction is a colour space operation. It transforms the linear RGB colour space into a non-linear colour space that's more suitable for viewing on an LCD (which has a non-linear intensity range) and for viewing by the human eye (which responds to variations in intensity in a non-linear fashion). Gamma-corrected RGB gives more resolution to the darker colours, which correlates better with the way our eyes work.
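As a sketch, here is the pure power-law form of the transform with a gamma of 2.2 (real sRGB adds a small linear segment near black, so this is an approximation):

```python
def linear_to_gamma(c, gamma=2.2):
    """Encode a linear [0, 1] intensity for display."""
    return c ** (1.0 / gamma)

def gamma_to_linear(c, gamma=2.2):
    """Decode a gamma-encoded value back to linear intensity."""
    return c ** gamma

# Half of full linear intensity encodes to roughly 0.73, so the
# encoding spends most of the 0..1 range on the darker colours.
print(round(linear_to_gamma(0.5), 2))  # 0.73
```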

We have to be very careful about where we perform the gamma correction, because it’s vital that all pixel shader operations are performed in linear space. This is because maths just doesn’t work properly in any non-linear number system. Mathematical operations such as add, subtract, multiply or divide will produce unexpected results in anything other than a linear system.
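A small demonstration of why: halving a gamma-encoded value is not the same as halving the light it represents (a pure gamma of 2.2 is assumed here):

```python
gamma = 2.2

# Correct: decode to linear, halve the intensity, re-encode.
correct = ((1.0 ** gamma) * 0.5) ** (1.0 / gamma)   # ~0.73 encoded

# Wrong: halve the encoded value directly.
wrong_encoded = 1.0 * 0.5
wrong_as_light = wrong_encoded ** gamma             # ~0.22 linear: far too dark

print(round(correct, 2), round(wrong_as_light, 2))  # 0.73 0.22
```

The same multiply produces very different amounts of light depending on which space it was performed in, which is exactly why shading maths must stay linear.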

So in an ideal render pipeline the textures would be in linear colour space, all the pixel operations would be performed in linear colour space, and right at the end (during tone mapping, if you're rendering HDR) the output image is gamma corrected, ready for the display drivers to convert it for display.
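That ordering can be sketched as a toy HDR output stage (Reinhard tone mapping is chosen purely for illustration, and the function names are hypothetical):

```python
def tonemap_reinhard(c):
    # Simple Reinhard operator: maps [0, inf) HDR intensities
    # into [0, 1), still in linear space.
    return c / (1.0 + c)

def output_pixel(linear_hdr):
    ldr = tonemap_reinhard(linear_hdr)  # tone map while still linear
    return ldr ** (1.0 / 2.2)           # gamma correct as the final step
```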


The render pipeline in your game is almost certainly set up to do exactly that. So what’s the problem?

The problem is that, unless you've invested time in this area, the textures you're feeding into your render pipeline are not in linear colour space. Any image taken with a digital camera, or anything drawn using Photoshop, will be in a format such as sRGB, which has an effective gamma of 2.2. This means all your pixel operations are being performed in a non-linear colour space, so when the MSAA resolve averages the pixels to reduce aliasing (effectively adding the values together and dividing by the number of samples) the numbers it produces are wrong. You can see that they're wrong from the 'roping' effect on screen, whereby some of the pixels along a line appear darker than others, giving the effect of a coiled rope.
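The error is easy to put numbers on. Consider an edge pixel covered half by black and half by white, resolved from two samples (a pure gamma of 2.2 is assumed for the sketch):

```python
gamma = 2.2
samples = [0.0, 1.0]   # gamma-encoded sample values along an edge

# Wrong: average the encoded values as if they were linear.
naive = sum(samples) / len(samples)   # 0.5 encoded
naive_intensity = naive ** gamma      # ~0.22: the pixel displays too dark

# Right: decode to linear, average, re-encode.
linear = [s ** gamma for s in samples]
resolved = (sum(linear) / len(linear)) ** (1.0 / gamma)  # ~0.73 encoded

print(round(naive_intensity, 2), round(resolved, 2))  # 0.22 0.73
```

Edge pixels resolved the naive way end up at roughly a fifth of their true intensity instead of a half, and those too-dark pixels along otherwise smooth lines are the roping.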

So what can we do about it? Firstly, be aware of the issue and be explicit in your render pipeline about where you perform gamma correction. A render pipeline that reads sRGB textures as RGB will produce an output image that is too light and so it’s likely that you’re compensating for this somewhere in the pipeline by darkening the scene down. By doing this you’re going to lose accuracy which will introduce banding into the scene.

Secondly, be aware of which colour space your textures are in. If you choose to keep your textures in gamma space then current generation consoles can be set up to perform the conversion back to linear space in the pixel shader for 'free'. However, this conversion is a piecewise linear approximation and so introduces artefacts of its own (see figure 1). Storing all your textures in linear colour space isn't the perfect solution either, because linear formats don't have good resolution for darker colours.
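To get a feel for the kind of error involved, here is a crude piecewise linear approximation of the x ** 2.2 decode. The breakpoints below are hypothetical; each console's hardware uses its own segment count and knot positions, so this only illustrates the shape of the problem:

```python
GAMMA = 2.2
KNOTS = [0.0, 0.25, 0.5, 0.75, 1.0]  # hypothetical segment boundaries

def exact(x):
    return x ** GAMMA

def piecewise(x):
    # Linearly interpolate between exact values at the knots.
    for lo, hi in zip(KNOTS, KNOTS[1:]):
        if x <= hi:
            t = (x - lo) / (hi - lo)
            return exact(lo) + t * (exact(hi) - exact(lo))

# Worst-case error of the approximation across the 0..1 range.
worst = max(abs(piecewise(i / 100) - exact(i / 100)) for i in range(101))
print(round(worst, 3))  # ~0.02
```

An error of a couple of percent of full range sounds small, but in the darks it is a large relative error, which is where the banding shows up.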

The team at Valve solves the problem by pre-computing textures to take account of the inaccuracy of the piecewise linear approximation, so that when the conversion is performed by the hardware it produces the correct results. For more info see

It may seem like a lot of effort just to address some visual artefacts, but the aliasing and banding detract from the solidity of the image, and a decent fix makes a big difference to the visual fidelity of the game you're working on.
