Managing your game’s memory has never been more important, but the basic rules haven’t changed.
Allocate as infrequently as possible and use as little memory as you can. With each hardware generation, the extra memory available increases the chances of introducing bugs: small memory leaks can go undetected for a long time, memory corruption becomes more likely, and fragmentation can pose real problems. Here I explain why a custom allocator can save you time later in development, while scaling well to games of all sizes.
WRITE YOUR OWN
First things first, you need a memory allocator to replace the one provided. Most provided allocators don't give you all the features, tools and debugging aids you need, so writing your own is important. At a minimum it should support full logging of allocations, reporting of memory used and remaining, and custom alignment of allocations.
A custom allocator also avoids some of the basic restrictions placed on you by the provided one and offers performance benefits. For example, the standard Wii allocator allocates memory with a minimum size and alignment of 32 bytes, which can result in large amounts of wasted space; a custom allocator can reduce both to 16, eight or even four bytes. The Xbox 360 default uses 4KB pages, and switching to 64KB pages can give up to a 10% performance increase when accessing data from the CPU.
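At its core, supporting smaller alignments simply means rounding addresses up to a caller-chosen power-of-two boundary. The helper below is a generic sketch of that rounding, not any platform's actual allocator:

```cpp
#include <cstddef>
#include <cstdint>

// Round an address up to the next multiple of 'alignment'.
// 'alignment' must be a power of two (4, 8, 16, 32, ...).
inline std::uintptr_t align_up(std::uintptr_t addr, std::size_t alignment)
{
    return (addr + (alignment - 1)) &
           ~static_cast<std::uintptr_t>(alignment - 1);
}
```

With a four-byte boundary, a 13-byte allocation wastes three bytes of padding instead of the 19 it would waste at 32-byte alignment.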
With many system providers you have to write your own allocator anyway if you want to take advantage of memory such as VRAM, so it makes sense to have a custom allocator that not only allows you to do this but is also used for all the other memory in the game.
The standard approach with a custom allocator is to swallow up the entire system memory on a console, or a large chunk on the PC (typically the minimum recommended spec), and work within that. This way you have full control, know exactly how much is used and what is left, and can expand to more advanced schemes to fit your project's needs.
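A minimal sketch of this approach follows, with a single `malloc` at boot standing in for claiming the console's memory map; every later allocation is carved out of it, so used and remaining are always known exactly. The class name and 16-byte default alignment are my own choices:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

// One big block grabbed at startup; all game allocations come out of it.
class Arena
{
public:
    explicit Arena(std::size_t bytes)
        : m_base(static_cast<std::uint8_t*>(std::malloc(bytes))),
          m_size(bytes), m_used(0) {}
    ~Arena() { std::free(m_base); }

    void* Alloc(std::size_t bytes, std::size_t alignment = 16)
    {
        std::uintptr_t raw = reinterpret_cast<std::uintptr_t>(m_base) + m_used;
        std::uintptr_t ptr = (raw + (alignment - 1)) &
                             ~static_cast<std::uintptr_t>(alignment - 1);
        std::size_t next =
            (ptr - reinterpret_cast<std::uintptr_t>(m_base)) + bytes;
        if (next > m_size)
            return nullptr;       // out of budget -- caller must handle this
        m_used = next;
        return reinterpret_cast<void*>(ptr);
    }

    std::size_t Used()      const { return m_used; }
    std::size_t Remaining() const { return m_size - m_used; }

private:
    std::uint8_t* m_base;
    std::size_t   m_size;
    std::size_t   m_used;
};
```

This linear carve-out never frees individual blocks; a shipping allocator would layer a real heap scheme on top, but the accounting principle is the same.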
EASING CROSS PLATFORM DEVELOPMENT
One method to help cross-platform development is to divide your memory into different heaps. For example, the 'main' game for all platforms may fit in 24MB, so a dedicated heap can be created just for that. This is typically platform-agnostic. After that you may have unique heap sizes for each platform: a dedicated physics heap might be the same size on PS3 and PC but a lot smaller on the Wii.
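The split might be expressed as a simple per-platform budget table. The heap names, sizes and the `PLATFORM_WII` define below are illustrative only, echoing the example above:

```cpp
#include <cstddef>

constexpr std::size_t MB = 1024 * 1024;

// One entry per heap: the 'main' heap is shared by every platform,
// while the physics heap shrinks on Wii.
struct HeapBudget { const char* name; std::size_t bytes; };

#if defined(PLATFORM_WII)
constexpr HeapBudget kHeaps[] = { { "main", 24 * MB },
                                  { "physics", 2 * MB } };
#else // PS3 / PC and friends
constexpr HeapBudget kHeaps[] = { { "main", 24 * MB },
                                  { "physics", 8 * MB } };
#endif
```

Keeping the table in one place makes it obvious when one platform's budget drifts out of line with the others.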
At the low level you have very platform-specific heaps for memory such as VRAM. Often this is where detailed system knowledge can save you a lot of memory. Minimum alignment on the Xbox 360 for most graphics data is 4KB, whereas on the PS3 it is 128 bytes. This means small allocations for things like vertex buffers can waste extraordinary amounts of memory on the Xbox 360. Although you may be using the same assets, the alignment restrictions can mean the heap needs to be larger on one system than the other, throwing your budgets out. A custom allocator will highlight this early, allowing a different approach to be taken to resolve it.
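The waste is easy to quantify with a padding helper; the 600-byte vertex buffer used below is an invented example:

```cpp
#include <cstddef>

// Size an allocation actually occupies once rounded up to the
// platform's minimum alignment.
constexpr std::size_t padded(std::size_t bytes, std::size_t alignment)
{
    return (bytes + alignment - 1) / alignment * alignment;
}
```

A 600-byte vertex buffer occupies 4,096 bytes at 4KB alignment (3,496 wasted) but only 640 bytes at 128-byte alignment (40 wasted), so a heap full of such buffers can be several times larger on one platform than the other.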
Debugging so much memory is time-consuming, so it is important to have the best tools available. Tools and features that isolate a leak should mean that even in complex systems you can find it quickly. Real-time visualisation of memory use is also extremely useful, as it shows up basic leaks and how close you are to running out of memory, without affecting game performance.
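One common way to make leaks isolatable is to have the allocator record every live block with the file and line that requested it; anything still recorded at shutdown is a leak with a known origin. This is a hedged sketch (the function names are stand-ins, and a real allocator would avoid `std::map`, which allocates behind its back):

```cpp
#include <cstddef>
#include <map>

// Origin of each live allocation, recorded at allocation time.
struct AllocInfo { const char* file; int line; std::size_t bytes; };

static std::map<void*, AllocInfo> g_live;

void TrackAlloc(void* p, std::size_t bytes, const char* file, int line)
{
    g_live[p] = { file, line, bytes };
}

void TrackFree(void* p) { g_live.erase(p); }

// Anything still live at shutdown is a leak to report.
std::size_t LiveCount() { return g_live.size(); }
```

Wrapped in a macro that passes `__FILE__` and `__LINE__`, this gives a shutdown leak report that names the offending call site directly.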
Because you have full information about each allocation, you also have full information about the amount of fragmentation. A few additions to a custom allocator, even if only a unique allocation number in deterministic scenarios, can indicate which allocations have somehow managed to pollute what should be one large, contiguous free block of memory.
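The allocation-number trick might look something like this: each allocation is stamped with a monotonically increasing id, so in a deterministic run the stray block carries the same id every time and a breakpoint can be set on it. The header layout and names are hypothetical:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>

static std::uint32_t g_nextAllocId = 0;
static std::uint32_t g_breakOnId   = 0xFFFFFFFF; // set from the debugger

// Small header stamped in front of every user allocation.
struct AllocHeader { std::uint32_t id; std::uint32_t size; };

void* DebugAlloc(std::size_t bytes)
{
    AllocHeader* h = static_cast<AllocHeader*>(
        std::malloc(sizeof(AllocHeader) + bytes));
    if (!h) return nullptr;
    h->id   = g_nextAllocId++;
    h->size = static_cast<std::uint32_t>(bytes);
    if (h->id == g_breakOnId)
    {
        // Place a breakpoint or trap here to catch the polluting allocation.
    }
    return h + 1; // user memory follows the header
}
```

Once a fragmentation report shows which id is sitting in the middle of free space, re-running with `g_breakOnId` set to that id stops the game at the exact moment the block is created.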
The trickiest memory bugs are those involving threading and corruption, which have become more common with the introduction of multi-core processors. Writing over the bounds of an allocation is normally found quickly with the use of sentinels. However, corruption can also occur when memory is freed on one thread while being written to at the same time from another. A form of garbage collection can be used to catch these errors, at the cost of higher memory consumption during debugging.
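Sentinel checking can be sketched as a known pattern written either side of each block and verified at free time; the `0xDEADBEEF` value and the layout are illustrative:

```cpp
#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <cstring>

constexpr std::uint32_t kSentinel = 0xDEADBEEF;

// Allocate 'bytes' with a sentinel word before and after the user block.
void* SentinelAlloc(std::size_t bytes)
{
    std::uint8_t* raw = static_cast<std::uint8_t*>(
        std::malloc(bytes + 2 * sizeof(std::uint32_t)));
    if (!raw) return nullptr;
    std::memcpy(raw, &kSentinel, sizeof kSentinel);            // front
    std::memcpy(raw + sizeof(std::uint32_t) + bytes,
                &kSentinel, sizeof kSentinel);                 // back
    return raw + sizeof(std::uint32_t);
}

// True when both sentinels are intact, i.e. no overrun has occurred.
bool SentinelCheck(void* user, std::size_t bytes)
{
    std::uint8_t* raw =
        static_cast<std::uint8_t*>(user) - sizeof(std::uint32_t);
    std::uint32_t front, back;
    std::memcpy(&front, raw, sizeof front);
    std::memcpy(&back, raw + sizeof(std::uint32_t) + bytes, sizeof back);
    return front == kSentinel && back == kSentinel;
}
```

Sentinels catch overruns at free time, but not the free-then-write races described above; that is where the deferred-free, garbage-collection-style scheme earns its extra memory cost.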