At Blitz we have a history of working early with new technology, and our story with Kinect is no different. As we were there from the start with multiple titles – including one for launch – we were able to take advantage of knowledge sharing across the studio, with R&D focused on the features our games needed. This article delves into one of those features – augmented reality, or AR.
Proving out AR was an early priority for us as we were developing Fantastic Pets for THQ – Kinect’s first, and so far only, AR game. The possibilities that this unique technology brings to AR over a more conventional image camera promised a big step forward so we were excited to see what we could do with it.
Firstly, Kinect gives real-time depth information in addition to the colour image, opening up the possibility for virtual objects to pass in front of and behind real objects in the scene rather than being simply superimposed on top of the camera image. This is a big step forward and really increases the suspension of disbelief that is the magic of AR.
The depth information from Kinect is simply a measurement, for each sample point, of the distance from the sensor. To use this for rendering, the information needs to be converted in real time into the non-linear space of the game's 3D projection, and this is achieved using a shader when copying into the depth buffer.
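As an illustrative sketch of that conversion (not the actual shader, and with assumed function and parameter names), the maths amounts to remapping a linear distance in metres into the [0, 1] range of a standard perspective depth buffer, where the near plane maps to 0, the far plane maps to 1 and precision is concentrated close to the camera:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: map a linear Kinect depth sample (metres from the
// sensor) into the non-linear [0,1] depth-buffer space of a D3D-style
// perspective projection, so real geometry can occlude virtual objects.
// nearZ/farZ are the game camera's clip planes (assumed values).
float LinearToProjectedDepth(float z, float nearZ, float farZ)
{
    // Clamp so out-of-range or invalid sensor samples stay representable.
    if (z < nearZ) z = nearZ;
    if (z > farZ)  z = farZ;
    // Standard perspective depth: near maps to 0, far maps to 1.
    return (farZ * (z - nearZ)) / (z * (farZ - nearZ));
}
```

In the real pipeline this runs per-sample in a pixel shader while the Kinect depth frame is copied, after which the hardware depth test handles occlusion between real and virtual geometry for free.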
One thing we found is that rendering virtual shadows for the virtual objects really helps to ground them in the scene. The same depth technology we developed for object rendering also let us render virtual shadow maps into the scene and cast them onto objects in the real world.
GENERATING VIRTUAL COLLISION FROM REALITY
Another exciting new possibility was the generation of collision data from the real world. We were able to calculate game collision information from the depth data so that virtual objects could sensibly interact with reality.
In-game characters avoided real-life obstacles like sofas or coffee tables – or even the player themselves – while virtual objects could bounce off or simply avoid them.
Of course we only have the information from the point of view of the Kinect sensor so it is by no means a complete view of the scene. For example, we can’t see parts of objects that are facing away from the sensor or those which are obscured by a closer object.
This makes it impossible to build an accurate virtual 3D mesh of the scene; however, we can use other representations that are both easier to generate and just as useful for our purposes. Using a simpler representation enabled us to generate the information far more quickly and also made the algorithm much more reliable across different scenes.
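One such simpler representation, sketched below purely for illustration (the structure and names are our assumptions, not the shipped code), is a coarse grid storing the nearest real surface the sensor saw in each region of the image. A virtual object projected into a cell collides if it would sit at or behind that surface:

```cpp
#include <vector>
#include <cstddef>

// Illustrative sketch: collapse the raw depth image into a coarse grid of
// nearest-surface distances. Far cheaper and more robust than trying to
// reconstruct a full 3D mesh from a single viewpoint.
struct DepthCollisionGrid
{
    int cols, rows;
    std::vector<float> nearest; // closest valid depth (metres) per cell

    DepthCollisionGrid(const std::vector<float>& depth,
                       int width, int height, int cellSize)
        : cols(width / cellSize), rows(height / cellSize),
          nearest(static_cast<size_t>(cols) * rows, 1e9f)
    {
        for (int y = 0; y < rows * cellSize; ++y)
            for (int x = 0; x < cols * cellSize; ++x)
            {
                float d = depth[static_cast<size_t>(y) * width + x];
                if (d <= 0.0f) continue; // zero marks an invalid sample
                float& cell = nearest[(y / cellSize) * cols + x / cellSize];
                if (d < cell) cell = d;
            }
    }

    // True if a virtual point projected into cell (cx, cy) at the given
    // distance would be inside or behind the real surface there.
    bool Collides(int cx, int cy, float virtualDepth) const
    {
        return virtualDepth >= nearest[static_cast<size_t>(cy) * cols + cx];
    }
};
```

A grid like this can be rebuilt every frame and queried cheaply, which is what makes behaviours such as characters steering around sofas practical in real time.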
To complement the techniques based on the depth information stream we use another unique aspect of Kinect – player skeletal tracking – to generate a physics skeleton that enables interaction with virtual objects via a physics simulation. For example, a player can push, smash or even catch a virtual object while seeing themselves interacting in the AR view.
Kinect is able to reliably track two players using a fixed skeleton hierarchy. The positions and confidence of each joint obtained from the Kinect system can be retargeted on to a physics skeleton similar to the ragdoll characters often seen coming to harm in physics demos. By adding in joint constraints and filtering we can quickly reduce the amount of noise in the signal for solid player interactions. The information can also be used to work out where the living room floor level is.
PUTTING IT ALL TOGETHER
Our early R&D experiments bringing these three techniques together were instantly compelling so we knew we were potentially on to something. Playing these demos generated a lot of ideas for AR gameplay that were quickly transferred into the games themselves.
AR on Kinect has a lot of potential and Fantastic Pets is just the start – we look forward to seeing where else it’s taken in the months ahead.