Up close and personal with BlitzTech's new approach to character animation

Getting to know Kitsu

There’s a wealth of different character and facial animation systems available, but BlitzTech’s work-in-progress Kitsu offers something a little different. Develop speaks to Jolyon Webb, research and development art director, to find out more about the ‘emotional avatar’ technology.

For those unfamiliar with the term, what is ‘emotional avatar’ technology?
At the highest level it is a system for real-time characters that displays easily readable emotions which react and change in response to live inputs. The whole drive of the system is to make emotional state readable without using HUDs, so that the characters themselves become more engaging.

What distinguishes Kitsu from other more typical character animation tech?
Primarily it’s the underlying approach of aiming to build a realistic performance from many constantly adapting components rather than from repeating blocks of specific pre-captured performance data. We have a procedurally driven adaptive system that enables us to trigger a range of realistic emotions from our character, dubbed Kitsu.

These reactions are not manually activated or pre-determined but instead are the result of a complex AI routine that analyses environmental changes and produces appropriate emotional responses. So when a butterfly enters the scene, Kitsu is pleased to see it, and when it leaves she is despondent.

At night she’s quicker to fear things, and also takes longer to be pacified. None of these reactions are inherent ‘scripted’ events or pre-canned performances tied to one fixed state and condition. The whole drive of the system is that behaviour must be emergent, driven by an interaction of character ‘personality’, other actors – for example the bat – and the environment itself. This would be valid for any character, producing equally fluid but recognisably different results.
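To make that idea a little more concrete, here is a minimal, hypothetical sketch in C++ of how a response could emerge from the interplay of a stimulus, a character’s ‘personality’ and the environment. The types, names and numbers are illustrative assumptions for this article, not Kitsu’s actual code.

```cpp
// Illustrative sketch only -- none of these names come from Kitsu itself.
// It shows one way a response could emerge from a stimulus, the character's
// "personality" weights and the current environment interacting.
#include <algorithm>
#include <map>

enum class Emotion { Joy, Fear, Sadness, Anger };

struct Personality {
    // How strongly this character reacts to each class of emotion (0..1).
    std::map<Emotion, float> sensitivity;
};

struct Environment {
    bool isNight = false;   // e.g. fear responses are amplified at night
};

struct Stimulus {
    Emotion tendency;       // the emotion this actor tends to provoke
    float intensity;        // how strong the provocation is (0..1)
};

// Combine personality, stimulus and environment into a response strength.
float respond(const Personality& p, const Stimulus& s, const Environment& env) {
    float base = p.sensitivity.at(s.tendency) * s.intensity;
    if (s.tendency == Emotion::Fear && env.isNight)
        base *= 1.5f;       // quicker to fear things at night
    return std::clamp(base, 0.0f, 1.0f);
}
```

Because the same function is driven by a different personality table for each character, two characters given the same stimulus would produce equally fluid but recognisably different results.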

So it offers much more than just pre-canned animations?
Some amazing work has been done in games on specific performance capture and playback; for example EA’s UCAP used in Tiger Woods and recently Rockstar’s fantastic facial performances in L.A. Noire.

The key thing here though is that these examples are entirely reliant on pre-scripted performances from actors that are then triggered in-game. Our approach is to procedurally build a performance on-the-fly by mixing and adjusting some high-quality traditional key-frame animation with a number of other procedural behaviours and systems.

Full-body postural changes accompany every change in emotional state, as do state-specific surface textures such as cheeks flushing red and dimples appearing around the mouth when she’s happy, her eyes becoming fiercer when she’s angry, and actually producing tears when she’s sad. This makes the performance constantly adaptive to the in-game situation with little repetition.
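As a rough illustration of that blending idea, the hypothetical sketch below layers emotion-weighted postural offsets on top of a key-frame pose. The structures and values are assumptions made for clarity, not BlitzTech’s implementation.

```cpp
// A minimal sketch, not the Kitsu pipeline: blending a base key-frame pose
// with procedural, emotion-driven offsets. All names and values are assumed.
#include <map>
#include <string>

struct Pose { float headTilt = 0, shoulderDrop = 0, browRaise = 0; };

struct EmotionState {
    std::map<std::string, float> weights;  // e.g. {"happy": 0.7, "sad": 0.1}
};

// Per-emotion postural offsets authored once, then reused procedurally.
static const std::map<std::string, Pose> kEmotionOffsets = {
    {"happy", { 2.0f, -1.0f,  3.0f}},
    {"sad",   {-4.0f,  5.0f, -2.0f}},
    {"angry", { 1.0f, -3.0f, -4.0f}},
};

// Layer the weighted offsets on top of the current key-frame pose.
Pose blend(const Pose& keyframe, const EmotionState& state) {
    Pose out = keyframe;
    for (const auto& [name, w] : state.weights) {
        auto it = kEmotionOffsets.find(name);
        if (it == kEmotionOffsets.end()) continue;
        out.headTilt     += w * it->second.headTilt;
        out.shoulderDrop += w * it->second.shoulderDrop;
        out.browRaise    += w * it->second.browRaise;
    }
    return out;
}
```

The same weighting could just as easily drive state-specific surface textures, which is why extra variety can be added to such a system without re-capturing performances.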

It’s also much more extensible – increased variety can be generated quickly at a lower cost by simply adding to the core system assets without the need for extensive rewrites, performer time or additional capturing. We’re hoping people will find they can add to the readability and engagement of any style of character without incurring a huge overhead on content creation.

And the tech’s demo character Kitsu learns too. How does this work?
Without going into too much detail, the core thing we want from the system is a coherent, recognisable emotion, and that depends on character memory. We want Kitsu to learn from her first few encounters with the bat, for instance, that it can’t harm her, so the next time she sees it she won’t be as affected.

That kind of memory, and so learning, has to appear to be present to some degree for a performance and emotion to ring true. At a minimum I’d hope the tech would offer developers a ‘no dumbness’ option in their games.
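One simple, hedged way to picture that kind of habituation is sketched below: repeated harmless encounters with an actor scale down the emotional response over time. The class, method names and constants are illustrative only.

```cpp
// A hypothetical sketch of the "memory" idea described above: repeated
// harmless encounters with the same actor damp the response (habituation).
// These names and numbers are not taken from Kitsu.
#include <map>
#include <string>

class EmotionalMemory {
public:
    // Returns a multiplier applied to the raw fear response for this actor.
    float fearScale(const std::string& actor) const {
        auto it = harmlessEncounters_.find(actor);
        if (it == harmlessEncounters_.end()) return 1.0f;
        // Each harmless encounter reduces the response, floored at 20%.
        float scale = 1.0f - 0.2f * static_cast<float>(it->second);
        return scale < 0.2f ? 0.2f : scale;
    }

    // Call when an encounter ends without the actor causing any harm.
    void recordHarmlessEncounter(const std::string& actor) {
        ++harmlessEncounters_[actor];
    }

private:
    std::map<std::string, int> harmlessEncounters_;
};
```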

It seems Kitsu will be particularly fitting for use with Kinect. Is that the case?
A good Kinect title has a tight and constantly reinforced feedback loop between player and game. We want emotional feedback to work in a very similar way, responding quickly and transparently, and deepening engagement.

Additionally, with Kinect there is a real sense of the game watching the player, so the natural extension of this is to allow a digital character with emotions to watch and react to the real world – and we are already working on this internally. We have a small range of actions and poses that the player can adopt which will trigger the appropriate emotional responses from Kitsu.

If you wave she’s happy to see you, while if you adopt a more aggressive stance she gets angry and afraid. Leave the play space, and therefore her field of view, and she tracks your movement and then is sad when you’ve disappeared. This level of connection between a real person and an in-game character is already pretty compelling, even in its current – relatively simple – form, so the potential is huge.
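As a simple illustration of that feedback loop, the hypothetical snippet below maps recognised player poses to emotional reactions. The pose names and the one-to-one mapping are assumptions rather than the actual Kinect integration.

```cpp
// Illustrative only: turning recognised player poses into emotional triggers,
// roughly as described above. Not the real Kinect-facing code.
enum class PlayerPose { Wave, AggressiveStance, LeftPlaySpace, Neutral };
enum class Reaction   { Happy, AngryAndAfraid, Sad, None };

Reaction reactTo(PlayerPose pose) {
    switch (pose) {
        case PlayerPose::Wave:             return Reaction::Happy;
        case PlayerPose::AggressiveStance: return Reaction::AngryAndAfraid;
        case PlayerPose::LeftPlaySpace:    return Reaction::Sad;
        default:                           return Reaction::None;
    }
}
```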

Of course, Kinect isn’t the only target either; this feedback/engagement loop is valid on pretty much any camera-enabled device with some computing power.

Kitsu is platform agnostic and relatively inexpensive. What about today’s games development sector made it important to make sure Kitsu remained accessible and affordable in this way?
It’s important because tools should always strive to be both accessible and affordable, and it’s something we’ve always held at the forefront of our thinking with BlitzTech.

More specifically, though, the philosophy behind this technology is that depth of performance and variety should not be heavily dependent on massive amounts of content.

Blitz Games Studios has a long history of developing for multiple platforms and game styles; this is in our DNA, and we develop tools and tech to support this broad range of game types. Frequently, fascinating games with good financial returns are not of monolithic size, and we look to bring as much shared tech and added value to these as possible.

This is something that is likely to become more and more common across all development, and Kitsu is designed to work exactly that way, providing a platform- and genre-agnostic system that is usable by all developers, not just those working on triple-A titles with enormous budgets.

Is it suitable for triple-A work?
Yes. Coherent, engaging, adaptive performance – what’s not to like about that in a triple-A title? Mocap can sometimes be absolutely the right choice, but if I wanted flexibility, and for all my characters, hero or NPC, to perform consistently in my game without higher and lower classes of performance, I would look to our tech.

And does it offer enough detail for the serious gaming market in which BlitzTech is so successful?
Absolutely. There is nothing in the tech that would prevent this and in fact much of what has gone into it already is a direct result of things we’ve learnt over the years in our Serious Games division.

There’d be a different level of tuning required for some serious games purposes, but interestingly we’ve already had a psychotherapist review our current work and give feedback, and she was very impressed.

When can development studios expect to have access to Kitsu as a productised tool?
We’re still in the prototyping phase right now, but some of the lessons we’ve learnt and systems we’ve created are already being used by other teams at Blitz. We’ve long followed a strategy of detailed research and development because we’ve learnt that if you want your technology to stay ahead of the curve and be able to adapt to new platforms quickly and cleanly, you must keep learning and anticipating what the future may bring.

At the end of the day, we are all about creating tools, systems and pipelines that empower our creatives to do amazing things as efficiently and effectively as possible, and that applies to our external licensees just as much as our internal dev teams – so watch this space.

www.blitzgamesstudios.com
