A chat with MD and technical director Stu Aitken about what Axis is up to, and the future of animation...

Interview: Axis Animation

Develop: Can you give me an overview of what Axis provides?
Stu Aitken:
Axis specialises in producing intros, trailers and cutscenes for the games market – though we also do commercials and broadcast animation. That ranges from full-blown pre-rendered sequences with highly detailed assets, sophisticated shading and complex special effects, to working directly with clients' realtime engines.

We offer an end-to-end production service in terms of direction, script treatment, conceptual artwork and motion capture direction, as well as the actual production of animation and/or final rendered sequences, though it's usual for there to be a lot of collaboration with our clients on many aspects, or for us to flesh out supplied briefs or assets.

What distinguishes your offering from those of your closest peers/rivals?
Apart from being fortunate enough to have a very talented bunch of people working for us, and many, many years of experience delivering within the games industry, I would say we major on a collaborative approach where we try to engage as fully as we can with all aspects of the job right from the very first pitch. We see production management as an absolutely crucial element of what we do, and I think we have some really strong people in that area who really make a difference.

‘Plussing’ is a term that frequently crops up – we are usually involved in taking something that already exists on many levels and driving it further, in a more focused direction than our clients could manage on their own. That focus is usually about more traditionally linear storytelling and direction skills, and delivering a more performance-orientated result, since we are not normally that involved in the interactive aspect of a game.

On the more technical side I think we have a well-developed pipeline, especially on the rendering side, that allows us to produce high-quality results quickly. This year we are also focusing heavily on the pipeline for cutscene work that may not end up being pre-rendered at all – developing tools and processes that allow us to efficiently interface with and export back into customer realtime pipelines. I think that area is potentially very exciting for external production partners like Axis – it involves a much deeper level of collaboration and integration than doing a two-minute trailer, for example.

What are the advantages your approach provides?
We can span multiple approaches and try to pick what’s best for the job spec and budget. One job may require detailed muscle simulation to get the nuances right on a single character, say, whereas for 30 minutes of cutscenes exported back to a game engine that’s just not feasible, and we need to be looking at how to get the best results out of less sophisticated solutions.

Recently we’ve had experience of pretty much the whole gamut of current approaches to facial animation – from hand keyframing, to optical marker-based capture, to image-analysis-based solutions. They all have their pros and cons, and I think they are all capable of good results if you can get the setup right and they are suited to the realities at hand.

For example, for stylised work keyframing is probably best; optical may be best if a full performance-based approach is logistically feasible – there are many quite boring reasons why it may not be. Image analysis might be best if you have a large amount of capturing to do and you can’t necessarily have the same actors doing the dialogue and the body mocap.

I would still put individual artist talent and experience above any one technique being some kind of magic bullet. The truth is they all require a lot of work to get good results, and none of them is nearly as ‘automated’ as anyone might have you believe. Using good specialists in whatever approach is decided on is very important – we will always try to work with the best expertise we can from that point of view, whether that means our own in-house capabilities or working with external specialist contractors.

What opportunities has your recent work provided to push and improve your company’s offering?
I think we did some nice character work on the Killzone 2 intro from last year, and we did some great, subtle close-up face work for the Brink trailer we made with AKQA in 2009 as well. Both of those projects required us to push individual performances that would hold up to extreme close-ups at HD resolution. That tends to push all aspects – shading, texturing, lighting, animation, rigging and deformation.

We’re working on a couple of jobs right now which I think will raise the bar for us even further and make heavier use of capture technology and/or more sophisticated muscle-based deformation, but I can’t say much more than that until they are out in the public domain.

Unfortunately we are under NDA on pretty much all of our recent work, and often a publisher is reluctant to publicise our involvement full stop – there’s stuff we worked on over three years ago that we’re still not really allowed to talk about publicly. I keep hoping that attitude may change with time – it’s very different from the film industry, for instance, where everyone who came together to work on a project is more or less encouraged to shout to the rafters about it.

In terms of video games specifically, how far over the uncanny valley are we? Will we begin to see realism in a broader spectrum of games anytime soon?
I actually think games are only just starting to arrive at the uncanny valley, much less get over to the other side.

It only really starts becoming a big issue once you are pushing render fidelity to a point where things are believably real, and I think game technology still has a few years to go before it gets to that point. As such, I think ‘uncanny valley’ issues will get more pronounced rather than less as rendering technology gets better. Even in film, where people have much bigger budgets and bigger tech to throw at a more compact version of the same problem, I would say that only Avatar has just made it to the point where it’s pretty much no longer an issue.

In my opinion games currently have a simpler problem, which is about getting better drama and performances on screen – that has more to do with writing, moving away from the ‘direction by committee’ approach, and understanding acting and actors than with technology.

Again, if you look at films, a huge chunk of the budget goes on getting a performance on screen. At the moment games need to give the character performance aspect more priority – and more thought about how to actually achieve it – if they want to get closer. A lot of this has just as much to do with logistics and production management as it does with technical processes.

Having said that, titles like the recent Heavy Rain show that there are developers who think that performance and involved storytelling are really important elements of what they do. As someone who really enjoys the narrative and dramatic aspects of the games I play, I find that encouraging.

I think it’s also worth saying that realism isn’t the only game in town, and often a more stylised approach can work very nicely and avoids the whole issue of ‘uncanniness’ altogether. There’s nothing wrong with that – for example, Team Fortress had some wonderful character performances.

What do you see as key trends in facial animation?
More use of, and better, more refined approaches to, capturing better performances delivered by decent actors – for ‘realistic’ games, definitely. It just doesn’t make sense to keyframe that stuff in that context.

I think Avatar will have a huge effect on games production as well as on film from that point of view – in some ways that film is closer to a feature-length game intro than to traditional film-making anyway, in terms of it being almost 100 per cent CG in many places. James Cameron’s focus on – and success in – getting his actors’ performances into that CG intact is a huge marker for anyone doing anything remotely similar.

And what about the challenges?
Budget and timescales are always the challenge.

Whereas Mr Cameron and co. had an enormous pot of money and, relatively speaking, a lot of time to achieve what they did, us mere mortals have to get by on significantly less of both.

From a games perspective I think there will be an accelerated learning process as more actors get involved in captured performances and more people in games get exposed to actually directing performances, and gain experience of what it takes to get that in the can, so to speak. I suspect we will see more experts from the film and broadcast drama industries migrate to games because of that, and there will be more cross-pollination between the two camps in terms of both techniques and people.

Realtime performance capture – capturing the entire performance at once: body, face and dialogue – is very much the holy grail, given that it leverages the investment in the acting and gives you the most joined-up results.

In many ways the capture tech itself is starting to mature, though it’s still pretty intrusive – the first company that solves the problem of capturing faces without restricting movement to small volumes or angles, and/or forcing actors to wear awkward gear, is going to be very successful.

The other big tech hurdle is retargeting – transferring a performance from a given set of captured actor data to a different target character asset. There’s no easy way to deal with that currently, and it’s in that process that you risk losing a lot of the original nuances. At Axis it’s definitely something we are looking at very closely right now.

I’ve heard that a few studios are even starting to look at a more direct relationship between the cast actors and their characters – for example, basing the character on the actor’s likeness, which helps massively.

www.axisanimation.com
