Develop takes a magnifying glass to Team Bondi's divisive and dazzling facial tech

The art of lying: L.A. Noire

In the early years of cinema – back before Spielberg, Lucas, Cameron et al shunned the medium’s more literary leanings and defined the ‘big’ film – character was everything.

The curl of a lip, the raise of an eyebrow, a word unspoken; little asides spoke volumes about the motivations of the tormented souls of the silver screen. The genesis of special effects offered writers and directors a far grander canvas on which they could create, but with the possibility of near-limitless scope, a certain nuance got lost in the mix.

For cinema, better technology has frequently had a negative effect on the power of human performances.

As if to highlight the differences between the two mediums, L.A. Noire will hit shelves across the world next month. A deliberate reflection of the film noir movement of the 1940s and ’50s, the game’s advertising has proudly boasted of its interrogation mechanic, in which gamers must decide how true the stories NPCs tell them really are.

The game features cutting-edge facial animation captured on a massive 3D rig in a process dubbed MotionScan by Depth Analysis, a services firm that emerged from within the game’s developer, Team Bondi, in 2005. L.A. Noire wants to prove that in video games, better technology creates better characters.

“From the outset when we began developing L.A. Noire, we knew that the key moments in a detective game would be the interrogation of the game’s main suspects,” explains Brendan McNamara, founder of both Depth Analysis and Team Bondi, and studio head of the latter.

“We wanted to ensure that each interrogation was compelling and a core component of the gameplay, and that each suspect spoken to was positioned to give or withhold information. Having worked with existing motion capture and automated phoneme systems in the past, we knew that we wouldn’t be able to achieve the degree of subtlety that we would need to read each character’s facial expressions.”

Depth Analysis head of research Oliver Bao expands on the process the company created to deliver that subtlety.

“We sit the actors in a chair with 32 cameras around it,” he says. “It’s kind of like doing a constant close-up, and the videos are captured in sync. We capture the audio and body positioning at the same time with body markers, so the whole performance is done in one go.”

“The 32 cameras are organised into stereo pairs, so each pair works as a 3D scanner. Each pair allows you to scan a patch of the head, and then by merging the 16 patches together you get a full 3D head model.

“We do quite a bit of filtering to make sure that it looks temporally smooth, and we also do quite a bit of compressing down to make sure that it fits onto our game disc. That’s quite a challenge considering that the video data rate that we get before compression is one gigabyte per second and we compress that down to one kilobyte per second for running in-game.”
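Bao’s figures imply an extraordinary compression ratio, and the temporal filtering he mentions is a well-established idea. The sketch below is purely illustrative – the constants restate his numbers, while the function name and the moving-average filter are our assumptions, not Depth Analysis code:

```python
# Illustrative only: constants restate Bao's figures; the filter is a
# generic moving average, not Depth Analysis's actual algorithm.

NUM_CAMERAS = 32
STEREO_PAIRS = NUM_CAMERAS // 2   # 16 pairs, one surface patch each

RAW_RATE = 1_000_000_000          # ~1 GB/s off the rig, per Bao
GAME_RATE = 1_000                 # ~1 KB/s in-game, per Bao
print(f"compression ratio: {RAW_RATE // GAME_RATE:,}:1")  # 1,000,000:1

def smooth_over_time(frames, window=3):
    """Average each vertex over a small window of neighbouring frames
    so the merged per-frame head meshes look 'temporally smooth'.
    `frames` is a list of equal-length lists of (x, y, z) tuples."""
    smoothed = []
    for i in range(len(frames)):
        lo = max(0, i - window // 2)
        hi = min(len(frames), i + window // 2 + 1)
        span = frames[lo:hi]
        smoothed.append([
            tuple(sum(f[v][axis] for f in span) / len(span)
                  for axis in range(3))
            for v in range(len(frames[0]))
        ])
    return smoothed
```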

McNamara is left in no doubt as to the effectiveness of the process, and the role it played in creating the living world of 1940s Los Angeles in L.A. Noire.

“MotionScan allowed us to bring a sense of humanity to the game that hasn’t been achieved until now. As a player, you can interact with each character, look them in the eye throughout the game and essentially believe in their performances,” he says.

“It definitely transforms the game experience for the player – from suspending belief that it’s just pixels clubbed together to look like humans, to an experience as authentic as watching a TV show or a film. It’s been amazing to see the transformation from disbelief to belief.”

DOUBLE INDEMNITY

The process behind MotionScan is a complex one, but Bao is adamant that it takes, if anything, less time to complete than more traditional methods of facial animation.

“We’ve managed to do 40 to 50 pages of script in one day. There was a client who came in to do a test shoot – they’d never used the system before and they walked away in one day with 40 pages of script filmed,” he recalls.

“As long as you get the actors to turn up, you just have to go over their lines and then you let them get on with it. In terms of processing, right now we can do 20 minutes in a day. If we had more hardware we’d be able to go much quicker than that.”

As efficient as the system is, however, such a fundamental overhaul of facial animation inevitably uses a significant amount of processing power, even with the aforementioned compression process.

“We do have to go through a process of reassessing allocations,” Bao confirms.

“Right now we can get three heads talking in parallel, so you have to optimise the rendering so that heads that are not facing the camera don’t talk.

“The lead characters have priority so you make sure that they are covered. It actually works out quite well, so most people don’t even notice that there are only three people talking at the same time.”
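Bao doesn’t reveal how that prioritisation is implemented, but the scheme he describes maps naturally onto a per-frame scoring pass. The following is a hypothetical reconstruction, not Team Bondi’s code – leads win a slot first, then active speakers, then whoever faces the camera most directly:

```python
# Hypothetical sketch of a three-head budget: field names and the
# scoring heuristic are assumptions, not Team Bondi's implementation.

MAX_ACTIVE_HEADS = 3

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def pick_active_heads(characters, camera_forward):
    """characters: dicts with 'name', 'is_lead' (bool), 'is_speaking'
    (bool) and 'facing' (unit vector). Returns the names granted a
    full MotionScan head this frame; the rest stay quiet or fall back
    to a cheaper head."""
    def score(c):
        # A head looking at the camera opposes camera_forward, so negate.
        return (c["is_lead"], c["is_speaking"],
                -dot(c["facing"], camera_forward))
    ranked = sorted(characters, key=score, reverse=True)
    return [c["name"] for c in ranked[:MAX_ACTIVE_HEADS]]
```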

McNamara reinforces the massive positive shift that Team Bondi hopes such new capabilities, warts and all, will generate.

“It allows video game designers to think outside of the typical shoot-and-drive paradigm,” he says.

“To challenge designers by asking ‘What other types of gameplay can we develop based on human interactions?’, ‘Can we develop relationships with the characters?’ or ‘Can we make characters that people really care about?’ It opens all of that up to game designers and writers.

“Players have to decide if they really want to pull the trigger on the character that they have taken the journey with, for instance. MotionScan has helped us turn polygons and pixels into something you have to think more deeply about. That’s really exciting and incredibly liberating for games makers.”

Bao sees positive potential in the MotionScan technique behind the scenes as well as on the screen.

“It brings a level of humanity. With traditional mocap you can’t get those microexpressions and that nuance,” he says.

“There are all these little things that get filtered out. With mocap you also have to do clean-up, and every time an animator touches the data you lose a bit of that actor’s personality, because animators tend to use their own face as reference. The more they touch it, the less realistic it becomes.

“With our system we’re trying not to let anybody touch the face, so what you see is what you get.”

The simultaneous development of game and technology, McNamara explains, was to ensure that the direction of L.A. Noire corresponded with the growing capabilities of the MotionScan system.

“L.A. Noire overall was relatively untested in the sense that what we were trying to make was a game that kind of reinvented the whole action adventure genre. No one had really brought this blend of genre types together before, so MotionScan was really just another part of the puzzle,” he says.

“The game and the MotionScan technology were developed side by side, which made it easier for us to execute. And seeing how it was coming along at each milestone was really exciting for the L.A. Noire team, and it really informed how the dialogue, interrogations and cinematics teams designed their parts of the game.”

SCARFACE

Building and using new technology in a live dev environment will always bring with it new challenges. As McNamara and Bao have it, MotionScan was no exception.

“One thing that really threw us when we started using the technology was that we realised there was no going back on any level,” McNamara explains.

“We used to include a dialogue line while in cover in a gunfight, and then at the end of that line the player character’s head would blend back to their game head and, presto, they would look like your conventional game robot. So the transition from looking really alive and kicking in the game world to being lifeless was like flicking a light switch on and off.

“That made the team consider capturing lots of ‘idles’ – angry, sad, proud, humble, exertion – just little sequences that we could always have on the player’s face and other people’s faces when you cut to them. So it has been all or nothing, because once you go down this route, you’re committing to a level of realism throughout the whole game, not just when it’s convenient to do so.”
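The ‘idles’ approach McNamara describes boils down to a simple fallback rule: a captured dialogue clip always drives the face, and when no line is playing, a mood-matched idle clip takes over so the head never snaps back to a static mesh. A minimal sketch, with invented mood tags and clip names:

```python
# Mood tags and clip names are invented for illustration; the fallback
# rule itself is what McNamara describes.

IDLE_CLIPS = {"angry", "sad", "proud", "humble", "exertion"}

def current_face_clip(dialogue_clip, mood):
    """A dialogue line always wins; otherwise play the mood-matched
    idle so the face never goes dead between lines."""
    if dialogue_clip is not None:
        return dialogue_clip
    return f"idle_{mood}" if mood in IDLE_CLIPS else "idle_humble"
```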

Such issues, as Bao sees it, pale in comparison with the positive reactions to L.A. Noire’s characters that he has already seen from testers.

“Lots of people are commenting about how good the acting is in L.A. Noire,” he says.

“In QA we have had people saying ‘I don’t like this character, he’s a snob. You should replace him with this actor because I think he’d do a better job.’

“People are treating it like TV. They are getting so much more into it. I’ve had people come up and say ‘Why’d you replace that guy, he was doing such a good job?’, and you have to explain about script changes or an actor’s availability. People become so attached to their favourite characters that it’s no longer like playing a video game to them.”

Looking to the future of MotionScan, and what it represents for both facial animation and mocap in general, McNamara has set his sights impressively high.

“MotionScan embodies the future on a few levels. Firstly, when this technology can capture full-body performances, the level of realism will make it hard to differentiate between game, film and television,” he says.

“That will make the gameplay experience pretty seamless from exposition to action. Secondly, for filmmakers it will mean they can create whole scenes from capture data on the desktop, the way they currently edit films. They will be able to adjust the action, move characters, change cameras and re-light the scene to their heart’s content.

“Overall, for filmmakers that’s pretty exciting.

“And for games creators, it means we can compete with films and TV on a pure storytelling and performance level, along with leveraging all of the other interactive strengths that will pave the way for more exciting games.”

And as Bao describes, Depth Analysis has many commercial avenues to pursue on the back of its work on L.A. Noire.

“Right now there are three projects on the go,” he says.

“The commercialisation of this current set-up is first, and then I’m looking at building an upgrade of the head-rig for films and commercials, which will entail getting higher-resolution cameras and algorithms to ensure better quality and 3D data fidelity.

“The third project is the full-body mocap, which is a completely different ballgame in which you basically use MotionScan on the whole body.”

McNamara’s confidence, along with the unarguable quality of the MotionScan technology and the effect it has had on L.A. Noire, seems bound to lead both Depth Analysis and Team Bondi on to ever more interesting projects. The possibility of the technology crossing over into filmmaking is another standout example of the way in which one industry, with Team Bondi and Depth Analysis at the helm, could well be leading the way for a more established one to follow.

“For MotionScan the goal is to continually make it better. As I said earlier, it’s still very early days and we are listening to feedback from the people who are testing the rig and pipeline,” McNamara explains.

“We want to be able to use shaders more cleverly, take a look at subsurface scattering and also computer-generated hair, which we see a lot of our film customers working with. We are also looking at retargeting, so that you could take an actor’s performance from MotionScan and apply it to various non-human characters.

“We are already doing initial research for full body capture in costume for phase two – it’s exciting times for Depth Analysis and MotionScan for sure.”

www.depthanalysis.com
