Motion capture is rapidly becoming ubiquitous in games. Once a major selling feature used to show how high-end your title was, it is now found in almost every blockbuster title – and even a few indie hits.
This can be attributed to significant advances made in mo-cap technology over the past few decades. Develop and mo-cap specialist Vicon, which celebrates its 30th anniversary this year, invited experts to London for a roundtable discussion on what these advances have enabled.
“Motion capture was always about lots of technology getting as accurate data as possible, but now that’s almost taken for granted,” says Vicon product manager Phil Elderfield. “Now the technology is all but invisible, and the emphasis is much more about the animation and the creative side.”
Alex Counsell, principal technician for motion capture at the University of Portsmouth, says the final performance is the biggest beneficiary: “We don’t have to ensure that actors are mindful that they’re in a mo-cap suit and adjust their actions to fit our technology. They don’t have to worry about what we’re doing, they can just concentrate on their performance.
“I can look for professionals confident that the system’s going to pick everything up. We can just sit back and we’re almost invisible – apart from the velcro, gaffer tape and all the other fun aspects of mo-cap.”
Stuart Butler, senior lecturer in computer games design at Staffordshire University, agrees, adding that the rising quality of motion capture means that poor performances are more obvious than ever.
“You’re not just watching a bunch of green dots on a screen wondering if it’s going to work,” he says. “Now if students go back and shoot, it’s not a case of the capture not working, it’s that the guy in the suit wasn’t much cop as an actor.”
Motion capture has come a long way in the last few decades. Not only are the capture suits and head-mounted cameras now more comfortable and far less restrictive, leaps forward have also been seen in the back-end software. It all means studios can tackle mo-cap situations that they would never have even attempted in the past.
MOVING WITH THE TIMES
“17 years ago, you had teams of people whose job it was to replace bad trajectories and basically join the dots,” says Audiomotion MD Mick Morris.
“Every shoot was tough. If anything was blocking a marker, someone had to manually sort that out by hand. So we’ve seen huge leaps and bounds in the development of the software and hardware, which makes that part of the process easier.
“We’ll get the odd job where you need eight to ten performers, but we now have the technology to handle that.”
Ninja Theory’s co-founder Tameem Antoniades adds: “When we started, you usually captured the body and then animated the face or used ADR for the voice. Now we’re moving towards a full-scene capture: multiple actors using face, body and voice – even the cameras. We’re trying to figure out if we can capture the focal length in cameras and recreate every creative decision on set.”
The advances in motion capture software, and the other development tools it interacts with, mean that studios can see the final result of their shoot much quicker than in the past.
“With things like Unreal and whatever, the quality you can have in real-time is amazing,” says Counsell. “It used to be a bit of a mystery: you had to wait a week before you could see what the data looked like and decide whether or not to re-shoot. Now you can make those decisions on set.”
But Antoniades warns that motion capture should never replace keyframe animation.
“We keyframe all gameplay and use motion capture for all drama,” the chief creative ninja says. “You could use mo-cap for walk cycles or incidental animations, but when it comes to gameplay we need to do so much tweaking on the mechanics and the responsiveness that if you motion capture it, you end up effectively turning it back into keyframe.
“At the early stage of Heavenly Sword, we hired stunt men and wires and had them simulate explosions, fly through the air and do all sorts. But in the end we replaced all that with keyframe, because somebody flying through the air on wires looks like someone on wires.”
As the experts discuss different motion capture experiences they’ve had on past projects, Vicon’s Elderfield observes that the methodology is becoming similar to that of another industry.
“It’s becoming more like the standard film-making process,” he says. “When you get a couple of video cameras, crew, directors and performers, there’s still a lot of tech involved on any shoot, but it’s well understood and the processes are very well defined, and that’s beginning to happen with motion capture.”
With such improved understanding of how the process works, and the increased accessibility of the tech involved, a number of studios are investing in their own solutions. But Butler doesn’t believe the process will be brought completely in-house any time soon.
“A lot of the developers I talk to have a mo-cap studio on site but they use it for prep,” he says. “They don’t actually use it for full-on development – they go out externally to another site or they get someone else to do a lot of it for them and work with professionals.”
Morris concurs: “It’s been the case for a long time that developers will buy a small system or mo-cap suit, stick an animator in it, and use them for rapid prototyping to see if certain things work. If so, you put it on the list of things to be shot at a bigger studio.
“But there are certain things you can’t do in-house: if you need a lot of stunt work, or height and big volume stuff, that’s where a service provider comes in. We’re still kept busy with things like sports games that need a big area for a footballer to run or someone to play cricket. We did a job a while ago – it wasn’t for games – where we were capturing two horses and a chariot going at full pelt.
“For big scale stuff like that and cinematics, service providers are still needed.”
Antoniades adds that mo-cap specialists develop this technology much more effectively than any games studio could in-house, making the benefits obvious.
“Yes, it’s expensive to have a lot of people under one roof for four-to-six weeks, but the benefit is you end up with a cut of your entire game and all the cinematics,” he says.
With the technology behind motion capture already more accessible and convenient than developers could have ever hoped, what’s the next step? What barriers remain that prevent studios from utilising this high-end tech?
“The barrier used to be the cost of entry, but that’s disappearing,” says Morris. “As the technology gets into the hands of new creators, we’ll begin to see more impressive performances.”
Elderfield adds that mo-cap isn’t restricted to use in studios anymore: “It’s starting to move outside, it’s starting to move onto sets and all over the place. As we move towards simplification at the production user-end, I think we’ll end up with a system that’s much more invisible and much more mobile.”
There are still technical hurdles to overcome, however, before motion capture becomes more efficient. The process can still be a slow one, partly due to the set-up time required.
“I’d like to see that time reduced,” says Butler. “If we move our cameras, we’ve got to refocus them. There isn’t an exact science with that, it’s just about going in and fiddling. The biggest shift would be the ability to very quickly set up wherever you are, within an hour rather than four.”
Meanwhile, Antoniades dreams bigger: “I’d like to see a future – which might be ten, 15 years off – where we’re not capturing dots, markers or paint on the face, but everything: the clothing, the way it wrinkles.
“I would love to see a future where we can capture a massive amount of data but all the sensors are hidden, so you can shoot outside and on location with the right lighting.”
Only time will tell if this will come to pass, but given the leaps forward made in the last 30 years, it’s a safe bet that motion capture will become even more advanced in the next 30.