AI in games is more believable than ever. Nearly gone are the days when the hostages you rescue run head-on into battle without any form of protection or weapon, intent on filling up your crosshair.
When we last visited AI, there was much talk of cloud computing being used to enhance AI, as seen in Forza Motorsport 5. But where does AI in games go now?
Epic Games lead AI programmer Mieszko Zielinski says the next frontier for artificial intelligence is deep learning. This is the latest trend in machine learning, where AI can recognise objects and speech. It’s something ex-Lionhead dev Demis Hassabis has been working on at his new start-up DeepMind, acquired by Google in January 2014 for a reported $400m.
“This is the ‘real’ AI, but it’s applicable to game AI as well,” says Zielinski. “We haven’t had a chance to have our stab at it just yet, but I do have a couple of cool use-cases for these fancy algorithms – smart game level generation being just one.”
But for all the advancements that have taken place, can AI be too smart for players? Matthew Jack, CEO of Moon Collider, which creates the AI tool Kythera seen in Star Citizen, believes few people enjoy being outsmarted when playing games, and says computers these days “can easily outsmart humans in many scenarios”.
“Good AI design is about appearing smart, which sometimes means you have to ‘hide’ information from the AI to help it make human-like decisions,” he says. “And don’t forget that a smart AI is also important to developers – who need a system they can work with at a high level without them having to explain every little step.”
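The information-hiding Jack describes can be sketched as a perception filter: instead of letting an agent query the full game state, its decisions draw only on what has passed a field-of-view test. This is a toy illustration, not any real engine's API; all class and parameter names here are made up.

```python
import math

class Perception:
    """Filters world knowledge so an agent only 'knows' what it has seen."""

    def __init__(self, fov_degrees=120, max_range=30.0):
        self.fov = math.radians(fov_degrees)
        self.max_range = max_range
        self.known = {}  # entity id -> last seen position

    def can_see(self, agent_pos, agent_facing, target_pos):
        dx = target_pos[0] - agent_pos[0]
        dy = target_pos[1] - agent_pos[1]
        if math.hypot(dx, dy) > self.max_range:
            return False
        # Angular difference between facing direction and target bearing
        angle = math.atan2(dy, dx)
        diff = abs((angle - agent_facing + math.pi) % (2 * math.pi) - math.pi)
        return diff <= self.fov / 2

    def observe(self, agent_pos, agent_facing, entities):
        # Only entities that pass the view test enter the agent's knowledge;
        # everything else stays at its last remembered position, or remains
        # unknown entirely - which is what makes the behaviour look human.
        for eid, pos in entities.items():
            if self.can_see(agent_pos, agent_facing, pos):
                self.known[eid] = pos
```

An agent facing along the x-axis would record a player standing ahead of it but stay oblivious to a sniper directly behind, even though both are trivially available in the game state.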
Zielinski agrees, and further states that developers are in the business of entertainment, so the main purpose of AI in games should be fun.
“We often force AI to do things that will look good, or make players feel good, even though those are not the smartest things to do,” he says. “Why do you think AI soldiers are peeking out of cover when there’s a human sniper looking at them?”
Havok AI lead developer Ben Sunshine-Hill says that while AI can exhibit too much superiority over players, that’s not quite the same as being too smart for them.
“Giving AI-controlled characters the ability to operate rationally and competently is important, even if you then throttle how effectively they use that ability, because then the precise level of competence is chosen to serve the design, rather than being imposed by the technology,” he states.
“To put it differently: It’s possible for AI to be smart and act dumb. It’s not possible for AI to be dumb and act smart.”
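Sunshine-Hill’s “smart but acting dumb” principle can be sketched as a competence throttle: compute the rational choice first, then let a designer-facing dial decide how often the agent actually uses it. The function and its knob are illustrative assumptions, not Havok code.

```python
import random

def choose_target(targets, competence, rng=random):
    """Pick a target to attack.

    targets: dict of name -> threat score.
    competence: 0.0 to 1.0, a designer-tuned dial.
    """
    # The agent is always 'smart': it ranks every target rationally.
    ranked = sorted(targets, key=targets.get, reverse=True)
    # ...but the design, not the technology, decides how often it acts on
    # that ranking. A throttled agent sometimes slips one rank down, which
    # reads as fallible rather than irrational.
    if len(ranked) > 1 and rng.random() > competence:
        return ranked[1]
    return ranked[0]
```

At full competence the agent always engages the biggest threat; dialling the knob down yields believable slip-ups without ever making the underlying system dumb.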
If there are limitations being enforced on AI in the name of fun, perhaps another question for the industry is, given such advancements already made, are developments in the field really that important anymore?
Jack says there is still room for improvement, and that with another area of games plateauing – graphics – AI is emerging as a key differentiator.
“There’s a lot of potential left untapped,” he says. “There’s a huge gulf between what designers would like to see, and what’s traditionally been feasible to deliver within the schedule of a single game’s development. So I think there are a lot of opportunities ready to be unlocked by the next generation of middleware. If anything, AI is going to be increasingly important for the games business.”
Sunshine-Hill admits that the industry has settled into a comfortable place with regard to AI. He says genres have evolved to compensate for, and minimise the effect of, the deficiencies in current AI techniques, while those techniques have in turn evolved to support the use-cases specific to those genres.
Perhaps then, evolution in AI will come about in tandem with new genres and gameplay experiences, such as in Creative Assembly’s terrifying Alien: Isolation, where the Alien feels truly alive.
“Advances, ultimately, will be driven by the design side of things,” says Sunshine-Hill. “But I doubt neither the designers’ drive to innovate, nor the programmers’ ability to tackle the challenges presented by that innovation. It’s just a matter of time. And money.”
Zielinski says AI shouldn’t just be defined by what is seen on the screen, as the industry already reached the level of AI necessary for creating believable opponents some time ago. The way the tech makes a difference now is supporting creators.
“AI can assist developers in creating levels,” states Zielinski. “New AI techniques can make creating believable characters easier and quicker. AI can even be used to generate and filter game scenarios, like RPG side-quests. There are many domains where AI techniques can be successfully applied to speed up, or improve the quality of, human developers’ work.”
And despite great examples in recent times of near-believable AI companions, such as BioShock Infinite’s Elizabeth or Ellie in The Last of Us, Zielinski still believes there’s huge room for improvement.
“There are really good examples of faking it to a very good result, like Elizabeth in BioShock Infinite, but those cases heavily depend on a given game’s specific characteristics, linearity and level scripting,” he explains. “It’s all smoke and mirrors still, and we’d like to see ‘real’ AI in games at some point. It’s currently not possible, and hardware is the main thing restricting us.”
Expanding on that last point, Zielinski describes hardware as one of the key challenges of developing AI. The technology that drives it has always strained against hardware limits, and this looks set to continue into the current generation and beyond.
“It’s not that the hardware is ‘weak’, it’s that AI is always getting scraps of processor time and memory,” he states. “With that in mind, AI is now required to populate vast, open worlds, navigating those worlds like humans would, never forget a thing – which in itself is ‘artificial’ – and ‘just feel real’. That’s a hell of a lot to handle in 15 per cent of your frame time.”
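Living off “scraps of processor time” usually means time-slicing: agents update round-robin inside a fixed per-frame budget, and whatever doesn’t fit simply waits for the next frame. A minimal sketch, with illustrative numbers taken from Zielinski’s 15 per cent figure:

```python
import time
from collections import deque

FRAME_MS = 16.6                   # one frame at roughly 60fps
AI_BUDGET_MS = FRAME_MS * 0.15    # "15 per cent of your frame time"

def run_ai_frame(agents, budget_ms=AI_BUDGET_MS):
    """Run as many agent updates as fit in the budget.

    agents: deque of callables, each performing one slice of an
    agent's thinking. Returns the number of updates that ran.
    """
    deadline = time.perf_counter() + budget_ms / 1000.0
    ran = 0
    while agents and time.perf_counter() < deadline:
        agent = agents.popleft()
        agent()               # one slice of work for this agent
        agents.append(agent)  # back of the queue for its next slice
        ran += 1
    return ran
```

The queue persists across frames, so an agent starved this frame picks up where it left off on the next one, rather than the whole population stuttering together.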
Sunshine-Hill adds that the processing budget is a perennial concern for developers. Even as each console generation improves on the last, he believes there is often never enough processing power to match ambition.
“The high-level behavioural control isn’t particularly expensive right now – though it may get more so, as planning-based systems gain traction – but the low-level query and executive systems underlying them will suck up as much CPU time as you’re willing to throw at them,” he explains.
“Humans bring a staggering level of computational power to bear when executing even the simplest of tasks. Our challenge is to emulate that skill, with a fraction of the time on a CPU with a fraction of the power of the brain.”
He goes on to say, however, that despite restrictions comparative to the human mind, there is a lot more within the realms of possibility on new hardware.
In fact, as games increasingly focus on expansive worlds that are both destructible and constructible, allowing players to raise new buildings and objects within them, AI faces an entirely new frontier. It must remain believable while adapting to a world that can constantly change around it.
Epic is taking this challenge head on with its upcoming game Fortnite. Players collect resources during the day, and must use these to build a fort to protect themselves from the onslaught of monsters at night. This requires AI with reliable spatial information for navigation, to ensure its agents are never perplexed by new objects.
“Whenever a player destroys or builds something, the simplified representation of the world that AI is using for its calculations needs to be updated,” explains Zielinski. “The regular tricks of creating hierarchies of representations just don’t cut it with the frequency of changes to the world we’re facing.”
Jack says that one of the starting assumptions behind the Kythera AI tech is that levels will change at run-time. The biggest design implication, he says, is avoiding reliance on pre-processing steps.
“Navigation meshes, and everything else, can be built on-the-fly,” he says. “Allowing that kind of responsiveness requires good design, and hard work on performance, but the end results are simple to use – and once you have the framework in place you keep finding unexpected benefits.”
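The shape of that idea can be shown on a toy grid rather than a real navigation mesh: because nothing is precomputed, a build or destroy edit takes effect on the very next path query. This is a sketch of the general approach, not Kythera’s actual implementation; every name here is illustrative.

```python
from collections import deque

class NavGrid:
    """A live walkability grid with no pre-processing step."""

    def __init__(self, width, height):
        self.w, self.h = width, height
        self.blocked = set()

    def place_block(self, cell):     # player builds something
        self.blocked.add(cell)

    def destroy_block(self, cell):   # player tears it down
        self.blocked.discard(cell)

    def find_path(self, start, goal):
        """Breadth-first search over the grid as it stands right now."""
        frontier = deque([start])
        came_from = {start: None}
        while frontier:
            cur = frontier.popleft()
            if cur == goal:
                path = []
                while cur is not None:   # walk parents back to the start
                    path.append(cur)
                    cur = came_from[cur]
                return path[::-1]
            x, y = cur
            for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if (0 <= nxt[0] < self.w and 0 <= nxt[1] < self.h
                        and nxt not in self.blocked
                        and nxt not in came_from):
                    came_from[nxt] = cur
                    frontier.append(nxt)
        return None  # unreachable with the world as it currently stands
```

Wall off a corridor and the next query routes around it or fails; knock a hole in the wall and the path immediately flows through, with no rebuild pass in between.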
Jack is also mindful of another challenge facing AI: the pace of games development.
“As free-to-play and backer-driven models rise, it is becoming the norm for games to offer substantial updates every few weeks or months – and the AI industry still needs to adjust to that rapid cycle,” he explains.
The next step
Despite the arguable plateau in some areas of traditional game AI, our experts still believe there are many directions the field can go next.
Zielinski says one direction is designer- or artist-guided procedural content generation, followed by fully autonomous AI content generation – i.e. generating entire living worlds on-the-fly based on a set of parameters. It’s a description that evokes thoughts of Hello Games’ upcoming No Man’s Sky.
Jack believes the next big visible advance in the AI space will be scaling, driven by the development of cloud computing in the wider tech industry.
“For a multiplayer or MMO game, the cloud is a natural place to be, and even single-player games have a push to make use of more cloud services,” he says. “When those massive computing resources can be shared across a group, you have the raw computing power to scale up from a handful of agents to rich, fully-simulated worlds.”
Sunshine-Hill says that between increased console power and the engineering work of teams like his own at Havok, AI characters can now be more observant and reactive than ever before.
“Developers will see different opportunities in that,” he says. “Periods of rapid evolution tend to lead to a lot of weird species. I’m looking forward to playing all of them.”