As hardware advances, so the challenge of pushing game graphics increases. Develop speaks to the experts looking at the future of game visuals

The future of game graphics

A little over 30 years ago AMD was rightly proud to unveil a graphics chipset boasting a 2KB memory capacity.
This summer, the longstanding semiconductor specialist debuted a graphics card with 2GB of memory. Even without considering more top-of-the-line products, that’s a million-fold increase over 30 years. Or a decade-by-decade exponential growth rate of a hundred-fold. And it doesn’t even touch on advances in compute density.
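As a quick back-of-the-envelope check, the growth figures above work out in a few lines (an illustrative sketch; the 2KB and 2GB capacities are the ones quoted in the text):

```python
# Sanity check of the article's memory-growth figures.
# 2KB and 2GB are the capacities quoted in the text; the rest is arithmetic.

start_bytes = 2 * 1024        # 2KB chipset, ~30 years ago
end_bytes = 2 * 1024 ** 3     # 2GB card, this summer

total_growth = end_bytes / start_bytes   # overall multiplier over three decades
per_decade = total_growth ** (1 / 3)     # equivalent growth per decade

print(f"Total growth: {total_growth:,.0f}x")  # about a million-fold
print(f"Per decade:   {per_decade:.0f}x")     # roughly a hundred-fold
```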

Growth numbers like that are certainly impressive, and as such it’s easy to get exceedingly optimistic about technology’s accelerating potential. But in spite of all that, game graphics’ stride into the future has, in recent years, seemed to the layperson to have slowed down. Photorealism still lies just out of reach, and the bold leaps forward of the past – such as the move to 3D – have become tiptoed steps through the same region that is home to the uncanny valley.

There’s reason to be optimistic, certainly. The consumer appears to have an insatiable appetite for ever more advanced displays, hardware outfits continue to push horsepower, and Moore’s famous law still just about holds true.

As such, it’s easy to hear a lot of long-promised technologies making a lot of noise again. Stereoscopic 3D as standard continues to enjoy a push, it’s a definite maybe that VR is here to stay, we have our fingertips over the cliff edge of the other side of that uncanny valley, and photorealism is rumoured to be close now. Really close, in fact.

Sage words

One person who knows what it is to chase the dawn of photorealism is Richard Huddy. He is AMD’s chief gaming scientist, and a veteran of the semiconductor and GPU business.

“I started in all this in about 1996 with 3Dlabs, so getting close to 20 years ago now,” offers Huddy.

“For most of that time, I would say to developers photorealism was ‘about ten years away, or something like that’, and then I started to know that was a little optimistic. It seemed for a time like we might even make it by 2006 or 2007, but we clearly didn’t.”

So when does Huddy feel studios will be able to deliver game worlds that are utterly convincing, and indistinguishable from reality?

“We’ll have limited photorealism in five years from now,” says Huddy. By ‘limited’ he means that stills, or short pre-rendered scenes, might trick even the most careful observers for a short time. The reason for Huddy’s caution is simple. There remain a number of barriers to making game graphics capture every detail of real life.

One example is screen size. According to Huddy’s research, an 8K-by-6K screen would just about offer enough pixels to match ‘eye resolution’ – 48 million pixels, in other words. That may not be too far off, but photorealism is about much more than the games development canvas, and there are far more intricate problems to overcome.
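Huddy’s pixel count is easy to sanity-check (a quick sketch; the 8K-by-6K figure is his, while the 4K comparison is our own illustrative addition, not from the article):

```python
# Checking the 'eye resolution' figure quoted by Huddy.
width, height = 8000, 6000       # the 8K-by-6K screen from his research
pixels = width * height
print(f"{pixels / 1e6:.0f} million pixels")   # 48 million

# Illustrative comparison with a 4K UHD display (our assumption, not Huddy's):
uhd_pixels = 3840 * 2160
ratio = pixels / uhd_pixels
print(f"That is about {ratio:.1f}x the pixels of a 4K UHD display")
```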

Shadow is the beast

“The amount of processing time that is spent on very high quality shadows these days can, surprisingly, be quite a bit more than the amount of processing time spent on the rest of the scene,” continues Huddy. “That doesn’t feel terribly likely, but it’s awfully difficult to build stable, high quality shadows; there’s just no terribly good algorithm for that, and it’s a current barrier to photorealism.”

If developers were able to do ray tracing with a sufficiently high number of rays in a fully dynamic scene – Huddy feels around a billion per frame ought to do it – they might be able to generate a good quality image that might include photorealistic shadows. Alas, as it would take far too much compute power away for most, for now it remains something developers and hardware outfits must revisit in the future.
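A billion rays per frame implies a daunting throughput once a frame rate is factored in. A rough sketch, assuming a 60fps target (the frame rate is our assumption, not Huddy’s):

```python
# Rough estimate of the ray budget Huddy describes.
# The per-frame ray count is his figure; the frame rate is an assumption.
rays_per_frame = 1_000_000_000    # ~a billion rays per frame, per Huddy
frames_per_second = 60            # assumed real-time target

rays_per_second = rays_per_frame * frames_per_second
print(f"{rays_per_second:.1e} rays per second")   # 6.0e+10
```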

However, there have been significant, near-tangible leaps forward in the graphics space where light and shadow are concerned, particularly in physically-based shading.

“The biggest trend of the last few years has been the adoption of physically-based shading techniques,” asserts Chris Doran, director of the prolific Geomerics, which specialises in lighting technology.

“The rate of adoption has been quite stunning – it is unusual for the industry to move en masse in one particular direction. Part of this was maybe due to the console transition. Everyone was looking to improve their graphics pipeline, and physically-based shading was the right technology at the right time.

“For our team, the interest is about how much room for standardisation there is in material models based around the techniques of PBS.

“I think we can learn from the film industry here from the way they have standardised their very complex material pipelines.”

It’s awfully difficult to build stable, high quality shadows; there’s just no terribly good algorithm for that, and it’s a current barrier to photorealism.

Richard Huddy, AMD

Meshing together

It isn’t just about shadows and rays, either. It’s also about the human capacity to keep pace with technology’s advance.

“The challenge that developers face will be in getting the right skill sets in place within their teams,” says Dave Cullinane, account director at CG specialist RealtimeUK, on the photorealism problem. “Putting together teams that have both expertise and experience in creating the level of visuals that we are able to achieve is no easy task.”

Cullinane points out that while the tools and technology are getting better and more accessible, they are meaningless without real experience and creativity.

“Results at the level that our own clients demand are very rarely achieved by talented individuals acting alone; modelling, lighting, rigging, animation, shader development are all highly specialist skills. The best results are created by those teams sitting under one roof who have a history in working closely with one another and solving problems together.”

Away from the challenge of securing very real development talent, there’s also a potential obstacle to reaching photorealism at the very core of modern 3D games development. It may just be that the polygonal model itself isn’t the smoothest path to convincing, lifelike graphics.

“Polygons have been bent into shape by the evolution of graphics hardware, really quite severely, over the last 15 years or more,” states Huddy.

“And they’ve stood up really impressively well. One of the best things that we ever did was to add tessellation to our hardware. The original Xbox 360 had tessellation hardware and it was introduced by Microsoft into DirectX11. That has helped as a form of geometry compression, but geometry compression is there because of the fact that memory has not been ramping up as fast as compute over the years.”
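The ‘geometry compression’ point can be illustrated with a toy calculation: a coarse control mesh plus a per-patch tessellation factor stands in for a far denser mesh that never has to be stored in memory. All figures below are illustrative assumptions, not from the article:

```python
# Toy illustration of tessellation as geometry compression: store a coarse
# control mesh and expand it on the GPU, rather than storing a dense mesh.
# Every number here is an assumption chosen for illustration.

BYTES_PER_VERTEX = 32                 # e.g. position + normal + UV

def mesh_bytes(vertices: int) -> int:
    """Approximate memory footprint of a mesh with the given vertex count."""
    return vertices * BYTES_PER_VERTEX

coarse_vertices = 10_000              # control cage actually held in memory
tess_factor = 16                      # subdivision applied per patch on the GPU
dense_vertices = coarse_vertices * tess_factor ** 2   # rough expanded count

stored = mesh_bytes(coarse_vertices)
equivalent = mesh_bytes(dense_vertices)
print(f"Stored: {stored / 1e6:.2f} MB, renders like: {equivalent / 1e6:.0f} MB "
      f"({equivalent // stored}x compression)")
```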

Suggesting developers will need ‘near-infinite’ memory bandwidth to make that a reality, Huddy believes we are a couple of generations away from escaping the current confines.

Beyond reality

It’s important, though, to remember photorealism is not the be-all and end-all for games graphics; something most contributors to this article were keen to highlight. However, overcoming the challenges in pushing photorealism should help developers make games less tethered to reality.

“There is still a lot of room to improve graphics – stylised or photorealistic – further,” confirms Dag Frommhold, principal software engineer at middleware powerhouse Havok.

“In fact, we are still pretty far away from photorealism, and there is a lot of research and development effort needed to get much closer. For features such as global illumination, reflections, and even shadows, all current real-time graphics engines use more or less elaborate hacks which are relatively plausible, but still a good bit away from the ground truth.

“As everybody knows, the last ten per cent of quality takes between 50 per cent and 90 per cent of the development time – depending on who you ask – so we can extrapolate how long it is going to take.”

So how long is it before we can let designers run free of restrictions, and produce game worlds that are not just photo-real, but ‘imagination-real’? Huddy puts it at a decade after true photorealism; itself a time nobody can quite settle on.

We are still pretty far away from photorealism, and there is a lot of research and development effort needed to get much closer.

Dag Frommhold, Havok

People are working together on the problems if not the schedule, though, and must continue to do so if the games industry is to break the back of photorealism, and strive beyond that to some of the more out-there graphical concepts.

But there is a need for collaboration across that other valley that exists in game design; the void between art and technology.

Peter Busch is vice president of business development at facial motion capture experts Faceware, which contributed to the striking quality of Call of Duty: Advanced Warfare cinematics. And he is certain an olive branch connecting programmers and designers can do much to help move game graphics towards photorealism.

“There is still a gap between the art and programming side,” says Busch. “For example, it is still common late in the development cycle for the quality of graphics to suffer simply because late development challenges force that concession. One of the main areas to suffer is animation. Many times character animation is created at a higher quality, but is ‘clamped’ down due to programming and engine challenges come ship time.”

Frommhold adds: “As a substantial bit of the technological advances are driven by new or upcoming hardware, I think it is critical for hardware and software developers to work closely together in order to ensure the best possible end-user experience.

“This is especially true for new experiences such as VR which are – despite all the well-deserved enthusiasm around them – essentially still at a very early stage and need to prove themselves.”

However, press your ear against the industry rails, and it’s easy to eventually hear mild caution around over-optimism towards collaboration. All agree it’s important, but some sense working together needs to be addressed with care.

“You have to be careful with words like standardisation in the games industry, as there is a suspicion that standards stifle creativity,” proposes Geomerics’ Doran. “But if the standards are right, people get behind them and that makes everyone’s life easier. It isn’t totally clear yet that the material model used for today’s physically-based implementations is flexible enough for most games, but the signs are pretty good.

“And one of the biggest benefits of the games industry adopting a common framework is it gives the hardware guys a fixed target to aim at. That could really push things forward.”

The ultimate leap forward, of course, is the Holodeck; a concept made iconic by Star Trek. The Holodeck sees a player move through a completely interactive, realistic and tangible virtual world pulled from their own mind, without the need for any carried or wearable hardware.

To boldly go

It’s not likely middleware and engine companies will be releasing Holodeck SDKs anytime soon, but the advances being made are significant, despite all the challenges. And according to Huddy, the USS Enterprise NCC-1701-D’s own spin on the arcade might not remain a flight of fantasy indefinitely.

“We’ve talked inside AMD and ATI about gravitating towards the Holodeck experience, creating an immersive situation where everything around you is properly represented and you are totally within that,” confirms Huddy.

“I think that makes more sense in the very long-term than the VR goggles approach, which will always weigh something and slow you down. You can’t scratch your face with VR goggles on.

“There is no doubt we will end up there,” concludes Huddy of the Holodeck.

For now, though, studios may have to put up with the challenges to photorealism and the difficulties of VR, and return to the Holodeck another day.
