Looking at the current state of the game market, it’s clear that the development sector has a serious problem with multiplatform development.
Staggered releases have become more common: it’s not unusual to see a PS3 version of a title ship a week or two after its 360 counterpart. But it can be even more severe than that – some titles are appearing on certain formats months after they’ve appeared on others, with production schedules slipping as developers continually shift their attention between platforms.
In addition, more and more developers are speaking out about the difficulties of multiplatform development in this latest generation – usually while boasting about the large amount of work they’ve done on their own internal development frameworks to alleviate the situation.
There’s even a feeling amongst some in the industry that simultaneous multiplatform development isn’t possible with a single team any more.
Going back to basics for a moment, development for multiple platforms has traditionally been undertaken because it spreads the cost and resulting risk of games development across the biggest possible audience. With games now costing more than ever to create, that spread of risk would seem to be needed more than ever before. At the same time, though, with the large disparity between widely popular platforms such as the 360 and Wii, going cross-platform means more work, more ideas to exploit platform-specific quirks and, inevitably, more money. But is it getting to the point where the benefit gained from going cross-platform is actually outweighed by the cost – and is being a multiplatform developer really as hard as some say?
At first glance, it is clear that the very definition of ‘multiplatform’ is muddier than it has ever been. While it could previously be defined simply as the development of a similar-looking title on all of the ‘big three’ platform holders’ machines, Nintendo’s decision to veer from the road of horsepower advancement and produce a machine with largely last-generation specs has changed the playing field.
Similarly, with the PlayStation 2 enjoying a long tail in sales that none of its contemporaries managed, there’s much to be said for developing on last-generation machines. And let’s not forget the two wildly different handhelds while we’re at it, and the player interface differences that the Wii and DS bring to the table. Suddenly, ‘multiformat’ encompasses a set of vastly different standards of dizzying complexity. Or, as Eidos’ chief technology officer Julien Merceron thinks of it, possibilities.
“Actually, all these platforms are quite exciting to support,” he says. “You can generalise most games into two cases. The first case is that you’re making a ‘hardcore’ game, in which case the chances are you might only target PC, Xbox 360 and PS3, and that’s not hugely difficult.
“The other case is that, if you’re making a mainstream game, you’d ideally like to support the Wii as well. In that situation, developing for the Wii in addition to the other formats isn’t the easiest thing in the world – especially when each version needs to be cutting edge on its respective platform.”
One way of reducing the complexity of the latter case, says Merceron, is to actually open up to even more platforms. That way, “you could potentially develop three sets of titles: one for PC/PS3/360, one for Wii/PS2/PSP and one for DS and mobile,” he says.
CAUGHT IN THE NET
When taking a bird’s-eye view of the current generation of consoles, it’s tempting to generalise the differences based solely on performance: the Wii is weak; the PS3 is strong, but the Cell poses architectural difficulties; and the Xbox 360 is powerful and less alien in its layout. But to do so would miss some of the other large issues – not least online gameplay.
Kuju Sheffield rebranded itself as Chemistry earlier this year, and with it decided on Unreal Engine 3 exclusivity. And while that’s certainly insulated the team from some of the pain of multiplatform development – “We’ve got Epic worrying about the technology, so we can be worrying about other things,” says managing director Mike Cook – as studio manager Simeon Pashley explains, there are still significant issues to be worked around.
“The very big issue, we feel, is networking. There’s no commonality on any of the platforms, even to the end-user, on what the experience is like,” he says. “There are very obvious differences between Xbox Live, PlayStation Network and GameSpy or whatever other PC service you use.”
As such, part of the network implementation focuses on just getting the same core experience right across all of the platforms, and on creating uniformity in the face of the vastly different structures of the various networks. And that’s before differentiating to take advantage of some of the extra functionality provided by, say, Xbox Live. It’s another major point that has impacted the bottom line of next-gen game production.
Cook is keen to add that it’s not just the developers that underestimate the cost per platform of implementing network play – the publishers are just as guilty of overlooking the complexity involved.
“It’s a massive cost, and it will trip people up,” he says. “It’s not an easy thing to do by any means.”
But isn’t this a problem alleviated by the studio’s use of Unreal Engine 3? “Sure, making maps is fine – that does come free with Unreal, so to speak. But things like lobbies, the matchmaking, the leaderboards – it’s a big job. The reality of using UE3 is that you’re still developing for multiple platforms, and you have to play to those platforms’ strengths and weaknesses.”
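To put that in concrete terms, the usual approach – and the one Pashley and Cook describe working towards – is to hide each platform’s online service behind a single interface that the game code talks to, with one back end per service. The C++ below is a minimal, purely hypothetical sketch written in present-day C++ for brevity: the class names and platform macros are invented for illustration, and it shows the pattern rather than any studio’s actual code or a real platform SDK.

```cpp
#include <memory>
#include <string>

// The game only ever talks to this interface, whatever the platform.
class OnlineService {
public:
    virtual ~OnlineService() {}
    virtual bool createLobby(int maxPlayers) = 0;
    virtual bool findMatch(const std::string& gameMode) = 0;
    virtual void postScore(const std::string& leaderboard, int score) = 0;
};

// One back end per platform, each written against the native service
// (Xbox Live, PlayStation Network, or a PC service such as GameSpy).
class XboxLiveService : public OnlineService {
public:
    bool createLobby(int maxPlayers) override            { /* Live session APIs */  return true; }
    bool findMatch(const std::string& gameMode) override { /* Live matchmaking */    return true; }
    void postScore(const std::string& leaderboard, int score) override { /* Live leaderboards */ }
};
// PsnService and GameSpyService would follow the same pattern.

// A factory chooses the implementation at build time; game code never branches.
std::unique_ptr<OnlineService> createOnlineService() {
#if defined(PLATFORM_XBOX360)             // hypothetical build-system define
    return std::unique_ptr<OnlineService>(new XboxLiveService());
#else
    return nullptr;                       // other platforms' back ends go here
#endif
}
```

Lobbies, matchmaking and leaderboards then become one implementation job per platform behind a stable interface – which is exactly the part Cook warns doesn’t come for free.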
Interestingly, Unreal Engine 3 has in some ways become the poster child for both the ups and downs of multiplatform development. That’s understandable, given that it’s often seen as a flexible, catch-all engine – shown at its best in creator Epic’s own Gears of War, but also called into question by a lawsuit from licensee Silicon Knights. And then there’s the chatter that, despite UE3’s performance on Xbox 360, its PS3 showing has struggled. At E3, Sony even pledged to work with Epic to get the engine running on PS3 with all guns blazing. Six months on, how is the engine faring?
“Epic have done a lot of work on the PS3 version of Unreal,” explains Cook. “The engine was obviously PC- and Xbox-led, by a long way, but I think the effort both internally and with Sony has got the PS3 version of Unreal up to where it should be. It certainly looks the part.”
Naturally, Mark Rein is confident about Unreal Tournament 3’s PC-equivalent performance on PS3, but admits that it has taken the production of one of Epic’s own games to get UE3 to ship quality. “Like we did last year with Gears of War on the 360, we’re kind of reaching version 1.0 of the technology for PlayStation 3. It’s really exciting – it feels like we’ve reached a big milestone and hurdled way over it.”
Epic’s troubles are pertinent because more and more studios are turning to existing technology such as Unreal Engine 3 to facilitate development for multiple platforms. The alternative – building equal-footed technology for several different architectures and performance envelopes – is a job too large for many small- to medium-sized developers.
Merceron points out several reasons why building a bespoke multiplatform engine is more difficult in this generation than it might have been in the past. “These days, experts are rare – losing one key person in your team can make things tough, to a greater extent than before. Not only that, but the scope of today’s games is bigger: even console games are moving deeply into the online and social community spaces, an area where multiplatform needs to be applicable as well,” he adds.
But it’s not just building for the now that’s important – despite only a year having passed since two of the current-gen platforms launched, Merceron believes that cross-generation development will be a big force in the future.
“Developers are already trying to architect so that the core technologies can migrate to PS4, Wii 2 and Xbox 720,” he says. “It’s a very interesting trend, as it can really have a very positive impact for the company in about three years from now.” (And, as we reveal on page 6, this is a future Eidos itself is investing in.)
OPENING THE FLOODGATE
So, what should those embarking on contemporary multiplatform development be careful of? Amongst the people we spoke to, advice on making sure you have a solid architecture featured strongly.
“It’s correct that there’s a lot of bespoke or tailored code, especially in the hardware intensive processes like graphics or audio, but it’s definitely the case that when it comes to aspects like multithreading we’ve got an awful lot of common code,” says Alex McLean, technical director at Pivotal, which itself has spent the past year building its own unified development base for 360, PS3 and PC. As such, it’s important to make sure you have a solid underlying structure that’s applicable across all of the platforms, so that platform-specific features can be abstracted out on top.
It’s a point that Merceron agrees with: “Whatever your approach, anyone can trip up on architectural aspects. Architecture work is now extremely important so that you ensure the implementation will be robust and manageable. A poor low-level architecture will generate a lot of multiplatform issues when designing the high-level features.”
Part of the problem developers have had in adapting to the high-end machines of late has been concurrency: the true power of these architectures is only unlocked when all of the cores or SPUs are working efficiently. As such, splitting big processes into smaller tasks and building a ‘super-scheduler’ to manage them is a major priority for developers starting out.
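In its most minimal form, such a ‘super-scheduler’ is little more than a thread-safe queue of small, independent tasks drained by a pool of worker threads, ideally one per available core. The sketch below is a bare-bones illustration in modern C++ – using std::thread rather than the console thread APIs of the day – and not any particular studio’s or middleware vendor’s system.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// A task is any small, independent unit of work.
using Task = std::function<void()>;

class TaskScheduler {
public:
    explicit TaskScheduler(unsigned workerCount) {
        for (unsigned i = 0; i < workerCount; ++i)
            workers_.emplace_back([this] { workerLoop(); });
    }

    ~TaskScheduler() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        wake_.notify_all();
        for (auto& worker : workers_) worker.join();
    }

    // Queue a task; any idle worker will pick it up.
    void submit(Task task) {
        { std::lock_guard<std::mutex> lock(mutex_); tasks_.push(std::move(task)); }
        wake_.notify_one();
    }

private:
    void workerLoop() {
        for (;;) {
            Task task;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                wake_.wait(lock, [this] { return done_ || !tasks_.empty(); });
                if (done_ && tasks_.empty()) return;
                task = std::move(tasks_.front());
                tasks_.pop();
            }
            task();   // runs on whichever core this worker thread occupies
        }
    }

    std::vector<std::thread> workers_;
    std::queue<Task> tasks_;
    std::mutex mutex_;
    std::condition_variable wake_;
    bool done_ = false;
};
```

A big process such as particle simulation or animation blending is then chopped into many small submit() calls instead of one monolithic loop, so the same code can keep one core or six busy.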
“The SPUs are incredibly hard to program and optimise for,” empathises Valery Carpentier, Emergent’s EMEA field application engineer. “You have to write data to special parts of memory or it’ll crash, you have to transfer memory yourself – it’s a big nightmare.”
It’s a problem that Emergent saw coming early – not just programming for the PS3’s unbalanced concurrent architecture, but developers having to plan how their game will work on anywhere from one processor to six. In response, it developed Floodgate, a new but integral part of its Gamebryo engine aimed at helping game studios get the most performance out of parallel systems. Although it could (somewhat unfairly) be described as a scheduler, Floodgate is in actuality a system that manages processes running on different cores, keeping them thread-safe and their memory managed effectively – and it scales from the six-core PS3 through the Xbox 360 and even the Wii.
“When you’ve written the task program, you can tell Floodgate to run it on one SPU, two SPUs or even all five. It’s as simple as that – you just say ‘run this task on this many SPUs’ and it will.”
These ‘Floodgate programs’ can be written in pure C++ to ensure cross-platform compatibility and then later rewritten in chip-specific assembler during the optimisation stage. The benefit of Floodgate, says Carpentier, is that it allows people to get code running on multiple processors quickly – and that time saving, which Emergent says can amount to around 12 man-months, can be better spent optimising, swapping tasks between cores or altering how many cores are working on the task.
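The shape of such a task is worth spelling out. Whatever the middleware, work destined for an SPU has to read from explicitly declared input buffers and write to explicitly declared output buffers, with no stray pointers into main memory, because the data must be streamed into the chip’s small local store before the code runs. The hypothetical kernel below – an illustration of that pattern, not Floodgate’s actual API – shows what such a task might look like in plain, portable C++; an optimisation pass could later replace its inner loop with chip-specific code.

```cpp
#include <cstddef>

// Hypothetical data-parallel task: transform a batch of vertices by one bone.
// Every input and output is an explicit, fixed-size buffer, so the same kernel
// can run on a general-purpose core or be handed to an SPU whose DMA engine
// streams these buffers into local store first.
struct SkinningTask {
    const float* inputPositions;   // source positions, tightly packed x,y,z
    const float* boneMatrix;       // one 3x4 transform for this batch
    float*       outputPositions;  // transformed positions, same layout
    std::size_t  vertexCount;

    void run() const {
        for (std::size_t i = 0; i < vertexCount; ++i) {
            const float* in  = &inputPositions[i * 3];
            float*       out = &outputPositions[i * 3];
            for (int row = 0; row < 3; ++row) {
                out[row] = boneMatrix[row * 4 + 0] * in[0]
                         + boneMatrix[row * 4 + 1] * in[1]
                         + boneMatrix[row * 4 + 2] * in[2]
                         + boneMatrix[row * 4 + 3];   // translation column
            }
        }
    }
};
```

A scheduler can then hand disjoint vertex ranges of the same kernel to however many workers – or SPUs – it has been told to use, which is essentially what ‘run this task on this many SPUs’ boils down to.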
Issues with getting code running correctly on multiple platforms are the cause of most of the grumbles we hear today, but what about actual assets? Are there significant performance differences between the high-end consoles, or are they similar enough to make this generation’s stumbling block purely one of code?
“If you build assets, they are sharable across multiple platforms, so that’s where you win. Assets that you build are usable on multiple platforms. So long as you don’t do something like 360 and DS, the platforms aren’t that different – you can genuinely share assets between them all,” says Chemistry’s Pashley. “It’s when you get down to the technical sides of things – the things players shouldn’t care about – that’s where there’s differences that need to be addressed.”
BUILDING THE FUTURE
Ultimately, however, even if there are performance differences, most developers are used to having scalable pipelines – it’s not as if console specs have ever been identical. So in many respects it’s possible that today’s growing pains are aiding future development; certainly there’s historical precedent. Could it be that developers with PC experience are better prepared, having dealt with flexible specs for years now? When we ask Mark Rein whether having a PC background has helped Epic Games work across different architectures, his surprise is palpable. “No-one’s ever mentioned that as a positive before – people used to see us as a PC company trying to make a go of it on consoles, and that’s always been a hard selling point for our technology,” he says.
“But yeah, we’ve been dealing with different system specs, different performance envelopes, different CPUs and GPUs for years, so it probably does make us better equipped – especially as these new systems, the 360 and PS3, are very similar in nature to high-end PCs in terms of some of the parts that they’re using. So it’s important to remember that we had this problem already on PC in a much larger way than we did on the consoles.”
And so, as dual- and quad-core chips continue their onslaught into even entry-level PCs, there will come a time when a single-core chip is an anachronism and working in parallel is just the standard. The transition has been – and still is – a difficult one, for sure. But it’s one that will gradually be overcome, leaving developers in a better place to squeeze every last drop of performance out of whatever architectures the future may build.