When graphics giant NVidia announced it was to purchase Ageia, makers of the PhysX hardware-accelerated physics processing unit, many questioned the motives behind the deal. After all, what exactly does NVidia have to gain by branching out into physics?
While the idea of a dedicated physics processing unit is a good one – as the complexity of games continues to increase, the number of available processing units will need to grow with it – the uphill struggle Ageia always faced was convincing gamers to add yet another dedicated card to their machines. While the benchmarkaholics might have always been a shoo-in, money no object when a couple of extra frames per second are involved, it was the everyday gamer that held the key to the company's fortunes.
Was the company successful? In some respects, yes. While uptake of the hardware might have been much lower than hoped, Ageia's work on its SDK was significant. Purchasing the popular physics engine Novodex, adding hardware-acceleration support and then releasing the SDK for free was a good move for currying favour. Porting the same SDK to the PS3, 360 and Wii – so that multi-platform titles could use it without worrying about whether a PPU was present – helped even further, as did the integration of the tech into Epic's Unreal Engine 3.
And yet, despite all these efforts, the relatively small sales of PhysX cards left developers with little incentive to spend more time on extra capabilities that only a few players would ever experience. In a sense, Ageia was stuck in a catch-22 that, despite its best efforts, it couldn't escape.
As such, NVidia's acquisition of Ageia gives Ageia the opportunity to see its vision come to fruition – not the vision of its PPUs in gamers' PCs worldwide, admittedly, but that of having a specialised processing unit on hand for maths-hungry tasks.
It's in this area that NVidia's motives for the purchase become clear. Its recent GeForce 8-series cards feature a technology called CUDA ('Compute Unified Device Architecture') that takes the unified shader model of Microsoft's DirectX 10 and generalises it further, providing an SDK that lets developers run C code on its GPUs, treating them as massively parallel arrays of processors (a quoted 112 on the GeForce 8800 GT). The highly specialised, vector-centric design of a GPU lets it run certain number-crunching tasks far more swiftly than a general-purpose CPU could, even one with multiple cores. That means parts of the game loop can be offloaded to the GPU, where they not only free the CPU up for other tasks but get the job done quicker. Parts of the game loop such as – you guessed it – physics.
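To give a flavour of what this looks like in practice, here is a minimal, hypothetical CUDA sketch – not Ageia's or NVidia's actual code – of the kind of physics work that can be offloaded: a single Euler integration step applied to thousands of particles at once, with each GPU thread updating one particle.

```cuda
// Hypothetical sketch of GPU-offloaded physics using CUDA.
// Each thread integrates one particle: v += a*dt; x += v*dt.
__global__ void integrate(float *pos, float *vel, float accel,
                          float dt, int n)
{
    // Global thread index: which particle this thread owns.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        vel[i] += accel * dt;   // update velocity
        pos[i] += vel[i] * dt;  // update position
    }
}

// Host side (illustrative): launch enough 256-thread blocks to
// cover all n particles, e.g. for gravity over one 60Hz frame:
//   integrate<<<(n + 255) / 256, 256>>>(d_pos, d_vel, -9.81f,
//                                       1.0f / 60.0f, n);
```

The point of the model is that the same C-like function runs across hundreds of threads in parallel, which is precisely the shape of workload a physics engine's broad-phase and integration steps present.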
Indeed, this set-up – the heterogeneous processing model, also seen in the PS3's Cell processor – is exactly how the PhysX PPU is structured: a general-purpose RISC core controlling a bank of floating-point processors.
As such, NVidia's purchase of Ageia has given it a mature physics SDK optimised to work best on exactly this kind of architecture – one which, with some reworking, could easily take advantage of NVidia's latest chips. Which is the better way of selling your technology to developers: telling them the hardware could handle all sorts of tasks, or handing them a fully-fledged physics engine that already does it for them?
Knowing this, there's a good chance that what made Ageia an interesting proposition for NVidia wasn't its hardware at all, but its software. As such, Ageia's future as a hardware company may be in question. In an interview with FiringSquad, NVidia PR representative Derek Perez stated that the company will continue to support existing PhysX cards, but did not discuss whether further generations of the PPU will be designed and released.
NVidia clearly believes that heterogeneous processing is the future, and found in Ageia a company that not only shared its vision but could significantly boost the chances of that vision reaching fruition. Ageia's desire to take physics processing off the CPU remains alive, and it has a far higher chance of success now that it can piggyback on the market leader in video cards.
However good this may sound, it still relies on one thing: that developers won't wish to wring every last drop out of the GPU for graphics. How many developers would say they have spare GPU cycles they just don't know what to do with? As long as graphical showcases like Crysis exist, that won't be the case – and isn't physics then simply sharing precious cycles with other important tasks? Hasn't the problem just shifted from the CPU to the GPU?
Maybe, if CUDA takes off, we'll see more multi-GPU cards, with one GPU reservable for non-graphical tasks. Maybe developers will be able to dynamically alter the graphics/'other' balance of the card depending on scene complexity and user preferences. Or maybe, just maybe, the idea of a dedicated physics processor was always the right one and, eventually, people will come around to it out of necessity.
There are still many questions to be answered but, should things pan out as both companies are obviously hoping they will, this deal could have a significant impact on how PCs are architected in the future.
What do you think? Can non-graphical tasks on GPUs really take off, or will the pressure for impressive visuals leave physics in the dust yet again? Let us know your thoughts by e-mailing them to firstname.lastname@example.org