Nicolas Fournel has an impressive 20 years’ experience developing commercial digital audio software.
He started out coding Amiga sample editors in assembler and went on to build audio technology for Factor 5 (including the GameCube SDK audio tools), Konami Hawaii and Electronic Arts Vancouver before arriving at his current senior position within SCEE’s Creative Services Group.
So, how does he see the current and future state of game audio programming?
“A significant focus for me is audio analysis to help create smarter tools, improve audio engines and enhance or even create gameplay,” he says.
“For example, you can analyse the spectral content of your assets and export this information to the game as metadata. When sounds are triggered or modified at runtime, you update the spectral matrix – a representation of the game’s overall output in the frequency domain.
“The audio engine can then make informed decisions: how to dynamically mix the game, whether to apply audio shaders to a sound effect based on its audio properties, and so on.
“Perceptual voice management is also made possible, supplementing voice priority systems, to decide whether, frequency-wise, it’s appropriate to start a new sound. If there are already ten very low-frequency sounds playing on the left, you might not want to add more.
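The spectral-matrix idea Fournel describes might be sketched like this. Everything below is a toy illustration, not his actual system: the band count, the per-band limit and the asset metadata values are all invented, and real engines would track energy at audio rate rather than per-event.

```python
# Minimal sketch: the engine keeps a running total of energy per frequency
# band, updated as sounds start and stop, and queries it before starting
# a new voice (perceptual voice management).

NUM_BANDS = 4  # e.g. low, low-mid, high-mid, high (arbitrary choice)

class SpectralMatrix:
    def __init__(self, num_bands=NUM_BANDS):
        self.energy = [0.0] * num_bands

    def on_sound_start(self, band_energy):
        # band_energy: per-band values precomputed offline as asset metadata
        for i, e in enumerate(band_energy):
            self.energy[i] += e

    def on_sound_stop(self, band_energy):
        for i, e in enumerate(band_energy):
            self.energy[i] -= e

    def can_start(self, band_energy, limit_per_band):
        # Refuse the new sound if any band it would occupy is saturated.
        return all(self.energy[i] + e <= limit_per_band
                   for i, e in enumerate(band_energy) if e > 0.0)

mix = SpectralMatrix()
rumble = [8.0, 1.0, 0.0, 0.0]   # hypothetical low-frequency asset metadata
chirp  = [0.0, 0.0, 1.0, 3.0]   # hypothetical high-frequency asset

mix.on_sound_start(rumble)
print(mix.can_start(rumble, limit_per_band=10.0))  # False: low band is full
print(mix.can_start(chirp,  limit_per_band=10.0))  # True: high bands are free
```

The point is that the decision uses what the mix already sounds like, not just voice counts or priorities.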
“Remember – audio engines are deaf. They take decisions that impact the whole gaming experience without ‘listening’.
“Analysis is also the key to creating higher-level tools. The more your application knows about the data you’re manipulating, the better, because it can assist with creative choices. Content-aware tools can represent your assets in a meaningful and useful way – for a debris sound effect, for instance, what matters may be the distribution of the impacts in time and the overall envelope; for a pitched musical instrument it will be the harmonics and the pitch. Audio analysis can be used to extract all kinds of features, from amplitude to spectral shapes and more.
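Two of the features mentioned above can be illustrated in a few lines. This is a toy sketch, not production analysis code: real tools would use windowed FFTs rather than the naive O(n²) DFT below.

```python
# Extract an amplitude envelope (one RMS value per frame) and a spectral
# centroid (magnitude-weighted mean frequency, a rough "brightness" measure).
import math

def rms_envelope(samples, frame_size=256):
    env = []
    for start in range(0, len(samples), frame_size):
        frame = samples[start:start + frame_size]
        env.append(math.sqrt(sum(s * s for s in frame) / len(frame)))
    return env

def spectral_centroid(samples, sample_rate):
    n = len(samples)
    mags, freqs = [], []
    for k in range(1, n // 2):  # skip DC, stop below Nyquist
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
        freqs.append(k * sample_rate / n)
    return sum(f * m for f, m in zip(freqs, mags)) / sum(mags)

# A 1 kHz sine: its centroid should sit at roughly 1 kHz.
sr = 8000
tone = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(512)]
print(round(spectral_centroid(tone, sr)))  # ~1000
```

A content-aware tool would store features like these alongside the asset so the editor can sort, cluster or visualise sounds by what they actually are.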
“As to enhancing gameplay, an example from my own experience would be when I worked on Lost In Blue, a DS game where the player is lost on an island and has to make a fire. You use the stylus to rub wood together onscreen whilst physically blowing into the microphone.
“In this case, an envelope follower can be used to analyse the incoming audio signal, and autocorrelation evaluates whether the player might just be saying ‘aah’ instead of actually blowing. It’s a simple example but there’s no reason you couldn’t have gameplay based on how you clap your hands, whistle, or hit a resonant object.”
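The two analyses Fournel mentions might be combined like this. It is a simplified sketch under invented assumptions: the thresholds, lag range and function names are illustrative, not taken from the actual DS game. The idea is that breath noise is loud but aperiodic, while a sung ‘aah’ is strongly periodic.

```python
# Envelope follower gates on input level; normalised autocorrelation
# separates breath noise (aperiodic) from a pitched voiced sound.
import math, random

def envelope_follower(samples, attack=0.3, release=0.01):
    # One-pole peak follower: fast rise, slow fall.
    env, out = 0.0, []
    for s in samples:
        coeff = attack if abs(s) > env else release
        env += coeff * (abs(s) - env)
        out.append(env)
    return out

def periodicity(samples, min_lag=20, max_lag=200):
    # Best normalised autocorrelation over plausible pitch lags (0..1).
    energy = sum(s * s for s in samples)
    best = 0.0
    for lag in range(min_lag, max_lag):
        acf = sum(samples[i] * samples[i - lag] for i in range(lag, len(samples)))
        best = max(best, acf / energy)
    return best

def is_blowing(samples, level_threshold=0.1, periodicity_threshold=0.5):
    loud = max(envelope_follower(samples)) > level_threshold
    return loud and periodicity(samples) < periodicity_threshold

random.seed(0)
breath = [random.uniform(-0.5, 0.5) for _ in range(1000)]           # noise-like
aah = [0.5 * math.sin(2 * math.pi * i / 100) for i in range(1000)]  # pitched
print(is_blowing(breath), is_blowing(aah))  # True False
```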
Fournel is convinced that real-time processing methodology will continue to develop, with more content updated dynamically at runtime; he believes that talk of audio assets will give way to talk of models of assets.
“Real-time sound generation – including voice and sound effects – is the next big step for game audio,” he explains.
“One of the main tasks of a sound designer is creating dynamic content from static sounds. Usually, this is done with scripting and randomisation, and by multiplying the number of assets – but this is still playing static snapshots rather than a truly dynamic model. It would be naïve to say that procedural audio will replace everything though; it won’t make sense for all cases. But it’s a perfect solution for physics-based sounds – impacts and contacts for example.”
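The physics-based case Fournel singles out is often handled with modal synthesis: an impact rendered as a handful of decaying sine ‘modes’ instead of a stored sample. The sketch below is a bare-bones illustration; the frequencies, amplitudes and decay rates are made-up values, where in practice they would come from analysing the object or material.

```python
# Modal synthesis sketch: an impact is a sum of exponentially decaying
# sinusoids, so striking harder or elsewhere just changes mode amplitudes.
import math

def impact(modes, sample_rate=44100, duration=0.25):
    # modes: list of (frequency_hz, amplitude, decay_per_second)
    n = int(sample_rate * duration)
    out = []
    for i in range(n):
        t = i / sample_rate
        s = sum(a * math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, a, d in modes)
        out.append(s)
    return out

# A vaguely metallic "clang": inharmonic partials with different decays.
clang = impact([(523.0, 1.0, 18.0), (1327.0, 0.6, 25.0), (2680.0, 0.3, 40.0)])
```

Because the model is parametric, every collision can sound slightly different; a recorded sample is frozen.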
All well and good, though the industry still faces a shortage of audio programmers. Moreover, Fournel is concerned about hiring the right individuals:
“Requirements have evolved significantly. I want to hire audio programmers with synthesis, processing and analysis knowledge. There are enough good game programmers who can stream files and calculate the 3D position of an emitter – now we need people passionate about audio who understand band-limited oscillators, filter design or the constant-Q transform, and who can invent fresh audio-centric solutions to our problems.
“Hopefully, the industry will attract more people into this discipline. There are really interesting technical challenges. For instance, right now I’m looking at analysing library sound effects to create dynamic models of them so that for procedural audio, the sound designer doesn’t have to go to an animal anatomy class to be able to build a bird call model. There’s the opportunity to make really rewarding, smart solutions, push back boundaries and realise entirely new ideas.”
John Broomhall is an independent audio director, consultant and content provider firstname.lastname@example.org