In his session at the Develop Conference 2009 in Brighton, computer scientist Andy Farnell argued that the growing demand for assets and the need for real-time sound control mean that a generative model must replace data-driven audio development.
"What we’re bringing into procedural audio is a causality that hasn’t existed previously," said Farnell, speaking about effects built by simulating the physical process by which a real sound is produced.
"The size of the worlds is growing, so providing assets is getting tougher, but what if we could do this for free? It is perfectly possible to generate sounds from within the physics engine."
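Farnell's idea of generating sounds "from within the physics engine" can be illustrated with a minimal sketch: a collision event supplies an impact velocity, and a handful of damped sine partials are summed to synthesise the sound procedurally rather than playing back a recorded asset. This is an assumption-laden toy example (the `impact_sound` function, its modal frequencies, and decay rates are all hypothetical placeholders, not Farnell's actual technique):

```python
import math

def impact_sound(velocity, sample_rate=44100, duration=0.5):
    """Synthesise a toy impact sound as a sum of exponentially
    damped sine partials, with amplitude scaled by the collision
    velocity reported by the physics engine."""
    # Hypothetical modal frequencies (Hz) and decay rates (1/s)
    # for a small rigid body -- placeholder values, not measured data.
    partials = [(440.0, 8.0), (890.0, 12.0), (1370.0, 18.0)]
    n = int(sample_rate * duration)
    gain = min(velocity / 10.0, 1.0)  # normalise velocity into [0, 1]
    samples = []
    for i in range(n):
        t = i / sample_rate
        s = sum(math.exp(-d * t) * math.sin(2 * math.pi * f * t)
                for f, d in partials)
        samples.append(gain * s / len(partials))
    return samples

# A physics collision callback might call, for example:
sound = impact_sound(velocity=5.0)
```

Because the waveform is computed from the event's parameters, every collision can sound slightly different at no extra asset cost, which is the appeal Farnell describes.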
Farnell used a range of technical demos to show techniques for efficiency, for balancing realism against impact, and for achieving automated content generation at limited cost.
"I call what we’re making here cartoon sounds," explained Farnell. "These are the result of the Vincent van Gogh school of sound design. You need to look at how few brush strokes you can use to create a believable suggestion of a sound."
Offering advice to those new to generative audio design, Farnell also suggested: "If you do one thing, always be mindful of the physical process you are trying to communicate."