How is the audio development for Inside characterised by a blurring of sound design and implementation?
Creating great sounds is one thing; making them come alive in the game is another. Creating Inside’s character sounds often required an iterative process: we’d first make sound recordings and start developing an implementation strategy for them – and then, based on what we’d learned, go back and re-record in a way that fit the implementation strategy.
The more the sounds are shaped by various game parameters, the more the game comes alive. We expanded that approach by feeding output from the sound back into the game. For example, the sound system for the boy’s breathing features a real-time interpolation between natural breathing rhythms, from relaxed to panicked, extracted from actual sound recordings – so the rhythm of the boy’s interactive movements, the rise and fall of his chest, is controlled by the ‘breathing’ audio data.
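Andersen doesn’t describe the implementation, but the core idea – interpolating between breathing rates taken from recordings, with the resulting cycle driving the chest animation – might be sketched like this. All names and values here are hypothetical, not Playdead’s actual system:

```python
import math

# Breaths per minute measured from the two recordings (illustrative values).
RELAXED_BPM = 14.0
PANIC_BPM = 48.0

class BreathCycle:
    """Blend between recorded breathing rhythms by exertion, and let the
    resulting cycle drive the chest animation: the 'breathing' audio data
    controls the motion, not the other way round."""

    def __init__(self):
        self.phase = 0.0  # position within the current breath cycle, [0, 1)

    def update(self, dt: float, exertion: float) -> float:
        """Advance by dt seconds at a rate interpolated between relaxed (0)
        and panicked (1) breathing; return chest rise in [0, 1]."""
        bpm = RELAXED_BPM + (PANIC_BPM - RELAXED_BPM) * exertion
        self.phase = (self.phase + dt * bpm / 60.0) % 1.0
        return 0.5 - 0.5 * math.cos(2.0 * math.pi * self.phase)
```

Accumulating phase incrementally (rather than computing it from absolute time) means the breathing rate can change smoothly mid-cycle without the animation jumping.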
On a global level, a lot went into implementing custom sound transitions between death and respawn to maintain immersion through unloading/reloading. This is something I often miss in games: attention to the overall experience embracing death/respawn.
There’s an intangible dynamic between real-world and game-world time. Even though my character dies and I go back in game-world time, real-world time still frames my experience, and I easily get annoyed hearing the same line or music cue over again as I die and respawn.
However, if I quit the game and get back to it after a few days I probably do want to hear them again. Making a distinction between load and respawn, and creating unique mix and music transitions for every situation is integral to Inside’s sound design.
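The load/respawn distinction described above can be reduced to a small piece of decision logic. This is a hypothetical sketch – the threshold and names are mine, not Inside’s:

```python
import enum

class StartKind(enum.Enum):
    FRESH_LOAD = "fresh_load"  # returning to the game after time away
    RESPAWN = "respawn"        # immediate retry after a death

# Hypothetical threshold: after this long away, one-shot cues feel fresh again.
REPLAY_COOLDOWN_S = 15 * 60.0

def classify_start(seconds_since_last_play: float) -> StartKind:
    """Treat a quick retry as a respawn (skip one-shot lines and music cues,
    use a seamless mix transition); treat a return after days as a fresh
    load (replay them)."""
    if seconds_since_last_play > REPLAY_COOLDOWN_S:
        return StartKind.FRESH_LOAD
    return StartKind.RESPAWN

def should_replay_cue(kind: StartKind) -> bool:
    return kind is StartKind.FRESH_LOAD
```

In practice the classification would feed a transition system that picks a unique mix and music crossfade per situation, rather than a single boolean.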
You’ve had a deeper and longer involvement with Inside than with Limbo. What benefits has that provided?
It’s allowed me to get at the core aesthetically and technically, doing things that are impossible to introduce later in the process. For example, prototyping gameplay where timing and mechanics are hooked on music or clock time, rather than the usual but much more unstable game-time, is great for tight integration between music and gameplay. But it’s a technical challenge – you have to demonstrate it’s worth the effort early on.
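Hooking gameplay timing to the music clock rather than frame time could look something like the following. This is an illustrative sketch under assumed names; real engines read the audio playback position from the sound hardware for sample-accurate timing:

```python
# Drive gameplay events from the music clock rather than accumulated frame
# time, so triggers stay locked to the beat even if the frame rate stutters.
# BPM and all names are illustrative.

BPM = 100.0
SECONDS_PER_BEAT = 60.0 / BPM

def beat_at(music_time_s: float) -> int:
    """Current beat index derived from the audio playback clock."""
    return int(music_time_s / SECONDS_PER_BEAT)

class BeatTrigger:
    """Fires a gameplay callback once per beat, keyed to music time."""

    def __init__(self, callback):
        self.callback = callback
        self.last_beat = -1

    def update(self, music_time_s: float):
        beat = beat_at(music_time_s)
        if beat != self.last_beat:
            self.last_beat = beat
            self.callback(beat)
```

Because `update` compares beat indices instead of summing frame deltas, dropped or uneven frames can never drift the gameplay out of sync with the music.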
Early involvement means sound becomes part of the creative toolset in forming the game’s structure, not a bolt-on. For sequences in which gameplay and sound played very well together but eventually became too repetitive sound-wise, I could suggest changes in the game’s structure. That worked in reverse as well, with the team suggesting sound structure changes, enabling us to create coherent musical build-ups that encompass entire sections of the game.
Like Limbo, does Inside have an overall sonic identity?
Yes, but it’s more subtle. It’s the graininess of early 12-bit digital audio hardware like samplers and delays. By means of convolution, we’re running a then state-of-the-art ‘80s hardware reverb dynamically in-game, which really makes the audio elements meld together.
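At its simplest, convolution reverb means convolving the dry signal with an impulse response captured from the hardware unit. The sketch below shows the principle in direct form; a real-time engine like Inside’s would use partitioned FFT convolution instead, and everything here is illustrative:

```python
def convolve(dry: list[float], impulse_response: list[float]) -> list[float]:
    """Direct-form convolution: each impulse-response sample becomes a
    scaled, delayed copy of the dry signal, summed into the wet output."""
    wet = [0.0] * (len(dry) + len(impulse_response) - 1)
    for i, x in enumerate(dry):
        for j, h in enumerate(impulse_response):
            wet[i + j] += x * h
    return wet
```

Capturing the impulse response of a vintage unit lets the game apply that unit’s character dynamically to any sound, which is what makes the disparate audio elements meld together.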
Aesthetically, I took inspiration from ‘80s B-movie horror – often featuring a synthesizer soundtrack, though I didn’t want any synth per se, just a vague association. I’d already been playing around with a real human skull in order to create bone-conducted sound. I made a workflow of processing synth sounds through the skull using audio transducers and contact microphones, and then restoring them.
The result has a sombre, chill quality, and as in the aforementioned film scores, haunting tones often contrast with something horrible taking place. My goal is that, like a siren song, the gloomy, faint echoes of synths will coax the player forward – to whatever end.