This blog will focus on the following game-related subjects: sound design, music, game design, thoughts on cultural aspects of games, indie dev stuff, as well as my current game project Modsork. I intend to write at least every other week, but as with the length of the posts and other parameters, please bear with me while I experiment and hopefully find my groove. Of course, I'd much appreciate any feedback you have so I can improve the blog. Ok, so let's dive right in:
Implementing Game Audio
There is an aspect of the sound design of games that has a big impact on the final result, yet it's easy for inexperienced developers and sound designers to miss or neglect: let's call it the technical implementation of sound effects, for lack of a better term. By that I don't mean file formats and audio engines, but all those small yet important details of how and when a sound effect is actually played back in the game and what its context is. There can be a world of difference between a sound effect sounding nice in the isolation of the sound designer's DAW or editor, and the way that same sound is perceived, e.g. when:
- it's heard in the context of the game's music track(s) and other concurrent sounds.
- the player hears it for the 100th time in rapid succession.
- it's heard "on release" of a button, instead of "on press" (or vice versa).
- it comes out of a smartphone's speakers.
So what can you do? What should developers and sound designers keep in mind when they plan and create the audio part of a project?
- Acoustic context: At the planning stage, try to identify which sound effects are particularly important (e.g. because they convey critical information to the player), and make sure the mix leaves room for them. That requires coordination between the devs, the composer and the sound designer so the soundtrack and the sound effects don't end up crammed into the same frequency space. Technical solutions might be needed (dynamic in-engine mixing, ducking etc.).
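To make the ducking idea a bit more concrete, here is a minimal, engine-agnostic sketch in Python. Everything here, the frame-based model, the one-pole smoothing, and all the parameter names, is my own illustration under simplified assumptions, not any particular engine's API:

```python
# Toy sketch of sidechain-style ducking: whenever an important sound
# effect is audible, the music gain is smoothly pulled down toward a
# "duck" level so the effect cuts through, then recovers afterwards.
# All names and parameters are hypothetical, for illustration only.

def duck_music(sfx_active, duck_gain=0.3, attack=0.5, release=0.05):
    """Return one music gain value per frame.

    sfx_active: list of booleans, one per frame, True while an
    important sound effect is playing.
    attack/release: per-frame smoothing rates (attack is faster so
    the music gets out of the way quickly, release is slower so it
    fades back in gently).
    """
    gain = 1.0
    out = []
    for active in sfx_active:
        target = duck_gain if active else 1.0
        rate = attack if active else release
        gain += (target - gain) * rate  # simple one-pole smoothing
        out.append(round(gain, 3))
    return out
```

In a real engine you would of course use the built-in mixer (e.g. a ducking/sidechain effect on the music bus) rather than hand-rolling this, but the principle is the same.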
- Frequently repeated sounds (e.g. shots, footsteps): These will sound best if you add a bit of variation so the ear doesn't tire of them. Have your sound designer create slight variations, implement them so that a variation is randomly selected for each playback, and maybe add programmatic variation on top (e.g. slight pitch shifting or filtering), like many musical sample players do. It's not hard, but somebody needs to think of it and actually program it; in Unity, for instance, a little script that picks a random clip and nudges its pitch will do the job.
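As an illustration of that idea (engine-agnostic, so this is a Python sketch rather than an actual Unity script), the selection logic might look like this; the class and attribute names are my own, and "clips" stand in for whatever audio assets your engine uses:

```python
import random

# Hypothetical sketch of round-robin-with-randomization playback:
# pick a random variant, avoid repeating the one we just played,
# and apply a small random pitch offset, much like a sampler does.

class VariedSound:
    def __init__(self, clips, pitch_jitter=0.05, rng=None):
        self.clips = list(clips)          # the recorded variations
        self.pitch_jitter = pitch_jitter  # max pitch deviation, e.g. +-5%
        self.rng = rng or random.Random()
        self.last = None                  # last clip played

    def next_playback(self):
        """Return (clip, pitch) for the next trigger of this sound."""
        # Never pick the same clip twice in a row (if we have a choice).
        choices = [c for c in self.clips if c != self.last] or self.clips
        clip = self.rng.choice(choices)
        self.last = clip
        pitch = 1.0 + self.rng.uniform(-self.pitch_jitter, self.pitch_jitter)
        return clip, pitch
```

In Unity specifically, the same idea maps onto an `AudioClip[]` array plus randomizing `AudioSource.pitch` before `PlayOneShot`.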
- The exact timing of when a sound is triggered matters; it's not enough to define "button x plays sound effect y". To give you an example: I worked on a touchscreen project where we realized that actions happened "on release" of a tap gesture, partly because "on press" we didn't yet know whether the player was about to perform a pinching gesture or actually press a virtual button. The button sound, however, was designed to reinforce the sense of physically pushing a real button. Even the short delay between the player's physical gesture and the acoustic response "on release" was jarring, and the action didn't feel as pleasant as it could have, so we had to come up with a work-around. If you think about this beforehand, you can choose the best approach for your project (e.g. create two separate sounds for virtual buttons, one for the press and one for the release, or alter the game's button logic).
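One way the two-sound work-around could be wired up is sketched below; this is my own toy model of the input handling, not the actual project's code. It plays a "down" click immediately for physical feedback (accepting the occasional spurious click if the gesture turns into a pinch) and an "up" sound only once the gesture resolves as a real button tap:

```python
# Hypothetical sketch: map a stream of input events to sound triggers.
# A "down" sound plays immediately on press so the feedback feels
# physical; the "up" sound only plays if the gesture was really a tap
# on that button and not, say, the start of a pinch.

def button_sounds(events):
    """events: list of (kind, button_id) tuples, where kind is
    'press', 'release' or 'pinch'. Returns the sounds triggered."""
    played = []
    pending = None  # button currently held down, awaiting resolution
    for kind, button in events:
        if kind == "press":
            played.append(("btn_down", button))  # immediate feedback
            pending = button
        elif kind == "pinch":
            pending = None  # gesture turned out not to be a button tap
        elif kind == "release" and pending == button:
            played.append(("btn_up", button))
            pending = None
    return played
```

The design trade-off is explicit here: immediate press feedback versus the risk of a wrong "down" click when the gesture is later reclassified. Which side you favor depends on your game.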
- Expected listening scenario: This is rather rudimentary, but still bears repeating: If your game is going to be played on smartphones on a busy train, its sound might need to be handled differently than if you're making a console game for the living room. It's of course best if the sound designer can test sounds directly in the game on target platforms.
All in all, based on my personal experience and what I've heard from other sound designers, you get the best results if you establish a process where the sound designer is involved in the concrete implementation of the sound effects and can, for instance, keep an eye out for things your busy programmer might otherwise miss when he puts the sounds in the game at the eleventh hour. This is much better than just having the sound designer create some content in isolation and toss it over the fence, only for everyone to be surprised that the end result sounds different than they expected.
At the end of the day, as a sound designer, you need to think outside the box of your own sound creation tools: your sounds are only as good as they sound in the actual game. As a developer, you need to be aware that a good-sounding game needs more than just nice sound files. Be prepared to either put in some work yourself, or get a sound designer who can handle not just the content creation, but also its technical implementation in the game.
Do you have your own experiences with implementing the audio part of a game to share, either as a sound designer or as a developer? Please drop me a comment :)