If you have ever dealt with game composers and/or sound designers, chances are you’ve already heard about “dynamic audio”, “adaptive audio”, or something similar. But what is that, and why should you care?
Why is audio important?
First, we need to understand what good audio can do for a game. Let’s use the IEZA framework for that:
Daunting, huh? Let’s break it down. We have two axes: diegesis and action/setting. Every audio element in a game is a combination of the two:
Interface (action/non-diegetic): the sounds heard by the player, but not by the characters. Interface sounds, like button presses and popup interactions, are very helpful in giving feedback to players. Without them, players may be unsure whether their interface actions registered at all.
Effects (action/diegetic): unlike interface sounds, sound effects actually exist in the game world. They help with the suspension of disbelief while also providing feedback, letting players feel how their actions affect the game’s world. Bad sound effects can pull the players’ attention away from what they should be aware of.
Zone (setting/diegetic): the zone component refers to the interaction between sound and environment, and it provides even more immersion. Take, for example, the sound of birds chirping at the top of a tree: if the player’s avatar is far from the tree, the chirping is quieter and stripped of its highest frequencies; as the avatar gets closer, the player (and the avatar) hears it louder and “clearer”. Another example of the zone effect is when the character dives into a pond or river, and the sounds reflect the sensation of being underwater.
Affect (setting/non-diegetic): the main component of immersion. Music dwells in this area. Well-crafted music can place the player anywhere on Earth (and even outside it), in any given era. It helps convey emotions and drive the narrative forward. It can sonically brand your product, sometimes even drawing thousands of people to a game music concert. And it can also help players better understand the consequences of their actions, as we will see in a moment.
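The bird-chirping example above can be sketched as a simple distance curve. This is a minimal, hypothetical illustration (the function name and the linear falloff are assumptions, not any engine’s actual API): far-away sounds get both quieter and duller, which is the “lower volume and without their highest frequencies” effect described above.

```python
def zone_params(distance, max_distance=50.0):
    # Hypothetical helper: maps avatar-to-source distance to a volume
    # gain (0..1) and a low-pass filter cutoff in Hz, so distant sounds
    # come out quieter and duller, as in the bird-chirping example.
    closeness = max(0.0, 1.0 - distance / max_distance)
    volume = closeness
    # Cutoff sweeps from 500 Hz (far away, muffled) to 20 kHz (right
    # next to the source, full frequency range).
    cutoff_hz = 500.0 + closeness * (20000.0 - 500.0)
    return volume, cutoff_hz
```

Real engines use more elaborate attenuation curves and 3D panning, but the idea is the same: the listener’s position continuously drives the mixing parameters.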
So, audio has two basic roles in games: immersion and UI/UX. Studies have shown that audio can actually improve the gameplay experience.
What is dynamic audio?
Simply put, dynamic or adaptive audio exists when the sound in your game changes according to the players' actions in real time.
Dynamic sound effects and zone effects
The simplest way to use dynamic sound effects is to introduce random variation in the assets that repeat the most. By randomizing pitch and volume, a single sound asset can stand in for many different ones.
Why do that? Ear fatigue. When you hear the exact same sound playing over and over again, it starts to annoy you, like listening to a machine gun on repeat. With tiny changes in pitch and volume, the brain no longer recognizes the sounds as identical, preventing ear fatigue.
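The randomization above can be sketched in a few lines. This is a hypothetical example (the function and spread values are assumptions, not a real engine call): each playback of the same asset gets a slightly different pitch and volume.

```python
import random

def randomized_playback(base_pitch=1.0, base_volume=1.0,
                        pitch_spread=0.05, volume_spread=0.1):
    # Hypothetical sketch: every trigger of the same asset plays with
    # a pitch within +/-5% and a volume within +/-10% of the base.
    # Small spreads like these are usually enough to keep the brain
    # from flagging the sound as an exact repeat.
    pitch = base_pitch * random.uniform(1 - pitch_spread, 1 + pitch_spread)
    volume = base_volume * random.uniform(1 - volume_spread, 1 + volume_spread)
    return pitch, volume
```

Most engines and middleware expose this kind of randomization as a built-in setting on the sound event, so in practice you configure the spreads rather than code them by hand.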
More creative ways to use dynamic sound and zone effects might include:
an important item that emits a characteristic sound, and as you get closer, the sound changes (teaching the players that they’re approaching something useful);
a special reverb zone in a tunnel or hall, as acoustics behave differently inside them;
a heartbeat sound that grows stronger as the player's health decreases, making them more aware of their surroundings; and so on.
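The heartbeat idea above can be sketched as a simple mapping from game state to audio parameters. This is a hypothetical example (the function, the BPM range, and the linear mapping are all assumptions for illustration):

```python
def heartbeat_params(health, max_health=100.0):
    # Hypothetical mapping from remaining health to heartbeat volume
    # and rate: the lower the health, the louder and faster the beat.
    danger = 1.0 - max(0.0, min(health, max_health)) / max_health
    volume = danger               # silent at full health, loud near death
    bpm = 60.0 + danger * 80.0    # 60 bpm calm, up to 140 bpm critical
    return volume, bpm
```

The same pattern (game variable in, audio parameters out) drives the other examples in the list, such as the item that changes its sound with proximity.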
Dynamic music has been around since the “music” in Space Invaders sped up as the threat increased. Or, in a slightly more sophisticated form, since bongos started playing as Mario hopped on Yoshi’s back in Super Mario World.
Nowadays, we can be even more daring. Be it within the game engine itself or with audio middleware like FMOD, Wwise, Fabric or Elias, we can change music and sound effects according to the players’ actions in almost any imaginable way. Using Michael Sweet’s terminology, we can use, for example:
Horizontal resequencing to change the music from one screen to the next, or even to create multiple short music “modules” and shuffle them in real time to feel as one longer piece of music;
Vertical layering to modify the musical arrangement according to the players’ in-game situation. An ambient piece designed for exploration can seamlessly morph into an awareness state as enemies approach (alerting the players to the change in their situation), transition into an action piece when the battle starts, and revert to the original ambient arrangement once the enemies are defeated.
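The vertical-layering flow just described can be sketched as a mix table. This is a hypothetical example (the state names, stem names, and volumes are invented for illustration): the same piece of music is authored as stems that always play in sync, and the game state decides how audible each stem is.

```python
# Hypothetical vertical-layering mix: one synced piece of music split
# into stems, with a target volume per stem for each game state.
LAYER_MIX = {
    "explore":   {"pads": 1.0, "percussion": 0.0, "brass": 0.0},
    "awareness": {"pads": 1.0, "percussion": 0.6, "brass": 0.0},
    "combat":    {"pads": 0.7, "percussion": 1.0, "brass": 1.0},
}

def layer_volumes(state):
    """Return the target volume for each stem in the given game state.
    A real implementation would crossfade toward these targets rather
    than jump, so the transitions stay seamless."""
    return LAYER_MIX[state]
```

Because the stems never stop or restart, moving between states is just a matter of fading volumes, which is what makes the transitions feel like one continuous piece of music.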
Those are two common techniques for creating dynamic music, but sound designers can use their creativity to come up with other possibilities, or to apply these in different ways. Most AAA games nowadays rely on dynamic music to improve immersion (and even for UX, as in the aforementioned "awareness state" music), but casual and indie games are also perfectly capable of taking advantage of these techniques, on any platform.
With those techniques for seamless transitions, there are quite a few advantages for the game developer:
Dynamic music helps minimize ear fatigue, as the player is listening to something different most of the time. With less ear fatigue, play time tends to increase;
Horizontal resequencing might allow for fewer music assets to provide multiple different paths for background music to develop, thus saving some valuable data budget in the shippable build;
When using audio middleware, game programmers are only expected to call simple triggers and parameters in code, leaving the audio behaviour and all the tweaking to the sound designer. This allows for better task distribution within the game development team.
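That trigger/parameter split can be sketched as a thin facade. This is a hypothetical class, not any middleware’s real API, but it shows the entire audio surface the gameplay programmer typically touches: FMOD’s events and parameters, and Wwise’s events and RTPCs, follow this same two-call pattern.

```python
# Hypothetical middleware facade: game code only posts events and sets
# parameters; how the audio actually reacts to them is authored by the
# sound designer inside the middleware project, not in game code.
class AudioBridge:
    def __init__(self):
        self.parameters = {}
        self.events = []

    def post_event(self, name):
        # e.g. post_event("Enter_Combat") might trigger the music
        # transition authored by the sound designer.
        self.events.append(name)

    def set_parameter(self, name, value):
        # e.g. set_parameter("player_health", 0.35) could drive both
        # the heartbeat sound and the music's tension layer.
        self.parameters[name] = value
```

The gameplay side stays this small no matter how elaborate the designer’s audio behaviour gets, which is exactly the task-distribution benefit described above.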
Now, go ahead and get your game some nice dynamic audio!