In my first attempt to work on a video game of some scope, I served as composer, sound designer and audio programmer for Lantana Games’ unreleased mid-size title, Children of Liberty, from 2011 to 2014. This was before audio middleware companies offered friendly pricing schemes to independent game studios, so, with a first-timer’s naivety, I decided to build a well-featured audio engine from scratch. It wasn’t a totally insane idea: I had significant experience as a programmer and software designer, and Unity’s scripting language is C#, a strong suit of mine. So this seemed like a challenging opportunity to combine left- and right-brain skills.
What follows is a description of my attempt at conceiving and implementing a holistic audio design based on the aesthetic and technical needs of Children’s gameplay, story and platform. I’ve included specific examples of how that translated into the game’s audio, music, interactive behaviors and technical solutions, many of which can be heard and seen in the Warehouse level walk-through video above.
A game of this scope requires a good deal of planning to ensure that the distributable footprint, runtime footprint and streaming needs for the audio meet the game developer’s requirements as well as other practical considerations and limitations. To that end, it’s important to get a working idea of the audio needed for the character and object sound effects, ambiences and music.
Using the game’s design as a guide, I analyzed, researched and asked questions of the designers, documenting the sounds needed for the various characters, objects and ambiences. Similarly, an outline of the score was developed based on the story and individual levels, which led to the conceptualization of the two main themes and a running cue list of the music needed for the game.
Next, it was necessary to consider what sounds might be shared among characters. Could Joseph’s footsteps, with minor pitch adjustments, be used for Ally? How many different sets of footsteps are needed for the many redcoats that appear? Do we need separate versions of sounds printed with different reverbs for different locations, or would Unity’s built-in reverb suffice and perform well enough?
If a sound was just for a cinematic, I might need only one variation, but if it was part of the gameplay, I might need several. Also to be considered was the amount of file compression I could use and still end up with acceptable quality for short sound effects, ambiences and music.
I used test cases to determine rough estimates for the audio footprint and performance, and would update and re-run them to gain feedback during the game’s development.
Sound effect configuration was stored as XML, allowing for simpler source control and maintenance, with the system design incorporating the following features to allow for extensibility:
Game Audio Configuration Class and the Adapter Pattern
Though stored as XML, audio configuration is implemented via a C# configuration class. Following the adapter pattern, an XML-to-configuration-class adapter was created. This allows configurations to be saved in a simple text file, alleviating the dependence on Unity’s scene file to maintain audio settings, while providing the middleware necessary to support alternative backends (e.g., a SQL database) or an administration app in the future.
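The adapter idea above can be sketched as follows. This is not the game’s actual C# code; it is a minimal Python illustration, and names like `AudioConfig` and `XmlAudioConfigAdapter` are my own assumptions. The engine only ever sees the configuration class, so a SQL or app-driven adapter could later implement the same `load()` interface.

```python
import xml.etree.ElementTree as ET

class AudioConfig:
    """Engine-facing configuration object; the audio engine depends only on this."""
    def __init__(self, clip, volume=1.0, pitch=1.0):
        self.clip = clip
        self.volume = volume
        self.pitch = pitch

class XmlAudioConfigAdapter:
    """Adapts an XML text file to AudioConfig objects (the adapter pattern)."""
    def __init__(self, xml_text):
        self._root = ET.fromstring(xml_text)

    def load(self, effect_name):
        # Look up the <effect> node by name; return None if unconfigured.
        node = self._root.find(f"./effect[@name='{effect_name}']")
        if node is None:
            return None
        return AudioConfig(
            clip=node.get("clip"),
            volume=float(node.get("volume", 1.0)),
            pitch=float(node.get("pitch", 1.0)),
        )

# Example configuration stored as a plain text file under source control:
XML = """
<audio>
  <effect name="footstep_wood" clip="step_wood_01" volume="0.8" pitch="1.0"/>
</audio>
"""
config = XmlAudioConfigAdapter(XML).load("footstep_wood")
```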
Sound Effect Actor Hierarchies
To facilitate the development pipeline, budgets and quick functionality turn-arounds, hierarchies were created for different sound actors (characters, various inanimate objects) so that sound effects would be quickly available and functional as new features are brought online. For example:
Player character hierarchies:
- Joseph -> Player Character -> Character
- Ally -> Joseph -> Player Character -> Character
- Doug -> Player Character -> Character
- Sarah -> Player Character -> Character
For example, since Joseph was the first player character, when Ally was ready to be brought into the game she inherited, and immediately functioned with, all the sound effects from the actors earlier in her hierarchy. Pitch shifts and other adjustments can be configured to tweak and vary individual sounds. This provides a quick way to get new features up and running in an internal environment, and makes it easy to see which existing sounds might work for different characters and objects.
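The fallback lookup at the heart of this hierarchy scheme can be sketched like so. The real system is a C# class hierarchy in Unity; this Python version, with made-up effect names, just shows the resolution rule: walk up the chain until some actor defines the requested sound.

```python
# Parent links for each actor; None terminates the chain.
HIERARCHY = {
    "Ally": "Joseph",
    "Joseph": "PlayerCharacter",
    "Doug": "PlayerCharacter",
    "Sarah": "PlayerCharacter",
    "PlayerCharacter": "Character",
    "Character": None,
}

# Sounds registered per actor; Ally only overrides what she needs to.
SOUNDS = {
    "Character": {"land": "land_generic"},
    "Joseph": {"footstep": "joseph_step", "sword": "joseph_sword"},
    "Ally": {"footstep": "ally_step"},  # e.g. a pitch-shifted override
}

def resolve_sound(actor, effect):
    """Walk up the hierarchy until an actor defines the effect."""
    while actor is not None:
        if effect in SOUNDS.get(actor, {}):
            return SOUNDS[actor][effect]
        actor = HIERARCHY.get(actor)
    return None
```

With this rule, Ally plays her own footstep but falls back to Joseph’s sword sound and the generic landing sound until dedicated recordings exist.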
When starting this project, I knew it would be important to develop a sound palette that complemented the organic, hand-drawn nature of the artwork and reflected the characters and historical setting of the game. To accomplish this I recorded new foley for most of the sounds, using historically appropriate sound sources whenever possible. I also wanted to apply my understanding of sampled acoustic musical instruments to implement an audio engine that would create a similarly natural sound.
These considerations led me to the following approach:
- Use vintage toys as an overall sound motif. Since the characters are children and many of the ‘weapons’ are toys – wooden swords, jacks, tin whistles, yoyos – I decided that vintage toys would provide a fun, cohesive motif for the sound effects. I visited various old-timey five-and-dime stores, collecting tops and spinning toys that provided the basis for the sound of a runaway carriage, a wooden sword for Joseph, a long hollow tube that, when swung around, created the sound used for the checkpoint indicator, and other miscellaneous toys.
- Use realistic sound sources where possible. I picked up various kinds of leather-soled shoes from thrift shops and strikers with flint to re-create the sound of lighting flames. Living in Arlington, MA (right on Paul Revere’s path) made it easy to record mid-April exterior ambiences including bird and cricket sounds, as well as the annual reenactments of Revere’s ride and the Battle of Lexington and Concord. As game development continued, I hoped to record effects and even impulse responses in some of the historical buildings that are present in the game, such as Revere’s house, and to work with local re-enactors to record clothing and equipment sounds.
- Follow the sampled instrument concept. I built the sound effect engine to incorporate a degree of controlled randomness that would help the soundscape sound natural. This means some effects are made of multiple recordings with subtle variations. The trick to this approach is that the sounds not be so different as to be unnatural or distracting, but that there be enough variety to avoid a “machine gun” effect with multiple repetitions.
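The no-immediate-repeat selection that underpins this sampled-instrument approach is simple to express. This is an illustrative Python sketch, not the engine’s C# code: pick a random variation, but never the one that just played, which is the basic guard against the “machine gun” effect.

```python
import random

def pick_variation(variations, last_index, rng=random):
    """Return a random index into `variations`, never repeating the
    previously played index, so back-to-back identical samples are
    impossible (the 'machine gun' effect)."""
    if len(variations) == 1:
        return 0
    choices = [i for i in range(len(variations)) if i != last_index]
    return rng.choice(choices)
```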
Going into this project it was clear that, this being a stealth game, ambiences would play a particularly important part in creating and maintaining a suspenseful mood. Due to the historical, real world milieu of the game, it was also going to be important to create a very natural, varied soundscape.
The ambiences for Children of Liberty are made up of two components: longer continuous sounds (the wind and ocean in the Warehouse walk-through video above) and shorter stingers (the gulls, shutters, crashing waves, dock sounds and building creaks). Since I was designing and building the audio engine from scratch, I had an opportunity to come up with a design that would bring as much variation and randomness into the soundscape as possible, while also considering technical needs like hard disk space, runtime memory and streaming.
These practical and creative considerations led to the following approach:
- Limited streaming resources would be reserved for music, which requires the largest files.
- Longer, continuous ambiences were kept small by breaking larger files into smaller segments.
- These ambience segments are played in random order, creating more variety than by looping a single, longer file. This allows us to keep these non-streaming resources small while maximizing variation.
- A cross-fade is created across the change in ambience segments by the software class that selects each segment to play. This cross-fading of segments alleviates the need for matched start and end points across multiple files. The timing and placement of the cross-fade can also be randomized, creating some more variation.
- Components of the continuous ambiences are kept as separate layers (e.g. the ocean vs. the wind in the Warehouse walk-through), allowing them to be re-used individually or at different relative volumes, such as when outside on Long Wharf where the ocean would be comparatively louder. This also creates greater variety than one combined layer of ocean and wind because of the changing juxtaposition of the layers.
- The stingers are also assigned to separate layers, allowing them to be re-used individually, with variations in their relative timings. Each stinger layer can have different trigger intervals. I love triggering an odd variation at an extremely long interval to create an unexpected surprise for the player!
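The segment-shuffling scheme in the list above can be sketched as a scheduler that emits, for each step, a segment and a randomized crossfade length. Again this is a hedged Python illustration of the logic, not the actual C# class: segments never repeat back-to-back, and the crossfade bounds are invented values for the example.

```python
import random

def ambience_schedule(segments, steps, min_xfade=0.5, max_xfade=2.0, rng=random):
    """Yield (segment, crossfade_seconds) pairs in random order, never
    playing the same segment twice in a row. The crossfade covers the
    join between segments, so files need no matched start/end points,
    and its randomized length adds further variation."""
    last = None
    for _ in range(steps):
        choices = [s for s in segments if s != last] or segments
        seg = rng.choice(choices)
        yield seg, rng.uniform(min_xfade, max_xfade)
        last = seg
```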
Gun Tapping Redcoats
The sound of the redcoats tapping the barrels of their guns is very important in the game. It serves as an aural cue to the proximity and state (idle) of redcoats off screen. There is often more than one redcoat in the vicinity of the player, and the sound occurs so frequently that it becomes part of the ambience, so it was important that there be plenty of variation. Again, the sampled musical instrument approach was used:
- Separate sounds for each of the three fingers were recorded, with three separate variations produced for each, yielding nine different sounds for a given redcoat.
- Three full sets of tapping sounds were created, which are randomly assigned to individual redcoats.
This proved to be an effective approach, yielding a sound that adds suspense and holds up well to the frequent use.
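The two layers of randomness described above, a per-redcoat set assignment plus a per-tap choice among nine sounds, can be sketched as follows. The counts come from the text; the function names are illustrative, not the game’s.

```python
import random

TAP_SETS = 3    # full sets of tapping sounds, assigned per redcoat
FINGERS = 3     # separate recordings, one per finger
VARIATIONS = 3  # variations per finger -> 9 sounds per redcoat

def assign_tap_set(rng=random):
    """Each redcoat is randomly assigned one of the three full sets."""
    return rng.randrange(TAP_SETS)

def next_tap(tap_set, rng=random):
    """Pick one of the nine sounds in this redcoat's set: a finger
    plus one of that finger's three variations."""
    return (tap_set, rng.randrange(FINGERS), rng.randrange(VARIATIONS))
```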
With the aim of creating as natural an effect as possible, footsteps were conceived as mini sampled instruments. Player character footsteps (including the wall shuffle variation pictured) are made up of seven or more separate sounds. For example, the most frequently heard effects, the walk and run, are made up of three left-foot steps, three right, two infrequently heard variations and one very rarely heard one (perhaps a slightly louder squeak), as well as two variations of the stopping step.
The trick to making this approach work is that these sounds not be so different that they distract, but have subtle variations that sound organic. Ultimately I found that building the right set of sounds that work together for an effect requires a lot of listening, experimenting and tweaking.
The dynamic heartbeat speeds up and gets louder the closer the player is to an enemy. As with the other repetitive sounds that meld into the ambience, it was important that there be enough variation to avoid the distracting ‘machine gun’ effect that occurs when a sample is repeated multiple times in near succession.
Using separate loops for each heart rate would not have achieved this unless the loops were long. Instead, each of the two heart valves has its own set of samples, with more than twenty different samples used to create the four different heart rates. The volume is also varied dynamically based on proximity to an enemy. Ultimately this maximized variation while limiting memory use.
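A sketch of the heartbeat logic, with made-up thresholds and sample counts (the game’s actual mapping and more-than-twenty samples are not reproduced here): proximity drives both the heart rate and the volume, and each beat draws the two valve sounds from per-rate sample pools.

```python
import random

def heart_rate_for_distance(distance, max_distance=20.0):
    """Map distance to an enemy onto one of four heart rates
    (0 = slowest, 3 = fastest). Thresholds are illustrative."""
    closeness = max(0.0, min(1.0, 1.0 - distance / max_distance))
    return min(3, int(closeness * 4))

def heartbeat_volume(distance, max_distance=20.0):
    """Volume also rises as the enemy gets closer."""
    return max(0.0, min(1.0, 1.0 - distance / max_distance))

# Each of the two valves ("lub", "dub") has its own sample set per rate.
SAMPLES = {valve: {rate: [f"{valve}_{rate}_{i}" for i in range(3)]
                   for rate in range(4)}
           for valve in ("lub", "dub")}

def next_beat(distance, rng=random):
    """Pick the pair of valve samples and the volume for the next beat."""
    rate = heart_rate_for_distance(distance)
    return (rng.choice(SAMPLES["lub"][rate]),
            rng.choice(SAMPLES["dub"][rate]),
            heartbeat_volume(distance))
```

Because the valves are independent pools, even a steady heart rate rarely repeats the exact same lub-dub pair, which is what keeps the loop-free approach from sounding mechanical.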
Integration and Development
Being the developer of the audio engine as well as the composer and sound designer for Children of Liberty had several advantages. It allowed me to integrate audio functionality with the main codebase where it made sense, which created runtime efficiencies and saved the game developers from having to spend significant time on audio functionality. Here are a couple of examples:
Sprite Manager 2 Animation Cell Integration with Game Audio
For animation-driven sounds, rather than exposing the expensive Sprite Manager event handlers that would make delegate calls out to the audio engine every frame (30x per second), I wrote a SpriteManagerIntegration class that assigns sound effects to individual animation cells at load time. This replaced the per-frame delegate calls with a simple local variable check. The audio engine is only called when a sound effect has been assigned to the particular frame.
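The load-time binding described above can be sketched like this. Sprite Manager 2 and the real integration class are C#/Unity; this Python stand-in (with an invented `played` list in place of actual engine calls) shows the shape of the optimization: the per-frame cost is one dictionary lookup, and the audio engine is invoked only for cells that actually carry a sound.

```python
class SpriteAnimationSounds:
    """Binds sound effects to animation cells once at load time."""
    def __init__(self):
        self._cell_sounds = {}  # (animation, cell_index) -> effect name
        self.played = []        # stand-in for calls into the audio engine

    def assign(self, animation, cell_index, effect):
        """Done once at load time, per configured animation cell."""
        self._cell_sounds[(animation, cell_index)] = effect

    def on_frame(self, animation, cell_index):
        """Called every animation frame: a cheap local check; the audio
        engine is reached only when this cell has a sound assigned."""
        effect = self._cell_sounds.get((animation, cell_index))
        if effect is not None:
            self.played.append(effect)
```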
Game Audio Parameter Integration
Several parameters interact with the audio during gameplay, including the proximity to an enemy, which affects the player’s heart rate and volume while under cover, and the danger level of a moment, which impacts the dynamic mixing of the musical soundtrack. Rather than the game developers having to know when to update a parameter’s value, the audio engine is aware of how and when to retrieve their values through a set of functions whose call frequency and other properties can be configured.
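This pull-based design can be sketched as a polled parameter: gameplay code just exposes a getter, and the engine re-reads it at a configured frequency from its own update loop. The class and field names here are my assumptions, not the engine’s actual API.

```python
class PolledParameter:
    """The audio engine pulls a gameplay value (e.g. danger level) via a
    registered getter, at a configurable interval, so game code never has
    to know when to push updates."""
    def __init__(self, getter, interval):
        self.getter = getter
        self.interval = interval  # seconds between polls
        self.value = getter()
        self._elapsed = 0.0

    def tick(self, dt):
        """Called from the engine's update loop with the frame delta;
        re-reads the value only when the interval has elapsed."""
        self._elapsed += dt
        if self._elapsed >= self.interval:
            self._elapsed = 0.0
            self.value = self.getter()
        return self.value
```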
One question that came up for this side-scrolling platformer was how the audio should react when the camera distance changes suddenly with the size of the game space or the player’s actions, as when the camera quickly zooms in while the player peeks around a corner. The audio listener (the “virtual microphone”) needs to track with the camera to maintain proper perspective, while avoiding the odd doppler effect and sudden volume change that would occur if the mic tracked the camera precisely.
The solution I developed is the ‘MicBoom’ class, a game object that hangs off of the main camera at either a fixed or relative distance between the camera and player. It tracks the camera at a configurable velocity, which can be tuned to avoid the unwanted doppler and volume effects without changing game parameters that would adversely affect other aspects of the audio, such as the relative attenuation by distance.
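The core of the idea is a velocity-limited follow. This one-axis Python sketch (the real MicBoom is a Unity game object, and the numbers are illustrative) moves the listener toward the camera but caps its speed, so a sudden camera jump becomes a smooth listener glide instead of a doppler artifact.

```python
def micboom_step(listener_x, camera_x, max_speed, dt):
    """Advance the listener toward the camera's position, but no faster
    than max_speed units/second, so an instantaneous camera zoom cannot
    produce a doppler artifact or a volume jump."""
    delta = camera_x - listener_x
    max_step = max_speed * dt
    if abs(delta) <= max_step:
        return camera_x          # close enough: snap to the target
    return listener_x + max_step * (1 if delta > 0 else -1)
```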
Scenarios came up during testing when, for example, several redcoats would trigger the same or similar piece of dialog in an unnatural way, having discovered the player at nearly the same moment. It became apparent that the NPCs would need some awareness of what they’ve heard to avoid this and similar situations.
The solution for this was the Dialog Audio Manager class and a set of rules to filter NPC speech based on an ongoing remembered conversation. By tracking dialog this way, the opportunity also exists to introduce some intelligent back-and-forth between NPCs in the future.
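One of the simplest filtering rules such a manager can apply is a cooldown per line group: if another NPC just said this (or an equivalent line), stay quiet. This Python sketch is an assumption about the rule's shape, not the game's exact logic.

```python
class DialogAudioManager:
    """Filters NPC speech against an ongoing 'remembered conversation' so
    several redcoats don't blurt the same line at nearly the same moment."""
    def __init__(self, cooldown=5.0):
        self.cooldown = cooldown     # seconds before a line group may repeat
        self._last_heard = {}        # line group -> time it was last spoken

    def request_line(self, group, now):
        """Return True if the line may play now; record it if so."""
        last = self._last_heard.get(group)
        if last is not None and now - last < self.cooldown:
            return False             # just heard: suppress the duplicate
        self._last_heard[group] = now
        return True
```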
The two main themes for Children of Liberty’s score were developed from historical styles – military drum and fife music of the colonial era and orchestral music of the early classical period. These, along with a few additional pieces (for example, a traditional folk piece for the Green Dragon Tavern cutscene), would provide most of the material for the score.
I also incorporated traditional source material in the score in the form of the Scotch and English Duties, which were used to deliver orders in the field and would reflect the activities of distant British troops in the game.
Theme I: “Give Me Liberty or Give Me Death!” for early classical orchestra
The first theme, heard in the opening of the Warehouse walk-through video, is period orchestral music to drive the big action scenes in the game. Music of the early classical period has a particularly antiquated quality, creating an opportunity to style the score in a way that stands out from that of other orchestral games, many of which use a larger orchestra and take their musical language from the late 19th-century forward.
Theme II: “Children of Liberty” for military drum and fife
The second theme of the game represents its namesake characters and their development through the story. It is a traditional military drum-and-fife piece, a version of which plays during the menu screen in the Warehouse walk-through video. In line with the use of toys as the building blocks of the sound effects, this incarnation uses recorders to reflect the character of the children. As the children evolve and succeed, the presentation of this theme would become larger and more dramatic.
It’s unclear whether Mozart actually said, “What’s even worse than a flute? – Two flutes!”, ** but he clearly wasn’t a fan of the early flutes of the period, whose pitch proved a bit unwieldy prior to the addition of a full set of keys. Woodwind writing for early-classical orchestras tended to emphasize double reeds such as the oboe and bassoon. Focusing on these instruments (along with the strings and horns) happens to give the music an eerie and mysterious character perfect for a stealth game!
Classical-era music is by its nature melodically driven, with accompaniment patterns that shift every several measures and harmonies that strive toward some resolution or goal. For key dramatic moments in the story, such as Revere’s ride, this is perfect. For other, more stealthy scenarios, a classical-style melody would likely be too much; something suspenseful, with ambient qualities that can continue for extended periods, would be more fitting. For these situations I came up with three options, all of which can be heard in the Warehouse walk-through video:
- Let sound effects create the mood. A common and effective option for stealth games; the walk-through video illustrates this, with a rich, varied soundscape and sparse music through most of the level.
- Create atmospheric drum loops based on the “Children of Liberty!” theme. The first encounter with a redcoat in the walk-through video illustrates this idea. This loop is triggered when the redcoat is near, and fades out once the danger is removed.
- Chaconne! A pre-classical inspiration provided a historically consistent idea that could also be ambient and suspenseful – a Baroque-era chaconne based on the “Give Me Liberty or Give Me Death!” theme. A chaconne is a piece based on a repetitive chord progression or bass line, which is a great framework for building something suspenseful. A sample comes in towards the end of the Warehouse walk-through, where it builds in texture as the situation gets more dangerous! ***