The Eidos Interview at IASIG (by Alexander Brandon)



In our last issue we introduced the topic of Interactive Composition through an examination of DirectMusic and several key issues surrounding its introduction. Notably, the questions were asked "is DirectMusic too late?" and "just how can it differ from streamed audio?" The discussion focused mostly on theoretical issues surrounding the concepts of linear vs. non-linear audio.

In this issue we transition from the theoretical to the practical with an interview of members of the Eidos USA 'Soul Reaver' sound team by Alex Brandon (AB). Soul Reaver is the anxiously awaited sequel to the popular Blood Omen: Legacy of Kain. The team members interviewed include Jim Hedges (JH), Kurt Harland (KH) (of Information Society fame), and Amy Hennig (AH). <Ed Note: In the following text "Legacy of Kain: Soul Reaver" will be referred to simply as SR.>

Afterwards, we also asked Jim some additional questions about other Eidos projects which use their in-house Adaptive Audio system.
―Alexander Brandon[2]

An Interview With The 'Soul Reaver' Sound Team

AB Please describe the roles that the various team members played in realizing the sound design for Soul Reaver and any other Eidos products that you want to talk about.

JH For "Soul Reaver", the music was composed by Kurt Harland, while I did the adaptive audio programming, as well as being the "technical/creative liaison" between him and Amy Hennig, the project's producer/director. For "Akuji", I wrote the music and did the adaptive audio programming.

AB First, could you describe the first game in the series briefly and mention a few details such as what kind of response "Legacy of Kain: Blood Omen" received, and what building blocks you had to use for its sequel, such as the extensive use of voice in the original.

AH Blood Omen: Legacy of Kain (the first game) was an action/adventure game in the Zelda tradition (not a hardcore RPG), with an overhead camera perspective and 2D graphics. The game was well received by critics and consumers, primarily for its unique theme and story line, which revolved around a "lone wolf" type hero afflicted with vampirism, and bent on revenge.

When we set about to design the sequel, we knew we wanted to take the franchise into 3D, with a third-person camera, but retain the spirit of the gameplay, in the same way that Zelda64 was an evolutionary step beyond Zelda on the SNES. We also determined from the beginning to establish a new central character. Since Kain had essentially established himself as a dark god, ruling over Nosgoth, at the end of the first game, we thought it would be interesting to turn the tables, and make him the nemesis for a new protagonist.

The quality of the story line, writing and the voice acting are distinctive elements of the Kain franchise, as opposed to similar action/adventure games, where plot is less important, and either text or second-rate voice acting is used to convey the story. It was important to us to retain the best voice talent (most of the actors are reprising their roles from the first game) and directors: Gordon Hunt directed the sessions, and Kris Zimmerman (who directed Metal Gear Solid) is our casting director. There's roughly an hour of voice-over in the game, and about 100 in-game cinematic events.

AB Describe, using technical as well as layman terms, the system you used for music in SR. Give a brief example of a part of the game where the music is used to enhance gameplay.

JH We used an in-house developed adaptive audio MIDI driver, which replaces the Sony driver entirely. Signals from the game, based on location, proximity and game-state set special music variables, which are read by the driver and used to effect changes in the MIDI data. How these signals are interpreted is controlled by an extensive scripting language with standard branching, logic and arithmetic functions. This scripting language is written using MIDI text 'meta' events. These text commands can be written in a standard text file, or interspersed with other MIDI data in the MIDI bytestream. Some of the changes to MIDI data available are: muting/unmuting, transposition, pitch mapping, sequence start/stop, volume/tempo/pan changes etc.
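The core mechanism Jim describes, game code setting music variables that a script reads to change MIDI data, could be sketched roughly as follows. This is a toy illustration, not the actual Eidos driver; every class, method, and track name here is invented, and the real system expressed its script as MIDI text meta-events rather than Python callbacks.

```python
# Hypothetical sketch of a variable-driven adaptive music driver.
# Game code sets named variables; registered script rules react by
# changing playback state (here, just muting/unmuting named tracks).

class AdaptiveDriver:
    def __init__(self):
        self.variables = {}        # music variables set by game code
        self.muted_tracks = set()  # tracks currently muted
        self.rules = []            # (variable, value, action) triples

    def on_rule(self, variable, value, action):
        """Register a script rule: when `variable` becomes `value`, run `action`."""
        self.rules.append((variable, value, action))

    def set_variable(self, name, value):
        """Called by game code on location / proximity / game-state changes."""
        self.variables[name] = value
        for var, val, action in self.rules:
            if var == name and val == value:
                action(self)

    # Two of the MIDI-data changes the interview mentions:
    def mute(self, track):
        self.muted_tracks.add(track)

    def unmute(self, track):
        self.muted_tracks.discard(track)


driver = AdaptiveDriver()
driver.mute("danger_strings")  # the "danger" tracks start muted
driver.on_rule("danger", 1, lambda d: d.unmute("danger_strings"))
driver.on_rule("danger", 0, lambda d: d.mute("danger_strings"))

driver.set_variable("danger", 1)  # enemy within range: danger tracks unmute
```

The same dispatch shape would cover the other changes listed (transposition, tempo, pan) by swapping in different actions.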

As an example, in Soul Reaver, every piece of music in the game has several arrangements which correspond to different game states. The default arrangement, or "ambient" mode, is used when no signals from the game are present. When the player comes within range of an enemy (whether seen or not), a signal is sent to the driver which sets a designated "danger" variable. The script sees this change in the variable and mutes/unmutes tracks to produce a more intense "danger" arrangement. When the player engages in combat, another variable signifying combat is sent, and the same process ensues, this time with a tempo increase. If the player stops fighting or kills the enemy, the combat variable changes again, and this time certain tracks from the "combat" arrangement begin to fade out. If the player resumes combat, they fade back in. If no combat resumes, the combat tracks fade out entirely and the music changes back to either "ambient" or "danger" mode.
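The ambient/danger/combat behavior above is essentially a small state machine with a fade-out timer on the combat tracks. A minimal sketch, with all names and the tempo value invented for illustration:

```python
# Hypothetical ambient -> danger -> combat arrangement controller,
# modeling the Soul Reaver behavior described above. Not the real driver;
# modes, tempo scale, and fade rate are illustrative only.

class ArrangementController:
    def __init__(self):
        self.mode = "ambient"
        self.fighting = False
        self.tempo_scale = 1.0   # combat plays faster
        self.combat_fade = 0.0   # 1.0 = combat tracks at full volume

    def enemy_in_range(self, in_range):
        """Proximity signal: switch between ambient and danger arrangements."""
        if self.mode != "combat":
            self.mode = "danger" if in_range else "ambient"

    def set_fighting(self, fighting):
        """Combat signal from game code."""
        self.fighting = fighting
        if fighting:
            self.mode = "combat"
            self.tempo_scale = 1.15   # tempo increase during combat
            self.combat_fade = 1.0    # combat tracks (back) at full volume

    def update(self, dt):
        """Per-frame: once fighting stops, fade combat tracks out; when
        fully faded, fall back to the danger arrangement."""
        if self.mode == "combat" and not self.fighting:
            self.combat_fade = max(0.0, self.combat_fade - dt)
            if self.combat_fade == 0.0:
                self.mode = "danger"
                self.tempo_scale = 1.0


ctrl = ArrangementController()
ctrl.enemy_in_range(True)   # "danger" arrangement
ctrl.set_fighting(True)     # "combat" arrangement, tempo up
```

If combat resumes before the fade completes, `set_fighting(True)` restores the fade level, matching the "fade back in" behavior Jim describes.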

AB What kind of leverage did you have in creating the soundtrack for SR? Did you control or at least coordinate the style of music? Were you able to recommend sound systems or request new features?

JH The director of the game had some fairly well developed ideas regarding the music, however she chose the composer, Kurt Harland, specifically because she liked his style of music and thought it would be within his stylistic range. Since the arrangement of the music was so dependent on the interactivity, and the abilities of the driver, I had a lot of input into how the music was put together, since I would ultimately be responsible for making it "work" in the game.

I'm always able to request new features because the guy who wrote the driver, Fred Mack, works next door to me. I'm a constant pest.

AB Did you collaborate cooperatively with the sound effects person(s) on SR, or did the two teams work mostly separately?

JH We did all the SFX in-house, so it was very collaborative. Sometimes we used the adaptive audio tools to create MIDI sound effect sequences which could not be created otherwise.

AB Did you collaborate cooperatively with the level designers?

JH Sound effects: yes, music: no. The music interactivity in Soul Reaver was specified on a very global basis. Most of the signals being sent were from the game code, as opposed to the level description files, so I worked mainly with the programmers. In the cases where signals needed to be sent from specific levels, I went in and edited the files myself.

AB What kind of preparation did you do in planning the soundtrack, and what tools did you use to create it?

KH I spoke with the producer, Amy Hennig, quite a bit and got her ideas on what sort of feel to give the game through the soundtrack. She not only described the interactive structure she had in mind, but also the ideas of the environments and characters in the game. For any given area, we took the history and nature of the creatures living there as the first inspiration for the soundtrack. For example, one of the regions of the game was inhabited by a race of mechanical-engineering-oriented vampires. Based on their goals and behaviors and on the intended smoky, mechanical environment in which they lived, I composed sounds and music which were thick, slow, and thumping, like big machines far away.

Amy also gave me a lot of architectural drawings and photos to get a feel for the look of the environments.

I used my very old computer sequencer: Voyetra Sequencer Plus Gold for composing, and Sonic Foundry's Sound Forge for designing sounds, mainly. Other than that, I primarily had to use the Sony tools for developing on the PlayStation.

AB Was there anything particularly satisfying (or dissatisfying) about composing music for SR?

KH The best thing was finally working on a game in which the music could be quiet, unobtrusive (except during combat) and filled with environmental sounds. I used rain, birds, screams, etc. This came at the request of the producer, and it's also always been the way I thought game scores could be more often. As has been said so many times, a fully-structured pop song can sound great to listen to once, but become a blood-sucking leech with teeth in your ear after the 47th repetition. This music is much less structured and "in-your-face". I loved getting a chance to do that after thinking about it for so long.

Also, to combat this problem, Amy described a structure for the music tracks which would allow for the music to change a lot depending on what was going on. In this game, as you enter areas where enemies are known to be, the music gets a little more suspenseful. As you get within visual range of enemies, it sounds dangerous. And when you actually start to fight them, it becomes quite intense. The music also changes when you're underwater, outdoors vs. indoors, etc. This was a lot of fun artistically, even though it was a bit trying technically. It produced a lot of problems for composition, since it doesn't fit well with many aspects of music that people are used to and expect to hear. But it worked out quite well in the end.

This interactivity was made possible not only by the creative vision of the producer, but also by some of the past and present audio department at Crystal Dynamics/Eidos USA: Fred Mack, Mark Miller and Jim Hedges. They developed their own interactive audio driver which makes all this possible.

Dissatisfying? Well, the details would be a little boring, but in general, the biggest thorn in my side was making separate versions of all the music for use when the player is in the spirit (spectral) world instead of the physical (material) world. It caused numerous niggling problems that were hard to work out for everyone. But we did.
―Alexander Brandon[2]

More With Jim Hedges

AB Describe, using technical as well as layman terms, the system you used for music in "Akuji the Heartless" (ATH, for short)

JH The system was basically the same one that we used for SR. The music in Akuji was arranged in sections which corresponded to different sections of a given level. In this way the music progressed and developed on a large scale in accordance with the development and action of the game. To achieve this, signals would be placed at key points throughout the levels (entering a new room, fighting a battle, solving a puzzle, etc.). These signals would trigger new sections of the piece to play, so that the music would follow the level. These large scale sections were then broken down into subsections, which consisted of various tracks which could be muted and unmuted based on various game states.

For example: The player begins the level. The first part of the piece starts, playing the theme and setting the tone for the level. This section repeats until the player enters the second room, which contains enemies. Upon entering the room, a signal is sent to the driver, unmuting tracks which play the battle music for that section. The door closes behind the player, so they must defeat all enemies in order to go on. The battle music plays until the last enemy is defeated, at which point a signal is sent, the battle tracks stop, and the music goes into a new "post battle" section, which also serves as a musical transition to the next section. The door opens, allowing the player to leave the room and continue through the level. When the player leaves the room, a signal is triggered and the music moves on to the next section.
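The Akuji scheme, level signals advancing the score through large-scale sections, each section being a set of mutable tracks, can be sketched very simply. All section, signal, and track names below are invented for illustration; the real system expressed this through the driver's MIDI-text scripting language, not Python.

```python
# Hypothetical sketch of Akuji-style section progression: signals placed
# at key points in a level move the music from section to section, and
# each section defines which tracks are active. Names are illustrative.

SECTIONS = {
    "level_start":     {"theme", "ambient_bed"},
    "enter_room_2":    {"theme", "ambient_bed", "battle_drums"},
    "last_enemy_dead": {"post_battle_transition"},
    "leave_room_2":    {"next_section_theme"},
}

class SectionedScore:
    def __init__(self):
        self.active_tracks = set()

    def signal(self, name):
        """Game code fires a signal at a key point in the level;
        unknown signals are ignored."""
        if name in SECTIONS:
            self.active_tracks = set(SECTIONS[name])


score = SectionedScore()
score.signal("level_start")      # theme plays, repeating
score.signal("enter_room_2")     # battle tracks unmuted
score.signal("last_enemy_dead")  # post-battle transition section
```

The example in the interview follows exactly this signal sequence: theme, battle overlay, post-battle transition, then the next section as the player leaves the room.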

AB What kind of leverage did you have in creating the soundtrack for ATH? Did you control or at least coordinate the style of music?

JH I had pretty much complete authority in writing the music. Once some basic styles were established (tribal, voodoo, heavy ambiance), I just ran with it.

AB In working with the ATH team, how interested was the team as a whole in an adaptive audio system? Did you call most of the shots or did you work cooperatively in suggesting ideas? Did you use any techniques from your previous games such as the "Gex" series?

JH Initially, the team wasn't very interested simply because they weren't aware of what an adaptive audio system could do. I had to sell the idea to the producer and lead designer. Once I started writing the music and implementing it, however, the whole team really liked it and were very supportive. I ended up calling most of the shots, but was helped greatly by the cooperation from the designers. I was constantly asking them to add new signals to their levels.

Gex II was the first game to use the adaptive audio driver, and used it in a limited way. Akuji was the first game to use the capabilities of the driver as a starting point for composing the music, so all of the music was written specifically for the driver.

AB Did you collaborate cooperatively with the level designers?

JH Yes, very much so.

AB What kind of preparation did you do in planning the soundtrack, and what tools did you use to create it?

JH Instead of sitting down and writing a self-contained, complete piece of music, I started by writing in fragments. For example, a rhythm chart with different parts which could be muted and unmuted to produce different densities. Or an ambient bed which could support many different parts. In general, I composed different parts which could be combined in many ways. All of this was done using Studio Vision Pro along with various samplers and synthesizers. I then got together with the lead designer and went through the material with him, showing the various combinations possible. He would give me feedback on what combinations might work in certain areas of the game.

Then I would get together with the level designer and have them show me their level from beginning to end, so that I could get a sense of the structure of the level, including all important battles, puzzles etc. This structure would largely dictate the form the final composition would take. After the designer put in the signals for the level, I would go about scoring it, writing the script which would control the behavior of the music.

AB Was there anything particularly satisfying (or dissatisfying) about composing music for ATH?

JH The most satisfying thing was creating a complete adaptive soundtrack for a game from beginning to end, instead of slapping some interactivity on after the fact. The feeling I got when I first started implementing the music, and having it react to the gameplay, was the same as when I scored my first film.

AB What games in the past that you have written music for have used adaptive/interactive audio?

JH Gex II, Gex III (PSX), Mr. Bones (Sega Saturn), Tazmania, Taz II: Escape From Mars (Sega Genesis)

AB Has there been a steady trend from your perspective either towards or away from the use of adaptive audio in games?

JH I think it's growing, but it's not steady. Producers still need to be educated as to its benefits. It will take a breakthrough title with great adaptive audio for the rest of the world to really notice.

AB What techniques have you used, with what engines, for older title soundtracks, whether adaptive or not?

JH Back in the day, I used the GEMS driver for the Genesis. That was a great driver for its time, and had some adaptive audio features as well. When I moved on to the Saturn, I had to use the standard Saturn driver from Japan. Glad those days are over.

AB How much control have you had in the past over your soundtracks?

JH I've pretty much done what I want, whether people liked it or not. In the past, when I was a contractor, I would often not have control over the final mix. Now I make sure I always do.

AB What kind of relationship do you have with the manufacturers and developers of the tools you use? For example, have you been able to suggest modifications to audio engines on the consoles you have worked on as well as PC based titles?

JH I'm really fortunate to be working in a company with an audio programmer. He's the same guy who writes the tools, and I'm the guy who uses them, so I have a lot of input.

AB What new techniques do you see being used in interactive audio? Is there a game or are there games in particular that you have seen use interactive audio in a new and effective way?

JH I think the whole thing is still in its infancy. I think the games coming out of Sony's 989 studios are doing great things with adaptive audio. Buzz Burrowes has a great driver there.

AB Do you feel the average gamer can get more out of a game with an interactive score as opposed to looped or single shot playback?

JH Absolutely. In the best case they will get an experience that is better than some movies. In the worst case they'll get music they won't be compelled to turn off.

AB Do you feel that producers and lead programmers are considering interactive audio more important in games you have worked on? If not, what do the team leaders on games you have worked on prefer, and why?

JH I don't think they consider it more important, they just agree to it because I bug them about it and promise them that I'll take most of the burden. With some exceptions, in my experience most producers and programmers prefer to not think about music until they absolutely have to, and then it's a little late for adaptive audio.
―Alexander Brandon[2]


  1. Archive Sat, 04 Oct 2003 02:11:38 GMT snapshot of IA-SIG Newsletter at IASIG (by Alexander Brandon)
  2. Archive Mon, 08 Sep 2003 09:21:11 GMT snapshot of The Eidos Interview at IASIG (by Alexander Brandon)
