I track the game-developer world to a limited extent (because I work on real-world pervasive games), and today I ran across an article by Andrew Clark in Gamasutra describing what he calls adaptive music. Very few online articles keep me reading from start to finish – and a four-part article is even less likely to do so – but this one did.
Why did this article intrigue me so much? Well, first, I have been a musician (to some degree) since I was five years old; and second, I’ve been writing educational (computer-based) games and programs since 1978, and I’ve always wanted to incorporate sound and music whenever possible. At the age of 18 I was faced with a flip-the-coin decision between engineering and music. My piano teacher told me that I could embark on a life of music and that, if I worked really hard, I could perhaps make it – but that it was more likely I’d end up a “talented amateur” than a successful professional musician. So I went with engineering, and ended up in computer science.
But when personal computers came along, I tried to incorporate music wherever I could. When I started a company (DesignWare, in 1980) to create educational titles for kids and we began working on the then-new Apple II, our games had little musical themes that popped out of nowhere at startup.
What Andrew calls adaptive music is the kind of music that appears in today’s video games – music that “tracks,” in some way, the action and stages of the game itself.
The composer can’t determine in advance how long the music needs to be, so it most likely has to be built from a set of fundamental sounds plus a bunch of rules (or computer code) describing how to assemble them into something that sounds good. Because the music isn’t composed linearly in advance, the game program itself has to contain (or use) the rules from which the music can be composed on the fly (he doesn’t describe it that way), and these rules determine how the state of the game affects the chords, rhythm, tonality, pitch, and other aspects of the music the game produces.
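To make that notion a bit more concrete, here’s a minimal sketch of what such state-to-music rules might look like. This isn’t Clark’s system – the state variables, parameter ranges, and chord choices below are all my own invention for illustration:

```typescript
// Hypothetical game state that the music engine watches.
interface GameState {
  tension: number;      // 0 = calm exploration, 1 = boss fight
  playerHealth: number; // 0 = nearly dead, 1 = full health
}

// Musical parameters the rules produce for the next phrase.
interface MusicParams {
  tempoBpm: number;
  mode: "major" | "minor";
  chordProgression: string[];
}

// One possible rule set: map the game state to musical parameters.
function adaptMusic(state: GameState): MusicParams {
  const tempoBpm = 80 + Math.round(state.tension * 60); // 80–140 bpm
  const mode = state.playerHealth < 0.3 ? "minor" : "major";
  // Pick a stock progression by mood; a real engine would choose
  // phrase by phrase and handle transitions between states.
  const chordProgression =
    mode === "major" ? ["C", "G", "Am", "F"] : ["Am", "F", "Dm", "E"];
  return { tempoBpm, mode, chordProgression };
}

console.log(adaptMusic({ tension: 0.8, playerHealth: 0.2 }));
// -> { tempoBpm: 128, mode: "minor", chordProgression: ["Am","F","Dm","E"] }
```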
What the article made me think of was some current “research” that I’m working on. One of my colleagues and I are playing with ways that people can “visually navigate” a web site (the navigation uses images rather than words, as much as possible). Imagine that instead of visual navigation you listened to some music that was playing and extracted your navigation clues from the music. A bit far-fetched, perhaps, but let’s explore it. Don’t you sometimes listen to music and have a reaction like “that’s soothing,” “that’s exciting,” “this is complex,” or “this is sad”? What if you could translate those reactions into clues for finding, and then clicking, a place on the page that would get you to a particular part of the web site?

Or maybe there’s a better scenario. Let’s imagine that the web page is like a “noisy party”: as you move the mouse around the page you hear people talking, and as you near particular points on the page you begin to hear, more intelligibly, a specific voice that tells you what you will hear (or what page you’ll go to) if you click right here. In other words, you wander around the party listening in on these snippets of conversation, and when one sounds interesting you click and off you go to another page.
Of course, my brain immediately whisks me away to think about “how would we implement this?” But that’ll be a story for another time and place.
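I can’t resist sketching one small piece of it, though: how mouse proximity might fade each “voice” in and out in a browser, using the standard Web Audio API. The file names, coordinates, and distance thresholds below are all invented for illustration:

```typescript
// Each "party guest" is a looping voice clip whose volume rises as the
// mouse approaches its hotspot on the page. Browser-only; note that
// browsers require a user gesture before an AudioContext will play.
const ctx = new AudioContext();

interface Hotspot {
  x: number;      // hotspot position in page pixels
  y: number;
  gain: GainNode; // controls how audible this voice is
}

const hotspots: Hotspot[] = [];

async function addVoice(url: string, x: number, y: number): Promise<void> {
  const data = await fetch(url).then((r) => r.arrayBuffer());
  const buffer = await ctx.decodeAudioData(data);
  const source = ctx.createBufferSource();
  source.buffer = buffer;
  source.loop = true;
  const gain = ctx.createGain();
  gain.gain.value = 0; // silent until the mouse comes near
  source.connect(gain).connect(ctx.destination);
  source.start();
  hotspots.push({ x, y, gain });
}

// As the mouse moves, fade each voice by distance: fully intelligible
// within ~50px of its hotspot, inaudible beyond ~300px.
document.addEventListener("mousemove", (e) => {
  for (const h of hotspots) {
    const dist = Math.hypot(e.pageX - h.x, e.pageY - h.y);
    const volume = Math.max(0, Math.min(1, (300 - dist) / 250));
    h.gain.gain.setTargetAtTime(volume, ctx.currentTime, 0.1);
  }
});

// Usage, with a made-up clip: addVoice("/audio/sports-desk.mp3", 120, 340);
```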
[Also see: An earlier (2001) article by Andrew Clark on adaptive music.]