Black Sound, Improvisation, and Computer Music

By Brian Miller

What is the relationship between black sound and computer music? And how do algorithmic musical practices relate to more traditional archival practices like transcription? We can explore some answers to these questions by looking at two very different kinds of computer music projects, both concerned with the nature and role of improvisation.

Computer music that is interactive is also necessarily improvisational, in at least the basic sense that even a rigidly defined computer program must to some extent adapt to the actions of the humans it performs with. One approach to computer improvisation involves the attempt to teach a computer to replicate, more or less, human modes of performance by “training” it (in a specific technical sense) on preexisting music. The idea is that music made by humans contains patterns that can be reconstructed statistically, and that those patterns, abstracted away from any individual performance, represent the rules or constraints of a musical style.
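To make that idea a little more concrete, here is a minimal sketch of what such statistical “training” might look like at its very simplest: counting which pitches tend to follow which across a set of transcriptions. The note lists are invented and the model is a toy (real systems like Shimon take account of far more than pitch-to-pitch transitions), so this stands in for the general technique rather than any project’s actual code:

```python
from collections import Counter, defaultdict

def train_markov(transcriptions, order=1):
    """Count how often each pitch follows a given context of preceding pitches.
    `transcriptions` is a list of solos, each a list of MIDI pitch numbers."""
    model = defaultdict(Counter)
    for solo in transcriptions:
        for i in range(order, len(solo)):
            context = tuple(solo[i - order:i])
            model[context][solo[i]] += 1
    return model

# Two tiny, invented "solos" stand in for an archive of transcriptions.
archive = [[60, 62, 63, 65, 63, 62, 60], [60, 63, 65, 67, 65, 63, 60]]
model = train_markov(archive)
print(model[(63,)])  # Counter({65: 2, 62: 1, 60: 1}): what tends to follow E-flat here
```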

You can see one example of this kind of training in the performances of Shimon, a jazz-playing robot developed at Georgia Tech’s Center for Music Technology, trained using transcribed solos from such jazz players as John Coltrane and Thelonious Monk:

Shimon’s training as a soloist is related to a much deeper history of transcription in jazz: notating or imitating solos from famous recordings, a practice that goes back at least to the 1920s, when a variety of breaks and solos by Louis Armstrong were published in 125 Jazz Breaks for Cornet and 50 Hot Choruses for Cornet. Interestingly, these transcriptions come not from commercial recordings but from special recordings Armstrong made expressly for the purpose of transcription.

 

These breaks are intended to be played as part of the turnaround, leading from the end of one chorus into the beginning of another. They represent a library of “licks” for the player to choose among, depending on the key and the harmony.

Since these solos aren’t from commercial recordings, we can only hear them in later recreations. Here, Swedish jazz trumpeter Bent Persson plays Armstrong’s solo from Jelly Roll Morton’s “Sidewalk Blues”:

If jazz transcription has always been tied to technologies of musical reproduction, things like the MIDI standard and the internet have facilitated new ways of engaging with this peculiar kind of jazz archive. The Jazzomat project, hosted at the Hochschule für Musik Franz Liszt Weimar, aims to “investigate the creative processes underlying jazz solo improvisations with the help of statistical and computational methods,” in part by way of a large database of digitized solo transcriptions.

For each solo, the database contains a variety of information about the recording from which it was transcribed (in this case, readily available commercial releases), along with the key, tempo, style, and chord changes. There is also a MIDI file, a transcription in standard notation, and a number of charts of statistical measures, like the number of times the soloist played a given pitch.
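The last of those measures is easy to reproduce in a few lines. Here is a short sketch that computes a pitch histogram from a handful of notes; the three-field note format is purely illustrative, not the Jazzomat database’s actual schema:

```python
from collections import Counter

# A few notes from a hypothetical digitized solo transcription:
# (MIDI pitch, onset in beats, duration in beats) -- illustrative fields only.
notes = [(67, 0.0, 0.5), (70, 0.5, 0.5), (72, 1.0, 1.0), (70, 2.0, 0.5), (67, 2.5, 0.5)]

pitch_counts = Counter(pitch for pitch, _, _ in notes)
pitch_class_counts = Counter(pitch % 12 for pitch, _, _ in notes)

print(pitch_counts)        # how many times the soloist played each pitch
print(pitch_class_counts)  # the same counts folded into the twelve pitch classes
```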


Transcription of Charlie Parker’s solo from “Ornithology,” by the Jazzomat Research Project

Scholars interested in music cognition have often turned to computational methods drawing on databases like this one, though the use of digital transcriptions by a jazz robot represents an unusually generative, rather than analytical, use of such technologies. To generate music, an algorithm builds a statistical model of certain kinds of relationships among the notes in all the transcriptions in the database. An interactive jazz robot will then “hear” the notes played by its human collaborators and choose its own notes accordingly. Perhaps it recognizes the key the human is playing in, as well as the tempo, and then, depending on its settings, responds with notes drawn from the statistical profiles of, say, Coltrane, Parker, Monk, or some combination of the three.
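Continuing the toy sketch from earlier, the “choose its own notes accordingly” step might, at its most stripped down, amount to sampling from the transition counts gathered during training, conditioned on the pitches just heard. Again, this is a schematic illustration, not how Shimon or any other system actually works:

```python
import random

def respond(model, heard, order=1):
    """Pick a next note given the most recently "heard" pitches, sampling from the
    transition counts built by train_markov in the earlier sketch; falls back to a
    random choice when the context never appeared in the archive."""
    context = tuple(heard[-order:])
    counts = model.get(context)
    if not counts:
        return random.choice([p for counter in model.values() for p in counter])
    pitches, weights = zip(*counts.items())
    return random.choices(pitches, weights=weights)[0]

heard = [60, 62, 63]          # pitches detected from the human player
print(respond(model, heard))  # a pitch weighted toward what the toy archive suggests
```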

Musical applications of machine learning algorithms, however, are also intimately tied to more troubling technological issues that have become pervasive in recent years. Beyond Shimon, another project, called MUSICA (for “Music Improvising Collaborative Agent”), aims to accomplish a similar task: according to the project’s founder, rather than “following chord progressions programmed into it, MUSICA will take cues from a long history of jazz royalty, including the great geniuses of improvisation, Miles Davis, Charlie Parker and John Coltrane. The software will play collaboratively, as if it were a human musician jamming at an open mic.” The project is funded by DARPA (the Defense Advanced Research Projects Agency), which seems to imply that whatever kinds of interactional abilities are involved in jazz improvisation might have more general military applications. This hardly invalidates the idea of modeling jazz performance computationally, but it does suggest that the relation between computation and the jazz archive is fraught in ways that require careful consideration.

Composer, scholar, and trombonist George Lewis’s computer music practice represents a very different approach to understanding black sound in the realm of computation. His computer music system, Voyager, is what he calls a “nonhierarchical, interactive musical environment that privileges improvisation.” Such a system doesn’t necessarily recreate musical style in the same sense as MUSICA or Shimon, because it isn’t trained on any preexisting archive of transcriptions, nor does it have rules programmed into it about how melodies or chord progressions work in jazz. It is more closely linked, both sonically and conceptually, to free improvisation, a term that refers to a broad array of practices that developed out of free jazz and other improvisatory traditions in the second half of the twentieth century. You can hear Lewis in an improvised duet with koto player, composer, and sound artist Miya Masaoka (and no computers); this is the beginning of “Live Duet No. 2” from the 1998 album The Usual Turmoil and Other Duets:

While collaborative free improvisation may sound random or unstructured on its generally atonal, dissonant musical surface, a performance is nonetheless structured by way of the performers’ careful attention to each other’s playing. Voyager, like a human free improviser, may respond to what it hears by “imitating, directly opposing or ignoring” it, and may respond with a wide variety of timbres, pitches, and rhythms. Hear Lewis perform with Voyager on the 1993 album of the same name:
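As a point of contrast with the transcription-trained sketches above, a toy version of this behavior needs no archive at all: it simply picks among imitating, opposing, or ignoring what it hears. The function below is only a thumbnail of that idea, not Lewis’s Voyager, which is a far larger system with many independently behaving voices and many more musical parameters than pitch:

```python
import random

def free_response(heard_pitches):
    """A toy illustration of the imitate/oppose/ignore idea, choosing a strategy at random."""
    strategy = random.choice(["imitate", "oppose", "ignore"])
    if strategy == "imitate":
        return list(heard_pitches)  # echo the partner's material
    if strategy == "oppose":
        return [p + random.choice([-6, 6]) for p in heard_pitches]  # answer a tritone away
    # "ignore": go its own way with a short burst of unrelated pitches
    return [random.randrange(36, 96) for _ in range(random.randrange(1, 8))]

print(free_response([60, 62, 63]))
```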

Voyager thus destabilizes the idea of an archive for improvisational music like jazz. While the archive of recordings and the transcriptions they engender is a deep and essential source for our understanding of jazz, we can also understand that archive in terms of the more abstract cultural practices that built it—not the relations among the notes, but the relations among the performers and composers that make improvisation itself a “collaborative and heterophonic” practice, in Lewis’s terms. In this video, from the Kennedy Center in 2016, a more recent incarnation of Voyager improvises on its own before being joined by pianist Jason Moran (16:40), along with Lewis on trombone (22:15):

In Lewis’s words, “As notions about the nature and function of music become embedded into the structure of software-based musical systems and compositions, interactions with these systems tend to reveal characteristics of the community of thought and culture that produced them.” So what is black style in the digital archive, as revealed in these very different systems and our interactions with them? And what makes a musical utterance by a computer intelligible? If an algorithm that knows jazz as the statistical relations among the notes in all the transcribed solos of Charlie Parker gives a certain kind of access to the content of Parker’s musical archive, a program like Lewis’s Voyager demonstrates a different kind of account of what an archive of music as social and cultural practice might be—not one of relations among notes but of relations among improvising and interacting agents.