Real Live Action

Play Nice: Musical Collisions Between Humans and Intelligent Machines

w/

Gold Saucer; July 28, 2017

Words + Photography
Lucas Lund

Speaking to my grandparents about modern music can be a bit trying. Despite my most patient explanations of how technological advancements in music equipment have fundamentally shifted the ground for what music is, and what it means to be a musician, their responses are almost always the same: “But they don’t even play their own instruments — it’s all computers nowadays!” As frustrating as they are, their complaints about the apparent lack of musicianship these days contain the seeds of what I find to be a very interesting concept: what if music were made entirely by computers?

Arne Eigenfeldt, professor of Music and Technology at Simon Fraser University, has obviously spent a lot more time investigating this idea. A co-founder of the Musical Metacreation research group, Eigenfeldt creates musebots, “pieces of software that autonomously create music in collaboration with other musebots.” While this form of generative music may not be entirely made by computers — it still requires a human to do the programming — it is definitely closer to human-less than most music out there.

To showcase these futuristic music makers, Eigenfeldt, along with three other musebot designers — Matthew Horrigan, Paul Paroczai and Yves Candau — joined forces with Sawdust Collector and took over the Gold Saucer Studio to pit their intelligent machines against human collaborators.

Emcee Raj Gill | Photography by Lucas Lund for Discorder Magazine

The musebot designers sat at a long, safety-blanket-draped desk stretched along the side of the studio, covered in laptops, monitors and cables. In front of them, at centre stage, the human collaborators set up their equipment. To match the innovative mood of the event, emcee Raj Gill chose an experimental mode of introducing the pieces: he acted as if he were an artificially intelligent machine, learning how to emcee in the midst of doing it while narrating his own experience of emceeing.

Eigenfeldt’s first musebot piece was a collaboration with cellist Peggy Lee. While creating a soundscape, the musebot also generated traditionally notated music, displayed for Lee on a screen, which she played along with. Dark and aching tones seemed to drag themselves out of Lee’s cello, alongside the musebot’s ambience.

Peggy Lee and Arne Eigenfeldt | Photography by Lucas Lund for Discorder Magazine

The other two pieces, both accompanied by improvised prepared guitar courtesy of Matthew Ariaratnam and Nathan Marsh, were significantly noisier than the first. Although the musebots had studied and analyzed the two guitarists’ improvisation styles in advance, they were still forced to react and adapt to new musical situations as they unfolded. In essence, Eigenfeldt’s bots improvised alongside the guitarists.

Matthew Horrigan’s musebot piece followed a similar setup, with improvised guitar by Adrian Verdejo. Alongside the musebots and guitar, David Storen added projections of glitchy and occult images, lending an ominous atmosphere to the distorted sonic landscape of guitar and bot.

Matthew Ariaratnam | Photography by Lucas Lund for Discorder Magazine

The most diverse performance in terms of artistic media was definitely Yves Candau’s musebot piece, during which Sawdust Collector’s Barbara Adler sat on the floor reading poetry while Candau danced fluidly to the bot’s soft, sample-laden music as it played out over the room. This piece seemed to me the most organic of the night. The human voice and human body captured the majority of my attention, distracting me from the idea that the sounds in the room were being generated by an artificially intelligent machine.

Conversely, Paul Paroczai’s musebot piece was the least organic. With no human collaborators besides Paroczai himself, his musebots seemed to be playing solo. In fact, Paroczai was performing alongside his bots, changing the parameters with which they created music. From the audience’s perspective, though, what Paroczai and the musebots were actually doing was completely opaque, since all the spectators saw was a person at a laptop.

Conceptually, the music created by musebots is a fascinating intellectual artifact. Following the timeline of technology and computing’s influence on music, generative music produced by artificially intelligent machines seems to be a logical continuation. But on the performance side of things, and for us humans wanting to experience it, I think the musebots fell a bit short. The most engaging and interesting pieces of the night were the ones with the most performative elements: the dancing, the projections, the human fingers playing physical instruments.

Matthew Horrigan, Adrian Verdejo and David Storen | Photography by Lucas Lund for Discorder Magazine

Perhaps it isn’t the fault of the musebots that I feel this way — I am human, after all. For millennia, humans have created music for humans to experience. But now, as computers begin to create music on their own, does it really make sense that humans are the ones listening to it?