CRACK

Harmonising with Artificial Intelligence
Words: Rachel Grace Almeida
Illustration: Andrew Cunningham

Earlier this year, the first-ever commercial album made using Artificial Intelligence was released. Hello World, composed by Benoit Carré – aka SKYGGE – and François Pachet, started as a technological research project called Flow-Machines. The vision was firmly rooted in the concept of metamorphosis: from music to data, from neural networks to musical notes, and from a scientific project to a pop album.

Hello World is a collection of idiosyncratic pop songs spanning genres, featuring both automated and human vocalists. Machine-learning algorithms captured and reproduced a series of musical styles, all with one goal in mind: to create songs that are not just enjoyable, but that open up a new direction for musical melodies, rhythms and practices. Above all, the release of Hello World made one thing clear: Artificial Intelligence is becoming an evolving tool for exploration in music, whether the music industry is prepared for it or not.

As it stands, Artificial Intelligence in music works through neural networks – machine-learning algorithms that recognise patterns in data. When musical data is fed into the system, the algorithms can identify the patterns that define each genre of music – the styles on Hello World, for example, range from pop to jazz and ambient. And with the continued development of Artificial Intelligence in recent years, algorithms are now capable of creating original music, rather than just borrowing and manipulating existing sounds.
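The pattern-recognition idea is easier to see in miniature. The sketch below is a hypothetical illustration, not the Flow-Machines system: each "track" is reduced to a made-up feature vector (the feature names and numbers are invented for the example), and a simple nearest-centroid classifier learns the average "pattern" that defines each of the genres mentioned above.

```python
# Toy illustration of genre pattern recognition (not the real system).
# Each track is a hypothetical feature vector: [tempo/200, harmonic
# density, percussiveness]. Training averages each genre's vectors into
# a "pattern"; classification picks the closest learned pattern.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def train(labelled_tracks):
    """Map each genre label to the centroid of its feature vectors."""
    by_genre = {}
    for genre, features in labelled_tracks:
        by_genre.setdefault(genre, []).append(features)
    return {genre: centroid(vs) for genre, vs in by_genre.items()}

def classify(model, features):
    """Assign the genre whose learned pattern is closest (Euclidean)."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(model, key=lambda genre: dist(model[genre], features))

# Invented training data covering the styles named in the article.
tracks = [
    ("pop",     [0.60, 0.5, 0.7]),
    ("pop",     [0.55, 0.6, 0.8]),
    ("jazz",    [0.45, 0.9, 0.4]),
    ("jazz",    [0.50, 0.8, 0.5]),
    ("ambient", [0.20, 0.4, 0.1]),
    ("ambient", [0.25, 0.3, 0.2]),
]

model = train(tracks)
print(classify(model, [0.58, 0.55, 0.75]))  # a pop-like feature vector
```

Real systems learn far richer representations with neural networks, but the principle is the same: the machine extracts statistical regularities from musical data and can then recognise, or generate within, the style those regularities describe.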

An increasing number of tech start-ups and multidisciplinary artists are developing AI systems through which music is created and manipulated. Melodrive, an AI composer application, allows any user – regardless of musical skill level – to create original music at the touch of a button by following an intuitive workflow; Flow-Machines, the research project that brought us Hello World, tested the boundaries between human creativity and automation; and artists and collaborators Holly Herndon and Mat Dryhurst are currently developing Spawn, an AI system that experiments with the ways we process traditional audio.

As with every new technological advancement, there are many outcomes to consider. Humans may have a unique capacity to process information in a time- and context-specific way, unlike machines. There are concerns that AI threatens the musical process, and questions have been raised about whether human creativity and instinct can be automated by a machine. However, it has been widely agreed – from industry research projects like Google’s Magenta all the way to academic institutions like the Austrian Research Institute for Artificial Intelligence – that there isn’t any real threat to the output of human musicians, unless you’re a stock music composer. In some ways, there is an air of false competition between Artificial Intelligence and music – the sense that one cancels out the other, rather than the two working alongside each other.

Mat Dryhurst and Holly Herndon’s Spawn is trying to tackle exactly that – the complicated relationship between music and technology. “Why does it have to be competitive?” Dryhurst wonders as we talk in the hallway of Kunstquartier Bethanien in Berlin, where audio-visual technology event CTM Festival is based. “With the stuff that me and Holly are trying to do with Spawn, it’s more about symbiosis and about learning from each other. Why can’t we coexist? Why do we have to be more creative than them? There’s different ways to be creative and all are valid.”

Creativity at its core is problem solving, and technology has always been a vehicle for that – in order to fully understand Artificial Intelligence in music, Dryhurst says we have to ask ourselves what creativity is. “Displaying creativity is kind of like a spiral mechanism – the idea of creativity being a novel idea that relates to the current pressures placed upon music, we’re yet to see, but I don’t see why not. Maybe the idea of humans being uniquely creative is bullshit.”

 

Music is entirely subjective; there is no agreed-upon measure of what a ‘good’ piece of music actually is. The argument is that the aim of AI in music isn’t to replace human talent, but to serve as a functional tool that enhances it. In some ways, Artificial Intelligence in music has always been at play; so much of the musical experience is already automated through algorithmic processes. Most people online now consume music in a way that’s algorithmically tailored to them, the most glaring example being a service many of us use daily: Spotify.

“I think there will be more algorithmically generated content for people to whom music is a garnish on their day,” Dryhurst says. “So, the end goal for something like Spotify is that they can produce more and more of their own content to satisfy the needs of a consumer. For example, if you want to listen to music in the shower, and they figure out how to playlist the music you like to listen to in the shower, there’s going to be a space for AI with no human intervention at all because it’s just cheaper for them.”

There’s a sense of responsibility when it comes to AI more generally, and this also applies to its role in music. How is AI going to serve music in the long term? It’s important to realise that we can’t extricate music from the political and cultural circumstances it finds itself in – but AI can still be a useful device for propelling creativity.

AI being used as a tool for symbiotic relationships between technology and humans is likely what the future holds: the idea of AI as a sort of auditioning servant. If you’re writing a track at home, you might develop your own personal AI system to perform with, or use it as a creative crutch that shows you variations of sounds to choose from. Ultimately, it’s outsourcing labour to a tool rather than to a human being. But that’s how commercial pop music already works – Beyoncé goes into the studio, the producers say, ‘we need someone who’s good at guitar, let’s send it on’, and the track goes through many variations until it reaches its final form. With advancements in AI-based music technology, that process will become cheaper and faster, to the point where you can have it on your laptop or on a server somewhere.

 

As it stands, AI in music can be used to enhance the creative process, but it’s debatable whether it would ever replace human labour entirely. Before leaving CTM Festival, Dryhurst and I talk about how the marriage of technology and music has the capacity to inspire new genres, musical practices and even sounds we’ve never heard before. We speak about how the fetishisation of the future through kitsch films and imagery paints AI in a menacing light. He tells me that, at its core, music’s relationship with Artificial Intelligence is one of mutual understanding. “I like the idea of new forms of human and musical ritual that can be spawned from this new form of technology,” he says. “New ways to occupy space together, new ways to create together, new ways to communicate with each other. Hopefully it does change music, because these tools have so many potential applications. I’m hopeful.”
