Modern life is increasingly controlled by algorithms.
Our personalities live in servers, we’re dating through data and keywords dictate what we dance to. Groundbreaking technologies are meant to make our lives cleaner, slicker and more polished. But most of us still don’t know what blockchain is. And admit it, you’re still worried that Siri is secretly recording you talking shit about your ex. How do we begin to understand concepts that were alien to us just a few years ago? Breaking into the music industry is already a minefield without having to make your music video VR compatible. We already idolise virtual icons like Hatsune Miku and The Internet’s Steve Lacy makes hits through his iPhone. But what can we expect for the next evolutionary phase of creativity? And if you’re actually trying to make money from your music, how do you take advantage of these innovations?
Over the past two decades music production has been democratised. Equipment that was only available in state-of-the-art studios 20 years ago can be found in kids’ bedrooms across the world. With the technology to create only becoming more accessible and innovative, how will we produce the anthems of the future?
“There’s nothing new under the sun, things just become bigger, faster and shinier,” says Andrew Melchior. As founder of the 3rd Space Agency and longtime collaborator of Massive Attack and Björk, Melchior explores how we can create and consume music in new ways. “We’ve been using hammered, stringed and plucked instruments for years, so I try to uncover forms of musical expression based on the technology of the day.”
The most controversial new method of musical creation is that which is executed by artificially intelligent machines. “AI in music is super interesting because of how primitive it is,” says audiovisual musician Mat Dryhurst, who frequently works with fellow experimentalist Holly Herndon. “It’s important that beyond music, we grasp its potential implications and our agency in that process. Music would do well to play a role in that, and ditch the kitsch robot narratives.”
2017 saw the release of Hello World, the first commercial album created using AI, composed in a Parisian computer science lab by Benoit Carré and François Pachet, members of the collective SKYGGE, who built a piece of compositional software called the Flow-Machine. To create a record, you feed the Flow-Machine a guitar riff, an a cappella vocal or, say, the entire back catalogue of Cher, and it begins creating a piece of music in that style. In simple terms, if you want a song that sounds like The Smiths, let it listen to nothing but The Smiths.
The machine then proposes new melodies or sounds: musical frameworks that a team of performers take away and polish. The Flow-Machine isn’t hugely different from what’s known as a neural synthesiser, which teaches itself the characteristics of various instruments and merges them to create original, never-before-heard sounds. But making something that actually sounds ‘good’ on the Flow-Machine is a long process of trial and error. “[It] generates a lot of average results,” admits Carré. “But it was like a treasure hunt, searching for surprising melodies or unique sounds that we couldn’t have created ourselves.”
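The Flow-Machine’s internals aren’t public, but the underlying idea – learning the statistical habits of a corpus, then sampling new material in that style – can be sketched as a simple Markov model over notes. Everything below (function names, the toy “Smiths-style” corpus) is invented for illustration:

```python
import random
from collections import defaultdict

def train(corpus):
    """Learn note-to-note transition counts from example melodies."""
    transitions = defaultdict(list)
    for melody in corpus:
        for a, b in zip(melody, melody[1:]):
            transitions[a].append(b)
    return transitions

def generate(transitions, start, length):
    """Sample a new melody that statistically resembles the corpus."""
    note = start
    melody = [note]
    for _ in range(length - 1):
        note = random.choice(transitions.get(note, [start]))
        melody.append(note)
    return melody

# Let it "listen" to nothing but one style; it proposes lines in that style.
smiths_corpus = [["E4", "G4", "A4", "G4", "E4"],
                 ["G4", "A4", "B4", "A4", "G4"]]
model = train(smiths_corpus)
print(generate(model, "E4", 8))
```

As with the real system, most samples are average: the human role is Carré’s “treasure hunt”, keeping the surprising results and discarding the rest.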
And machine learning boasts many unexpected benefits. How can Elton John write music long after he’s kicked the bucket? By allowing himself to live on in virtual reality, obviously – he recently performed in a motion capture suit to preserve his digital self forever. As machine learning and AI develop, we may even hear new music from him millennia from now. Make of that what you will. “If you can feed a machine learning model with enough information about you, your life, your songs…” speculates Melchior. “There could be a version of you that continues to enlarge its knowledge and create art after death.”
The prevalence of streaming services has meant that, according to YouGov’s recent Music Report, only one in 10 people in the UK download music illegally. That being said, Spotify is yet to make a profit, Soundcloud is in a constant battle to avoid bankruptcy and users of Jay-Z’s Tidal service are known to jump on the platform for blockbuster albums from the likes of Beyoncé and depart after the 30-day free trial is over.
So, if the streaming services aren’t making money, neither are the artists featured on them. Most indie musicians use Spotify as a promotional tool, popping up in ‘Ultimate Chill’ compilations but receiving pennies for their efforts. How can we level the playing field? “Ditching our idols,” declares Dryhurst, point-blank. “I’m critical of the unethical distribution of funds in DJing as an institution, or of musicians who have contributed a lot to our culture but now find themselves enthroned atop a zombie music industry. Unpacking that reality, and thinking of better ideas, could lead to something quite exciting.”
On that note, there’s promise in Resonate, a co-operative streaming platform. Going public in May 2017, Resonate operates a #stream2own system: you only pay for what you play. After nine plays the track is yours forever, and because it’s a co-operative, your payment gives you a stake and a say in the company’s decision making. Ethical streaming, if you will.
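Resonate has described #stream2own as a price that roughly doubles with each listen, so that nine plays add up to about the cost of a download. The exact figures below are illustrative, not Resonate’s published rates:

```python
def stream2own_cost(play_number, base=0.002):
    """Cost of the nth play: the price doubles each listen (illustrative numbers)."""
    if play_number > 9:
        return 0.0  # after nine plays the track is owned outright
    return base * 2 ** (play_number - 1)

def total_paid(plays, base=0.002):
    """Cumulative amount a listener has paid after a given number of plays."""
    return sum(stream2own_cost(n, base) for n in range(1, plays + 1))

# Nine doubling plays sum to base * (2**9 - 1) = 511 * base, roughly a download's price.
print(round(total_paid(9), 3))
```

The design choice is the point: casual listeners pay fractions of a cent, while the fans who return again and again end up buying the track outright.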
“I come from a generation that was told that we could live our lives online because the reality of our material lives is that we can’t own anything,” says Luke Dubuis, Marketing Director at Resonate and co-founder of the Circadian Rhythms imprint. “But the reality is that you don’t own anything online, so Resonate is ultimately about putting artists back in control of things.”
Resonate can deliver double what Spotify pays per stream, and with over 5,000 artists registered to the platform, including Legowelt, DJ Paypal, Coldcut and FEMME, it’s proving to be an enticing option for musicians outside the mainstream.
As more of our personal data lies beyond our reach, platforms like Resonate present a promise: to bring our data back to us. Why is this appealing? Imagine Soundcloud finally succumbs to the demise many are predicting for it. Your recordings are gone, and so are the followers you’ve built up over the years. (As anyone who has tried to log in to MySpace in 2018 can testify, good luck getting them back.)
“These platforms are built in unsustainable and inequitable ways,” Dubuis tells me. “But we need more sustainable systems, and decentralisation is the solution to that. It means that people take ownership of their data and their connections, and that is the first revolution I see happening.”
With so many avenues for consuming music, it’s hard to see beyond the noise. Our dependence on recommended playlists and YouTube videos might have turned us into lazy, feckless listeners, but through the smog a number of innovative names are asking the question: how can we experience music differently?
“What can you do that’s actually any good, and that you want to consume more than once?” asks Melchior. That question, and an air of cynicism toward technology, is shared by those he works with. “Björk looks suspiciously at things,” he laughs. “In Japan a few years ago we were looking at the robots made by Hiroshi Ishiguro, and I felt she was like ‘ugh, why bother making rubbery skin female robots? It’s all a bit creepy to me’.”
Recently, alongside Robert Del Naja, AKA 3D of Massive Attack, Melchior built the iPhone app Fantom as “a thought experiment in regards to procedurally generated music”. In essence, Fantom creates live dub mixes of existing tracks using data captured from your surroundings: your heart rate, the time of day, your location and the motion of your phone all change how the music is played.
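Fantom’s actual engine is proprietary, but the sensor-to-sound idea reduces to a mapping from readings to mix controls. Every name and constant in this sketch is invented, not taken from the app:

```python
def mix_parameters(heart_rate_bpm, hour_of_day, motion_magnitude):
    """Map sensor readings to dub-mix controls (purely illustrative mapping)."""
    # A faster pulse nudges the tempo up around a 100 BPM base.
    tempo = 100 * (0.8 + 0.4 * min(heart_rate_bpm, 180) / 180)
    # Night-time listening gets a darker, more filtered sound.
    filter_cutoff_hz = 2000 if 6 <= hour_of_day < 22 else 800
    # More phone movement means heavier feedback on the echo sends.
    echo_feedback = min(0.9, 0.2 + motion_magnitude * 0.5)
    return {"tempo": tempo,
            "filter_cutoff_hz": filter_cutoff_hz,
            "echo_feedback": echo_feedback}

# A resting listener, late at night, walking slowly:
print(mix_parameters(heart_rate_bpm=70, hour_of_day=23, motion_magnitude=0.6))
```

Because the inputs change continuously, no two listens produce quite the same mix, which is what makes the result “procedurally generated” rather than a fixed recording.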
“I think you’ll have more traditional artists working in augmented forms like [Fantom],” muses Melchior. “We’re thinking about how the real world and imagined world can converge and how music can be experienced as you move through your environment.”
The ability to amplify our environments gets exciting when you apply it to a festival stage or concert hall. Given how record labels and marketing agencies are keen to jump on the VR hype, it’s no longer such an abstract idea. “In the future people might be wearing glasses with real time feedback, and if that was the case, what could you do to make that more entertaining?” Melchior asks.
Whether it’s an AI album or a pop icon immortalised in VR, that’s ultimately the aim: to be more entertaining. “In the bubble we live in we’re trying to keep the shared experience alive,” Melchior asserts. “People forget we’re in show business. You don’t always need a philosophy, it’s about coming out of a field with a bunch of your mates and saying: ‘that was great.’”