In case you missed it, deepfakes are on the rise and they’ve been bending our perception of reality since the late 2010s.
What are deepfakes? ‘Deep’ refers to deep learning, and deepfakes themselves can be found across many fields, from images of celebrities grafted into porn films to fake words put in politicians’ mouths. In the field of visual deepfakes, algorithms learn to reproduce and manipulate the image data they’re given. But there’s also a growing number of tools – apps among them – for creating musical deepfakes. In the music industry, deepfakes have taken the form of fake audio that can accurately imitate famous artists, reproducing all their idiosyncrasies of intonation, phrasing and pitch.
This July, Holly Herndon turned the tools on herself by unveiling Holly+, a “digital twin” that can sing back any audio in Herndon’s voice. Her self-made deepfake is the next logical step after her 2019 album Proto, created with a neural network called Spawn. Even before Spawn, though, time-saving algorithms were already widely used in music production to create everything from vocal synths to royalty-free soundtracks. Deepfake technology may seem to be in its infancy – and perhaps it’s news to some – but we’ve been drifting into hyperreality for years.
How did we get here? Across seven tracks, we map out the use of deepfake technology in music and how it’s evolved over the past decade.
Tupac at Coachella with Dr Dre and Snoop Dogg
(2012)
Much like posthumous albums, holograms of dead artists can feel exploitative at worst, and at best just plain creepy. In 2012, Dr Dre surprised Coachella with a lifelike projection of Tupac performing Hail Mary on stage with him and Snoop Dogg. Though not technically a deepfake, the hologram started a chain reaction of digital reanimations of Tupac, including in Snoop Dogg’s music video for I C your Bullsh*t. Tupac’s “appearance” in the video – which is full of other digital illusions – is incongruous, but the video also sees Snoop using a time travelling car to kill plantation overseers, so we’re living out Snoop Dogg fantasies here. Still, it raises the most important question around deepfakes: can it ever be respectful or ethical to use an artist’s likeness when they have no say in the matter?
Miquela – Not Mine
(2017)
In 2016, Miquela became the first computer-generated social media influencer. Designed by Trevor McFedries and Sara DeCou, whose software startup Brud specialises in creating digital characters, Miquela released her debut single in 2017. Not Mine is a laidback R&B beat with AutoTuned vocals and vacuous lyrics – like something an AI might generate after listening to hours of Justin Bieber and Alicia Keys. But it’s not clear whether Miquela’s vocals are sung by a human or generated by an AI – a detail that has been deliberately obfuscated by her creators. Either way, Miquela’s music plays with her fans’ perception of what’s real and what’s manipulated.
The 1975 – The Man Who Married a Robot / Love Theme
(2018)
The 1975’s third album A Brief Inquiry Into Online Relationships is heavily preoccupied with how our digital lives match up with our IRL ones. The Man Who Married a Robot / Love Theme explores our reliance on digital technologies for human warmth and comfort, with a Siri-esque voice telling the story of a man who fell in love with the internet, lost his real-life connections and died lonely. Whatever you think about the lyrical content, the track sets the scene for the kind of experimentation with computer-generated vocals that deepfakes take to the next level. There’s something deeply strange about hearing Siri say “penis”, even if you know that the producer is making that happen.
Holly Herndon – Godmother
(2019)
To produce Godmother, Herndon fed her AI “baby” Spawn with percussion tracks by Planet Mu auteur Jlin, the artist she calls Spawn’s “godmother”. Spawn then performed Jlin’s tracks with its own version of Herndon’s voice, sometimes approaching recognisable words before careening into unintelligible babble. Herndon, Jlin and Spawn are all equally credited on the track, acknowledging the AI’s capacity for independent creativity. Godmother represents the beginnings of Herndon’s utopian approach to deepfake vocal technology, now rapidly expanding through the Holly+ software project.
OpenAI – Jukebox
(2020)
OpenAI is an AI research company with a mission: to ensure that artificial intelligence “benefits all of humanity”. Their 2020 project Jukebox is an AI fed with over 1.2 million songs that can create raw audio files in the style of various performers and genres. A browse of the SoundCloud page created for the project shows minute-long clips that mimic well-loved performers: crooner pop in the style of Frank Sinatra; vintage jazz in the style of Ella Fitzgerald; funk metal in the style of Rage Against the Machine. Scarily accurate in some ways, the results are also shaky: melodies and harmonies seem to skip around indecisively, as though you can hear the machine learning even as it produces. Jukebox demonstrates the playful side of deepfake vocals but also hints at questions of intellectual property and artistic integrity – is Frank Sinatra really singing “human flesh for a sacrifice”?
Travis Bott – JACK PARK CANNY DOPE MAN
(2020)
Created by the digital agency Space150, Travis Bott is a deepfake that sounds like Travis Scott, and it does a startlingly good job of nailing his flow, AutoTuned vocals and arpeggiated melodies. The lyrics, however, are largely nonsense, as this song title suggests. Space150’s executive creative director Ned Lampert has revealed that Travis Bott often got stuck on the subject of food when generating fake lyrics: “There was one line like, ‘I don’t want to fuck your party food,’” he told AdWeek, “and we were just like, ‘What?!’” But Bott’s personalised lyrics show just how creative the machines can be on their own – and maybe one day the deepfake rapper will look back on these lyrics like you look back on the poems you wrote in primary school.
30 Hertz – [AI Music] What if Eminem wrote “My Name Is” in 2021?
(2021)
30 Hertz is an enigmatic producer whose deepfake vocal track of Eminem’s My Name Is has had over a million views on YouTube. Making the old Eminem rap about modern phenomena like Billie Eilish, K-pop girls and Donald Trump’s wig, the track offers the cleanest and smoothest deepfake vocals on this list. 30 Hertz says that he got into deepfakes as a way to get his favourite rappers to feature on his productions while keeping their voices as they were in their prime. What does it mean for musicians if they can be made to sing and dance on command? If a bored fan can resurrect the “old” Eminem at will, where does that leave the new Eminem?