What’s next for John Lennon? A duet with Taylor Swift?

With a few basic prompts, AI can fill our world with any sound we ask for – but not everyone will call it music.

By Michael Dwyer

Credit: Judith Green

The Beatles are back! Strange how the perennial headline always signifies progress in the music business. So it was in June when Paul McCartney told the BBC about a Fab new song made with the help of artificial intelligence. “Kind of scary but exciting,” he said of reuniting his undead band once more. “Because it’s the future.”

In a world where John Lennon’s AI-generated voice is suddenly, surreally, a plaything for online dilettantes, wrong conclusions were quickly drawn. No, this new Beatles song is not a machine simulation, Sean Ono Lennon clarified. It’s one of Dad’s old vocals isolated by noise-removal software and mixed with new instrumentation.

Other rumours swirled. Some wondered whether McCartney’s voice would be “de-aged” for harmony purposes, as per the rising clamour of AI-generated YouTube novelties. (Last year’s Glastonbury festival included a “duet” that drew unflattering comparisons between McCartney’s current vocal range and Lennon’s youthful on-screen contributions.) And were George Harrison’s old guitar takes data-scraped for the posthumous solo? Both seem likely. Weird otherwise.

Each of these applications of AI is subtly different, involving varying degrees of creative and mechanical intervention. Which ones you consider “kind of scary” and which ones “exciting” for the future of music depends on what you mean by music.

“People don’t listen to Taylor Swift because of her music,” says Stephen Phillips of Brisbane AI music company Splash. “They do in part, but it’s a lot to do with personality and celebrity and connection to the artist and all that stuff.”

That’s the stuff AI can’t replace – yet. Splash already has the technology, roughly on par with Google’s MusicLM and Meta’s newly announced AudioCraft, to have a crack at anything. John Lennon and Taylor Swift together at last? Let them at it, as soon as those artists’ respective IP holders agree to lease their data sets – ie, their back catalogues – to train the machine.

This is the negotiation that industry players such as Phillips are waiting to have. Canadian singer Grimes is a pioneer in this space, having declared her voice open slather for creators in April – as long as they cut her in for 50 per cent. “I wanna be software, upload my mind,” she sings on her latest release. “Take all my data, what will you find?”

Grimes has long been a proponent of technological experimentation. Credit: The New York Times

“If they play this right,” Phillips says of artists and industry more broadly, AI will be “the biggest boon for the recorded music industry since the CD. Especially for that classic [breed of] artist that are not making new music any more. They’ve still got fans, but they’re not releasing anything new. This is a huge opportunity for them.”

While pop’s personalities prevaricate, there’s plenty of hold music on tap. Want quirky, upbeat funk? Want a movie scene in a desert with percussion? Want tropical jazz for breakfast? Just type that into one of an exploding legion of AI music generators online and out it comes. Like ChatGPT for your ears, it’s flawed but getting smarter fast.

‘If they play this right [AI will be] the biggest boon for the recorded music industry since the CD.’

Stephen Phillips, Splash

Most observers agree that AI will soon own this kind of faceless, generic music. That’s no small thing because it’s increasingly what consumers want. The vast majority of streamers aren’t typing artists’ names into apps, they’re searching “lo-fi hip-hop” or “mellow electronic” playlists, often neither knowing nor caring who made it.

Melbourne music producer and educator David Jacob sees this as a “race to the bottom” incentive for AI music developers to “flood the market with hundreds of thousands of really poor-quality tracks in order to monetise the technology”.

Like most studio professionals, he’s loving the way AI is removing the “donkey work” from the mixing and mastering process, but he names Berlin-based American electronic artist Holly Herndon as one of few artists using AI in “genuinely creative, artistic ways”.

Herndon’s 2019 album Proto is the latest culmination of her experiments in “voices, polyphony and artificial intelligence” using her bespoke “neural network”, Spawn. It’s essentially an expanding data set of her own voice recordings for AI to crunch. The idea, she says, is to “expand past the limitations of my physical body”.

Venezuelan artist Arca is another name at the forefront of AI exploration. Her generative remix project with Bronze AI yielded 100 variations of a single track and, for the artist, “a sense of relief and excitement that not everything has been done”. Even if, in the pop scheme of things, her six-hour Riquiqui;Bronze-Instances(1-100) left consumers less excited.

Alejandra Ghersi, who performs as Arca, is one of the artists at the forefront of AI exploration. Credit: The New York Times

From veteran music systems guy Brian Eno to Japanese anime pop star Hatsune Miku to Portland dance-pop outfit YACHT, the last few years have heard scores more AI-generated experiments that history may count as pioneering. So far, it’s fair to say experimental electronic systems have mostly yielded experimental electronic music.

At the more traditional end of the spectrum, AI has more learning to do. On his recent tour, Beck performed a comically terrible song called Rebel Soul written by AI “in the style of Beck”. As Nick Cave has gruffly noted, artificial intelligence can offer only “a grotesque mockery” of a song without the human conditions required to conceive it.

That said, if formulas can be taught, machines can learn them. Beautiful Life, recently recorded by Melbourne songwriting lecturer Greg Arnold, is an AI collaboration composed in the style of his band, Things of Stone and Wood. ChatGPT generated lyrics, chords, genre and tempo cues in seconds. “For the bridge,” it thoughtfully suggested, “the melody could build up to higher pitch to signify the rise above adversity.”

Imagine an intelligence hungry enough for such structural wisdom to devour every tutorial ever committed to YouTube. Imagine it crunching 100 years of methodology and philosophy, beginning with Paul Zollo’s weighty interview compendium Songwriters on Songwriting, then David Byrne’s How Music Works for cultural context …

There are thousands more academic resources out there, of course: centuries of human intelligence combined to codify countless esoteric tangents into an artform the layperson still quaintly perceives to be some kind of magic. How magic will it be when artificial intelligence learns those codes – in nanoseconds – and runs with them?

Credit: Getty Images

In her recent book This Is What It Sounds Like, Susan Rogers, American neuroscientist and former sound engineer to Prince, set about demystifying the nebulous question of why people like the music they like. Spoiler alert: lots of reasons, but surprisingly quantifiable.

Her take on whether AI will “downgrade the artform” is unsentimental. “My own attitude sides with the listeners who love it,” she writes. “Whenever music delights our [neurological] sweet spots, then who can say it is inferior to any other music?”

This goes to the heart of what we mean by music. Today, industry has successfully convinced us it’s all about an ingenious elite in a closed circuit of studios and stadiums and videos and magazines and charts and award shows. In fact, it’s a kinetic human activity that pushes air in the general direction of neural synapses.

“It’s not music that AI threatens, it’s a capitalist paradigm,” says Melbourne composer and audiovisual artist Robin Fox. “The music industry has done more to destroy music than any other entity under the sun. What AI threatens is authenticity; and authenticity is at the heart of commodification – so if we can’t pick between what’s authentic and what’s not, we don’t know what to buy.”

Melbourne composer Robin Fox: “It’s not music that AI threatens, it’s a capitalist paradigm.” Credit: Joe Armao

Fox’s current favourite AI music generator is Relentless Doppelganger, a YouTube portal that spews speed metal in real time, uninterrupted and forever. If speed metal musicians feel threatened by it, they need to transcend the genre that AI finds so easy to replicate, he says. That’s the role of human intelligence.

So, where is AI leading music in our lifetime? In one possible future lies the ongoing industrial myth of genius recycled by licensed gatekeepers in infinitely new but recognisably branded guises. The Beatles are back! (feat. Taylor Swift).

In another future lies the utter destruction of everything we’ve been sold about scarcity value; about music as product, about genius creators versus lowly consumers. A world where music is everywhere, by everyone who cares to make it, with whatever tools and data they choose.

Both futures are possible concurrently, of course. But even in a world already wallpapered from Netflix to Spotify with AI-generated mood soundtracks, there seems zero chance of machines storming into your local to put an end to music in its essential form.

“It’s an exchange of human frailties,” Fox says. “In that moment, when you’re sitting in a room with a musician, and they bare their soul to you, that’s a kind of magic. There’s no empathy exchange with an AI. You don’t see yourself in another when you’re interacting with an AI. That’s the kind of magic that an AI can’t replicate.”
