Artificial Intelligence’s First Album

“Hopefully I’m not in 20 years one of those people who has been responsible for the downfall of creative music.”

It’s here. It’s done. We’re at that point.

Just a few Poolsides ago, we quoted a story by Drew Millard in Noisey that cautioned against the streamlining of music by playlisting and algorithms. Here was a line towards the end of that story:

“We could wake up one day to discover that musicians themselves have become obsolete: Google is currently working to adapt its neural networks to create original music, while Spotify has hired a computer scientist specializing in teaching AI to emulate popular music styles.”

Last month, that Spotify guy — his name is Francois Pachet — dropped his first collab project with AI.


Flow Machines, the AI tool used for this album, works by being fed score sheets of music, digesting them, and spitting out a new composition.

It’s derivative in the purest sense of the word, a characteristic wielded like a blade by music critics to cut down unoriginal music at the knees.

“But!” A quick-witted rebutter may argue, “Isn’t then all music derivative? All music is just a processed repurposing of the music that’s come before. You’ve been fed tunes all your life, and you spit out a new composition.”

Short answer: no. Hell naw.

Read any sampling of interviews with artists asked about their inspiration, and more than a few will cite non-musical influences. “I’m inspired by life,” I frequently hear from artists I interview, which is tremendously frustrating as an interviewer but undoubtedly true.

Lived experience — the culmination of diverse sensory stimuli accumulated over a lifetime — that’s what goes into making music. There’s something ineffable about it, something that gets shortcut here when a composition just gets spat out. It’s what made me furious when an English teacher would bark out the quote about ‘every story having been told before.’

Wrong, Ms. Teacher! The searing reality of conscious experience changes, alters, shifts us every single moment. We’re different for every second we live, and our art — its newness, uniqueness — reflects that.

In a 2010 Guardian article, the composer David Cope clued us in to his process of working with AI platforms to create classical music from recombined composition.

When we listen to a Beethoven symphony or a Chopin sonata, we are hearing, we might say, the authentic expression of the composer’s inner harmonies and discords, carried magically across the centuries. Could we ever be so moved by a piece of music written by a computer? We’d probably like to think not. David Cope, emeritus professor of music at the University of California, Santa Cruz, would beg to differ. “The question,” Cope tells me, “isn’t if computers possess a soul, but if we possess one.”

Honestly, get the fuck out of here, Professor emeritus David Cope. I’m not trying to go Will Smith paranoid in I, Robot here, but something feels amiss. Something’s being overlooked that shouldn’t be.

“In the next 10 years,” Cope said in that same interview, “what I call algorithmic music will be a mainstay of our lives.”

And here we are, in 2018, and we’ve got Kiesza & Stromae (feat. your MacBook). We can’t front — some of the tunes on the album are cool. They’re interesting, even a little weird.

“It can be like grafting the coat of a bear on the back of an elephant,” says Benoit Carré, the producer/composer who teamed up with Pachet on the project, “but sometimes it gives very nice results!”

And the collaborative aspect — human meets computer — is essential to that. If we stopped here, we’d be good. But we won’t.

Jukedeck and Amper are already making generic pop tracks to fill out slideshows and wedding videos (Royalty free!), and Carré and Pachet urge us to understand that their own projects are just for collaborative purposes, to be used as instruments.

“The philosophy of the project is to develop a creative tool to help artists, but people don’t want to hear that,” said Carré in an interview with the BBC. “They want to hear that the musicians will be replaced.”

In an interview last year with Thump, Jack Nolan, co-founder of Techstars-incubated A.I. ‘jamming musician’ Popgun, expressed a similar caution. “[Our AI] can create music on its own, but we just don’t think that’s as exciting,” he said. “It’s about giving artists the freedom to experiment with it. We don’t think the future is going to be either AIs or artists; it’s going to be AIs and artists.”

But isn’t that almost all technology’s logical conclusion — that the human-computer partnership devolves to just computers taking the wheel? When have we limited technological expansion on principle?

Basically never. This movie ends with more and more reliance on seamless tech, more and more derivative, paint-by-numbers, pleasant music, first by AI, then by the human artists looking to make sure they’re keeping up with the between-the-lines trends.

Michael Lovett is part of NZCA Lines, a British synth-pop band, and he co-produced Hello World. He seems to have some recognition of this. “Hopefully I’m not in 20 years one of those people who has been responsible for the downfall of creative music,” he said in an interview.
