AI-created non-human music will need human narratives

To me, it’s beyond a doubt that we’ll all be listening to AI-created music within a few decades, and probably much sooner. The most important entry point for this type of music is mood playlists. After the first couple of songs on such playlists, most people tune the music out and get back to their main activity. Does it really matter who created the song then? Does it matter whether they’re alive? Does it matter whether they’ve ever been alive at all?

[EDIT Aug 15: a small disclaimer, since a piece linking here makes an incorrect claim. I don’t think all AI-created music needs a human narrative. I believe the future contains a lot of adaptive and generative music. More on my point of view in this piece: Computers won’t have to be creative]

We are all creative, so I think it doesn’t matter whether computers will ever be creative themselves. We are creative as listeners. Computers will be able to predict what we like, then test thousands of versions on playlists until they arrive at exactly the right version of a song. As a matter of fact, AI offers the prospect of personalized music, or “music as precision medicine”, as The Sync Project calls it.
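
As a rough illustration of what “testing thousands of versions on playlists” could look like mechanically, here is a minimal sketch of an epsilon-greedy bandit that keeps serving whichever song variant listeners respond to best. The variant names, the skip/finish feedback signal, and the preference numbers are all invented for the example; this is not how any particular service works.

```python
import random

# Purely illustrative sketch of "testing versions on playlists":
# an epsilon-greedy bandit that converges on the variant listeners
# finish most often. Variant names, the skip/finish reward signal and
# the preference probabilities below are all invented assumptions.

variants = ["version_a", "version_b", "version_c"]
plays = {v: 0 for v in variants}
finishes = {v: 0 for v in variants}
epsilon = 0.1  # fraction of plays spent exploring the other variants

def choose_variant():
    """Mostly play the variant with the best finish rate, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(variants)
    return max(variants, key=lambda v: finishes[v] / plays[v] if plays[v] else 0.0)

def record_feedback(variant, listened_through):
    """Count a play and whether the listener finished the track or skipped it."""
    plays[variant] += 1
    if listened_through:
        finishes[variant] += 1

# Simulated playlist sessions in which listeners secretly prefer "version_b".
true_finish_rate = {"version_a": 0.4, "version_b": 0.7, "version_c": 0.5}
for _ in range(1000):
    v = choose_variant()
    record_feedback(v, random.random() < true_finish_rate[v])

best = max(variants, key=lambda v: finishes[v] / plays[v] if plays[v] else 0.0)
print("Variant the playlist converges on:", best)
```

In practice a recommender would of course personalize per listener and per context, but the underlying loop is the same: serve, observe, and keep the version that works.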

A point that’s often made is that AI-created music lacks part of the story people expect to come with music. People bring it up as an obstacle that can’t be overcome, but that often seems to be the case only because the thinking stops as soon as the point is raised. Let’s think further.

For one, I think AI-created music already is, and will continue to be, born in collaboration with people. People will increasingly take on the role of curators of music created through algorithms. For another, why not give the music a story?

Last week at IDAGIO Tech Talks, hosted by the classical music streaming service where I’m Product Director, we had the pleasure of hearing Ivan Yamshchikov talk about a neural network capable of music composition. He and his colleague Alexey Tikhonov fed their system 600 hours of compositions and had it compose a new work in the style of Scriabin. The human narrative was added at the end, when the piece was performed live by acclaimed musicians (see below).

This is how you get people to knowingly listen to music by artificial intelligence. Most consumption of AI music will happen without listeners knowing the source of the music. Yet people will warm up to the idea of AI being involved in the music creation process, just like they warmed up to electric guitars, samplers, and computers being used as instruments.

And that’s the narrative that will make it human: artificial intelligence as an instrument, one that requires a whole new skill set for artists to work with successfully and to evoke in listeners what they intend.