What playing around with AI lyrics generation taught me about the future of music

Will AI replace human artists? What would the implications be? These questions grip many in the music business and outside of it. This weekend I decided to explore some lyric generation apps and see what I could get out of them – learning a thing or two about the future of music along the way.

Below I’ve posted the most coherent lyrics I managed to get out of one AI tool. I’m dubbing the song Purple Sun.

[Image: a purple sun. What I imagine the song’s artwork to look like.]

You can make the sun turn purple
You can make the sea into a turtle

You can turn wine into water
Turn sadness into laughter

Let the stars fall down
Let the leaves turn brown

Let the rainwoods die
Let wells run dry

I love the turtle line. I guess the algorithm struggled with rhyming purple.

Two lines down is the wine / water line. Initially I was impressed that it included a Western cultural reference. But hold up… turning wine into water? That’s just evil.

Read it over once more. Or twice. The more I read it, the more convinced I became that humans are obviously the superior songwriters.

But you know what, I’ve been lying to you.

The lyrics above are actually human in origin: they come from a 90s rave song called Love U More by DJ Paul Elstak.

And they carry meaning. A lot of meaning, to a whole generation of people in The Netherlands and other parts of Europe. Myself included. That meaning doesn’t necessarily come from the intent behind the lyrics. It comes from the music, the nostalgia, the memories, the associations.

This is listener-assigned meaning. As soon as you release music, you give over control of the narrative to an audience. Artistic intent may have a lot of sway, but sometimes a song that’s a diatribe against fame turns into something stadiums full of drunk people chant.

A few statements to consider:

  1. AI has a role as a tool to be used by people to apply their creativity.
  2. Not all successful human-created art objectively requires a lot of skill.
  3. Creativity doesn’t end with the creator. The creator sets intent, the listener assigns meaning.

Let’s pair #1 and #3. In the first statement I talk about people, rather than mention specific roles as in the third statement. That’s because AI allows more people to be creative, whether as listener, creator, or somewhere in the space in between.

It’s this space in between that will be impacted and shaped by AI. Think of the dadabots projects, such as their infinite neural-network-generated death metal stream; apps like JAM, Jambl, and Endlesss, which let people express themselves musically in easy ways; or technologies that turn music into something more adaptive, like Bronze and FLUENT (disclaimer: I’m an advisor to the latter). Not all of the above use AI, but all cater to this space in between listener and creator.

I added statement #2 because AI-created music doesn’t necessarily have to be objectively good. Music is subjective. Its success depends on how well it can involve the listener. That’s why AI is destined to be the most important force for the future of music in a more creative world.

Credits for the lyrics above: Lucia Holm / Paul Carnell. Thank you for the wondrous energy, the memories, the music.

Image via Rising Sun.

New to MUSIC x TECH x FUTURE? Subscribe to the free newsletter for weekly updates about innovation in music. Thousands of music professionals around the world have gone before you.

Mood augmentation and non-static music

Why the next big innovation in music will change music itself — and how our moods are in the driver’s seat for that development.

Over the last half year, I’ve had the pleasure to publish two guest contributions in MUSIC x TECH x FUTURE about our changing relationship with music.

The first had Thiago R. Pinto pointing out how we now use music to augment our experiences and how we have developed a utilitarian relationship with music.

Then last week, James Lynden shared his research into how Spotify affects mood and found out that people are mood-aware when they make choices on the service (emphasis mine):

Overall, mood is a vital aspect of participants’ behaviour on Spotify, and it seems that participants listen to music through the platform to manage or at least react to their moods. Yet the role of mood is normally implicit and unconscious in the participants’ listening.

Having developed music streaming products myself, like Fonoteka when I was at Zvooq, I’m obviously very interested in this topic and what it means for the way we structure music experiences.

Another topic I love to think about is artificial intelligence and generative music, as well as adaptive and interactive music experiences. In particular, I’m interested in how non-static music experiences can be brought to a mass market. So when I saw the following finding (emphasis mine), things instantly clicked:

In the same way as we outsource some of our cognitive load to the computer (e.g. notes and reminders, calculators etc.) perhaps some of our emotional state could also be seen as being outsourced to the machine.

For the music industry, I think explicitly mood-based listening is an interesting, emerging consumption dynamic.

Mood augmentation is the best way for non-static music to reach a mass market

James is spot-on when he says mood-based listening is an emerging consumption dynamic. Taking a wider view: the way services construct music experiences also changes the way music is made.

The playlist economy is leading to longer albums, but also to tracks optimized for lower skip rates in their first 30 seconds. Still, this is nothing compared to the change music went through in the 20th century:

The proliferation of the record as the default way to listen to music meant that music became a consumer product: something you could collect, like comic books, and something that could be manufactured in a steady flow. This reality gave music new characteristics:

  • Music became static by default: a song sounding exactly the same as all the times you’ve heard it before is a relatively new quality.
  • Music became a receiving experience: music lost its default participative quality. If you wanted to hear your favourite song, you had to be able to play it yourself, or know a friend or family member with a nice voice.
  • Music became increasingly individual: while communal experiences, like concerts, raves and festivals flourished, music also went through individualization. People listen to music from their own devices, often through their headphones.

Personalized music is the next step

I like my favourite artist for different reasons than my friend does. I connect to the music differently. I listen to it at different moments. Our experiences are already different, so why shouldn’t the music be more personalized, too?

I’ve argued before that features are more interesting to monetize than pure access to content. $10 per month for all the music in the world: and then?

The gaming industry has figured out a different model: give people access to the base game for free, then charge them to unlock certain features. Examples of music apps that do this are Björk’s Biophilia and the mixing app Pacemaker.

In the streaming landscape, TIDAL has recently given users a way to change the length and tempo of tracks. I’m surprised that it wasn’t Spotify, since they have The Echo Nest team aboard, including Paul Lamere, who built the Infinite Jukebox (among many other great music hacks).

But it’s early days. And the real challenge in creating these experiences is that listeners don’t know they’re interested in them. As quoted earlier from James Lynden:

The role of mood is normally implicit and unconscious in the participants’ listening.

The most successful apps for generative music and soundscapes so far have been apps that generate sound to help you meditate or focus.

But as we seek to augment our human experience through nootropics and technology that improves our senses, it’s clear that music as a static format no longer has to be the default.

Further reading: Moving Beyond the Static Music Experience.

5 Bots You’ll Love

Since launching its chatbot API last April, Facebook’s Messenger platform has already spawned 11,000 bots. Bots are popular because they allow brands to offer more personalized service to existing and potential customers: instead of getting people to install an app or visit a website, brands can reach them from the comfort of their preferred platform, whether that’s WhatsApp, Messenger, Twitter or something else.

Bots – basically automated scripts with varying levels of complexity – are ushering in a new wave of user experience design. Here are some of my favourite bots.

AutoTLDR – Reddit


AutoTLDR is a bot on Reddit that automatically posts summaries of news articles in comment threads, using SMMRY’s API to shorten long texts. tl;dr is internet slang for “too long, didn’t read” and is often used at the top or bottom of posts to give a one-line summary or conclusion of a longer text.
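For the curious, here’s a minimal Python sketch of that kind of integration. It assumes SMMRY’s documented query-parameter interface (SM_API_KEY, SM_URL, SM_LENGTH) and the requests library; the key and the error handling are illustrative, and this is not AutoTLDR’s actual source.

```python
# Hedged sketch: summarize an article via SMMRY, the service AutoTLDR uses.
import requests

SM_API_KEY = "YOUR_KEY"  # assumption: a key registered at smmry.com/api

def summarize(url: str, sentences: int = 5) -> str:
    # SMMRY takes its arguments as query parameters and returns JSON.
    resp = requests.get(
        "https://api.smmry.com",
        params={
            "SM_API_KEY": SM_API_KEY,
            "SM_URL": url,           # the article to summarize
            "SM_LENGTH": sentences,  # summary length in sentences
        },
        timeout=10,
    )
    data = resp.json()
    # sm_api_content holds the summary; sm_api_error signals a failure.
    if "sm_api_error" in data:
        raise RuntimeError(data.get("sm_api_message", "SMMRY error"))
    return data["sm_api_content"]
```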

The key to its success is Reddit’s digital Darwinism of upvotes and downvotes. Good summaries by AutoTLDR usually end up within the top 5 comments; if it summarizes poorly, you’re unlikely to come across its contribution.

[Embedded post: explaining the theory behind the AutoTLDR bot.]

Subreddit Simulator – Reddit

Subreddits on Reddit center around certain topics or types of content. Subreddit Simulator is a collection of bots that source material from other subreddits and, often quite randomly, create new posts based on it. Its most popular post draws on the “aww” subreddit and most likely combined two different posts to create this:

Rescued a stray cat

Check out other top posts here. Again, it works well because of human curation: people closely follow Subreddit Simulator and upvote remarkable outcomes, like the one above.
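Under the hood, Subreddit Simulator is built on Markov-chain text generation: learn which words tend to follow which, then walk the chain to produce new text. Here’s a minimal Python sketch of the idea; the tiny corpus and the parameters are invented for illustration.

```python
# Minimal Markov-chain text generator, the technique behind Subreddit
# Simulator's posts. In practice the corpus would be scraped post titles.
import random
from collections import defaultdict

def build_chain(corpus, order=2):
    # Map each tuple of `order` consecutive words to the words that follow it.
    chain = defaultdict(list)
    for text in corpus:
        words = text.split()
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, max_words=15):
    key = random.choice(list(chain))  # start from a random learned state
    out = list(key)
    while len(out) < max_words:
        followers = chain.get(tuple(out[-len(key):]))
        if not followers:
            break  # dead end: no observed continuation
        out.append(random.choice(followers))
    return " ".join(out)

titles = ["Rescued a stray cat today", "Rescued a hedgehog from the road",
          "My cat adopted a stray kitten today"]
print(generate(build_chain(titles)))
```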

wayback_exe – Twitter

Remember when the internet had an intro tune? wayback_exe takes you back to the days of dial-up and provides your Twitter feed with regular screenshots of retro websites. By now, it’s basically art.

It uses the Internet Archive’s Wayback Machine, which has saved historic snapshots of websites.
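The Wayback Machine exposes a public availability API that returns the closest archived snapshot for a URL and date. Here’s a small Python sketch of the kind of lookup a bot like wayback_exe might perform; the helper name and example URL are illustrative, not the bot’s actual code.

```python
# Look up the closest archived snapshot of a URL in the Wayback Machine.
import requests

def closest_snapshot(url, timestamp="19961101"):
    # The availability endpoint takes a URL and a YYYYMMDD timestamp
    # and returns JSON describing the nearest archived capture.
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=10,
    )
    snap = resp.json().get("archived_snapshots", {}).get("closest")
    return snap["url"] if snap and snap.get("available") else None

print(closest_snapshot("geocities.com"))  # a 90s snapshot URL, if one exists
```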


pixelsorter – Twitter

If you’re into glitch art, you’ll love pixelsorter, a bot that re-encodes images: tweet it an image and get a glitched-out version back. Sometimes it talks to other image bots like badpng, cga.graphics, BMPbug, Lowpoly Bot, or Arty Bots, with amazing algorithmic results.
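The core trick of pixel sorting is simple. Below is a hypothetical Python sketch using Pillow: sorting each row’s pixels by brightness yields the smeared bands typical of the genre. It illustrates the general technique, not pixelsorter’s actual algorithm.

```python
# Pixel sorting: reorder each row of pixels by luminance for a glitch effect.
from PIL import Image

def pixel_sort(in_path, out_path):
    img = Image.open(in_path).convert("RGB")
    width, height = img.size
    pixels = list(img.getdata())  # flat list of (r, g, b) tuples
    rows = []
    for y in range(height):
        row = pixels[y * width:(y + 1) * width]
        # Sort by perceived brightness (ITU-R 601 luma weights).
        row.sort(key=lambda p: 0.299 * p[0] + 0.587 * p[1] + 0.114 * p[2])
        rows.extend(row)
    out = Image.new("RGB", (width, height))
    out.putdata(rows)
    out.save(out_path)

pixel_sort("input.png", "sorted.png")
```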


Generative Bot – Twitter


Generative Bot is one of those bots that makes you realize algorithms can produce art that trumps 90% of all other art. It uses some quite advanced mathematics to create a new piece every 2 hours, seeding your Twitter feed with occasional computer-generated bits of inspiration.
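Generative Bot’s exact math isn’t public, so here’s a toy Python sketch of the general idea only: render a randomly parameterized trigonometric field as a grayscale image, giving a new piece on every run.

```python
# Toy generative art: a randomly parameterized trigonometric field.
import math
import random
from PIL import Image

def generate(size=512):
    a, b, c = (random.uniform(1, 8) for _ in range(3))  # random parameters
    img = Image.new("L", (size, size))
    px = img.load()
    for x in range(size):
        for y in range(size):
            u, v = x / size, y / size
            value = (math.sin(a * u * math.pi) * math.cos(b * v * math.pi)
                     + math.sin(c * u * v * math.pi))
            px[x, y] = int((value + 2) / 4 * 255)  # map [-2, 2] to [0, 255]
    return img

generate().save("piece.png")
```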

Want more inspiration? We previously wrote about DJ Hardwell’s bot.

What are your favourite bots? Ping me on Twitter.