The non-static tipping point: how culture’s going non-linear and generative

Deepfakes, infinite albums, generative NFTs – creative pioneers are rapidly pushing technology-enabled concepts into the center of web culture. Whereas just a few years ago, it was hard to get people to care about non-static media, it’s now grabbing people’s attention and their (crypto) currency.

Problem-solving

The Infinite Album gives video game streamers a way to soundtrack their streams without the risk of takedowns. Its AI creates soundscapes that react to your gameplay and even lets Twitch viewers influence the soundtrack through chat commands. The music industry as a whole has been unable to form a global approach that makes it easy for gamers to understand what music they can play on-stream. That gap has left room for new entrants, some very tech-driven, to solve a clear problem.
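To make the mechanic concrete, here's a minimal, hypothetical sketch of how viewer commands could steer a generative engine. The command names and the MusicEngine class are invented for illustration; this is not Infinite Album's actual API.

```python
# Hypothetical sketch: scan chat messages for known commands, which adjust
# parameters of a generative music engine. Names are invented for
# illustration; this is not Infinite Album's actual implementation.

class MusicEngine:
    """Stand-in for a generative music engine with adjustable parameters."""

    def __init__(self):
        self.params = {"mood": "neutral", "intensity": 0.5, "tempo": 100}

    def set(self, key, value):
        self.params[key] = value
        print(f"engine update: {key} -> {value}")


COMMANDS = {
    "!chill":  lambda e: (e.set("mood", "calm"), e.set("intensity", 0.2)),
    "!battle": lambda e: (e.set("mood", "tense"), e.set("intensity", 0.9)),
    "!faster": lambda e: e.set("tempo", min(180, e.params["tempo"] + 10)),
}


def handle_chat_message(engine, message):
    """Apply the first recognized command in a chat message, if any."""
    for word in message.lower().split():
        if word in COMMANDS:
            COMMANDS[word](engine)
            return True
    return False


engine = MusicEngine()
handle_chat_message(engine, "!battle let's go")  # a viewer steers the music
```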

Another example I've mentioned here over the years is Endel, which provides 'personalized soundscapes' that help the listener focus, relax or sleep. They've essentially taken a common use case for music and built a product that doesn't neatly fit the music industry's common formats: a mental health app with adaptive soundscapes. Their artist collaborations include techno pioneer Plastikman (Richie Hawtin) and Grimes.

World-building

One of the most ambitious projects to launch recently is Holly Herndon's 'Holly+'. The singer, musician and frequent AI collaborator has created a deepfake version of herself which people can collaborate with. Another way of putting it: Holly, through her collaboration with Never Before Heard Sounds, has created an instrument based on herself. She will set up a Decentralised Autonomous Organisation (DAO) to govern her likeness and is creating an NFT auction house using the Zora protocol to sell approved artworks. She explains:

“The Holly+ model creates a virtuous cycle. I release tools to allow for the creative usage of my likeness, the best artworks and license opportunities are approved by DAO members, profit from those works will be shared amongst artists using the tools, DAO members, and a treasury to fund further development of the tools.”

Other recent examples include a new release by Agoria & Ela Minus on Bronze Player, a tool that lets artists create music that recomposes itself infinitely. In a way, this makes recorded music feel more like performed music: you'll never experience it exactly the same way twice. A linear version of the song was also released.

One NFT platform I'm keeping a close eye on is Async, which sells art NFTs in layers, allowing the creator to set rules for how the art can be manipulated and buyers to reconfigure the work. After starting with visual arts, it launched Async Music:

“This is music with the ability to change its composition. It may sound different each time you come back to listen. This is achieved by breaking down a song into separate layers called Stems. Each Stem has multiple Variants for its new owner to choose from. In this way, a single Async Music track contains many unique combinations of sounds.”
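As a rough model of that structure (stem and variant names are invented, not Async's actual data), a track maps stems to variant lists, and the number of unique renditions is simply the product of the variant counts:

```python
# Toy model of a layered Async-style track. Stems and variants are invented;
# the point is the combinatorics: each choice of one variant per stem is a
# distinct rendition of the track.

track = {
    "drums":  ["variant_a", "variant_b", "variant_c"],
    "bass":   ["variant_a", "variant_b"],
    "vocals": ["variant_a", "variant_b", "variant_c", "variant_d"],
}

renditions = 1
for variants in track.values():
    renditions *= len(variants)
print(renditions)  # 3 * 2 * 4 = 24 unique combinations of sounds

# One concrete rendition: the variant each stem owner currently has selected.
selection = {stem: variants[0] for stem, variants in track.items()}
print(selection)
```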

Water & Music, run by Cherie Hu, estimates that Async grossed about $650K in revenue from music NFT sales in May and June of this year, placing it third among music NFT marketplaces by revenue.

Status-shaping

Possibly the largest non-static art projects by revenue are NFT collectible avatars such as CryptoPunks and Bored Ape Yacht Club. These avatars are generated from variables like hair and skin colour, accessories and other character customizations, leading to sets of 10,000 unique avatars, which are then sold as NFTs. CryptoPunks in particular are highly valued: minted before last year's NFT explosion, they have become a status symbol in the budding Web3 and often sell for tens of thousands of dollars. There are even cases of people paying over a hundred thousand dollars, as Jay-Z did for CryptoPunk #6095.
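For intuition, here's a hedged sketch of how such trait-based collections are typically generated: enumerate the combination space, shuffle, and keep the first 10,000. The trait pools below are invented, and real projects also weight trait rarity, which is omitted here.

```python
# Sketch of trait-based avatar generation in the CryptoPunks / Bored Ape
# mold. Trait pools are invented for illustration.
import itertools
import random

TRAITS = {
    "skin": ["pale", "tan", "brown", "golden", "blue", "zombie"],
    "hair": ["none", "mohawk", "cap", "beanie", "wild", "slicked", "hoodie", "tiara"],
    "eyes": ["plain", "shades", "3d_glasses", "laser", "patch", "sleepy", "wink"],
    "mouth": ["neutral", "smile", "frown", "pipe", "cigar", "grin", "open", "gold_teeth"],
    "accessory": ["none", "earring", "chain", "scarf", "mask", "medallion"],
}

def generate_collection(size=10_000, seed=42):
    """Pick `size` unique trait combinations from the full combination space."""
    combos = list(itertools.product(*TRAITS.values()))   # 16,128 possibilities
    assert len(combos) >= size, "not enough traits for a unique collection"
    random.Random(seed).shuffle(combos)                  # uniqueness guaranteed
    return [dict(zip(TRAITS, combo)) for combo in combos[:size]]

collection = generate_collection()
print(len(collection), collection[0])
```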

A tipping point?

I believe that music's future is non-static. Music gained linearity as a default characteristic in the age of the recording: a song sounds the same every time you hear it. That's a very recent trait for music to have by default. Now, with powerful connected devices and a new generation of DAWs, we're seeing this temporary reality of the recording age unravel and become optional rather than a default.

If you’re an artist, this unraveling means greater freedom in how you approach music as an art; it can be interactive, adaptive, generative, dynamic, augmentative, 3D, etc. If you’re more interested in the business side, you may find that you can take a page or two from the gaming industry’s book and make more money by charging for features rather than the content itself. Sell features, not songs.


Music's non-static future as seen through music-making app Endlesss

For those unfamiliar with Endlesss: it's a collaborative music-making app founded by musician Tim Exile that has been available as a (free) mobile app for a while already. In December, Endlesss launched its desktop app, which I've now given a go. It provided a glimpse of how music is reclaiming a quality it lost in the age of the recording: participation.

Why Endlesss is different

Instead of writing songs, the app's users are encouraged to make 'jams', which are essentially iterative loops of up to 8 bars. Each iteration is called a riff. When you add an instrument or effect to a riff, it creates a new riff inside your jam, which then plays as a loop. The audio keeps playing, the interface keeps staring at you, encouraging you to keep jamming.
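Here's a toy model of that structure as I understand it (my reading, not Endlesss' actual data model): a jam is an append-only history, and each added layer yields a new riff instead of overwriting the last one.

```python
# Toy model of the jam/riff structure described above. Adding an instrument
# produces a new riff, and the jam keeps every iteration so you can step back.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Riff:
    layers: tuple      # e.g. ("drums", "bass")
    bars: int = 8      # riffs are loops of up to 8 bars

@dataclass
class Jam:
    history: list = field(default_factory=list)

    def add_layer(self, instrument):
        base = self.history[-1].layers if self.history else ()
        riff = Riff(layers=base + (instrument,))
        self.history.append(riff)   # earlier riffs stay addressable
        return riff

jam = Jam()
jam.add_layer("drums")
jam.add_layer("bass")
print([riff.layers for riff in jam.history])
# [('drums',), ('drums', 'bass')] -> every iteration remains in the jam
```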

Endlesss' desktop interface. Instrument selection at the left. At the top right, you can see the visual representation of the riffs in your jam, allowing you to go back a few steps.

The app is also social: users can participate in jams with others or just listen in and explore riffs. There are prominent public jams everyone can participate in, as well as invite-only ones. Some jams lead to users sharing interesting moments (riffs) with the community, which can then be remixed and used to kick off another jam. Pretty cool, considering some of the app's users are popular producers themselves (Imogen Heap and Ninja Tune co-founder Matt Black joined Endlesss founder Tim Exile for livestreamed jams last year).

Jams have chat rooms for participants (or observers) to share thoughts, tips and expertise, or to coordinate the direction of the jam.

How Endlesss redefines music

The social dimension is, culturally speaking, Endlesss' most important aspect, because it changes the default meaning of music. For people who are not creators, music is something you listen to. It's the same every time you hear it and it doesn't change. If a remix or a cover version is made, it's considered 'less real' than the 'original version' (which in some cases may just be the most famous version, not the first recording).

These are new qualities of music – at least as defaults – introduced by the age of recorded music and mass consumerism. Music has become less participatory in that you don't need anyone to play or sing a song if you want to hear it. The fact that these qualities are new also means they're not inherent to music: we can use the power of our devices (now easily amplified by connected AI) to experience music in new ways.

In the case of Endlesss, that means music is not a song, but an iterative jam. It’s something that happens, that invites participation, and that changes over time (though a snapshot of each iteration remains on the platform as a riff).

The age of non-static

This trend extends way beyond Endlesss and goes back decades, to 'affordable' drum machines and samplers sparking the foundations of today's most popular genres: house and hip-hop. Then came the rapid interchange of ideas and remixes enabled by SoundCloud, whose cultural influence was enshrined in genre names such as cloud rap. Outside of music, internet meme culture evolved through remixes and iteration, providing a non-linear visual culture detached from the channels of mass media and behaving according to the network reality of the internet.

"They don't know where this song was originally sampled from"
A recent example of a highly participatory meme format called They Don’t Know (and originally I Wish I Was At Home).

For the connection back to music, you only have to look at today’s hottest social media company, TikTok, which is completely based on remix culture. I’m not saying Endlesss is the TikTok of music production software; I’m saying that there’s a generation of people for whom the primary point of interaction with music is through a new set of interfaces that make music more than just its static, recorded self. It’s participatory and made to be engaging, like live music… but scalable.

Mood augmentation and non-static music

Why the next big innovation in music will change music itself — and how our moods are in the driver’s seat for that development.

Over the last half year, I've had the pleasure of publishing two guest contributions in MUSIC x TECH x FUTURE about our changing relationship with music.

In the first, Thiago R. Pinto pointed out how we're now using music to augment our experiences and how we've developed a utilitarian relationship with music.

Then last week, James Lynden shared his research into how Spotify affects mood and found that people are mood-aware when they make choices on the service (emphasis mine):

Overall, mood is a vital aspect of participants’ behaviour on Spotify, and it seems that participants listen to music through the platform to manage or at least react to their moods. Yet the role of mood is normally implicit and unconscious in the participants’ listening.

Having developed music streaming products myself (like Fonoteka, when I was at Zvooq), I'm obviously very interested in this topic and what it means for the way we structure music experiences.

Another topic I love to think about is artificial intelligence, generative music, and adaptive and interactive music experiences. I'm particularly interested in how non-static music experiences can be brought to a mass market. So when I saw the following finding (emphasis mine), things instantly clicked:

In the same way as we outsource some of our cognitive load to the computer (e.g. notes and reminders, calculators etc.) perhaps some of our emotional state could also be seen as being outsourced to the machine.

For the music industry, I think explicitly mood-based listening is an interesting, emerging consumption dynamic.

Mood augmentation is the best way for non-static music to reach a mass market

James is spot-on when he says mood-based listening is an emerging consumption dynamic. Taking a wider view: the way services construct music experiences also changes the way music is made.

The playlist economy is leading to longer albums, but also to tracks optimized for lower skip rates in the first 30 seconds. Yet this is nothing compared to the change music went through in the 20th century:

The proliferation of the record as the default way to listen to music meant that music became a consumer product: something you could collect, like comic books, and something that could be manufactured at a steady rate. This reality gave music new characteristics:

  • Music became static by default: a song sounding exactly the same every time you hear it is a relatively new quality.
  • Music became a receiving experience: music lost its default participative quality. If you wanted to hear your favourite song, you'd better be able to play it, or a friend or family member had better have a nice voice.
  • Music became increasingly individual: while communal experiences like concerts, raves and festivals flourished, music also went through individualization. People listen to music from their own devices, often through headphones.

Personalized music is the next step

I like my favourite artist for different reasons than my friend does. I connect to their music differently. I listen to it at different moments. Our experiences are already different, so why shouldn't the music be more personalized?

I’ve argued before that features are more interesting to monetize than pure access to content. $10 per month for all the music in the world: and then?

The gaming industry has figured out a different model: give people access to the base game for free, then charge them to unlock certain features. Examples of music apps that do this are Björk's Biophilia and the mixing app Pacemaker.

In the streaming landscape, TIDAL has recently given users a way to change the length and tempo of tracks. I'm surprised it wasn't Spotify, since they have The Echo Nest team aboard, including Paul Lamere, who built the Infinite Jukebox (among many other great music hacks).
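For intuition, here's a toy sketch of the idea the Infinite Jukebox popularized: analyze a track into beats, find beats that sound alike, and occasionally jump between them so playback never has to end. The 'feature vectors' below are random placeholders, not real audio analysis.

```python
# Toy sketch of Infinite Jukebox-style endless playback: beats with similar
# feature vectors (stand-ins for timbre/pitch/loudness analysis) become jump
# targets, and playback occasionally leaps to one instead of continuing.
import random

def similar(a, b, threshold):
    """Crude similarity test: mean absolute difference of feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b)) / len(a) < threshold

def build_jump_table(beats, threshold=0.25):
    """For each beat, collect other beats it could seamlessly jump to."""
    return {
        i: [j for j, other in enumerate(beats)
            if abs(i - j) > 4 and similar(beat, other, threshold)]
        for i, beat in enumerate(beats)
    }

def endless_walk(beats, jumps, steps=32, jump_chance=0.2, seed=1):
    """Walk the track (here capped at `steps` beats), sometimes jumping."""
    rng, i, path = random.Random(seed), 0, []
    for _ in range(steps):
        path.append(i)
        if jumps[i] and rng.random() < jump_chance:
            i = rng.choice(jumps[i])      # leap to a similar-sounding beat
        else:
            i = (i + 1) % len(beats)      # otherwise play the next beat
    return path

rng = random.Random(0)
beats = [[rng.random() for _ in range(4)] for _ in range(64)]
print(endless_walk(beats, build_jump_table(beats)))
```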

But it’s early days. And the real challenge in creating these experiences is that listeners don’t know they’re interested in them. As quoted earlier from James Lynden:

The role of mood is normally implicit and unconscious in the participants’ listening.

The most successful apps for generative music and soundscapes so far have been apps that generate sound to help you meditate or focus.

But as we seek to augment our human experience through nootropics and technology that improves our senses, it's clear that music as a static format no longer has to be the default.

Further reading: Moving Beyond the Static Music Experience.