The non-static tipping point: how culture’s going non-linear and generative

Deepfakes, infinite albums, generative NFTs – creative pioneers are rapidly pushing technology-enabled concepts into the center of web culture. Whereas just a few years ago, it was hard to get people to care about non-static media, it’s now grabbing people’s attention and their (crypto) currency.

Problem-solving

The Infinite Album gives video game streamers a way to soundtrack their streams without risk of takedowns. Its AI creates soundscapes that react to the streamer’s gameplay and even lets Twitch viewers use chat commands to influence the soundtrack (a rough sketch of that mechanic follows below). The music industry as a whole has been unable to form a global approach that makes it easy for gamers to understand what music they can play on-stream, which has left room for new entrants, some of them very tech-driven, to solve a clear problem.
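To make that mechanic concrete, here is a minimal sketch, in Python, of how viewer chat commands could steer a generative soundtrack’s parameters. The command names, parameters and mappings are all hypothetical; this is not The Infinite Album’s actual Twitch integration, just the general shape of the idea.

```python
from dataclasses import dataclass

# A minimal sketch of chat-driven generative music. Every name below
# (commands, parameters, values) is hypothetical, not The Infinite Album's API.

@dataclass
class SoundtrackState:
    tempo_bpm: int = 100      # how fast the generated music plays
    intensity: float = 0.5    # 0.0 = ambient, 1.0 = full battle mode
    mood: str = "neutral"

# Hypothetical viewer commands mapped to parameter changes.
COMMANDS = {
    "!chill":  {"tempo_bpm": 80,  "intensity": 0.2, "mood": "calm"},
    "!battle": {"tempo_bpm": 150, "intensity": 0.9, "mood": "tense"},
    "!weird":  {"mood": "uncanny"},
}

def handle_chat_message(state: SoundtrackState, message: str) -> SoundtrackState:
    """Apply a recognised viewer command to the generative engine's parameters."""
    words = message.strip().split()
    if words and words[0].lower() in COMMANDS:
        for key, value in COMMANDS[words[0].lower()].items():
            setattr(state, key, value)
    return state

state = SoundtrackState()
for msg in ["!battle", "nice shot", "!chill"]:  # simulated Twitch chat
    handle_chat_message(state, msg)
print(state)  # SoundtrackState(tempo_bpm=80, intensity=0.2, mood='calm')
```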

Another example I’ve mentioned here over the years is Endel, which provides ‘personalized soundscapes’ that help the listener focus, relax or sleep. They’ve essentially taken a common use case for music and built a product that doesn’t neatly fit within the music industry’s common formats: a mental health app with adaptive soundscapes. Their artist collaborations include techno pioneer Plastikman (Richie Hawtin) and Grimes.

World-building

One of the most ambitious projects to launch recently is Holly Herndon’s ‘Holly+’. The singer, musician and frequent AI collaborator has created a deepfake version of herself with which people can collaborate. Another way of putting it is that Holly, through her collaboration with Never Before Heard Sounds, has created an instrument that is based on herself. She will set up a Decentralised Autonomous Organisation (DAO) to govern her likeness and is creating an NFT auction house using the Zora protocol to sell approved artworks. As she describes it:

“The Holly+ model creates a virtuous cycle. I release tools to allow for the creative usage of my likeness, the best artworks and license opportunities are approved by DAO members, profit from those works will be shared amongst artists using the tools, DAO members, and a treasury to fund further development of the tools.”

Other recent examples include a new release by Agoria & Ela Minus on Bronze Player, a tool that lets artists create music that recomposes itself infinitely. In a way, this makes recorded music feel more like performed music: you can never experience it exactly the same way twice. A linear version of the song was also released (embedded below).

One NFT platform I’m keeping a close eye on is Async, which sells art NFTs in layers, allowing the creator to set rules for the manipulation of the art and the buyers to reconfigure the work. After starting with visual arts, it launched Async Music:

“This is music with the ability to change its composition. It may sound different each time you come back to listen. This is achieved by breaking down a song into separate layers called Stems. Each Stem has multiple Variants for its new owner to choose from. In this way, a single Async Music track contains many unique combinations of sounds.”
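The combinatorics behind this are simple to sketch. Assuming a hypothetical track with a handful of stems and variants (not the contents of an actual Async Music release), the number of unique versions is just the product of each stem’s variant count:

```python
from itertools import product

# Hypothetical stems and variants -- not the contents of an actual Async Music track.
stems = {
    "drums":  ["variant_a", "variant_b", "variant_c"],
    "bass":   ["variant_a", "variant_b"],
    "chords": ["variant_a", "variant_b", "variant_c"],
    "vocals": ["variant_a", "variant_b", "variant_c", "silent"],
}

# Every configuration picks one variant per stem.
combinations = list(product(*stems.values()))
print(f"{len(combinations)} unique versions of a single track")  # 3 * 2 * 3 * 4 = 72

# One possible configuration chosen by the stem owners:
print(dict(zip(stems.keys(), combinations[0])))
```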

Water & Music, run by Cherie Hu, estimates that Async grossed about $650K in revenue from music NFT sales in May and June of this year, making it the third-largest music NFT marketplace by revenue.

Status-shaping

Possibly the largest non-static art projects by revenue are NFT collectible avatars such as CryptoPunks and Bored Ape Yacht Club. These avatars are generated from variables like hair and skin colour, accessories and other types of character customization, leading to sets of 10,000 unique avatars, which are then sold as NFTs. CryptoPunks in particular are highly valued because they were minted before last year’s NFT explosion and have become a status symbol in the budding Web3, often selling for tens of thousands of dollars. There are even cases of people paying over a hundred thousand dollars, like Jay-Z did for CryptoPunk #6095.
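A rough sketch of that generative process, using made-up trait pools rather than the actual CryptoPunks or Bored Ape attributes, could look like this: each avatar is one unique combination drawn from the cartesian product of all traits.

```python
import random

# Made-up trait pools for illustration -- not the actual CryptoPunks or BAYC attributes.
TRAITS = {
    "skin":      ["pale", "tan", "brown", "zombie", "alien"],
    "hair":      ["none", "mohawk", "cap", "beanie", "wild", "hoodie"],
    "eyes":      ["regular", "3d_glasses", "eye_patch", "vr_headset"],
    "accessory": ["none", "earring", "gold_chain", "cigarette", "pipe"],
}

def total_combinations(traits):
    total = 1
    for options in traits.values():
        total *= len(options)
    return total

def decode(index, traits):
    """Turn one integer into one unique trait combination (mixed-radix decoding)."""
    avatar = {}
    for name, options in traits.items():
        index, choice = divmod(index, len(options))
        avatar[name] = options[choice]
    return avatar

random.seed(42)  # a fixed seed makes the 'drop' reproducible
pool = total_combinations(TRAITS)  # 5 * 6 * 4 * 5 = 600 with these small pools
# A real project uses far larger trait pools so that 10,000 unique avatars fit.
collection = [decode(i, TRAITS) for i in random.sample(range(pool), k=min(10_000, pool))]
print(f"{pool} possible avatars, minted {len(collection)}")
print(collection[0])
```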

A tipping point?

I believe that music’s future is non-static. Music gained linearity as a default characteristic in the age of the recording: a song sounds the same every time you hear it. That’s a very recent trait for music to have by default. Now, with powerful connected devices and a new generation of DAWs, we’re seeing this temporary reality of the recording age unravel and become optional rather than the default.

If you’re an artist, this unraveling means greater freedom in how you approach music as an art; it can be interactive, adaptive, generative, dynamic, augmentative, 3D, etc. If you’re more interested in the business side, you may find that you can take a page or two from the gaming industry’s book and make more money by charging for features rather than the content itself. Sell features, not songs.

AI-powered robot rappers: a 100-year history

The word ‘robot’ is 100 years old. Czech playwright Karel Čapek coined the term in his play Rossum’s Universal Robots, which premiered in January 1921. Back then, most people in Europe knew someone who had died in the First World War, and in that context the idea of sending robots to war instead of humans seemed quite compelling. Nowadays, armies aim to downsize in preparation for more drones and combat robots.

A different, but related, development also gained momentum around 1921: the idea of noise generated by machines as music. This idea stems from the thinking of futurist Luigi Russolo and specifically his intonarumori.

Luigi Russolo & his assistant Piatti with the intonarumori (Photo by Hulton Archive/Getty Images)

These machines are not robots, but the idea that machine-generated sounds can combine into music has had a profound impact. What I explore here is how that idea plays into our current thinking about what music is.

Robots & music in 2021

Thinking about robots in music right now often centers on AI music stars. A couple of months ago, I wrote about how robots could create a connection between the virtual and physical worlds. What does it mean, though, to have a robot, or even just an artificial intelligence, create music?

FN Meka

A self-proclaimed ‘AI-powered robot rapper’, FN Meka has more than 9 million followers on TikTok. He dropped a new track just this month.

FN Meka isn’t actually a robot: there’s a team of people behind the avatar, and while the music is AI-generated, the voice is human. Yet this is a great example of the physical and virtual worlds colliding, similar to what we’ve seen with, for example, Gorillaz. The main difference with Gorillaz is the shift in music production from human to AI.

What Russolo’s intonarumori could actually do was make very specific noises that the composer related to everyday industrial sounds such as hissing, crackling and exploding. In total there were 27 variations of the instrument. Inside each instrument, a lever controlled the pitch, fixing it or sliding it across a scale in tones and gradations thereof. That, in a way, isn’t too dissimilar to how Auto-Tune works, a technology FN Meka doesn’t shy away from.

Visual impact

During the first months of the pandemic there was a clear shift from audio to video in music consumption, which suggested that people wanted to lean in more and pay closer attention to what they listened to, with an added visual element. The rise of FN Meka on TikTok fits into this narrative: he’s an engaging visual presence who entertains by drawing viewers into his world.

(Embedded TikTok from @fnmeka: “Which song 🎡 is your Favorite ⁉️ #iphone13 #airpod #iphone” – ♬ original sound – FNMeka)

The importance of this visual aspect to musical culture is what prompted researchers at Georgia Tech in the US to create Shimon, a musical robot that looks, well, kind of cute.

The visual cues – like the ‘head’ bopping to the beat – are important for both audiences and fellow musicians to help connect to Shimon. Moreover, the researchers who developed the robot not only drew on artificial intelligence – more specifically deep learning – but also on creativity algorithms. This means that Shimon has the power to surprise, to sound creative in his own way.

Creative robots?

The connection between creativity and machines is significant, because it allows for a future that exists beyond the boundaries of what we know right now. When Shimon surprises his fellow musicians by shifting rhythm or pitch, he softly pushes a boundary that often seems very far away.

In his blog series on the AI Revolution, Tim Urban explained how developments in AI research are propelling us towards artificial superintelligence – the moment computers become more intelligent than humans. What Urban doesn’t discuss is creativity, a trait some argue will never appear in a machine. And yet, this all depends largely on how we define creativity. Arthur Miller, for example, asks not just whether machines can make art in the future but also whether we will be able to appreciate it. Perhaps we will have to learn to love it, much as some of us enjoy the sound of Auto-Tune and others do not.

While the threat of a superintelligent being is not something to dismiss, right now any AI-powered music still leans heavily on human input. To create music with an AI, so to speak, is to create a set of parameters – a training set – which the machine uses to process sound. For her album Proto, Holly Herndon worked with her own AI, which she called Spawn. From the data she fed into the machine, Herndon was able to draw out a voice. Incorporating that voice into the music in a way that felt musical, or creative, meant splicing and editing the vocal output.
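As a toy illustration of that general principle – training data in, machine output out, human editing afterwards – here is a minimal Markov chain over a hand-made note sequence. It is emphatically not how Spawn works; it only shows that the model can recombine what the training set contains, and that the human still curates the result.

```python
import random
from collections import defaultdict

# Toy illustration of the general principle only: a training set shapes what the
# machine can output, and a human still splices and edits the result afterwards.
# This is not how Herndon's Spawn works.

training_set = ["C", "E", "G", "E", "C", "D", "F", "A", "F", "D", "C", "E", "G", "C"]

# Learn which note tends to follow which (a first-order Markov chain).
transitions = defaultdict(list)
for current, following in zip(training_set, training_set[1:]):
    transitions[current].append(following)

def generate(length=16, seed_note="C"):
    """Generate a sequence that can only recombine what the training set contains."""
    note, output = seed_note, [seed_note]
    for _ in range(length - 1):
        note = random.choice(transitions[note])
        output.append(note)
    return output

random.seed(7)
raw_output = generate()
edited = raw_output[2:10]  # the human 'splicing and editing': keep only the musical bit
print("raw:   ", " ".join(raw_output))
print("edited:", " ".join(edited))
```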

The final leap

What Herndon does is similar to what Russolo did with his intonarumori. He had 27 instruments to recreate the sounds he heard and aimed to combine those into music that fit within a certain tradition of composition. She built not a physical machine, but a synthetic singer whose voice she created from data input and subsequently rearranged into her musical vision. FN Meka plays around with the idea of an AI-powered robot but leans more heavily into the visual culture of a virtual music star. Where the next jump in history sits, then, is closer to what Shimon stands for: a robot capable of listening and, by virtue of his data, his knowledge, of jumping just that bit further than the training set supplied to him. That already leads to surprise, which in itself is a prerequisite for creativity.