What is the next record? Moving beyond the recording industry

What will the next format be to usher in a new music industry, like the record did in the 20th century?

The 20th century saw the rise of consumerist culture as a response to mass production causing supply to outgrow consumer demand. An example of this phenomenon is 20th century fashion, which became highly cyclical (and wasteful), marketing new clothes for every season. After World War II, it became common to use clothing to express oneself through styles and fashions, which often went hand-in-hand with music subcultures: just think of hippies, skinheads and punks, or the scenes around hiphop, funk, and disco.

“Our enormously productive economy demands that we make consumption our way of life, that we convert the buying and use of goods into rituals, that we seek our spiritual satisfaction and our ego satisfaction in consumption. We need things consumed, burned up, worn out, replaced and discarded at an ever-increasing rate.”
Victor Lebow (Journal of Retailing, Spring 1955)

Consumerism helped turn the recording industry into the most powerful part of the music business ecosystem, something which had previously been dominated by publishers. It changed music. The record player moved into the living room, then every room of the house, and the walkman (now smartphone) put music into every pocket. Music gained and lost qualities along the way.

Previously, it had been common for middle class families to have a piano in the home. Music was a social activity; music was alive. If you wanted to hear your favourite song, it would sound slightly different every time. With the recording, music became static and sounded the same way every time. And the shared songs of our culture were displaced by corporate-controlled pop music. People stopped playing the piano, and creators and ‘consumers’ became more clearly distinguished culturally.

With streaming, we are reaching the final stage of this development. Have a look at the above Victor Lebow quote and tell me streaming does not contribute to music being worn out, replaced and discarded at an ever-increasing rate.

The rules of mass production don’t apply to music anymore, since it’s no longer about pressing recordings: anything can be copied & distributed infinitely on the web. The democratisation of music production has turned many ‘consumers’ into creators again. Perhaps this started with drum machines, which helped kick off two of today’s fastest growing genres in the 70s and 80s: hiphop and house music. Today, this democratisation has turned our smartphones into music studios, with producers of worldwide hits making songs on their iPhones.

We see more people producing music, our Soundcloud feeds are constantly updated, Spotify‘s algorithms send new music to us through daily mixes, Discover Weekly, Release Radar, and Fresh Finds, and we now have the global phenomenon of New Music Fridays. With this massive amount of new music, we are simply not connecting to music the way we did when music was scarce. We move on faster. As a result, music services, music providers essentially, place a big emphasis on music discovery. We shift from the age of mass media, and mass production, to something more complex: many-to-many, decentralised (music) production on a massive scale.

Has consumerism broken music culture? I don’t think so. As a matter of fact, consumerism is also what producers of music creation software and hardware depend on, which contributes to the democratisation of music and a return to the kind of musical participation we had in the days of the piano as the default music playback device.

If streaming is the final stage of the age of the recording, then what’s next?

Embedded deep in the cultures of hiphop and house music, we can see which cultural values matter to the age of democratised music creation. Both genres heavily sampled disco and funk early in their lifecycles. One of the most famous samples in hiphop and electronic music culture is the Amen Break. With the advent of the sampler, the drum break from The Winstons’ ‘Amen, Brother’ became widespread and instrumental to the birth and development of subgenres of electronic music in the 90s.

Not so long ago, ‘remix culture’ was still a notion one could discuss in abstract terms, for instance in the open-source documentary RiP!: A Remix Manifesto which discussed the topic at length. Things have changed fast however, turning the formerly abstract into a daily reality for many.

Since the documentary’s release in 2008, social networks have boomed. Back then, only 24% of the US population was active on social media, but now that’s ~80%. With the increasing socialisation of the web, as well as it being easier to manipulate images, we saw an explosion of internet memes, typically in the form of image macros which can be adjusted to fit new contexts or messages.

The same is happening to music through ‘Soundcloud culture’. Genres are born fast through remix, and people iterate on new ideas rapidly. A recent example of such a genre is moombahton which is now one of the driving sounds behind today’s pop music.

Snapchat filters and apps like Musically let users play around with music and place themselves in the context of a song. Teens nowadays are not discovering music through some big-budget music video broadcast to them on MTV; they are discovering it by seeing a friend dance to it on Musically.

Music is becoming interactive, and adaptable to context.

Matching consumer trends and expectations with technology

Perhaps music is one of the first fields in which consumerist culture has hit a dead end, making it necessary for it to evolve into something beyond itself. People increasingly expect interactivity, since the music you listen to is no longer enough, on its own, to express identity.

Music production is getting easier. Combined with internet meme culture, it makes sense for people to use music for jokes, or to make connections through pop culture references via sampling. Vaporwave is a great example, as are the various internet rave memes.

Instead of subcultures uniting behind bands and icons, their members can now participate in setting the sound of their genre, creating a more customised sound that is more personally relevant to listener and creator alike.

Artificial intelligence will make it even easier to quickly create music and remixes. Augmented reality, heavily emphasized in Apple’s latest product release, is basically remix as a medium. When AI, augmented reality, and the internet of things converge, our changing media culture will speed up to form new types of contexts for music.

That’s where the future of music lies. Not in the static recording, but in the adaptive. The recording industry that rose from the record looked nothing like the publishing industry. It latched on to the trend of consumerism and created a music industry of a scale never seen before. Now that we’ve reached peak-consumerism, and are at the final phase of the cycle for the static recording, there’s room for something new and adaptive. And like with the recording business before, the music business that will rise from adaptive media will look nothing like the current music industry.

Treat Twitter like a visual medium & sync your Instagram posts to it

Here’s a little hack I use to share my Instagram photos to Twitter automatically.

Many years ago, Instagram decided to disable its Twitter cards integration, meaning photos posted to Instagram and then shared on Twitter no longer showed up as an image, but just as descriptive text plus a link. It’s a common strategy for social startups to first leverage other platforms by making highly shareable content, and then slowly make content harder to share so that people spend more time on the platform itself (where the platform can actually monetize them through ads).

For years now, Twitter has steadily been growing into a visual service, instead of a service of status updates and link sharing, and tweets that include images get higher engagement. Yet many still treat it as the service it once was.

Sharing to Twitter from Instagram with the app’s native functionality is near-pointless. It leads to very low engagement, and you’re typically better off manually making a photo post on Twitter. But why do the same thing twice if you can easily configure a solution where all you have to do is post to Instagram?

Step 1: register with IFTTT

IFTTT is a service that lets you connect different services and automate behaviours between them. The name of the service is an abbreviation of “If This, Then That”, meaning that if one thing occurs in one service, something else is triggered elsewhere.

In our case, that thing that occurs is you posting a photo to your Instagram account. What’s triggered elsewhere is that your Twitter account will post the Instagram photo as a native Twitter photo post with a link to the Instagram post.
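
The pattern is easy to picture in code. Here’s a minimal, purely illustrative sketch of that trigger-to-action step in Python (this is not IFTTT’s actual implementation, and all field names are made up):

```python
# Illustrative trigger -> action pattern, IFTTT-style (hypothetical field names).
# Trigger: a new photo appears on Instagram.
# Action: build a native Twitter photo post that links back to the original.

def on_new_instagram_photo(post):
    """Turn an Instagram post into a tweet payload."""
    caption = post["caption"]
    # The link counts against the character limit (280 here), so leave room for it.
    max_caption = 280 - len(post["url"]) - 1
    if len(caption) > max_caption:
        caption = caption[: max_caption - 1] + "…"
    return {
        "status": f"{caption} {post['url']}",  # tweet text + link to the Instagram post
        "media_url": post["source_url"],       # direct image link Twitter embeds natively
    }

tweet = on_new_instagram_photo({
    "caption": "Sunset over the canal",
    "url": "https://instagr.am/p/abc123",
    "source_url": "https://scontent.example.com/abc123.jpg",
})
print(tweet["status"])
```

A real applet does the equivalent of this behind the scenes; all you configure is which trigger feeds which action.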

Step 2: create a new applet on IFTTT

When you create a new applet, you’ll see the formula structure the service is named after.

Click on +this and select Instagram. Connect your account, and then choose a trigger. If you only want to share specific posts to Twitter, you can do so through the use of a hashtag that you only use on specific posts. Since I only post every couple of days or less, I’m selecting “Any new photo by you” since I don’t see a need to limit what I’m sharing to Twitter.

In the next step, +that, you select Twitter, connect to the service, and then pick Post a tweet with image. You can customize the tweet text in case you want to add something to your tweets. Keep in mind that the caption you use on Instagram will be abbreviated to make room for the other text. You will see this:

Click Add ingredient and select Url. This way, each time you post a photo from Instagram to Twitter, the tweet links back to your original Instagram post, which may help people place comments, or convert your Twitter followers into Instagram followers.

The next field, Image URL, should read SourceUrl. SourceUrl is the direct link to the image on Instagram, and Twitter needs this link in order to repost the image. Changing this will break the applet.
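
Putting the two fields together, the finished action configuration looks roughly like this (ingredient names as IFTTT displays them; the exact tweet-text template is up to you):

```
Tweet text:  {{Caption}} {{Url}}
Image URL:   {{SourceUrl}}
```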

Step 3: finish your applet

Think of a nice, easy-to-understand title for your applet and hit the Finish button. You can choose to get notifications each time your applet runs, which means you get notified each time a photo is posted from Instagram to Twitter.

Step 4: see if it works

When you go to My Applets, you should see your applet. Here’s mine on the left:

When you click on it, it will open a bigger version. Click on the cogwheel and you get a screen to configure the applet. I’ve cut up the screenshot, but if you’ve followed all the steps, you should see something like this:

Take a photo, post it on Instagram, and see if it works. (It may take a while for the tweet to appear on your account.)

All done!

Happy posting.

For some examples, I’ve previously set this up for my friends at Quibus and Knarsetand, and I’ve also got it set up for my own Twitter account.

What music startup founders often get wrong

Doing a consumer facing music startup is hard. Especially if you don’t understand what gives music value.

One of the hardest aspects of building music startups is the fact that you’re dealing with a two-sided marketplace scenario. This means you have to build up one side of your marketplace in order to attract the other. It requires creativity, or a lot of funding, to build up the music side of your marketplace in order to attract consumers.

This two-sided marketplace makes decision making more challenging: when to focus on what? How do you convince artists to use yet another platform, before it can really show its value through a well-populated marketplace?

But that’s not the number one thing people get wrong.

The number 1 thing music startup founders get wrong is overvaluing their content

This is the most important lesson I’ve learned while working on 3 different music streaming startups and a bunch of other non-streaming music startups. Music in itself has little value to a user (bear with me). Your value proposition needs to be better than “come here, there’s music”, and often music startups don’t have anything better than that.

People don’t care about the music. They don’t have a problem listening to music. And if they do, they’re likely not aware of it.

Ironically, when doing consumer-facing music startups, the music is an extra. It’s assumed to be there. Not having good music on your service will kill you, but having it does not distinguish you. It’s the same with restaurants: we don’t necessarily visit a restaurant because it has the best food, but because it’s around the corner, it serves something we feel like, the staff is nice, etc. Music, on a music service, is like the basics we expect of any restaurant: food, drinks, a place to sit, and a toilet. Not having music, like not having toilets, will kill you, but it’s not the reason people visit you.

This is why so many music discovery apps fail, why so many social jukebox or recommendation apps fail: people don’t need more content. Music’s availability is not where the problem is, the context is where the problem is.

Building music startups is about the functionality you add. That’s what people pay for, that’s how people stick to your platform. Not the ideals of better-paid artists, not ‘high quality streaming’ – these are basic expectations by now. People need to find a very simple answer to the question: what can they do with your service that they can’t do elsewhere?

Then the next question is whether it’s distinctive enough. I think that’s why high quality streaming startups tend to remain marginal: lossless streaming on its own is not enough to convince large consumer segments. It has to be about behaviour, about function. By now, lossless streaming isn’t hard to find, so people look for the checkbox and then look at what else the platform has to offer.

At the peak of its popularity, Crazy Frog as a track on iTunes was $1. As a ringtone, it was $3. The functionality is what made it valuable. (hat tip to Ed Peto for bringing this up)

I also think 360-degree concert videos are not distinctive enough from other types of video. As a matter of fact, I think the inconvenience of them outweighs the value when compared to other types of concert videos.

Let’s widen the perspective.

The value of music is elusive

A single song can mean the world to someone. It can help sell millions of products, it can inspire revolutions.

But in an easily accessible ocean of millions of songs, its value to the individual consumer is close to zero. This is why nobody cares about your free download anymore.

So how do you get the value out?

You use the music to create the environment in which you shape the thing people are willing to pay for. Going back to the restaurant metaphor: music is your walls, your tables, your staff, your bathrooms, your building, your ambiance. People pay for that, but indirectly: by paying for the food you serve them in that context.

Just to be really clear: I think music has immense value and I dedicate most of my waking hours to it. When I talk about ‘value’ in the above piece, I talk about it from the consumer perspective, from the marketing perspective, and as a USP for a product. I am not saying that people are not willing to pay for music. Millions already are, every month, through streaming subscriptions, but also digital and physical sales. And that’s where the problem begins for music startup founders: if people are already paying for music, what more can you sell them?

The short answer: sell functionality that augments experience and behaviour.

The next 3 interfaces for music’s near future

Our changing media reality means everyone in music will have to come to grips with three important new trends.

Understanding the music business means understanding how people access, discover, and continuously listen to music. This used to happen through the record player, cassette player, radio, and CD player, and now increasingly happens on our computers and smartphones. First by playing downloads in media players like Winamp, Musicmatch Jukebox, or iTunes, but now mostly via streaming services like Spotify and Apple Music, but also YouTube.

Whenever the interface for music changes, the rules of the game change. New challenges emerge, new players get to access the space, and those who best leverage the new media reality gain a significant lead over competing services or companies, like Spinnin Records‘ early YouTube success.

What is a media reality?

I was recently talking with Gigi Johnson, the Executive Director of the UCLA Center for Music Innovation, for their podcast, and as we were discussing innovation, I wanted to point out two different types of innovation. There is technological innovation, like invention, but you don’t have to be a scientist or an inventor to be innovative.

When the aforementioned categories of innovations get rolled out, they create new realities. Peer-to-peer technology helped Spotify with the distribution of music early on (one of their lead engineers is Ludvig Strigeus, creator of the BitTorrent client µTorrent), and for this to work, Spotify needed a media reality in which computers were linked to each other in networks with decent bandwidth (i.e. the internet).

So that’s the second type of innovation: leveraging a reality created by the proliferation of a certain technology. Studios didn’t have to invent the television in order to dominate the medium. Facebook didn’t have to invent the world wide web.

A media reality is any reality in which innovation causes a shift to a new type of media. Our media reality is increasingly shifting towards smart assistants like Siri, an ‘internet of things’ (think smart home), and we’re creating, watching, and interacting through more high quality video than ever before.

Any new media reality brings with it new interfaces through which people interact with knowledge, their environment, friends, entertainment, and whatever else might be presented through these interfaces. So let’s look at the new interfaces everyone in music will have to deal with in the coming years.

Chatbots are the new apps

People don’t download as many apps as they used to and it’s getting harder to get people to install an app. According to data by comScore, most smartphone users now download fewer than 1 app per month.

So, in dealing with this new media reality, you go to where the audience is. Apparently that’s no longer in app stores, but on social networks and messaging apps. Some of the latter, most prominently Facebook Messenger, allow people to build chatbots, which are basically apps inside the messenger.

Companies like Transferwise, CNN, Adidas, Nike, and many airlines already have their own bots running on Messenger. In music, well-known examples of artist chatbots are those by Katy Perry and DJ Hardwell. Record Bird, a company specialized in surfacing new releases by artists you like, launched its own bot on Messenger in 2016.

The challenge with chatbots is that designing for a conversational interface is quite different from designing visual user interfaces. Sometimes people will not understand what’s going on and start requesting things from your bot that you did not plan for. You need to design for these moments, since people cannot see the confines of the interface.
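
The design concern is easy to illustrate. Below is a minimal, hypothetical sketch (not the API of Messenger or any real bot platform) of why every conversational interface needs a graceful fallback:

```python
# A toy intent router for a music chatbot. Users can't see the interface's
# confines, so any unrecognized request should get a helpful fallback that
# restates what the bot CAN do, instead of a dead end.

INTENTS = {
    "new releases": "Here are this week's new releases from artists you follow.",
    "tour dates": "The next show is on Friday. Want a reminder?",
}

def handle_message(text):
    text = text.lower()
    for keyword, reply in INTENTS.items():
        if keyword in text:
            return reply
    # Fallback for anything we didn't anticipate.
    return "Sorry, I didn't get that. You can ask me about: " + ", ".join(INTENTS)

print(handle_message("any new releases this week?"))
print(handle_message("what's the meaning of life?"))
```

Real bots replace the keyword matching with machine-learned intent classification, but the fallback principle stays the same.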

Chatbots are set to improve a lot over time, as developments in machine learning and artificial intelligence will help the systems behind the interfaces to interpret what users may mean and come up with better answers.

VUIs: Alexa, play me music from… uhmm….

I’ve been living with an Amazon Echo for over a month, and together with my Philips Hue lamps it has embedded itself into my life to the extent that I nearly asked Alexa, Amazon‘s voice assistant, to turn off the lights in a hotel room last weekend.

It’s been a pleasure to trade frequent trips to touch-based user interfaces for voice user interfaces (VUIs). I thought I’d feel awkward, but it’s great to quickly ask for weather updates, planned activities, or the time, change the music or the volume, turn the lights on or off or dim them, and set alarms, all without having to grab my phone.

I also thought it would be awkward having friends over and interacting with it, but it turns into a type of play, with friends trying out all kinds of requests I had never even thought of, and finding out about new features I wasn’t aware of.

And there’s the challenge for artists and businesses.

As a user, there is no home screen. There is nothing to guide you. There is only what you remember, what’s top of mind. Which is why VUIs are sometimes referred to as ‘zero UI’.

I have hundreds of playlists on Spotify, but through Alexa I’ve only listened to around a dozen different playlists. When I feel like music that may or may not be contained inside one of my playlists, it’s easier to mentally navigate to an artist that plays music like that, than to remember the playlist. So you request the artist instead.

VUIs will make the branding of playlists crucial. For example, instead of asking for Alexa to play hiphop from Spotify, I requested their RapCaviar playlist, because I felt the former query’s outcome would be too unpredictable. As the music plays, I’m less aware of the artist names, as I don’t even see them anymore and I hardly ever bother asking. For music composed by artificial intelligence, this could be a great opportunity to enter our music listening habits.

The VUI pairs well with the connected home, which is why tech giants like Google, Amazon, and Apple are all using music as the Trojan horse to get their home-controlling devices into our living rooms. They’re going to be the operating system for our houses, and that operating system will provide an invisible layer that we interact with through our voice.

Although many of the experiences through VUIs feel a bit limited currently, they’re supposed to get better over time (which is why Amazon calls their Alexa apps ‘skills’). And with AI improving and becoming more widespread, these skills will get better to the point that they can anticipate our intentions before we express them.

As voice-controlled user interfaces enter more of our lives, the question for artists, music companies, and startups is: how do we stand out when there is no visual component? How can you stay top of mind? How will people remember you?

Augmented reality

Google Glass was too early. Augmented reality will be nothing like it.

Instead of issuing awkward voice commands to a kind of head mounted smartphone, the media reality that augmented reality will take shape in is one of conversational interfaces through messaging apps, and voice user interfaces, that are part of connected smart environments, all utilizing powerful artificial intelligence.

You won’t have to issue requests, because you’ll see overlays with suggested actions that you can easily trigger. Voice commands are a last resort, and a sign of AI failing to predict your intent.

So what is music in that reality? In a way, we’re already there. Kids nowadays are not discovering music by watching professional video productions on MTV; they discover music because they see friends dancing to it on Musically, or because they applied some music-enabled Snapchat filter. We are making ourselves part of the narrative of the music, we step into it, and forward our version of it into the world. Music is behaving like internet memes, because it’s just so easy to remix now.

One way in which augmented reality is going to change music, is that music will become ‘smart’. It will learn to understand our behaviour, our intentions, and adapt to it, just like other aspects of our lives would. Some of Amazon Alexa‘s most popular skills already include music and sound to augment our experience.

This is in line with the trend that music listeners are increasingly exhibiting a utilitarian orientation towards music; interacting with music not just for the aesthetic, but also its practical value through playlists to study, focus, workout, clean the house, relax and drink coffee, etc.

As it becomes easier to manipulate music, and make ourselves part of the narrative, perhaps the creation of decent sounding music will become easier too. Just have a look at AI-powered music creation and mastering startups such as Jukedeck, Amper, and LANDR. More interestingly, check out Aitokaiku‘s Vimu, which lets you create videos with reactive music (the music reacts to what you film).

Imagine releasing songs in such a way that fans can interact and share them this way, but even better since you’ll be able to use all the data from the smart sensors in the environment.

Imagine being able to bring your song, or your avatar, into a space shared by a group of friends. You could be like a Pokémon.

It’s hard to predict what music will look like, but it’s safe to say that the changes music went through since the proliferation of the recording as the default way to listen to music are nothing compared to what’s coming in the years ahead. Music is about to become a whole lot more intelligent.


For more on how interfaces change the way we interact with music, I’ve previously written about how the interface design choices of pirate filesharing services such as Napster influence music streaming services like Spotify to this day.

If you like the concept about media realities and would like to get a better understanding of it, I recommend spending some time to go through Marshall McLuhan‘s work, as well as Timothy Leary‘s perspective on our digital reality in the 90s.