What the End of the App Era Means for the Music Business

The average smartphone user downloads fewer than one app per month, according to comScore. The era of apps is ending, and we’re moving into an era of artificial intelligence that interacts with us through messaging apps, chatbots, voice-controlled interfaces, and smart devices.

What happens to music in this context? How do you make sure your music stands out? How do you communicate your brand when the interface goes from visual to conversational? And what strategic opportunities and challenges does the conversational interface present to streaming services?

 

The next 3 interfaces for music’s near future

Our changing media reality means everyone in music will have to come to grips with three important new trends.

Understanding the music business means understanding how people access, discover, and continuously listen to music. This used to happen through the record player, cassette player, radio, and CD player, and now increasingly happens on our computers and smartphones: first by playing downloads in media players like Winamp, Musicmatch Jukebox, or iTunes, and now mostly via streaming services like Spotify and Apple Music, as well as YouTube.

Whenever the interface for music changes, the rules of the game change. New challenges emerge, new players get to access the space, and those who best leverage the new media reality gain a significant lead over competing services or companies, like Spinnin’ Records’ early YouTube success.

What is a media reality?

I was recently talking with Gigi Johnson, the Executive Director of the UCLA Center for Music Innovation, for their podcast, and as we were discussing innovation, I wanted to point out that there are two different types. The first is technological innovation, like invention, but you don’t have to be a scientist or an inventor to be innovative.

When the aforementioned categories of innovation get rolled out, they create new realities. Peer-to-peer technology helped Spotify with the distribution of music early on (one of their lead engineers is Ludvig Strigeus, creator of the BitTorrent client µTorrent), and for this to work, Spotify needed a media reality in which computers were linked to each other in networks with decent bandwidth (i.e. the internet).

So that’s the second type of innovation: leveraging a reality created by the proliferation of a certain technology. Studios didn’t have to invent the television in order to dominate the medium. Facebook didn’t have to invent the world wide web.

A media reality is any reality in which innovation causes a shift to a new type of media. Our media reality is increasingly shifting towards smart assistants like Siri and an ‘internet of things’ (think smart home), and we’re creating, watching, and interacting through more high-quality video than ever before.

Any new media reality brings with it new interfaces through which people interact with knowledge, their environment, friends, entertainment, and whatever else might be presented through these interfaces. So let’s look at the new interfaces everyone in music will have to deal with in the coming years.

Chatbots are the new apps

People don’t download as many apps as they used to and it’s getting harder to get people to install an app. According to data by comScore, most smartphone users now download fewer than 1 app per month.

So, in dealing with this new media reality, you go to where the audience is. Apparently that’s no longer in app stores, but on social networks and messaging apps. Some of the latter, most prominently Facebook Messenger, allow people to build chatbots, which are basically apps inside the messenger.

Companies like TransferWise, CNN, Adidas, Nike, and many airlines already have their own bots running on Messenger. In music, well-known examples of artist chatbots are those by Katy Perry and DJ Hardwell. Record Bird, a company specialized in surfacing new releases by artists you like, launched its own bot on Messenger in 2016.

The challenge with chatbots is that designing for a conversational interface is quite different from designing visual user interfaces. People will sometimes not understand what’s going on and start requesting things from your bot that you never planned for. You have to anticipate such behaviours, since people cannot see the confines of the interface.
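A common way to handle this is a keyword-based intent router with an explicit fallback reply that steers users back to what the bot can do. A minimal sketch in Python, with invented intents and replies (real platforms like Messenger wrap this logic in webhook plumbing):

```python
# Hypothetical intent routing for a music chatbot.
# Keywords and replies are invented for illustration.
INTENTS = [
    ("tour", ["tour", "gig", "concert", "ticket"],
     "Tour dates are up at our (hypothetical) site!"),
    ("release", ["new album", "new song", "release"],
     "Our latest single dropped last Friday."),
]

def reply(message: str) -> str:
    text = message.lower()
    for name, keywords, answer in INTENTS:
        if any(k in text for k in keywords):
            return answer
    # Fallback: never leave unanticipated requests unanswered,
    # and always hint at what the bot *can* do.
    return ("Sorry, I didn't get that. Try asking about 'tour dates' "
            "or 'new releases'.")
```

The fallback branch is the important part: since users can’t see the interface’s confines, the bot has to draw them for the user.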

Chatbots are set to improve a lot over time, as developments in machine learning and artificial intelligence will help the systems behind the interfaces to interpret what users may mean and come up with better answers.

VUIs: Alexa, play me music from… uhmm….

I’ve been living with an Amazon Echo for over a month, and together with my Philips Hue lamps it has embedded itself into my life to the extent that I nearly asked Alexa, Amazon’s voice assistant, to turn off the lights in a hotel room last weekend.

It’s been a pleasure to trade the frequent returns to touch-based user interfaces for voice user interfaces (VUIs). I thought I’d feel awkward, but it’s great to quickly ask for weather updates, planned activities, or the time, change the music or the volume, turn the lights on, off, or down, and set alarms, all without having to grab my phone.

I also thought it would be awkward having friends over and interacting with it, but it turns into a type of play, with friends trying out all kinds of requests I had never even thought of, and finding out about new features I wasn’t aware of.

And there’s the challenge for artists and businesses.

As a user, there is no home screen. There is nothing to guide you. There is only what you remember, what’s top of mind. Which is why VUIs are sometimes referred to as ‘zero UI’.

I have hundreds of playlists on Spotify, but through Alexa I’ve only listened to around a dozen of them. When I feel like music that may or may not be contained in one of my playlists, it’s easier to mentally navigate to an artist who plays that kind of music than to remember the playlist. So you request the artist instead.

VUIs will make the branding of playlists crucial. For example, instead of asking Alexa to play hip-hop from Spotify, I requested their RapCaviar playlist, because I felt the former query’s outcome would be too unpredictable. As the music plays, I’m less aware of the artist names: I don’t see them anymore, and I hardly ever bother asking. For music composed by artificial intelligence, this could be a great opportunity to enter our listening habits.

The VUI pairs well with the connected home, which is why tech giants like Google, Amazon, and Apple are all using music as the trojan horse to get their home-controlling devices into our living rooms. They’re going to be the operating system for our houses, and that operating system will provide an invisible layer that we interact with through our voice.

Although many of the experiences through VUIs feel a bit limited currently, they’re supposed to get better over time (which is why Amazon calls its Alexa apps ‘skills’). And as AI improves and becomes more widespread, these skills will get better to the point that they can anticipate our intentions before we express them.

As voice-controlled user interfaces enter more of our lives, the question for artists, music companies, and startups is: how do we stand out when there is no visual component? How can you stay top of mind? How will people remember you?

Augmented reality

Google Glass was too early. Augmented reality will be nothing like it.

Instead of issuing awkward voice commands to a kind of head mounted smartphone, the media reality that augmented reality will take shape in is one of conversational interfaces through messaging apps, and voice user interfaces, that are part of connected smart environments, all utilizing powerful artificial intelligence.

You won’t have to issue requests, because you’ll see overlays with suggested actions that you can easily trigger. Voice commands are a last resort, and a sign of AI failing to predict your intent.

So what is music in that reality? In a way, we’re already there. Kids nowadays are not discovering music by watching professional video productions on MTV; they discover music because they see friends dancing to it on Musical.ly or because they applied some music-enabled Snapchat filter. We make ourselves part of the narrative of the music, step into it, and forward our version of it into the world. Music is behaving like internet memes, because it’s just so easy to remix now.

One way in which augmented reality is going to change music is that music will become ‘smart’. It will learn to understand our behaviour and our intentions, and adapt to them, just like other aspects of our lives will. Some of Amazon Alexa’s most popular skills already include music and sound to augment our experience.

This is in line with the trend of music listeners increasingly exhibiting a utilitarian orientation towards music: interacting with music not just for its aesthetic value, but also for its practical value, through playlists to study, focus, work out, clean the house, relax, drink coffee, etc.

As it becomes easier to manipulate music, and to make ourselves part of the narrative, perhaps the creation of decent-sounding music will become easier too. Just have a look at AI-powered music creation and mastering startups such as Jukedeck, Amper, and LANDR. More interestingly, check out Aitokaiku’s Vimu, which lets you create videos with reactive music (the music reacts to what you film).

Imagine releasing songs in such a way that fans can interact and share them this way, but even better since you’ll be able to use all the data from the smart sensors in the environment.

Imagine being able to bring your song, or your avatar, into a space shared by a group of friends. You could be like Pokémon.

It’s hard to predict what music will look like, but it’s safe to say that the changes music went through since the proliferation of the recording as the default way to listen to music are nothing compared to what’s coming in the years ahead. Music is about to become a whole lot more intelligent.


For more on how interfaces change the way we interact with music, I’ve previously written about how the interface design choices of pirate filesharing services such as Napster influence music streaming services like Spotify to this day.

If you like the concept of media realities and would like to get a better understanding of it, I recommend spending some time with Marshall McLuhan’s work, as well as Timothy Leary’s perspective on our digital reality in the 90s.

My Midem wrap-up: chatbots + Run The Jewels marketing panels

What a week. I spent it at Midem, one of the most well-known music business conferences, organised every year in Cannes. Before I’m off to Sónar+D this week, I thought I’d type up a little update.

About 10 months ago, Midem’s conference manager got in touch with me to see if we could put a panel together. We landed on the topic of chatbots and messaging apps, because I think the trend signifies an important shift to a new generation of user interfaces (especially considering voice-activated UIs, which will quickly be permeating our daily lives).

It was so great to finally be able to have all these people in the same room, and talk about what they’re doing, get their thoughts out, get them discussing with each other. And the line-up was awesome.

Panel: Messaging Apps, Bots, AI & Music: A New Frontier of Fan Engagement

A quick look at the line-up:

  • Ricardo Chamberlain, Digital Marketing Manager, Sony Music Entertainment (USA)
    Runs a very interesting label bot, which includes messages from artists such as Enrique Iglesias. He also worked on the CNCO campaign with Landmrk, which I’m a big fan of.
  • Luke Ferrar, Head of Digital, Polydor (UK)
    Launched the first chatbot with Bastille.
  • Gustavo Goldschmidt, CEO & Co-Founder, SuperPlayer (Brazil)
Runs Brazil’s biggest streaming service, which not only recommends music through a chatbot, but also builds chatbots for artists, which then drive fans to the service when they want to stream something.
  • Syd Lawrence, CEO & Co-Founder, The Bot Platform (UK)
    Launched the Hardwell bot, which is probably the most well-known example of chatbots being used in music.
  • Tim Heineke, Founder, POP (Netherlands)
    Used to run a cool startup named Shuffler.fm which turned blogs into radio stations and became a kind of StumbleUpon for music discovery, and also co-founded FUGA.
  • Nikoo Sadr, Interactive Marketing Manager, The Orchard (UK)
    One of the most brilliant minds in digital marketing, in general. Previously with Music Ally.

FULL VIDEO:

WRITE UP:

Messaging, bots, and AI’s music evolution by Music Ally’s Eamon Forde

Run The Jewels’ Marketing Panel

A few weeks ago, I was asked if I could also moderate the RTJ marketing panel, which would have been a no-brainer anyway, but having a personal connection to it made me so excited that I forgot to even introduce myself during the panel.

My first real music business job was with a startup called official.fm. As a student, I listened to a lot of underground and indie hip-hop, which made me a big fan of the Definitive Jux label, which put out music by Aesop Rock, Mr. Lif, RJD2, and El-P (also one of the founders). The other founder was Amaechi Uzoigwe, who now manages Run The Jewels. I remember feeling a little starstruck at the time. Now, years later, it was great to catch up with Amaechi and hear about the inspiring success he’s created for RTJ and himself.

Also on the panel was Zena White, who’s MD of The Other Hand, and does great things for RTJ, Stones Throw, Ghostly, BadBadNotGood, DJ Shadow and more.

FULL VIDEO:

WRITE UP:

How Run The Jewels found fame & fortune: by focusing on fans by Music Ally’s Stuart Dredge


When augmented reality converges with AI and the Internet of Things

The confluence of augmented reality, artificial intelligence, and the Internet of Things is rapidly giving rise to a new digital reality.

Remember when people said mobile was going to take over?

Well, we’re there. Some of the biggest brands in our world are totally mobile: Instagram, Snapchat, Uber. 84% (!) of Facebook’s ad revenue now comes from mobile.

And mobile will, sooner or later, be replaced by augmented reality devices, and it will look nothing like Google Glass.

Google Glass
Not the future of augmented reality.

Why some predictions fail

When you view trends in technology in isolation, it’s inevitable that you end up misunderstanding them. What happens is that we freeze time, take a trend, and project its future into a society that looks almost exactly like today’s.

Past predictions about the future
Almost.

This drains topics of their substance and replaces it with hype. It causes smart people to ignore them, while easily excited entrepreneurs jump on the perceived opportunity with little to no understanding of it. Three such domains right now are blockchain, messaging bots, and virtual reality, although I count myself lucky to know a lot of brilliant people in these areas, too.

What I’m trying to say is: just because it’s hyped, doesn’t mean it doesn’t deserve your attention. Don’t believe the hype, and dig deeper.

The great convergence

In order to understand the significance of a lot of today’s hype-surrounded topics, you have to link them. Artificial intelligence, smart homes & the ‘Internet of Things’, and augmented reality will all click together seamlessly a decade from now.

And that shift is already well underway.

Artificial intelligence

The first time I heard about AI was as a kid in the 90s. The context: video games. I heard that non-playable characters (NPCs) or ‘bots’ would have scripts that learned from my behaviour, so that they’d get better at defeating me. That seemed amazing, but their behaviour remained predictable.

In recent years, there have been big advances in artificial intelligence. This has a lot to do with the availability of large data sets that can be used to train AI. A connected world is a quantified world and data sets are continuously updated. This is useful for training algorithms that are capable of learning.

This is also what has given rise to the whole chatbot explosion right now. Our user interfaces are changing: instead of doing things ourselves, explicitly, AI can be trained to interpret our requests or even predict and anticipate them.

Conversational interfaces sucked 15 years ago. They came with a booklet. You had to memorize all the voice commands. You had to train the interface to get used to your voice… Why not just use a remote control, or a mouse and keyboard? But in the future, getting things done by tapping on our screens may look as archaic as doing everything from a command-line interface (think MS-DOS).

XKCD Sudo make me a sandwich
There are certain benefits to command-line interfaces… (xkcd)

So, right now we see all the tech giants diving into conversational interfaces (Google Home, Amazon Alexa, Apple Siri, Facebook Messenger, and Microsoft, err… Tay?) and in many cases opening up APIs to let external developers build apps for them. That’s right: chatbots are APPS that live inside or on top of conversational platforms.

So we get new design disciplines: conversational interfaces, and ‘zero UI’ which refers to voice-based interfaces. Besides developing logical conversation structures, integrating AI, and anticipating users’ actions, a lot of design effort also goes into the personality of these interfaces.

But conversational interfaces are awkward, right? It’s one of the things that made people uncomfortable with Google Glass: issuing voice commands in public. Optimists argued it would become normalized, just like talking to a Bluetooth headset. Yet currently only 6% of people who use voice assistants ever do so in public… But where we’re going, we won’t need voice commands. At least not as many.

The Internet of Things

There are still a lot of security concerns around littering our lives with smart devices: from vending machines in our offices, to refrigerators in our homes, to self-driving cars… But it seems to be an unstoppable march, with Amazon (Alexa) and Google (Home) intensifying the battle for the living room last year.

Now let’s converge this with the trend of artificial intelligence and the advances made in that domain. Instead of having the 2016 version of voice-controlled devices in our homes and work environments, these devices’ software will develop to the point where they gain a strong sense of context. Through understanding acoustics, they can gain spatial awareness. If that doesn’t do it, they could use WiFi signals like radar to understand what’s going on. And let’s not forget cameras.

Your smart device knows what’s in the fridge before you do, what the weather is before you even wake up, it may even see warning signs about your health before you perceive them yourself (smart toilets are real). And it can use really large data sets to help us with decision-making.

And that’s the big thing: our connected devices are always plugged into the digital layer of our reality, even when we’re not interacting with them. While we may think we’re ‘offline’ when not near our laptops, we have started to look at the world through the lens of our digital realities. We’re acutely aware of the fact that we can photograph things and share them to Instagram or Facebook, even if we haven’t used the apps in the last 24 hours. Similarly, we go places without familiarizing ourselves with the layout of the area, because we know we can just open Google Maps any time. We are online, even when we’re offline.

Your connected home will be excellent at anticipating your desires and behaviour. It’s in that context that augmented reality will reach maturity.


Augmented reality

You’ve probably already been using AR. For a thorough take on the trend, go read my piece on how augmented reality is overtaking mobile. Two current examples of popular augmented reality apps: Snapchat and Pokémon Go. The latter is a great example of how you can design a virtual interaction layer for the physical world.

So the context in which you have to imagine augmented reality reaching maturity is a world in which our environments are smart and understand our intentions… in some cases predicting them before we even become aware of them.

Our smart environments will interact with our AR devices to pull up HUDs when we most need them. So we won’t have to issue awkward voice commands, because a lot of the time it will already be taken care of.

Examples of HUDs in video games
Head-up displays (HUDs) have long been a staple of video games.

This means we won’t actually have to wear computers on our heads, meaning the future of augmented reality can come through contact lenses rather than headsets.

But who actually wants to bother with that, right? What’s the point if you can already do everything you need right now? Perhaps you’re too young to remember, but that’s exactly what people said about mobile phones years ago. Even without contact lenses, all of these trends are underway now.

Augmented reality is an audiovisual medium, so if you want to prepare, spend some time learning about video game design, conversational interfaces, and get used to sticking your head in front of a camera.

There will be so many opportunities emerging on the way there, from experts on privacy and security (even political movements), to designing the experiences, to new personalities… because AR will have its own PewDiePie.

It’s why I just bought a mic and am figuring out a way to add audiovisual content to the mix of what I produce for MUSIC x TECH x FUTURE. Not to be the next PewDiePie, but to be able to embrace mediums that will extend into trends that will shape our digital landscapes for the next 20 years. More on that soon.

And if you’re reading this and you’re in music, then you’re in luck:
People already use music to augment their reality.

More on augmented reality by me on the Synchtank blog:
Projecting Trends: Augmented Reality is Overcoming its Hurdles to Overtake Mobile.

Monetizing virtual face time with fans

How the convergence of 2 trends opens up new business model opportunities for artists.

When I landed in Russia to get involved with music streaming service Zvooq, my goal was to look beyond streaming. The streaming layer would be the layer that brings everything together: fans, artists, and data. We started envisioning a layer on top of that, which we never fully got to roll out, in big part due to the challenges of the streaming business.

It was probably too early.

For the last decade, a lot of people have been envisioning ambitious direct-to-fan business models. The problem was that many of these were only viable for niche artists with early-adopter audiences, but as technology develops, this is less the case today.

Let’s have a look at a few breakthrough trends of the last year:

  • Messaging apps are rapidly replacing social networks as the primary way for people to socialize online;
  • Better data plans & faster internet speeds have led to an increase in live streams, further enabled by product choices by Facebook & YouTube.

Messaging apps overtaking social networks is a trend that’s been underway for years now. It’s why Facebook acquired WhatsApp in 2014 for a whopping $19 billion. While 2.5 billion people had a messaging app installed earlier this year, that’s expected to rise to 3.6 billion in the coming years. In part, this is driven by more people coming online and messaging apps being relatively lightweight in terms of data use.

In more developed markets, the trend for messaging apps is beyond text. WhatsApp, Facebook Messenger, and Slack have all recently enabled video calling. Other apps, like Instagram, Snapchat, Live.ly, and Tribe are finding new ways to give shape to mobile video experiences, from broadcasting short video stories, to live streaming to friends, to video group chats.

For artists that stay on top of trends, the potential for immediacy and intimacy with their fanbase is expanding.

Messaging apps make it easier to ping fans to get them involved in something, right away. And going live is one of the most engaging ways to do so.

Justin Kan, who founded Justin.tv, which later became the video game streaming platform Twitch (sold to Amazon for just under $1 billion), recently launched a new app which I think deserves the attention of the music business.

Whale is a Q&A app which lets people pose questions to ‘influencers’. To have your question answered, you have to pay a fee which is supposed to help your question “rise above the noise of social media”. And Whale is not the only app with this proposition.

Yam is another Q&A app which places more emphasis on personalities, who can answer fans’ questions through video, but also self-publish answers to questions they think people may be curious about.

Watching a reply to a question on Yam costs 5 cents, which is split evenly between the person who asked and the person who answered. It’s a good scheme for getting people to come together to create content, and for the person answering to prioritize the questions they think will lead to the most engagement.

What both of these apps do is monetize one of the truly scarce things in the digital age.

Any type of digital media is easily made abundant, but attention can only be spent once.

These trends make it possible to create an effective system for fans to compete for artists’ attention. I strongly believe this is where the most interesting business opportunities in the music business lie: at the level of the artist, but also for those looking to create innovative new tools.

  1. Make great music.
  2. Grow your fan base.
  3. Monetize your most limited resource.

This can take so many shapes or forms:

  • Simply knowing that your idol saw your drawing or letter;
  • Having your demo reviewed by an artist you look up to;
  • Getting a special video greeting;
  • Learning more about an artist through a Q&A;
  • Being able to tell an artist about a local fan community & “come to our city!”;
  • Having the top rank as a fan & receiving a perk for that.

Each of these can be a product on their own and all of these products will likely look like messaging apps, video apps, or a mix.

A lot of fan engagement platforms failed because they were looking for money in a niche behaviour that was difficult to exploit. People had to be taught new behaviours and new interfaces, which is hard when everyone’s competing for their attention.

Now this is becoming easier, because on mobile it can be as simple as a tap on the screen. Tuning into a live stream can be as simple as opening a push notification. Asking a question to an artist can be as simple as messaging a friend.

So, the question for the platforms early to the party is whether they’ll be able to adjust to the current (social) media landscape, or whether they’ll let the sunk cost fallacy entrench them in a vision based on how things used to be.

There’s tremendous value in big platforms figuring out new ways for artists and fans to exchange value. They already have the data and the fan connections. Imagine if streaming services were to build a new engagement layer on top of what already exists.

Until then, artists will have to stay lean and use specific tools that do one thing really well. Keep Product Hunt bookmarked.

5 Bots You’ll Love

Since launching its chatbot API last April, Facebook’s Messenger platform has already spawned 11,000 bots. Bots are popular, because they allow brands to offer more personalized service to existing and potential customers. Instead of getting people to install an app or visit your website, they can do so from the comfort of their preferred platform, whether that’s WhatsApp, Messenger, Twitter or something else.

Bots, basically automated scripts with varying levels of complexity, are ushering in a new wave of user experience design. Here are some of my favourite bots.

AutoTLDR – Reddit

AutoTLDR bot

AutoTLDR is a bot on Reddit that automatically posts summaries of news articles in comment threads. ‘tl;dr’ is internet slang for “too long; didn’t read”, often used at the top or bottom of posts to give a one-line summary or conclusion of a longer text. The bot uses SMMRY’s API to shorten long texts.

The key to its success is Reddit’s digital Darwinism of upvotes and downvotes. Good summaries by AutoTLDR can usually be found within the top 5 comments. If it summarizes poorly, you’re unlikely to come across its contribution.

Explaining the theory behind AutoTLDR bot.
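SMMRY’s exact method is its own, but extractive summarizers of this kind generally score each sentence by the frequency of the words it contains and keep the top scorers in their original order. A toy sketch of that idea in Python (not SMMRY’s actual algorithm):

```python
import re
from collections import Counter

def tldr(text: str, n_sentences: int = 2) -> str:
    """Naive extractive summary: score sentences by total word
    frequency, keep the top-scoring ones in original order."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    # Rank sentence indices by the summed frequency of their words.
    ranked = sorted(
        range(len(sentences)),
        key=lambda i: -sum(freq[w]
                           for w in re.findall(r"[a-z']+", sentences[i].lower())),
    )
    keep = sorted(ranked[:n_sentences])  # restore reading order
    return " ".join(sentences[i] for i in keep)
```

Real summarizers add weighting tricks (stopword removal, position bias, title overlap), but the frequency-scoring core is the same.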

Subreddit Simulator – Reddit

Subreddits on Reddit center around certain topics or types of content. Subreddit Simulator is a collection of bots that source material from other subreddits and, often quite randomly, create new posts and material based on it. Its most popular post drew from the “aww” subreddit and most likely combined two different posts to create this:

Rescued a stray cat

Check out the other top posts here. Again, the reason it works well is human curation: people closely follow Subreddit Simulator and upvote remarkable outcomes, like the one above.

wayback_exe – Twitter

Remember when the internet had an intro tune? wayback_exe takes you back to the days of dial-up and provides your Twitter feed with regular screenshots of retro websites. By now, it’s basically art.

It uses the Internet Archive’s Wayback Machine, which has saved historic snapshots of websites.

old site 1

old site 2

pixelsorter – Twitter

If you’re into glitch art, you’ll love pixelsorter. It’s a bot that re-encodes images: you can tweet it an image and get a glitched-out version back. Sometimes it talks to other image bots like badpng, cga.graphics, BMPbug, Lowpoly Bot, or Arty Bots, with amazing algorithmic results.
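Pixel sorting, the technique behind much of this aesthetic, is simple at its core: within each row of an image, sort contiguous runs of pixels that exceed a brightness threshold, leaving the rest untouched. A minimal sketch in Python on a single row of brightness values (real sorters operate on full image channels; the threshold and values here are illustrative):

```python
def sort_row(row, threshold=128):
    """Sort contiguous runs of bright pixels within a row,
    leaving darker pixels in place: the core of the classic
    pixel-sorting glitch effect."""
    out = list(row)
    i = 0
    while i < len(out):
        if out[i] >= threshold:
            # Find the end of this bright run, then sort it in place.
            j = i
            while j < len(out) and out[j] >= threshold:
                j += 1
            out[i:j] = sorted(out[i:j])
            i = j
        else:
            i += 1
    return out
```

Applied to every row of an image (and to real RGB values instead of bare brightness numbers), this produces the characteristic smeared, sorted streaks.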

 

Generative bot – Twitter

Generative bot

Generative Bot is one of those bots that make you realize algorithms can produce art that trumps 90% of all other art. It uses some quite advanced mathematics to create a new piece every 2 hours, seeding your Twitter feed with occasional computer-generated bits of inspiration.

Want more inspiration? We previously wrote about DJ Hardwell’s bot.

What are your favourite bots? Ping me on Twitter.