How can we restore music’s status as social glue in the age of streaming?

The case for a passive discovery mechanism for friends’ playlists on Spotify.

This article started with a tweet on a Saturday evening. Simply put: I wish I had a better interface to discover playlists that are popular among my friends.

Mark Newman rightfully pointed out that Spotify doesn’t show much interest in surfacing user-created playlists. As a matter of fact, they have even been deemphasising them over the years. Instead they opt for sending people to their own playlists. And their priority makes sense. They have to compete with giants like Apple, Google, Amazon: companies that have money to waste, while Spotify has money to raise.

Streaming is going mainstream

I’m sure to most of us it feels like it’s mainstream already. Hear me out.

Spotify, and other streaming services, are now focusing on consumers beyond the early adopter. These are people who are happy listening to the hits from the radio. These are people who like predictable music experiences. And they’re the bulk of the market.

In order to compete for them successfully, streaming services have to deliver very consistent experiences. This comes in the form of speed and functionality, but also content and programming.

User-created playlists fall outside of the editorial guidelines and metrics that Spotify sets for its editors, which makes them unpredictable. Then again, features like Discover Weekly carry some inherent unpredictability with them: it’s what makes them fun and addictive.

The metrics that a feature like this probably needs to deliver on would look like:

  • Amount of time spent listening to music on Spotify in a specified timeframe (the feature should not lead to less playback);
  • Some kind of retention metric (it should lead to a more engaging product, with fewer people abandoning it).

Spotify’s friend activity & navigation

I like seeing what my friends are listening to in the right-hand bar. Occasionally, though rarely, I click on something someone is listening to and musically stalk my friend.

The reason I hardly ever tune into my friends that way, and why I think it’s probably not an often-used feature, is that you tend to see it when you’re already listening to something. It’s not really positioned inside the product as a starting point; it’s more of a distraction.

Starting points, in Spotify, are either search or the items in the left-hand menu: your playlists, plus the other navigation points such as podcasts, Browse, and Daily Mix.

The prominent placing of Your Daily Mix stands out to me. I find the feature a bit dull and repetitive, but perhaps that’s because I’m on the end of the user spectrum that explores more than returns to the same music. The point is: Spotify gives prominence to an algorithm that generates 5 daily playlists for users. It’s somewhat unpredictable, compared to what they feature in Browse, but it tries to get people into a daily habit, and its prominent placing suggests that this may be working.

What should also be noted is that none of these navigation items include anything social, despite the entire right-hand bar being dedicated to it.

Browse is boring

I’m always disappointed when I open the Browse tab. I never really see anything surprising and I keep seeing the same things over and over, despite not engaging with them.

There are so many super interesting playlists on Spotify, particularly those by third parties, and I need a way to surface them without having to rely on curators’ websites, social media, search, or artist profiles.

Your Daily Friend Mix

So, back to my original tweet, and the requirements for getting a social feature to work well:

  • Should lead to people regularly coming back;
  • Should lead to increased playback (or at least no decrease).

What are the constraints?

  • Not enough friends to meaningfully populate an area;
  • Friends don’t listen to playlists;
  • Friends only listen to the same playlists as you;
  • Friends’ tastes are too dissimilar.

The first issue here is already tackled by the way Spotify handles Discover Weekly and its Daily Mixes: if they don’t have enough data on you, they won’t present these features to you. So in short: if there’s not enough useful data to present meaningful results to you, the feature should not be shown.

However, for many users there would be meaningful data, so how do we make sure that the suggested content is also meaningful?

The UX of recommendations is a big topic, but in simple terms, there should be thresholds and ceilings on similarity:

  • Recommended content should not have a similarity higher than 90% to the user’s collection;
  • Recommended content should not have a similarity lower than 10% to the user’s collection & listening history.

The recommended content can be playlists made by friends, or ones that friends regularly listen to and / or are subscribed to. The percentages are made-up, and there are a lot more things you could factor in, but this way you make sure that:

  1. Content in the section is interesting, because you’ll discover something new;
  2. And it’s not too random or too far from your taste, so you’ll always find something you’d want to listen to while opening the section.
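
The similarity band described above can be sketched in a few lines. This is a hypothetical illustration, not Spotify’s actual logic: the similarity scores, the candidate pool, and the 10%/90% thresholds are all made up, as noted in the text.

```python
# Hypothetical sketch of the similarity band for a "Daily Friend Mix".
# All scores and thresholds are invented for illustration.

def daily_friend_mix(similarity_by_playlist, lo=0.10, hi=0.90):
    """Keep friend playlists that are neither too familiar (> hi)
    nor too alien (< lo) relative to the user's collection."""
    return [
        playlist
        for playlist, score in similarity_by_playlist.items()
        if lo <= score <= hi
    ]

scores = {
    "friend's jazz picks": 0.45,   # novel but relatable: keep
    "your own top tracks": 0.95,   # too similar: drop
    "harsh noise sampler": 0.05,   # too dissimilar: drop
}
print(daily_friend_mix(scores))   # prints ["friend's jazz picks"]
```

In practice you would factor in far more signals (recency, friend closeness, playlist freshness), but the principle stays the same: filter out both the redundant and the alien before ranking.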

If that’s taken care of, then people will keep coming back. Why?

Because it’s super fun to discover how your taste overlaps with friends, or to discover new music with friends. I also think such a feature would work better for Spotify’s demographic than the more active one-on-one music sharing type of functionality (that Spotify removed recently).

Spotify needs a passive way to connect with music through friends

The messaging functionality that Spotify removed showed low engagement. That’s because one-on-one music recommendations are demanding on both sides. What has shown to work best on big streaming platforms, instead, are lean-back experiences. Discover Weekly is an example of that: it’s focused on the result rather than the action. The action for discovery is exploration, and with Discover Weekly it’s Spotify’s algorithms that do most of the exploring for the user.

That’s what the social side of the service needs. The Friend Activity feed is boring. It hardly ever shows something I’d like to listen to, but I do know my friends listen to music I’d be interested in…

What I need is a section I can go to when I’m looking for something new to listen to, one that shows friends as social proof for that content. It would let me connect with friends in new ways. Perhaps even strike up a conversation with them on Facebook Messenger.

Which would pair well with Spotify’s strategy to drive more engagement through Messenger.

What music startup founders often get wrong

Doing a consumer-facing music startup is hard. Especially if you don’t understand what gives music value.

One of the hardest aspects of building music startups is that you’re dealing with a two-sided marketplace: you have to build up one side of your marketplace in order to attract the other. It takes creativity, or a lot of funding, to build up the music side of your marketplace so that it can attract consumers.

This two-sided marketplace makes decision making more challenging: when to focus on what? How do you convince artists to use yet another platform, before it can really show its value through a well-populated marketplace?

But that’s not the number one thing people get wrong.

The number 1 thing music startup founders get wrong is overvaluing their content

This is the most important lesson I’ve learned while working on three different music streaming startups and a bunch of other non-streaming music startups. Music in itself has little value to a user (bear with me). Your value proposition needs to be better than “come here, there’s music”, and oftentimes music startups don’t have anything better than that.

People don’t care about the music. They don’t have a problem listening to music. And if they do, they’re likely not aware of it.

Ironically, when doing consumer-facing music startups the music is an extra. It’s assumed to be there. Not having good music on your service will kill you, but having it does not distinguish you. It’s the same with restaurants: we don’t necessarily visit a restaurant because it has the best food, but because it’s around the corner, it serves something we feel like, the staff is nice, etc. Music, on a music service, is like the basics we expect in a restaurant: food, drinks, a place to sit, and a toilet. Not having music, like not having toilets, will kill you, but it’s not the reason why people visit you.

This is why so many music discovery apps fail, and why so many social jukebox and recommendation apps fail: people don’t need more content. The problem is not music’s availability; the problem is context.

Building music startups is about the functionality you add. That’s what people pay for, that’s how people stick to your platform. Not the ideals of better-paid artists, not ‘high quality streaming’ – these are basic expectations by now. People need to find a very simple answer to the question: what can they do with your service that they can’t do elsewhere?

Then the next question is whether it’s distinctive enough. I think that’s why high quality streaming startups tend to remain marginal: lossless streaming on its own is not enough to convince large consumer segments. It has to be about behaviour, about function. By now, lossless streaming isn’t hard to find, so people look for the checkbox and then look at what else the platform has to offer.

At the peak of its popularity, Crazy Frog as a track on iTunes was $1. As a ringtone, it was $3. The functionality is what made it valuable. (hat tip to Ed Peto for bringing this up)

I also think 360-degree concert videos are not distinctive enough from other types of video. As a matter of fact, I think the inconvenience of them outweighs the value when compared to other types of concert videos.

Let’s widen the perspective.

The value of music is elusive

A single song can mean the world to someone. It can help sell millions of products, it can inspire revolutions.

But in an ocean of millions of easily accessible songs, its value to the individual consumer is close to zero. This is why nobody cares about your free download anymore.

So how do you get the value out?

You use the music to create the environment in which you shape the type of thing people are willing to pay for. Going back to the restaurant metaphor: music means your walls, your tables, your staff, your bathrooms, your building, your ambiance. People pay for that, but indirectly: by paying for the food you serve them in that context.


Just to be really clear: I think music has immense value and I dedicate most of my waking hours to it. When I talk about ‘value’ in the above piece, I talk about it from the consumer perspective, from the marketing perspective, and as a USP for a product. I am not saying that people are not willing to pay for music. Millions already are, every month, through streaming subscriptions, but also digital and physical sales. And that’s where the problem begins for music startup founders: if people are already paying for music, what more can you sell them?

The short answer: sell functionality that augments experience and behaviour.

The next 3 interfaces for music’s near future

Our changing media reality means everyone in music will have to come to grips with three important new trends.

Understanding the music business means understanding how people access, discover, and continuously listen to music. This used to happen via the record player, cassette player, radio, and CD player, and now increasingly happens on our computers and smartphones: first by playing downloads in media players like Winamp, Musicmatch Jukebox, or iTunes, and now mostly via streaming services like Spotify and Apple Music, as well as YouTube.

Whenever the interface for music changes, the rules of the game change. New challenges emerge, new players get to access the space, and those who best leverage the new media reality gain a significant lead over competing services or companies, like Spinnin Records’ early YouTube success.

What is a media reality?

I was recently talking with Gigi Johnson, the Executive Director of the UCLA Center for Music Innovation, for their podcast, and as we were discussing innovation, I wanted to point out two different types of innovation. The first is technological innovation, like invention; but you don’t have to be a scientist or an inventor to be innovative.

When such innovations get rolled out, they create new realities. Peer-to-peer technology helped Spotify with the distribution of music early on (one of their lead engineers is Ludvig Strigeus, creator of the BitTorrent client µTorrent), and for this to work, Spotify needed a media reality in which computers were linked to each other in networks with decent bandwidth (i.e. the internet).

So that’s the second type of innovation: leveraging a reality created by the proliferation of a certain technology. Studios didn’t have to invent the television in order to dominate the medium. Facebook didn’t have to invent the world wide web.

A media reality is any reality in which innovation causes a shift to a new type of media. Our media reality is increasingly shifting towards smart assistants like Siri, an ‘internet of things’ (think smart home), and we’re creating, watching, and interacting through more high quality video than ever before.

Any new media reality brings with it new interfaces through which people interact with knowledge, their environment, friends, entertainment, and whatever else might be presented through these interfaces. So let’s look at the new interfaces everyone in music will have to deal with in the coming years.

Chatbots are the new apps

People don’t download as many apps as they used to and it’s getting harder to get people to install an app. According to data by comScore, most smartphone users now download fewer than 1 app per month.

So, in dealing with this new media reality, you go to where the audience is. Apparently that’s no longer in app stores, but on social networks and messaging apps. Some of the latter, most prominently Facebook Messenger, allow people to build chatbots, which are basically apps inside the messenger.

Companies like Transferwise, CNN, Adidas, Nike, and many airlines already have their own bots running on Messenger. In music, well-known examples of artist chatbots are those by Katy Perry and DJ Hardwell. Record Bird, a company specialized in surfacing new releases by artists you like, launched their own bot on Messenger in 2016.

The challenge with chatbots is that designing for a conversational interface is quite different from designing visual user interfaces. Sometimes people will not understand what’s going on and will request things from your bot that you didn’t anticipate. You need to design for such behaviour, since people cannot see the confines of the interface.

Chatbots are set to improve a lot over time, as developments in machine learning and artificial intelligence will help the systems behind the interfaces to interpret what users may mean and come up with better answers.
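
To make the fallback problem concrete, here is a minimal, hypothetical sketch of keyword-based intent matching with a catch-all reply. Real Messenger bots use NLP services rather than keyword lookup, and the intents and phrases below are invented for illustration.

```python
# Hypothetical keyword-based intent routing with a fallback.
# Real chatbots use NLP/ML services; intents and phrases are invented.

INTENTS = {
    "new_releases": ("new release", "new album", "just dropped"),
    "tour_dates": ("tour", "concert", "ticket"),
}

def route(message: str) -> str:
    """Map a user message to an intent name, or to 'fallback'."""
    text = message.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    # Anything unanticipated lands here; the fallback reply should
    # steer the user back towards what the bot can actually do.
    return "fallback"

print(route("Any new albums out?"))  # new_releases
print(route("Sing me a song"))       # fallback
```

The design point is the last branch: because users cannot see the interface’s confines, the fallback path is not an edge case but a core part of the experience.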

VUIs: Alexa, play me music from… uhmm….

I’ve been living with an Amazon Echo for over a month, and together with my Philips Hue lamps it has embedded itself into my life to the extent that I nearly asked Alexa, Amazon’s voice assistant, to turn off the lights in a hotel room last weekend.

It’s been a pleasure to trade the frequent returns to touch-based user interfaces for voice user interfaces (VUIs). I thought I’d feel awkward, but it’s great to quickly ask for weather updates, planned activities, or the time, to change the music or the volume, to turn the lights on, off, or dimmed, and to set alarms, all without having to grab my phone.

I also thought it would be awkward having friends over and interacting with it, but it turns into a type of play, with friends trying out all kinds of requests I had never even thought of, and finding out about new features I wasn’t aware of.

And there’s the challenge for artists and businesses.

As a user, there is no home screen. There is nothing to guide you. There is only what you remember, what’s top of mind. Which is why VUIs are sometimes referred to as ‘zero UI’.

I have hundreds of playlists on Spotify, but through Alexa I’ve only listened to around a dozen different playlists. When I feel like music that may or may not be contained inside one of my playlists, it’s easier to mentally navigate to an artist that plays music like that, than to remember the playlist. So you request the artist instead.

VUIs will make the branding of playlists crucial. For example, instead of asking Alexa to play hiphop from Spotify, I requested their RapCaviar playlist, because I felt the former query’s outcome would be too unpredictable. As the music plays, I’m less aware of the artist names, as I don’t even see them anymore and I hardly ever bother asking. For music composed by artificial intelligence, this could be a great opportunity to enter our music listening habits.

The VUI pairs well with the connected home, which is why tech giants like Google, Amazon, and Apple are all using music as the Trojan horse to get their home-controlling devices into our living rooms. They’re going to be the operating system for our houses, and that operating system will provide an invisible layer that we interact with through our voice.

Although many of the experiences through VUIs feel a bit limited currently, they’re supposed to get better over time (which is why Amazon calls their Alexa apps ‘skills’). And with AI improving and becoming more widespread, these skills will get better to the point that they can anticipate our intentions before we express them.

As voice-controlled user interfaces enter more of our lives, the question for artists, music companies, and startups is: how do we stand out when there is no visual component? How can you stay top of mind? How will people remember you?

Augmented reality

Google Glass was too early. Augmented reality will be nothing like it.

Instead of issuing awkward voice commands to a kind of head mounted smartphone, the media reality that augmented reality will take shape in is one of conversational interfaces through messaging apps, and voice user interfaces, that are part of connected smart environments, all utilizing powerful artificial intelligence.

You won’t have to issue requests, because you’ll see overlays with suggested actions that you can easily trigger. Voice commands are a last resort, and a sign of AI failing to predict your intent.

So what is music in that reality? In a way, we’re already there. Kids nowadays are not discovering music by watching professional video productions on MTV; they discover music because they see friends dancing to it on Musical.ly or because they applied some music-enabled Snapchat filter. We are making ourselves part of the narrative of the music, we step into it, and forward our version of it into the world. Music is behaving like internet memes, because it’s just so easy to remix now.

One way in which augmented reality is going to change music is that music will become ‘smart’. It will learn to understand our behaviour and our intentions, and adapt to them, just like other aspects of our lives will. Some of Amazon Alexa’s most popular skills already include music and sound to augment our experience.

This is in line with the trend that music listeners are increasingly exhibiting a utilitarian orientation towards music: interacting with it not just for the aesthetic, but also for its practical value, through playlists to study, focus, work out, clean the house, relax and drink coffee, etc.

As it becomes easier to manipulate music, and make ourselves part of the narrative, perhaps the creation of decent sounding music will become easier too. Just have a look at AI-powered music creation and mastering startups such as Jukedeck, Amper, and LANDR. More interestingly, check out Aitokaiku’s Vimu, which lets you create videos with reactive music (the music reacts to what you film).

Imagine releasing songs in such a way that fans can interact and share them this way, but even better since you’ll be able to use all the data from the smart sensors in the environment.

Imagine being able to bring your song, or your avatar, into a space shared by a group of friends. You could be like a Pokémon.

It’s hard to predict what music will look like, but it’s safe to say that the changes music went through since the proliferation of the recording as the default way to listen to music are nothing compared to what’s coming in the years ahead. Music is about to become a whole lot more intelligent.


For more on how interfaces change the way we interact with music, I’ve previously written about how the interface design choices of pirate filesharing services such as Napster influence music streaming services like Spotify to this day.

If you like the concept of media realities and would like to get a better understanding of it, I recommend spending some time with Marshall McLuhan’s work, as well as Timothy Leary’s perspective on our digital reality in the 90s.

Why I’m joining IDAGIO — a classical music startup — and moving to Berlin

Today I’m excited to announce that I’m joining IDAGIO, a streaming service for classical music lovers, as Director of Product. I’m already in the process of relocating to Berlin, where I’ll be joining the team later this month.

In this post, I want to explain why I so strongly believe in this niche focused music service and IDAGIO’s mission. I also want to shed light on the future of MUSIC x TECH x FUTURE as a newsletter, a type of media, and an agency. (tl;dr: the newsletter lives on!)

Two months ago, a friend whom I had worked with in Moscow, at music streaming service Zvooq, forwarded me a vacancy as a Twitter DM. By then, I had developed a kind of mental auto-ignore, because friends kept sending me junior level vacancies in music companies. I was never looking for a ‘job’ — I had a job (but thanks for thinking of me ❤️). However, I trusted that this friend knew me better as a professional, so I opened the link.

I was immediately intrigued. I hadn’t heard of IDAGIO before, but I’ve spent a lot of time thinking about niche services. At one point, the plan for Zvooq was not to build a typical one-size-fits-all app like all the other music streaming services, but instead to split different types of music-related behaviours into smaller apps. The goal would then become to monopolize those behaviours, like Google has monopolized search behaviour (now called Googling) and Shazam has monopolized Shazaming. Long term, it would allow us to expand that ecosystem of apps beyond streaming content, so we would be able to monetize behaviours with higher margins than those related to music listening.

We ended up building just one, Fonoteka, before we had to switch strategies due to a mix of market reality, licensing terms, and burn rate. That was fine: it was what the business needed, and what Russia as a market needed.

Since then, there have been a number of niche music ideas, like services for indie rock, high quality streaming, etc. And while those are all commendable, I was never quite interested in them, because it seemed those services would not have a strong enough strategic competitive advantage in the face of tech giants with bulging coffers. Their offerings were often only marginally better, and getting people to install an app and build a habit around your service, unlearn their old solution, learn to do it your way… that’s a huge thing to ask of people, especially once you need to go beyond the super early adopters.

But niche works on a local level. You can see it with Yandex.Music and Zvooq in Russia, with Anghami in the Middle East, and Gaana in India.

Over the last decade, I’ve lived in Russia, Bulgaria, Turkey, and The Netherlands (where I’m from). Each country has unique ways of interacting with music. Music has a different place in each culture. I think local music services work, because they combine catalogues and local taste with a deep understanding of how their target audience connects to music. It allows them to build something catering exactly to those behaviours. It’s music and behaviour combined.

When I started talking to the IDAGIO team, I soon understood that they too combine these elements. Classical music, in all its shapes and forms, has many peculiarities, which will remain an object of study for me for the next years. The fact that the same work often has a multitude of recordings by different performers already sets it apart. One can map a lot of behaviours around navigation, exploring, and comparison to just this one fact.

An example of one way in which IDAGIO lets people explore various performances of the same work.

Despite being younger and having more modest funding, IDAGIO has already built a product that caters better to classical music fans than the other streaming services do (and also serves lossless streams). Understanding that, I was fast convinced that this was something I seriously needed to consider.

So I got on a plane and met the team. Over the course of three days, we ran a condensed design sprint, isolated a problem we wanted to tackle together, interviewed expert team members, explored options, drew up solutions, and prototyped a demo to test with the target audience. It’s an intense exercise, especially when you’re also being sized up as a potential team member, but the team did such a good job at making me feel welcome and at home (❤️). Through our conversations, lunches, and collaboration, I was impressed with the team’s intelligence, creativity, and general thoughtfulness.

Then I spent some extra time in Berlin — after all, I’d be moving there. Aforementioned friend took me to a medical museum with a room full of glass cabinets containing jars with contents which will give me nightmares for years to come. Besides that, I met a bunch of other friends, music tech professionals, and entrepreneurs, who collectively convinced me of the high caliber of talent and creative inspiration in the city.

Returning home, I made a decision I didn’t expect to make this year, nor in the years to come. A decision to make a radical switch in priorities.

Motivation, for me, comes from the capacity to grow and to do things with meaningful impact. MUSIC x TECH x FUTURE has exposed me to a lot of different people and a lot of different problems, and has allowed me to do what I find interesting, what I’m good at, but also what I grow and learn from. With IDAGIO I can do all of that, but with depth, and with a team.

Classical music online has been sidelined a bit. It makes a lot of sense when you place it in a historical perspective: a lot has changed in recent years. The web’s demographic skews older now. You can notice this by counting the number of family members on Facebook. The internet used to be something most adults would just use for work, so if you were building entertainment services, you targeted the young, early adopter demographic. That’s pop music, rock, electronic, hiphop, etc. Classical was there, sure, but Spotify wasn’t designed around it, iTunes wasn’t, YouTube wasn’t.

Now we’re actually reaching a new phase for music online. The streaming foundation has been built. Streaming is going mainstream. The platforms from the 2007–2009 wave are maturing and looking beyond their original early adopter audiences… So we’re going to see a lot of early adopters that are no longer properly served. They’re going to migrate and look for new homes. A very important segment there, one that has always been underserved, is classical music fans. And now, this niche audience is sizeable enough to actually build a service around.

Why? Well, the internet has changed since the last large wave of music startups. Mobile is becoming the default way people connect to the web. For adults, this has made the web less of a thing for ‘work’ and has made entertainment more accessible. Connected environments make it easy to send your mobile audio to your home hi-fi set or car speakers. The number of people on the internet has more than doubled.

This makes the niche play so much more viable than just a few years ago. It has to be done with love, care, and a very good understanding of whose problem you’re trying to solve (and what that problem is). IDAGIO has exactly the right brilliant minds in place to pull this off, and I’m flattered that in two weeks’ time, I’ll get to spend 2,000 hours a year with them.

What happens to the agency?

I’ll be winding down the agency side of MxTxF. This means I’m not taking on any more clients, but I’m happy to refer you to great people I know. I’ll keep on some longer-term projects that take just a couple of hours per week, to bring them to completion.

What happens to the newsletter?

The newsletter goes on! I get a lot of personal fulfilment out of it. The agency was born out of the newsletter, so who knows what more it will spawn. I’m actually figuring out a way to add audio and video content to the mix. I expect Midem and Sónar+D next June will be pilots for that. Berlin is a great place for music tech, so if anything, I hope the newsletter will only get more interesting as time goes on.

Besides the personal fulfilment, it allows me to be in touch with this wonderful community, to meet fascinating people, and occasionally to help organise a panel and bring some of my favourite minds into the same room at the same time.

If you’d like to support the newsletter, you can help me out on Patreon. You can become a patron of the newsletter — with your support, I can add extra resources to the newsletter, which will let me push the content to the next level (high on the list: a decent camera).

Elgar making an early recording of the work in 1920. Those pipes are acoustic recording horns, which funneled sound to a diaphragm that vibrated a cutting stylus, directly turning sound waves into a material recording.

I leave you with Edward Elgar’s Cello Concerto in E Minor, Op. 85, which I discovered as a student, listening to the brilliant Szamár Madár by Venetian Snares in which it is sampled.

▶️ Cello Concerto in E Minor, Op. 85

You can listen to the work, in full, on IDAGIO.

I’d love to hear about your favourite works and recordings. Feel free to email me on bg@idagio.com, with a link, and tell me what I should listen for.

Music for the Snapchat generation: conceptualizing Music Stories

Whether you’ve ever used Snapchat or not, you have felt the influence of the social app’s design choices. How will it shape the future of music?

Snapchat is perhaps best known for its photo filters

Snapchat created something called ‘Stories’. Stories are composed of photos and short videos that stay available for 24 hours. They allow people to get a look into other people’s days, including celebrities’. The feature has been shamelessly copied by Facebook and integrated into Instagram, but the low-barrier channel-flicking content format is now seeing integration in unexpected places.

Forbes launched Cards, Huffington Post launched storybooks, and Medium launched Series. This led David Emery, VP Global Marketing Strategy of Kobalt Label Services, to ask the question: what will the Snapchat for music look like?

I decided to take a stab at the challenge and conceptualize how people may interact with music in the future.

How people engage with content

I specifically looked at Soundcloud, Instagram, and Tinder for some of the most innovative and influential design choices for navigating, sharing, and engaging with content. Soundcloud for the music, Instagram for visuals, and Tinder for how it lets people sift through ‘content’. I apologize in advance for all the times I’m going to refer to people on Tinder as ‘content’, but that’s the most effective way to approach Tinder for the sake of this article.

Learning from Soundcloud

One key strength of Soundcloud is that every time you open the app or web client there’s new content for you. Either from the artists you follow, through its Explore feature, or through personalized recommendations. People should be able to check out content as soon as they open the app.

Text is easy to engage with: you can copy the parts you want to comment on, quote it, and comment. With audio this is harder. Soundcloud lets people comment on the timeline of tracks, which makes it much more fun to engage with content. YouTube solves this problem by letting people put time tags in comments.
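Timed comments like Soundcloud’s can be modelled very simply: each comment carries an anchor position on the track’s timeline, and the player surfaces nearby comments as the playhead passes. A minimal sketch in Python (all class and method names here are hypothetical, not any real API):

```python
from dataclasses import dataclass, field

@dataclass
class TimedComment:
    user: str
    text: str
    position_ms: int  # where on the track's timeline the comment is anchored

@dataclass
class Track:
    title: str
    duration_ms: int
    comments: list = field(default_factory=list)

    def add_comment(self, user, text, position_ms):
        # Clamp the anchor so a comment can never point past the end of the track
        position_ms = max(0, min(position_ms, self.duration_ms))
        self.comments.append(TimedComment(user, text, position_ms))

    def comments_at(self, position_ms, window_ms=1000):
        # Return comments anchored within window_ms of the playhead,
        # e.g. to pop them up while the track plays
        return [c for c in self.comments
                if abs(c.position_ms - position_ms) <= window_ms]
```

YouTube’s time tags in comments amount to the same data model, just entered as text instead of by clicking the waveform.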

If you really love the content, you can repost it to your network. This makes the service attractive to content creators, but also to fans, because the feature gives them a way to express themselves and build up their profiles without actually having to create music themselves. Compare this to Spotify, where the barrier to build up your profile as a user is much higher due to the energy that you have to put into creating (and maintaining) playlists.

Recommendations mean that people can jump in, hit play, and stop thinking. Soundcloud is one of the few music services that seems to have found a great balance between very active types of behaviour and more passive modes.

Learning from Instagram

There’s a reason why I’m highlighting Instagram instead of Snapchat: Instagram has two modes of creation and navigation. You can either scroll down your main feed, where people typically only post their best content, or tap one of the stories at the top and watch a feed of Snapchat-like Stories. Tap to skip!

Instagram makes it really easy to create and navigate through content. Stories’ ephemeral quality reduces the barrier to sharing moments (creating) and makes people worry less that they’re ‘oversharing’. Snapchat’s filters, which Instagram hasn’t been able to clone well (yet), make it easy to create fun content. People open up their camera, see what filters are available, and create something funny. No effort, and it’s still fun for their friends or followers to watch.

Learning from Tinder

The brutal nature of dating services is that profiles (people) are content, which also means that the majority of users will not be interested in the majority of content offered on the service. So you can do two things: make going through content as effortless as possible and build a recommendation engine which delivers the most relevant content to users. Tinder’s focus on the former made them the addictive dating app they are today.

Quickly liking and disliking content is like a bookmarking function which also helps to feed information to recommendation algorithms.

If you really want to dive deeper into a piece of content, you can tap to expand it (open profile), but basically the app’s figured out a great way to present huge amounts of content to people, of which the majority is ‘irrelevant’, and make it engaging to quickly navigate through it.

Must haves

The key qualities of social content apps right now are a high volume of content, easy creation and interactivity, and fast navigation. Bookmarking and reposting allow users to express themselves with little effort.

Breaking it down

Content is the most important feature for the end user. There are already a lot of good services for accessing large catalogues, diving deep, and searching for specific content; Music Stories should not try to compete with that. Instead, it is a new form of media, one that needs to be so engaging that it will affect the creative decisions of artists.

Soundcloud’s feed is a good example, but so is Snapchat’s main Stories screen. Both show the user a variety of content that they can engage with immediately by hitting the play button or by tapping on a profile image.

The content in the app needs to be bite-size so users can get a quick idea of it immediately and decide whether they like it or not. If yes, they should be able to go deeper (e.g. Tinder’s ‘tap to expand’) or interact, for example by reposting. If not, they need to be able to skip and move on.

When a user has an empty content feed, you can serve recommendations. When a user has already gone through all new content, you should invite them to create something.
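That fallback logic is simple enough to sketch directly. A hypothetical helper, not any real app’s code:

```python
def next_screen(new_stories, recommendations):
    """Decide what to show when the user opens the app:
    fresh content first, then recommendations, then a
    prompt to create something themselves."""
    if new_stories:
        return ("feed", new_stories)
    if recommendations:
        return ("recommendations", recommendations)
    return ("create_prompt", [])
```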

You want people to be able to lean back, but ideally you’ll pull people into your app a few times a day and get them to browse through some fresh content. To get them to re-open the app, there needs to be meaningful interaction. That can come in the form of swipes, comments, or remixing.

One of the cool things about Snapchat is that you can discover new filters through your friends. Think:

“Woah, you can be Harry Potter? I want to be Harry Potter, too!”

So if we extend that to Music Stories, creating some music idea needs to be as simple as making yourself look like Harry Potter or face-swapping with a painting or statue in a museum.

Snapchat is why millennials visit museums. (jk)

This means that artists should be able to add music to the app in a way that allows people to remix it, to make it their own. All remixes can stay linked to the original. You could even track a remix of a remix of a remix in the same way you can see repost-chains on Tumblr.
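Keeping remixes linked to their originals only requires each piece of content to store a reference to its parent; walking those references back recovers the whole chain, Tumblr-style. A sketch in Python (the names are illustrative, not a real API):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MusicStory:
    author: str
    title: str
    parent: Optional["MusicStory"] = None  # the story this one remixes, if any

    def lineage(self):
        """Walk the remix chain back to the original, most recent first."""
        chain, node = [], self
        while node is not None:
            chain.append(node)
            node = node.parent
        return chain

# A remix of a remix stays linked to the original:
original = MusicStory("artist", "SexyBack beat")
remix1 = MusicStory("fan_a", "SexyBack (kitchen-sounds mix)", parent=original)
remix2 = MusicStory("fan_b", "SexyBack (pitched vocals)", parent=remix1)
```

Calling `remix2.lineage()` yields the full chain down to the artist’s original, which is what would let the app credit (and surface) the source of any viral remix.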

How do you make it easy to create and to interact with music?

That’s the biggest challenge. People are shy or may not feel creative. You could let them use images or video (like Musical.ly), or let them replace one of the samples in the beat with a sound from their environment (imagine replacing the “yeah” from Justin Timberlake‘s SexyBack with your own sound), or let them play with the pitch of the vocals.

Options need to be limited, easy-to-understand and manipulate, and inviting. It should be as simple as swiping through Snapchat filter options.

Through creation and interactivity, users build up a profile to show off their music identity. Content is ephemeral, unless you choose differently (like on Instagram). I’d go for ephemeral by default and then give users the option to ‘add to profile’ once content reaches a certain engagement threshold. This will need a lot of tweaking and testing to get right.

Interactions are not ephemeral. Reposts go straight to profile, until you undo them.

Stories are all about being able to jump through content quickly. Tinder’s Like / Dislike function could work in Music Stories as a ‘skip’ and ‘bookmark’ function. By letting people bookmark stuff they’ll have content to come back to when they’re in a more passive mode. Perhaps an initial Like would send music to a personal inbox which stays available for a limited time, then when you Like content that’s in that inbox it gets shared to your profile, or saved in some other manner.
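That swipe flow (skip, bookmark to a time-limited inbox, then a second Like to save to the profile) can be sketched as a tiny state machine. Everything here is hypothetical, and the 48-hour inbox lifetime is an arbitrary assumption:

```python
import time

INBOX_TTL = 48 * 3600  # assumption: bookmarks expire after 48 hours

class UserLibrary:
    def __init__(self):
        self.inbox = {}      # story_id -> time it was bookmarked
        self.profile = set()  # stories saved permanently

    def swipe(self, story_id, liked, now=None):
        """First Like bookmarks a story to the inbox; a Like on an
        inbox item promotes it to the profile. Dislikes just skip."""
        now = now if now is not None else time.time()
        self._expire(now)
        if not liked:
            return "skipped"
        if story_id in self.inbox:
            del self.inbox[story_id]
            self.profile.add(story_id)
            return "saved_to_profile"
        self.inbox[story_id] = now
        return "bookmarked"

    def _expire(self, now):
        # Drop inbox items older than the TTL -- the ephemeral part
        self.inbox = {s: t for s, t in self.inbox.items()
                      if now - t <= INBOX_TTL}
```

The point of the two-step Like is that a quick swipe stays cheap, while anything a user cares enough about to Like twice earns a permanent place on their profile.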

Music Stories should NOT be a Tinder for Music. Tinder’s strength is to let users navigate through a lot of content that doesn’t appeal to them, while making the interaction interesting. It’s an interesting model that manages to create value from content that may be irrelevant to some users.

Translating to features

The next steps are to start translating the concept into features. This means user stories (what you want users to be able to do with the app) need to be articulated clearly. Mock ups of specific interactions need to be drawn and tested with audiences. Challenges need to be considered, like the classic issue of getting people to start creating content when there’s no audience in the app yet (Instagram solved this by letting people share content to other social networks).

Now I invite YOU to take this challenge and develop the vision for Music Stories.

(Don’t forget to read David Emery’s original post, which prompted me to write this piece)

 


When augmented reality converges with AI and the Internet of Things

The confluence of augmented reality, artificial intelligence, and the Internet of Things is rapidly giving rise to a new digital reality.

Remember when people said mobile was going to take over?

Well, we’re there. Some of the biggest brands in our world are totally mobile: Instagram, Snapchat, Uber. 84% (!) of Facebook’s ad revenue now comes from mobile.

And mobile will, sooner or later, be replaced by augmented reality devices, and it will look nothing like Google Glass.

Google Glass
Not the future of augmented reality.

Why some predictions fail

When we view trends in technology in isolation, it’s inevitable that we misunderstand them. We freeze time, take a trend, and project its future into a society that looks almost exactly like today’s.

Past predictions about the future
Almost.

This drains topics of their substance and replaces it with hype. It causes smart people to ignore them, while easily excited entrepreneurs jump on the perceived opportunity with little to no understanding of it. Three such domains right now are blockchain, messaging bots, and virtual reality, although I count myself lucky to know a lot of brilliant people in these areas, too.

What I’m trying to say is: just because it’s hyped, doesn’t mean it doesn’t deserve your attention. Don’t believe the hype, and dig deeper.

The great convergence

In order to understand the significance of a lot of today’s hype-surrounded topics, you have to link them. Artificial intelligence, smart homes & the ‘Internet of Things’, and augmented reality will all click together seamlessly a decade from now.

And that shift is already well underway.

Artificial intelligence

The first time I heard about AI was as a kid in the 90s. The context: video games. I heard that non-playable characters (NPCs) or ‘bots’ would have scripts that learned from my behaviour, so that they’d get better at defeating me. That seemed amazing, but their behaviour remained predictable.

In recent years, there have been big advances in artificial intelligence. This has a lot to do with the availability of large data sets that can be used to train AI. A connected world is a quantified world and data sets are continuously updated. This is useful for training algorithms that are capable of learning.

This is also what has given rise to the whole chatbot explosion right now. Our user interfaces are changing: instead of doing things ourselves, explicitly, AI can be trained to interpret our requests or even predict and anticipate them.

Conversational interfaces sucked 15 years ago. They came with a booklet. You had to memorize all the voice commands. You had to train the interface to get used to your voice… Why not just use a remote control? Or a mouse & keyboard? But in the future, getting things done by tapping on our screens may look as archaic as it would be to do everything from a command-line interface (think MS-DOS).

XKCD Sudo make me a sandwich
There are certain benefits to command-line interfaces… (xkcd)

So, right now we see all the tech giants diving into conversational interfaces (Google Home, Amazon Alexa, Apple Siri, Facebook Messenger, and Microsoft, err… Tay?) and in many cases opening up APIs to let external developers build apps for them. That’s right: chatbots are APPS that live inside or on top of conversational platforms.

So we get new design disciplines: conversational interfaces, and ‘zero UI’ which refers to voice-based interfaces. Besides developing logical conversation structures, integrating AI, and anticipating users’ actions, a lot of design effort also goes into the personality of these interfaces.

But conversational interfaces are awkward, right? It’s one of the things that made people uncomfortable with Google Glass: issuing voice commands in public. Optimists argued it would become normalized, just like talking to a bluetooth headset. Yet currently only 6% of people who use voice assistants ever do so in public… But where we’re going, we won’t need voice commands. At least not as many.

The Internet of Things

There are still a lot of security concerns around littering our lives with smart devices: from vending machines in our offices, to refrigerators in our homes, to self-driving cars… But it seems to be an unstoppable march, with Amazon (Alexa) and Google (Home) intensifying the battle for the living room last year:

Let’s converge this with the trend of artificial intelligence and the advances made in that domain. Instead of having the 2016 version of voice-controlled devices in our homes and work environments, these devices’ software will develop to the point where they gain a strong sense of context. Through understanding acoustics, they can gain spatial awareness. If that doesn’t do it, they could use WiFi signals like radar to understand what’s going on. Let’s not forget cameras, too.

Your smart device knows what’s in the fridge before you do and what the weather is before you even wake up; it may even spot warning signs about your health before you perceive them yourself (smart toilets are real). And it can use really large data sets to help us with decision-making.

And that’s the big thing: our connected devices are always plugged into the digital layer of our reality, even when we’re not interacting with them. While we may think we’re ‘offline’ when not near our laptops, we have started to look at the world through the lens of our digital realities. We’re acutely aware of the fact that we can photograph things and share them to Instagram or Facebook, even if we haven’t used the apps in the last 24 hours. Similarly, we go places without familiarizing ourselves with the layout of the area, because we know we can just open Google Maps any time. We are online, even when we’re offline.

Your connected home will be excellent at anticipating your desires and behaviour. It’s in that context that augmented reality will reach maturity.

Google Home

Augmented reality

You’ve probably already been using AR. For a thorough take on the trend, go read my piece on how augmented reality is overtaking mobile. Two current examples of popular augmented reality apps: Snapchat and Pokémon Go. The latter is a great example of how you can design a virtual interaction layer for the physical world.

So the context in which you have to imagine augmented reality reaching maturity is a world in which our environments are smart and understand our intentions… in some cases predicting them before we even become aware of them.

Our smart environments will interact with our AR devices to pull up HUDs when we most need them. So we won’t have to issue awkward voice commands, because a lot of the time, it will already be taken care of.

Examples of HUDs in video games
Head-up displays (HUDs) have long been a staple of video games.

This means we won’t actually have to wear computers on our heads: the future of augmented reality can come through contact lenses rather than headsets.

But who actually wants to bother with that, right? What’s the point if you can already do everything you need right now? Perhaps you’re too young to remember, but that’s exactly what people said about mobile phones years ago. Even without contact lenses, all of these trends are underway now.

Augmented reality is an audiovisual medium, so if you want to prepare, spend some time learning about video game design and conversational interfaces, and get used to sticking your head in front of a camera.

There will be so many opportunities emerging on the way there, from experts on privacy and security (even political movements), to designing the experiences, to new personalities… because AR will have its own PewDiePie.

It’s why I just bought a mic and am figuring out a way to add audiovisual content to the mix of what I produce for MUSIC x TECH x FUTURE. Not to be the next PewDiePie, but to be able to embrace mediums that will extend into trends that will shape our digital landscapes for the next 20 years. More on that soon.

And if you’re reading this and you’re in music, then you’re in luck:
People already use music to augment their reality.

More on augmented reality by me on the Synchtank blog:
Projecting Trends: Augmented Reality is Overcoming its Hurdles to Overtake Mobile.