Does Apple’s lossless streaming move undermine fairer subscription prices?

Apple just announced that it will bring spatial and lossless audio to all Apple Music subscribers at no extra cost, starting in June. Seemingly in response, Amazon announced that it will fold its lossless quality tier into the standard Amazon Music subscription. Lossless quality music now costs $9.99 per month – the standard subscription price.

Amazon & Apple are not music companies

Neither Amazon nor Apple needs to make money from its music business. They use music for deeper ecosystem tie-in and can afford to treat it as a loss leader. Not even considering Apple’s iPhone, App Store, or MacBook businesses: Apple’s revenue from AirPods alone equals the combined revenue of Spotify, Twitter, Snap, and Shopify (2019).

AirPods make more money than Spotify, Twitter, Snap, and Shopify combined

Another analyst puts the 2019 revenue for AirPods at $7.5 billion rather than $12 billion. Still enormous. AirPods are becoming a platform. Apple has played this game before: with the iTunes Store it sought to get more people on the iPod, which created a consumer lock-in that extended to the iPhone and the App Store. Steve Jobs’ deal terms for iTunes also had a profound effect on the economics of music – laying the foundation for many of today’s discussions.

Unit Sales Out of the Gate (Above Avalon)

Lossless as a loss leader

Unless Apple and Amazon signed some highly unusual deals with labels, lossless streaming comes at a higher cost than standard quality. That means that, for now, Apple and Amazon are choosing to eat the cost in order to tie more people into their ecosystems. Amazon was criticized for a similar move in 2011, when it subsidized sales of Lady Gaga’s album Born This Way by discounting it to $0.99:

“The digital retailer used the album as a loss leader to promote their Cloud Drive storage service and paid Gaga’s label full wholesale price for each album sold.”

Apple has been taking aim at Spotify since the launch of Apple Music. That started with rhetoric around how human curation is better than algorithms. More recently it took the form of a letter to artists about Apple Music’s royalty rates. Spotify’s antitrust complaints in the EU about Apple’s App Store practices mean Apple faces fines as high as $27 billion. Spotify has announced a lossless tier coming later this year, and most people assumed it would come at an extra cost. Apple’s decision to use its $200B war chest to eat the cost of lossless quality audio is very much a move against Spotify.

Growing the pie – undermined?

Spotify had the courage to move first and start raising the prices of its existing tiers. Streaming subscription prices have long been stuck at the same level, losing 26% of their value to inflation. The market has matured enough to support higher prices, and that’s something that needs to be normalized, much like Netflix’s price hikes.
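To make the inflation point concrete, here is a minimal sketch of the arithmetic. The cumulative inflation figure is an assumption chosen for illustration (roughly a decade of a frozen $9.99 price); it is not an official statistic, but it shows how a ~35% rise in prices erodes about 26% of a frozen subscription fee’s purchasing power:

```python
# Purchasing power of a price frozen in nominal terms.
# The cumulative inflation figure below is an illustrative assumption.
def real_value(nominal_price, cumulative_inflation):
    """Real value of a nominal price after cumulative inflation."""
    return nominal_price / (1 + cumulative_inflation)

launch_price = 9.99          # USD: streaming's long-standing monthly price
cumulative_inflation = 0.35  # assumed ~35% cumulative inflation

adjusted = real_value(launch_price, cumulative_inflation)
loss = 1 - adjusted / launch_price
print(f"${adjusted:.2f} of original purchasing power ({loss:.0%} lost)")
```

In other words, a subscriber paying the same $9.99 today is effectively paying about a quarter less than at launch, which is the gap a price hike would close.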

Cover image for Here’s How Spotify Can Fix Its Songwriter Woes (Hint: It’s All About Pricing)

Apple & Amazon’s strategy puts that at risk. Two questions to ponder: is the music business currently sustainable, with so many companies relying on revenue from streaming services that run at a loss and are subsidized by tech giants or investors? And can this digital music landscape become sustainable without asking consumers for a fairer price?

Music Ally’s Stuart Dredge has an optimistic take:

“Perhaps hi-res music’s true value in streaming will be to enable the big DSPs to charge all their subscribers another dollar or two a month, rather than just to persuade a small fraction of them to pay five dollars more a month. If that strategy pays off, today’s news will have been a positive moment indeed.”

I’m less optimistic: if this were the strategy, they would have paired the news with a price hike. This is about ecosystem tie-in and hitting Spotify where it hurts, in a way that’s likely to undermine efforts to normalize fairer subscription pricing.

What if iTunes didn’t happen the way it did?

We all love to think “what if…”

What if Napster had managed to get its legal issues resolved? Would there be a Spotify now? What ecosystem would have emerged?

Last week I listened to a podcast interview between Tim Ferriss and Tony Fadell (“the father of the iPod”). They went into a piece of music tech history I wasn’t familiar with. It turns out iTunes launched as a somewhat re-engineered version of software from a startup Apple acquired. That startup was called SoundJam, and it made music software that ran on Macs and could sync libraries with Rio music players. There’s a screenshot of it below; it reminds me a bit of WinAmp, which I avidly used until Spotify came around. Note the chrome UI element, which was characteristic of iTunes for a long time.

[Image: SoundJam screenshot]

But there was another company Apple was looking into acquiring: Panic, developer of a player named Audion. Also similar to WinAmp, it was more feature-rich than SoundJam, counting skins and visualizations among its features.

[Image: Audion screenshot]

Audion never got acquired by Apple, because the two companies never met. The Audion team was already in talks with AOL and wanted to bring AOL and Apple together for a single meeting. That meeting got canceled when AOL couldn’t make it, and that was the end of that.

The team behind SoundJam became the first developers to work on iTunes, and one of SoundJam’s creators, after serving as lead developer for iTunes, is now Apple’s VP of Consumer Applications.

Every product has a philosophy behind it, and sometimes that philosophy can change the interfaces of a whole space. Look at how Tinder changed dating with its left-right swipe interface: not only did a newcomer like Bumble adopt it, but so did the incumbent OkCupid. Or take Snapchat and the way its format influenced Instagram Stories and TikTok. This happens in music too, where some of the biggest influences can be traced back to IRC and Napster.

I think iTunes’ legacy is playlists. It really put the playlist front and center, which later on was also at the base of early Spotify. Spotify initially had no way to save artists or albums: you could star tracks and drag stuff into playlists. That was it.


It makes me so curious: if Apple had acquired Audion instead of SoundJam, would iTunes have been playlist-centric? Would the unbundling of the album have come about in the same way? Would we have the same type of ‘playlist economy’ as we see now?

If you’re curious to see what iTunes looked like upon launch, here’s a video of Steve Jobs demoing it (from 4:32 – excuse the pixels, we’re digging deep into YouTube’s archives):

Another obscure bit of Apple / iTunes history: watch Steve Jobs present the Motorola iTunes phone.

How will we remember bands when interfaces are voice-controlled?

I have phrased the above question as a problem for listeners, but this is a much bigger problem for artists.

The last few weeks have been filled with big news for those closely following voice interfaces. Amazon just announced a bunch of new devices that use its voice assistant Alexa, including a cheaper version of the Echo and a new Echo Plus. Google has upgraded its voice assistant and included it in new headphones that can automatically translate what people are saying, alongside a bunch of other devices that frankly look more exciting than Apple’s. And to top it all off, multi-room hi-fi maker Sonos has just integrated Alexa into its speakers.

The problem in the title is actually easily solved for listeners: you can simply ask what’s playing. But you can’t be bothered to ask every other song. So this problem matters much more for the artist than for the listener.

If you haven’t used these devices yet, you may not be aware of some of the challenges, but here they are:

  1. It’s already hard to be remembered – how will people remember you when they don’t even see your name? On our phones or laptops, we occasionally see what’s playing. When we select a playlist, we often see which artists are on there. Something may stick. When we ask Alexa to play Spotify’s RapCaviar playlist, we get no clues about what’s playing. It’s basically the same as radio, except radio at least has DJs who tell you what’s playing. Any music or artist that you don’t care to Shazam will be forgotten.
  2. How do you stay top of mind enough for people to replay you? People often start playing music without looking at their phones or music libraries. This means they request what’s top of mind: artists they remember in that moment, or big brands in music and playlists, such as the aforementioned Spotify playlist, Majestic Casual, or Diplo & Friends.
  3. How do you compete with ‘functional music’? The most popular ‘music’ apps on Alexa are all kinds of sleep and meditation sound apps. This list excludes Spotify and other music services, due to a deeper integration with Alexa, but it’s telling: people use these voice interfaces to request music to augment specific activities. Sleeping, bathing, meditating, cooking, whatever.

There are great solutions to these problems. And they’re not hard to figure out (people in hiphop have been shouting their name and their label’s name on tracks for decades).

I may do a follow-up on tactics and strategy for the age of “zero UI”, when the user interface is mostly controlled by voice and artificial intelligence, but for now, I’d love to hear about what you think. Ping me on Twitter: @basgras.

Painting: Wojtek Siudmak – “Le regard gourmand”

The next 3 interfaces for music’s near future

Our changing media reality means everyone in music will have to come to grips with three important new trends.

Understanding the music business means understanding how people access, discover, and continuously listen to music. That used to happen through the record player, cassette player, radio, and CD player; now it increasingly happens on our computers and smartphones. First by playing downloads in media players like Winamp, Musicmatch Jukebox, or iTunes, and now mostly via streaming services like Spotify and Apple Music, but also YouTube.

Whenever the interface for music changes, the rules of the game change. New challenges emerge, new players get to enter the space, and those who best leverage the new media reality gain a significant lead over competing services or companies – think of Spinnin’ Records’ early YouTube success.

What is a media reality?

I was recently talking with Gigi Johnson, the Executive Director of the UCLA Center for Music Innovation, for their podcast, and as we were discussing innovation, I wanted to point out two different types of it. The first is technological innovation – invention – but you don’t have to be a scientist or an inventor to be innovative.

When these innovations get rolled out, they create new realities. Peer-to-peer technology helped Spotify with the distribution of music early on (one of their lead engineers is Ludvig Strigeus, creator of the BitTorrent client µTorrent), and for this to work, Spotify needed a media reality in which computers were linked in networks with decent bandwidth (i.e. the internet).

So that’s the second type of innovation: leveraging a reality created by the proliferation of a certain technology. Studios didn’t have to invent the television in order to dominate the medium. Facebook didn’t have to invent the world wide web.

A media reality is any reality in which innovation causes a shift to a new type of media. Our media reality is increasingly shifting towards smart assistants like Siri, an ‘internet of things’ (think smart home), and we’re creating, watching, and interacting through more high quality video than ever before.

Any new media reality brings with it new interfaces through which people interact with knowledge, their environment, friends, entertainment, and whatever else might be presented through these interfaces. So let’s look at the new interfaces everyone in music will have to deal with in the coming years.

Chatbots are the new apps

People don’t download as many apps as they used to, and it’s getting harder to get people to install one. According to comScore data, most smartphone users now download fewer than one app per month.

So, in dealing with this new media reality, you go where the audience is. Apparently that’s no longer app stores, but social networks and messaging apps. Some of the latter, most prominently Facebook Messenger, allow people to build chatbots, which are basically apps inside the messenger.

Companies like TransferWise, CNN, Adidas, Nike, and many airlines already have their own bots running on Messenger. In music, well-known examples of artist chatbots are those of Katy Perry and DJ Hardwell. Record Bird, a company specialized in surfacing new releases by artists you like, launched its own Messenger bot in 2016.

The challenge with chatbots is that designing a conversational interface is quite different from designing a visual user interface. Sometimes people won’t understand what’s going on and will request things from your bot that you hadn’t anticipated. You need to plan for such behaviour, since people cannot see the confines of the interface.
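One common way to plan for those unanticipated requests is an explicit fallback that restates what the bot can do, making the invisible confines of the interface visible. The sketch below is a deliberately minimal illustration with hypothetical intents and replies; it is not tied to Messenger or any real bot framework:

```python
# Minimal sketch of keyword-based intent routing with an explicit fallback.
# The intents and replies are hypothetical examples, not from any real bot.
def handle_message(text):
    intents = {
        "tour": "Next show: check the Tour page for dates and tickets.",
        "release": "The new single is out now on all streaming services.",
        "merch": "Merch ships worldwide from the web store.",
    }
    lowered = text.lower()
    for keyword, reply in intents.items():
        if keyword in lowered:
            return reply
    # Fallback: tell the user what the bot CAN do instead of failing silently.
    return ("Sorry, I didn't get that. You can ask me about: "
            + ", ".join(intents))

print(handle_message("When is the next tour date?"))
print(handle_message("Can you do my taxes?"))
```

Real chatbot platforms replace the keyword matching with trained intent classifiers, but the design principle is the same: every message that falls outside the known intents should land in a fallback that teaches the user the interface’s boundaries.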

Chatbots are set to improve a lot over time, as developments in machine learning and artificial intelligence will help the systems behind the interfaces to interpret what users may mean and come up with better answers.

VUIs: Alexa, play me music from… uhmm….

I’ve been living with an Amazon Echo for over a month, and together with my Philips Hue lamps it has embedded itself into my life to the extent that I nearly asked Alexa, Amazon’s voice assistant, to turn off the lights in a hotel room last weekend.

It’s been a pleasure to trade frequent returns to touch-based user interfaces for voice user interfaces (VUIs). I thought I’d feel awkward, but it’s great to quickly ask for weather updates, planned activities, or the time, change the music or the volume, turn the lights on or off, dim them, set alarms, etc., without having to grab my phone.

I also thought it would be awkward having friends over and interacting with it, but it turns into a type of play, with friends trying out all kinds of requests I had never even thought of, and finding out about new features I wasn’t aware of.

And there’s the challenge for artists and businesses.

As a user, there is no home screen. There is nothing to guide you. There is only what you remember, what’s top of mind. Which is why VUIs are sometimes referred to as ‘zero UI’.

I have hundreds of playlists on Spotify, but through Alexa I’ve only listened to around a dozen different playlists. When I feel like music that may or may not be contained inside one of my playlists, it’s easier to mentally navigate to an artist that plays music like that, than to remember the playlist. So you request the artist instead.

VUIs will make the branding of playlists crucial. For example, instead of asking Alexa to play hiphop from Spotify, I requested their RapCaviar playlist, because I felt the former query’s outcome would be too unpredictable. As the music plays, I’m less aware of artist names: I don’t see them anymore, and I hardly ever bother asking. For music composed by artificial intelligence, this could be a great opening into our music listening habits.

The VUI pairs well with the connected home, which is why tech giants like Google, Amazon, and Apple are all using music as the Trojan horse to get their home-controlling devices into our living rooms. They’re going to be the operating system for our houses, and that operating system will provide an invisible layer that we interact with through our voice.

Although many of the experiences through VUIs feel a bit limited currently, they’re supposed to get better over time (which is why Amazon calls their Alexa apps ‘skills’). And with AI improving and becoming more widespread, these skills will get better to the point that they can anticipate our intentions before we express them.

As voice-controlled user interfaces enter more of our lives, the question for artists, music companies, and startups is: how do we stand out when there is no visual component? How can you stay top of mind? How will people remember you?

Augmented reality

Google Glass was too early. Augmented reality will be nothing like it.

Instead of issuing awkward voice commands to a kind of head-mounted smartphone, augmented reality will take shape in a media reality of conversational interfaces in messaging apps and voice user interfaces, all part of connected smart environments powered by artificial intelligence.

You won’t have to issue requests, because you’ll see overlays with suggested actions that you can easily trigger. Voice commands are a last resort, and a sign of AI failing to predict your intent.

So what is music in that reality? In a way, we’re already there. Kids nowadays are not discovering music by watching professional video productions on MTV; they discover it because they see friends dancing to it on Musical.ly or because they applied a music-enabled Snapchat filter. We are making ourselves part of the narrative of the music, stepping into it and forwarding our version of it into the world. Music is behaving like internet memes, because it’s just so easy to remix now.

One way in which augmented reality is going to change music is that music will become ‘smart’. It will learn to understand our behaviour and our intentions, and adapt to them, just like other aspects of our lives will. Some of Amazon Alexa’s most popular skills already use music and sound to augment our experience.

This is in line with the trend of music listeners exhibiting an increasingly utilitarian orientation towards music: interacting with it not just for its aesthetic value, but also for its practical value, through playlists to study, focus, work out, clean the house, relax and drink coffee, etc.

As it becomes easier to manipulate music, and make ourselves part of the narrative, perhaps the creation of decent sounding music will become easier too. Just have a look at AI-powered music creation and mastering startups such as Jukedeck, Amper, and LANDR. More interestingly, check out Aitokaiku‘s Vimu, which lets you create videos with reactive music (the music reacts to what you film).

Imagine releasing songs in such a way that fans can interact and share them this way, but even better since you’ll be able to use all the data from the smart sensors in the environment.

Imagine being able to bring your song, or your avatar, into a space shared by a group of friends. You can be like Pokemon.

It’s hard to predict what music will look like, but it’s safe to say that the changes music went through since the proliferation of the recording as the default way to listen to music are nothing compared to what’s coming in the years ahead. Music is about to become a whole lot more intelligent.


For more on how interfaces change the way we interact with music, I’ve previously written about how the interface design choices of pirate filesharing services such as Napster influence music streaming services like Spotify to this day.

If you like the concept of media realities and would like to understand it better, I recommend spending some time with Marshall McLuhan’s work, as well as Timothy Leary’s perspective on our digital reality in the 90s.