11 startups innovating the future of music

Techstars Music just announced their first batch. A quick look at the selected startups.

It feels like we’re seeing a new wave of music startups. A lot of the excitement that marked the time around 2007–2010 is back in the air, and it’s great to see an acclaimed startup accelerator like Techstars dedicating a program to music.

As platforms from that era, like Spotify and SoundCloud, reach maturity and estrange early adopters, a new generation of music startups is emerging. Techstars Music just announced its first batch, so I wanted to highlight each of the companies — what these startups do may well end up profoundly shaping the business of music in the years to come.

Alphabetically:

Amper — ampermusic.com

A tool to create AI-composed music for videos and other professional content. Unfortunately I haven’t been able to test out the product yet, and their only demo video doesn’t reveal much. It seems like they’re working on something similar to Jukedeck, but possibly in a way where users have a higher degree of influence on the final outcome.

AI-composed music will be an important trend for years to come, and Amper’s team is impressive, including accomplished Hollywood sound designers and composers.

Hurdl — hurdl.com

LED wearables to enable interactive audience experiences at live events. They let artists light up entire audiences, or just one fan. Their pitch deck suggests lighting up people based on gender, Spotify top fans, or sports team preference. It also allows for direct messaging to fans during or after shows.

Hurdl Ecosystem

JAAK — jaak.io

I first heard about JAAK when I met the founders at Music Tech Fest’s blockchain roundtable in Berlin last year. They’re using blockchain technology to connect music, metadata, and rights information. They’ve been working on pilots with Viacom, PRS for Music, and PPL. One of their founders is a core developer for Ethereum and is behind Swarm, a distributed storage platform that creates a kind of peer-to-peer web instead of a server-centric one.

Pacemaker — pacemaker.net

I’ve actually urged people to use this app in a recent piece about being an early adopter. It uses smart algorithms to turn your Spotify playlists into DJ mixes. You can then edit transitions and play around with effects. It also has a social component: you can comment on and like other people’s mixes in the app.

There’s a DJ by the name of bas on Pacemaker who has some particularly awesome mixes, so be sure to follow him 😉

Pacemaker apps

With Techstars’ support, I hope they figure out how to reach exponential growth. I think it’s a really good time to start using the app and build a profile for yourself, so you can benefit optimally when that growth arrives.

(Personal wishlist: more editing controls on transitions on mobile, particularly exact timing, rather than snapping to markers 😇)

Interactivity and adaptivity of music is an important trend, and I see Pacemaker as one of the companies with a great chance of becoming an early leader in this domain.

Pippa — pippa.io

The pitch on Pippa’s homepage differs a bit from what I’ve read elsewhere, so I assume they’re pivoting. They currently present themselves as a platform that helps you distribute your podcasts and analyze the resulting listening data. What I’ve read elsewhere sounds very promising:

“Pippa makes podcasting simpler, smarter, and more profitable by enabling targeted ads to be delivered dynamically to listeners. Pippa technology can also be used to remove ads from podcasts, enabling future subscription revenue products.”

PopGun — wearepopgun.com

Another startup specializing in AI-composed music. PopGun uses deep learning to create original pop music. One of its founders is well-known in music tech circles, having previously founded We Are Hunted, which sold to Twitter and eventually became Twitter Music.

I’ve been having some great conversations about creative AI recently, particularly about the human element: some argue computers will never be capable of creativity, but since we apply our own creativity in how we perceive the world around us anyway, I believe that opens up the possibility of a future in which AI-created art becomes mainstream.

Robin — tryrobin.co

The pitch:

“Robin is a personal concierge for concerts and live events. Robin reserves and secures tickets on behalf of fans while providing real-time demand data to artists and event organizers.”

It’s an interesting proposition in the age of secondary ticketing… They may be met with some skepticism, but letting fans personally connect to a tool like this, which then secures tickets before scalpers can get to them, seems like a good addition to the ticketing landscape.

They’re currently available in the US and Canada, and will expand to the UK in early 2017.

Shimmur — shimmur.com

This may be the app I’m most excited about in this batch. Shimmur is a social network where fans and ‘influencers’ connect. Its current user base consists largely of Musical.ly stars and their fans, so the demographic is very young.

Instead of having the artist communicate to fans, Shimmur turns it around: tribes of fans create posts to which the influencers react. It’s very appealing, and the social competition that may emerge in vying for influencers’ attention could create interesting business models.

Shimmur
Concepts popularized by Reddit AMAs can be found in Shimmur

There are also some interesting concepts that could be introduced from gaming, like vanity items, rival goods, and quests.

Hope to see someone finally get this right.

Superpowered — superpowered.com

A mobile audio engine that provides low-latency audio for games, VR, and interactive audio apps. It’s apparently already used by DJ app Crossfader, Uber, and a number of games and other apps, together totalling hundreds of millions of app installs.

Syncspot — syncspot.net

Syncspot uses an “AI assistant to create and fulfil free-gift media rewards for in-store promotions”. Their homepage lists a campaign that reminds me of Landmrk: users get a call to action to go to a certain location on the map (like a store) to receive a reward. Think Pokémon Go.

Weav — weav.io

This startup has been on my radar for a long time. It lets creators make adaptive music that recomposes itself in real time, based on whatever the user is doing. I’m a firm believer in music that adapts to the user’s context, and the way people currently use music to augment their moods shows the opportunity for adaptive audio.

They’ve built a tool for musicians to create this type of music, as well as an SDK for developers, so they can add a player to their apps which is capable of playing this type of media.
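Weav hasn’t published the technical details of its SDK as far as I know, so purely as an illustration of the adaptive idea: a player could map the listener’s activity (say, running cadence in steps per minute) to a playback rate, clamped to a musically acceptable range. Everything below — the function name, the 0.75–1.25 range — is my own hypothetical sketch, not Weav’s API.

```python
def playback_rate(steps_per_minute, base_bpm=120.0,
                  min_rate=0.75, max_rate=1.25):
    """Map a runner's cadence to a tempo multiplier for adaptive music.

    A cadence equal to the track's base BPM plays at normal speed;
    faster or slower cadences speed the music up or slow it down,
    clamped so the track still sounds musical.
    """
    rate = steps_per_minute / base_bpm
    return max(min_rate, min(max_rate, rate))

# A 120 spm jog matches a 120 BPM track exactly; a sprint gets clamped.
print(playback_rate(120))  # 1.0
print(playback_rate(180))  # 1.25 (clamped)
```

A real engine like Weav’s reportedly recomposes the music from stems rather than just time-stretching it, but the clamped mapping above captures the basic input-to-music loop.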

Weav

Fun fact: Weav is co-founded by one of the creators of Google Maps.

Best of luck to Techstars & all the startups.


When augmented reality converges with AI and the Internet of Things

The confluence of augmented reality, artificial intelligence, and the Internet of Things is rapidly giving rise to a new digital reality.

Remember when people said mobile was going to take over?

Well, we’re there. Some of the biggest brands in our world are totally mobile: Instagram, Snapchat, Uber. 84% (!) of Facebook’s ad revenue now comes from mobile.

And mobile will, sooner or later, be replaced by augmented reality devices, and it will look nothing like Google Glass.

Google Glass
Not the future of augmented reality.

Why some predictions fail

When you view technology trends in isolation, you inevitably end up misunderstanding them. What happens is that we freeze time, take a trend, and project its future onto a society that looks almost exactly like today’s.

Past predictions about the future
Almost.

This drains topics of their substance and replaces it with hype. It causes smart people to ignore them, while easily excited entrepreneurs jump on the perceived opportunity with little to no understanding of it. Three such domains right now are blockchain, messaging bots, and virtual reality, although I count myself lucky to know a lot of brilliant people in these areas, too.

What I’m trying to say is: just because it’s hyped, doesn’t mean it doesn’t deserve your attention. Don’t believe the hype, and dig deeper.

The great convergence

In order to understand the significance of a lot of today’s hype-surrounded topics, you have to link them. Artificial intelligence, smart homes & the ‘Internet of Things’, and augmented reality will all click together seamlessly a decade from now.

And that shift is already well underway.

Artificial intelligence

The first time I heard about AI was as a kid in the 90s. The context: video games. I heard that non-playable characters (NPCs) or ‘bots’ would have scripts that learned from my behaviour, so that they’d get better at defeating me. That seemed amazing, but their behaviour remained predictable.

In recent years, there have been big advances in artificial intelligence. This has a lot to do with the availability of large data sets that can be used to train AI. A connected world is a quantified world and data sets are continuously updated. This is useful for training algorithms that are capable of learning.

This is also what has given rise to the whole chatbot explosion right now. Our user interfaces are changing: instead of doing things ourselves, explicitly, AI can be trained to interpret our requests or even predict and anticipate them.

Conversational interfaces sucked 15 years ago. They came with a booklet. You had to memorize all the voice commands. You had to train the interface to get used to your voice… Why not just use a remote control? Or a mouse & keyboard? But in the future, getting things done by tapping on our screens may look as archaic as it would be to do everything from a command-line interface (think MS-DOS).

XKCD Sudo make me a sandwich
There are certain benefits to command-line interfaces… (xkcd)

So, right now we see all the tech giants diving into conversational interfaces (Google Home, Amazon Alexa, Apple Siri, Facebook Messenger, and Microsoft, err… Tay?) and in many cases opening up APIs to let external developers build apps for them. That’s right: chatbots are APPS that live inside or on top of conversational platforms.

So we get new design disciplines: conversational interfaces, and ‘zero UI’ which refers to voice-based interfaces. Besides developing logical conversation structures, integrating AI, and anticipating users’ actions, a lot of design effort also goes into the personality of these interfaces.

But conversational interfaces are awkward, right? It’s one of the things that made people uncomfortable with Google Glass: issuing voice commands in public. Optimists argued it would become normalized, just like talking to a bluetooth headset. Yet currently only 6% of people who use voice assistants ever do so in public… But where we’re going, we won’t need voice commands. At least not as many.

The Internet of Things

There are still a lot of security concerns around littering our lives with smart devices: from vending machines in our offices, to refrigerators in our homes, to self-driving cars… But it seems to be an unstoppable march, with Amazon (Alexa) and Google (Home) intensifying the battle for the living room last year:

Now converge this with artificial intelligence and the advances made in that domain. Instead of the 2016 version of voice-controlled devices in our homes and work environments, these devices’ software will develop to the point where they gain a strong sense of context. Through understanding acoustics, they can gain spatial awareness. If that doesn’t do it, they could use WiFi signals like radar to understand what’s going on. Let’s not forget cameras, too.

Your smart device knows what’s in the fridge before you do, what the weather is before you even wake up, it may even see warning signs about your health before you perceive them yourself (smart toilets are real). And it can use really large data sets to help us with decision-making.

And that’s the big thing: our connected devices are always plugged into the digital layer of our reality, even when we’re not interacting with them. While we may think we’re ‘offline’ when not near our laptops, we have started to look at the world through the lens of our digital realities. We’re acutely aware of the fact that we can photograph things and share them to Instagram or Facebook, even if we haven’t used the apps in the last 24 hours. Similarly, we go places without familiarizing ourselves with the layout of the area, because we know we can just open Google Maps any time. We are online, even when we’re offline.

Your connected home will be excellent at anticipating your desires and behaviour. It’s in that context that augmented reality will reach maturity.

Google Home

Augmented reality

You’ve probably already been using AR. For a thorough take on the trend, go read my piece on how augmented reality is overtaking mobile. Two current examples of popular augmented reality apps: Snapchat and Pokémon Go. The latter is a great example of how you can design a virtual interaction layer for the physical world.

So the context in which you have to imagine augmented reality reaching maturity is a world in which our environments are smart and understand our intentions… in some cases predicting them before we even become aware of them.

Our smart environments will interact with our AR device to pull up HUDs when we most need them. So we won’t have to do awkward voice commands, because a lot of the time, it will already be taken care of.

Examples of HUDs in video games
Head-up displays (HUDs) have long been a staple of video games.

This means we won’t actually have to wear computers on our heads: the future of augmented reality can come through contact lenses rather than headsets.

But who actually wants to bother with that, right? What’s the point if you can already do everything you need right now? Perhaps you’re too young to remember, but that’s exactly what people said about mobile phones years ago. Even without contact lenses, all of these trends are underway now.

Augmented reality is an audiovisual medium, so if you want to prepare, spend some time learning about video game design, conversational interfaces, and get used to sticking your head in front of a camera.

There will be so many opportunities emerging on the way there, from experts on privacy and security (even political movements), to designing the experiences, to new personalities… because AR will have its own PewDiePie.

It’s why I just bought a mic and am figuring out a way to add audiovisual content to the mix of what I produce for MUSIC x TECH x FUTURE. Not to be the next PewDiePie, but to be able to embrace mediums that will extend into trends that will shape our digital landscapes for the next 20 years. More on that soon.

And if you’re reading this and you’re in music, then you’re in luck:
People already use music to augment their reality.

More on augmented reality by me on the Synchtank blog:
Projecting Trends: Augmented Reality is Overcoming its Hurdles to Overtake Mobile.

Mood augmentation and non-static music

Why the next big innovation in music will change music itself — and how our moods are in the driver’s seat for that development.

Over the last half year, I’ve had the pleasure to publish two guest contributions in MUSIC x TECH x FUTURE about our changing relationship with music.

The first had Thiago R. Pinto pointing out how we now use music to augment our experiences, and how we have developed a utilitarian relationship with music.

Then last week, James Lynden shared his research into how Spotify affects mood, finding that people are mood-aware when they make choices on the service (emphasis mine):

Overall, mood is a vital aspect of participants’ behaviour on Spotify, and it seems that participants listen to music through the platform to manage or at least react to their moods. Yet the role of mood is normally implicit and unconscious in the participants’ listening.

Having developed music streaming products myself, like Fonoteka, when I was at Zvooq, I’m obviously very interested in this topic and what it means for the way we structure music experiences.

Another topic I love to think about is artificial intelligence and generative music, as well as adaptive and interactive music experiences. In particular, I’m interested in how non-static music experiences can be brought to a mass market. So when I saw the following finding (emphasis mine), things instantly clicked:

In the same way as we outsource some of our cognitive load to the computer (e.g. notes and reminders, calculators etc.) perhaps some of our emotional state could also be seen as being outsourced to the machine.

For the music industry, I think explicitly mood-based listening is an interesting, emerging consumption dynamic.

Mood augmentation is the best way for non-static music to reach a mass market

James is spot-on when he says mood-based listening is an emerging consumption dynamic. Taking a wider view: the way services construct music experiences also changes the way music is made.

The playlist economy is leading to longer albums, but also optimization of tracks to have lower skip rates in the first 30 seconds. This is nothing compared to the change music went through in the 20th century:

The proliferation of the record as the default way to listen to music meant that music became a consumer product. Something you could collect, like comic books, and something that could be manufactured at a steady flow. This reality gave music new characteristics:

  • Music became static by default: a song sounding exactly the same as all the times you’ve heard it before is a relatively new quality.
  • Music became a receiving experience: music lost its default participative quality. Before records, if you wanted to hear your favourite song, you had better be able to play it yourself, or have a friend or family member with a nice voice.
  • Music became increasingly individual: while communal experiences, like concerts, raves and festivals flourished, music also went through individualization. People listen to music from their own devices, often through their headphones.

Personalized music is the next step

I like my favourite artist for different reasons than my friend does. I connect to it differently. I listen to it at different moments. Our experience is already different, so why should the music not be more personalized?

I’ve argued before that features are more interesting to monetize than pure access to content. $10 per month for all the music in the world: and then?

The gaming industry has figured out a different model: give people access to the base game for free, and then charge them to unlock certain features. Examples of music apps that do this are Björk’s Biophilia as well as mixing app Pacemaker.

In the streaming landscape, TIDAL has recently given users a way to change the length and tempo of tracks. I’m surprised it wasn’t Spotify, since they have The Echo Nest team aboard, including Paul Lamere, who built the Infinite Jukebox (among many other great music hacks).

But it’s early days. And the real challenge in creating these experiences is that listeners don’t know they’re interested in them. As quoted earlier from James Lynden:

The role of mood is normally implicit and unconscious in the participants’ listening.

The most successful apps for generative music and soundscapes so far have been apps that generate sound to help you meditate or focus.

But as we seek to augment our human experience through nootropics and technology that improves our senses, it’s clear that music no longer has to be static by default.

Further reading: Moving Beyond the Static Music Experience.

5 Bots You’ll Love

Since launching its chatbot API last April, Facebook’s Messenger platform has already spawned 11,000 bots. Bots are popular because they allow brands to offer more personalized service to existing and potential customers. Instead of getting people to install an app or visit your website, customers can interact from the comfort of their preferred platform, whether that’s WhatsApp, Messenger, Twitter or something else.

Bots, basically automated scripts of varying complexity, are ushering in a new wave of user experience design. Here are some of my favourite bots.

AutoTLDR – Reddit

AutoTLDR bot

AutoTLDR is a bot on Reddit that automatically posts summaries of news articles in comment threads. tl;dr is internet slang for “too long, didn’t read” and is often used at the top or bottom of posts to give a one-line summary or conclusion of a longer text. AutoTLDR uses SMMRY’s API for shortening long texts.
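To get a feel for how a bot like this plugs into a summarizer, here’s a minimal sketch of building a request to SMMRY’s API. The GET parameter names (`SM_API_KEY`, `SM_URL`, `SM_LENGTH`) are as I recall them from SMMRY’s docs; treat them as unverified and check the current documentation before relying on this.

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://api.smmry.com/"

def smmry_request_url(api_key, article_url, sentence_count=7):
    """Build the request URL for SMMRY's summarization API."""
    params = {
        "SM_API_KEY": api_key,        # your SMMRY API key
        "SM_URL": article_url,        # article to summarize
        "SM_LENGTH": sentence_count,  # sentences in the summary
    }
    return API_ENDPOINT + "?" + urlencode(params)

# To actually fetch the summary (needs a valid key and network access):
#   urllib.request.urlopen(smmry_request_url(key, url)).read()
```

A Reddit bot would then post the returned summary as a comment via the Reddit API, and let the votes decide its fate.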

The key to its success is Reddit’s digital Darwinism of upvotes and downvotes. Good summaries by AutoTLDR can usually be found within the top five comments; if it summarizes poorly, you’re unlikely to come across its contribution.

Explaining the theory behind AutoTLDR bot.

Subreddit Simulator – Reddit

Subreddits on Reddit center around certain topics or types of content. Subreddit Simulator is a collection of bots that take material from other subreddits and, often quite randomly, create new posts based on it. Its most popular post draws from the “aww” subreddit and most likely combined two different posts to create this:

Rescued a stray cat

Check out other top posts here. Again, the reason why it works well is because of human curation. People closely follow Subreddit Simulator and upvote remarkable outcomes, like the above.

wayback_exe – Twitter

Remember when the internet had an intro tune? wayback_exe takes you back to the days of dial-up and fills your Twitter feed with regular screenshots of retro websites. By now, it’s basically art.

It uses the Internet Archive’s Wayback Machine, which has saved historic snapshots of websites.

old site 1

old site 2

pixelsorter – Twitter

If you’re into glitch art, you’ll love pixelsorter. It’s a bot that re-encodes images: tweet it an image and get a glitched-out version back. Sometimes it talks to other image bots like badpng, cga.graphics, BMPbug, Lowpoly Bot, or Arty Bots, with amazing algorithmic results.
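Pixelsorter’s exact algorithm isn’t public, but the classic pixel-sorting effect behind this kind of glitch art is simple: reorder each row of an image’s pixels by brightness. A minimal, dependency-free sketch (in practice you’d load and save the image with a library like Pillow):

```python
def luminance(pixel):
    """Perceptual brightness of an (r, g, b) tuple."""
    r, g, b = pixel
    return 0.299 * r + 0.587 * g + 0.114 * b

def pixel_sort(rows):
    """Sort every row of an image by brightness: the core glitch effect.

    `rows` is a list of rows, each row a list of (r, g, b) tuples.
    Returns a new image; the original is left untouched.
    """
    return [sorted(row, key=luminance) for row in rows]

# A 1x3 test image: red, black, dark grey -> black, grey, red.
image = [[(255, 0, 0), (0, 0, 0), (10, 10, 10)]]
print(pixel_sort(image)[0])  # [(0, 0, 0), (10, 10, 10), (255, 0, 0)]
```

Real pixel sorters usually sort only runs of pixels above a brightness threshold, which produces the characteristic streaks; sorting whole rows is the simplest variant of the idea.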


Generative bot – Twitter

Generative bot

Generative Bot is one of those bots that makes you realize algorithms can produce art that trumps 90% of all other art. It uses some quite advanced mathematics to create a new piece every 2 hours, seeding your Twitter feed with occasional computer-generated inspiration.
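Generative Bot’s actual mathematics aren’t published, so as a toy example of the genre: the sketch below renders a sine-wave interference pattern into a plain-text PGM image. The formula is my own placeholder, not the bot’s.

```python
import math

def generate(width, height, freq=0.15):
    """Render a sine-interference pattern as a grid of 0-255 gray values."""
    return [
        [int(127.5 * (1 + math.sin(x * freq) * math.cos(y * freq)))
         for x in range(width)]
        for y in range(height)
    ]

def save_pgm(grid, path):
    """Write the grid as a plain-text PGM image, viewable almost anywhere."""
    with open(path, "w") as f:
        f.write(f"P2\n{len(grid[0])} {len(grid)}\n255\n")
        for row in grid:
            f.write(" ".join(str(v) for v in row) + "\n")

art = generate(64, 64)
```

A bot wrapping this would re-seed the parameters every couple of hours and post the rendered image to Twitter.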

Want more inspiration? We previously wrote about DJ Hardwell’s bot.

What are your favourite bots? Ping me on Twitter.