Party Royale mode in Fortnite

What BITKRAFT’s recent investments suggest about music’s future and the metaverse

Forget the usual suspects: venture capital firm BITKRAFT is easily one of the most interesting funds to watch in the entertainment space. Since the start of June, they have participated in five funding rounds, putting over $44 million into companies pioneering possible futures for digital media.

With music mostly detached from its “real world” context of live gigs, it has become obvious that music’s virtual context of livestreams, virtual events, and online communities is set to shape tastes, genres and experiences. Professionals from across the industry, from labels to studios to artists, are increasingly involved in the virtual aspects of our culture.

So what do BITKRAFT’s recent investees enable? A look at three.

Koji

Co-founded by Dmitry Shapiro, who previously founded Veoh and served as CTO of MySpace Music, Koji is a tool that makes it easy to remix posts for social media.

The posts are shareable and interactive, allowing people to remix them using content from various platforms, so Koji sees them more like “mini-apps”:

“If you’ve experienced WeChat Mini Programs, Kojis are the cross-platform, standards-based, modern versions of that.”

The strategy appears to be to get other platforms to allow these interactive forms of media inside of them, similar to how most social media platforms now integrate Giphy to bring GIFs from the Giphy platform into your favourite social network.

That sets Koji apart from other remix platforms, like TikTok or the audiovisual mashup platform Coub, which emphasise the on-platform experience. Unlike TikTok, Coub is not a walled garden, but most of the activity related to the platform seems to happen inside the garden regardless.

Screenshots of Koji

What does it mean for music?

Remix culture has gone through multiple iterations and isn’t done yet. Since the start of the digital era, we’ve seen these important steps for music’s remix culture:

  • Anyone with a computer being able to acquire music production software at reasonable cost (through piracy or a purchase) and distribute their creations through networks and filesharing apps. For example, the rapper Benefit became an internet underground legend with a $5 mic and a $12 sound card.
  • As time went on, this development spawned mash-up culture, which moved from filesharing platforms over to the blogosphere.
  • SoundCloud emerged and made it even easier to follow and exchange with other producers around the world, spawning remix-heavy genre subcultures like Moombahton, ‘EDM Trap’, and ‘Cloudrap’.
  • Anyone with a mobile phone becoming able to produce, mix or remix media.
  • ‘Remix’ becoming a default interaction through the dynamics of Snapchat, Instagram Stories, Musical.ly and TikTok, as people use face filters, music, and various imagery as overlays to interact with friends and connect with new people.

Koji’s bet seems to be that there’s room for remixable media inside these platforms – think embedding a TikTok post (content) into an Instagram Story (context), but then being able to change elements of the content independently from context.

If this sounds vague, go play around with Koji: open one and hit the remix button.

Short version: we’ll see remixable content appear in countless contexts and will be able to move that content from one context (e.g. Fortnite) to another (e.g. Instagram Stories) without having it attached to the context (e.g. a screenshot of something (content) inside Fortnite (context)).
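The content/context split can be sketched in a few lines of code. To be clear: none of these names come from Koji’s actual API; this is a hypothetical illustration of the idea that a post’s elements travel with it and stay remixable in any context, instead of being flattened into a screenshot.

```python
def remix(post, **changes):
    """Return a copy of the post with some elements swapped out."""
    return {**post, **changes}

def embed(post, context):
    """'Embedding' here simply pairs live content with a context."""
    return {"context": context, "content": post}

original = {"image": "dance.gif", "caption": "remix me", "track": "song.mp3"}

# The same content moves between contexts without losing its elements...
story = embed(original, "instagram_story")
game = embed(original, "fortnite")

# ...and anyone can change one element independently of the context.
my_version = remix(original, track="my_song.mp3")
print(my_version["track"])    # my_song.mp3
print(my_version["caption"])  # remix me
```

Contrast this with a screenshot, where content and context are fused into pixels and nothing remains editable.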

This will allow for an integrated web where you can interact with media from very day-to-day layers (like photo-based social media) to layers further removed from the physical world (like virtual reality). Like that time Zuckerberg demoed Oculus VR and Priscilla Chan (in ‘the real world’) called him while he was plugged into VR (see the Mixed reality section).

More on Koji.

Voicemod

Sticking to the theme of layers: Voicemod allows people to adjust their voice digitally in real-time. In a virtual environment, you can design your avatar however you wish, but unless you’re great at voice acting your voice will sound kind of ‘normal’.

In more everyday terms: we’ve all seen Instagram and Snapchat filters that add dog features to friends’ faces; Voicemod makes the voice equivalent of that.

While their technology seems targeted at users of immersive, fully virtual environments like online games and VR, they also cater to YouTubers.

One of the things they’ll do with their investment is double down on mobile, for which they’ve already teamed up with T-Pain, who’s well known for his use of Auto-Tune.

Voicemod desktop screenshot

What does it mean for music?

The first aspect to point out is that voice modification has become increasingly easy and cheap to achieve, even in real-time. The second aspect is that BITKRAFT and Voicemod see a future with a high adoption of voice modification and the avatarisation of voice.
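How cheap has basic voice modification become? The crudest form of pitch shifting is just resampling, which any consumer device can do in a few lines. This is a toy sketch only; real tools like Voicemod rely on far more sophisticated DSP (phase vocoders and the like) to shift pitch without changing duration.

```python
import numpy as np

def pitch_shift(signal, factor):
    """Naive pitch shift by resampling (note: also changes duration).
    Reading the input 'factor' times faster raises the pitch."""
    n_out = int(len(signal) / factor)
    idx = np.arange(n_out) * factor
    return np.interp(idx, np.arange(len(signal)), signal)

sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 220 * t)      # stand-in for a 220 Hz voice
chipmunk = pitch_shift(voice, 2.0)       # one octave up

# The dominant frequency doubles from 220 Hz to 440 Hz
freqs = np.fft.rfftfreq(len(chipmunk), 1 / sr)
peak = freqs[np.argmax(np.abs(np.fft.rfft(chipmunk)))]
print(round(peak))  # 440
```

That this runs instantly on commodity hardware is the point: the barrier to voice modification is no longer computation, but design and adoption.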

We already have virtual pop stars, so the boundary between virtual and ‘real’ is blurring, especially now that we can simulate elements that until now were artefacts of “the real world”, like our voice. Whereas today’s virtual pop stars didn’t emerge from the virtual landscape itself, future music personalities could come from this landscape, pre-programmed voices included. Consider an influencer who’s mostly known for their in-game personality; now what if that influencer becomes popular for their music?

It’s the next generation of digital native.

Playable Worlds

The first thing you need to know about this startup is that it was founded by Raph Koster, who was the lead designer for Ultima Online (UO). UO was an incredibly influential MMORPG (massively multiplayer online roleplaying game). It was released in 1997 – years before Runescape and World of Warcraft. And people are still playing it today, lauding an open world where gameplay is as much player-made as it is scripted.

The next thing you need to know is that Playable Worlds intends to accelerate the development of a concept called the metaverse: the idea of being able to plug into a virtual environment that connects all kinds of different virtual environments. Minecraft and Roblox are often mentioned as examples because of the way they let people creatively craft environments and objects. Fortnite also has characteristics of this: beyond a gaming environment, it now contains Party Royale (pictured above), a space to hang out in and perhaps even enjoy a concert.

Playable Worlds’ first goal is to create a “cloud-native sandbox MMO” game, which sounds reminiscent of the aforementioned Ultima Online. Sam Englebardt, one of the company’s investors, says that Koster’s “vision and demonstrated ability to give players a compelling sandbox for the expression of their digital identities” make him exactly the sort of founder he likes to back: Englebardt backs companies that he believes will lay the foundation for the metaverse.

Raph Koster with an Ultima Online shirt

What does it mean for music?

While Koji and Voicemod are tools that help people immerse themselves inside and move across “the metaverse”, Playable Worlds’ team is building out the technology to enable such a metaverse and then building a game on top of that technology.

Soon, our assumed digital identities will be as important as our given day-to-day identity – something many people already experienced in the earlier days of the internet, with its forums, chatrooms, and networks, before using your real name and identity became the status quo.

With that emerging landscape come new types of fan culture and many new possibilities to connect with people who may have a variety of identities across virtual environments. If that sounds niche: that’s how it starts. Ultima Online provided a stepping stone towards the landscape of Twitch, Fortnite, and other virtual experiences which the music industry is committing itself to now, 20 years later.


If this post feels overwhelming or just too “out there” and you’re curious about how music has already been impacted by gaming, I suggest reading my 2016 article Hidden in plain sight: a global underground dance music scene with millions of fans. It was a bit “out there” at that time too, but by now it’s obvious.

Google Glass

When augmented reality converges with AI and the Internet of Things

The confluence of augmented reality, artificial intelligence, and the Internet of Things is rapidly giving rise to a new digital reality.

Remember when people said mobile was going to take over?

Well, we’re there. Some of the biggest brands in our world are totally mobile: Instagram, Snapchat, Uber. 84% (!) of Facebook’s ad revenue now comes from mobile.

And mobile will, sooner or later, be replaced by augmented reality devices – and they will look nothing like Google Glass.

Google Glass
Not the future of augmented reality.

Why some predictions fail

When you view trends in technology in isolation, you inevitably end up misunderstanding them. What happens is that we freeze time, take a trend, and project the trend’s future onto a society that looks almost exactly like today’s.

Past predictions about the future
Almost.

This drains topics of their substance and replaces it with hype. It causes smart people to ignore them, while easily excited entrepreneurs jump on the perceived opportunity with little to no understanding of it. Three of these domains right now are blockchain, messaging bots, and virtual reality – although I count myself lucky to know a lot of brilliant people in these areas, too.

What I’m trying to say is: just because it’s hyped, doesn’t mean it doesn’t deserve your attention. Don’t believe the hype, and dig deeper.

The great convergence

In order to understand the significance of a lot of today’s hype-surrounded topics, you have to link them. Artificial intelligence, smart homes & the ‘Internet of Things’, and augmented reality will all click together seamlessly a decade from now.

And that shift is already well underway.

Artificial intelligence

The first time I heard about AI was as a kid in the 90s. The context: video games. I heard that non-player characters (NPCs) or ‘bots’ would have scripts that learned from my behaviour, so that they’d get better at defeating me. That seemed amazing, but their behaviour remained predictable.

In recent years, there have been big advances in artificial intelligence. This has a lot to do with the availability of large data sets that can be used to train AI. A connected world is a quantified world and data sets are continuously updated. This is useful for training algorithms that are capable of learning.

This is also what has given rise to the whole chatbot explosion right now. Our user interfaces are changing: instead of doing things ourselves, explicitly, AI can be trained to interpret our requests or even predict and anticipate them.

Conversational interfaces sucked 15 years ago. They came with a booklet. You had to memorize all the voice commands. You had to train the interface to get used to your voice… Why not just use a remote control? Or a mouse & keyboard? But in the future, getting things done by tapping on our screens may look as archaic as it would be to do everything from a command-line interface (think MS-DOS).

XKCD Sudo make me a sandwich
There are certain benefits to command-line interfaces… (xkcd)

So, right now we see all the tech giants diving into conversational interfaces (Google Home, Amazon Alexa, Apple Siri, Facebook Messenger, and Microsoft, err… Tay?) and in many cases opening up APIs to let external developers build apps for them. That’s right: chatbots are APPS that live inside or on top of conversational platforms.
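The interface shift is easy to make concrete. Old conversational interfaces demanded one exact memorized command; a modern one tries to interpret whatever phrasing you choose. Here’s a toy intent matcher that scores each intent by keyword overlap – real assistants use trained language models, and these intents and keywords are invented for illustration.

```python
# Each intent is described by a bag of keywords (invented for this sketch)
INTENTS = {
    "play_music": {"play", "music", "song", "listen"},
    "weather": {"weather", "rain", "forecast", "temperature"},
    "lights_on": {"lights", "light", "on", "bright"},
}

def interpret(utterance: str) -> str:
    """Pick the intent sharing the most words with the utterance."""
    words = set(utterance.lower().split())
    return max(INTENTS, key=lambda name: len(INTENTS[name] & words))

print(interpret("could you play me a song"))       # play_music
print(interpret("what's the forecast for today"))  # weather
```

The point isn’t the (deliberately crude) matching; it’s that the user no longer needs the booklet – the burden of understanding moves from the human to the software.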

So we get new design disciplines: conversational interfaces, and ‘zero UI’ which refers to voice-based interfaces. Besides developing logical conversation structures, integrating AI, and anticipating users’ actions, a lot of design effort also goes into the personality of these interfaces.

But conversational interfaces are awkward, right? It’s one of the things that made people uncomfortable with Google Glass: issuing voice commands in public. Optimists argued it would become normalized, just like talking to a bluetooth headset. Yet currently only 6% of people who use voice assistants ever do so in public… But where we’re going, we won’t need voice commands. At least not as many.

The Internet of Things

There are still a lot of security concerns around littering our lives with smart devices: from vending machines in our offices, to refrigerators in our homes, to self-driving cars… But it seems to be an unstoppable march, with Amazon (Alexa) and Google (Home) intensifying the battle for the living room last year.

Now let’s converge this with the trend of artificial intelligence and the advances made in that domain. Instead of the 2016 version of voice-controlled devices in our homes and work environments, these devices’ software will develop to the point where they gain a strong sense of context. Through understanding acoustics, they can gain spatial awareness. If that doesn’t do it, they could use WiFi signals like radar to understand what’s going on. And let’s not forget cameras.

Your smart device knows what’s in the fridge before you do and what the weather is before you even wake up; it may even spot warning signs about your health before you perceive them yourself (smart toilets are real). And it can use really large data sets to help us with decision-making.

And that’s the big thing: our connected devices are always plugged into the digital layer of our reality, even when we’re not interacting with them. While we may think we’re ‘offline’ when not near our laptops, we have started to look at the world through the lens of our digital realities. We’re acutely aware of the fact that we can photograph things and share them to Instagram or Facebook, even if we haven’t used the apps in the last 24 hours. Similarly, we go places without familiarizing ourselves with the layout of the area, because we know we can just open Google Maps any time. We are online, even when we’re offline.

Your connected home will be excellent at anticipating your desires and behaviour. It’s in that context that augmented reality will reach maturity.

Google Home

Augmented reality

You’ve probably already been using AR. For a thorough take on the trend, go read my piece on how augmented reality is overtaking mobile. Two current examples of popular augmented reality apps: Snapchat and Pokémon Go. The latter is a great example of how you can design a virtual interaction layer for the physical world.

So the context in which you have to imagine augmented reality reaching maturity is a world in which our environments are smart and understand our intentions… in some cases predicting them before we even become aware of them.

Our smart environments will interact with our AR device to pull up HUDs when we most need them. So we won’t have to do awkward voice commands, because a lot of the time, it will already be taken care of.

Examples of HUDs in video games
Head-up displays (HUDs) have long been a staple of video games.

This means we won’t actually have to wear computers on our heads: the future of augmented reality can come through contact lenses rather than headsets.

But who actually wants to bother with that, right? What’s the point if you can already do everything you need right now? Perhaps you’re too young to remember, but that’s exactly what people said about mobile phones years ago. Even without contact lenses, all of these trends are underway now.

Augmented reality is an audiovisual medium, so if you want to prepare, spend some time learning about video game design, conversational interfaces, and get used to sticking your head in front of a camera.

There will be so many opportunities emerging on the way there, from experts on privacy and security (even political movements), to designing the experiences, to new personalities… because AR will have its own PewDiePie.

It’s why I just bought a mic and am figuring out a way to add audiovisual content to the mix of what I produce for MUSIC x TECH x FUTURE. Not to be the next PewDiePie, but to be able to embrace mediums that will extend into trends that will shape our digital landscapes for the next 20 years. More on that soon.

And if you’re reading this and you’re in music, then you’re in luck:
People already use music to augment their reality.

More on augmented reality by me on the Synchtank blog:
Projecting Trends: Augmented Reality is Overcoming its Hurdles to Overtake Mobile.