We Do Need A Metaverse, Mark Zuckerberg. We Just Don't Need Yours.
“You won’t need a physical TV. It will just be a $1 hologram from some high school kid halfway across the world.” So gloats Mark Zuckerberg in his original announcement of Facebook’s newest initiative, the Metaverse.
At that point I thought: who would immerse themselves in a three-dimensional, all-encompassing sensory experience just to watch TV?
Zuckerberg says this newest innovation is “about bringing people together.” Which — according to the launch video — is apparently best accomplished by donning digital clothes (he chooses what appear to be some ninja-like black pajamas) and attending a meeting on the bridge of a spaceship.
On this conference-room-turned-UFO, he and colleagues ooh and ahh over a three-dimensional screensaver that seems to be made entirely of swimming pool noodles. Later, he explains how future versions of work will involve each of us sitting at a virtual desk and looking at virtual screens while our virtual coworker holograms walk by and make strange virtual faces at us.
While I’m sure it’s taken some impressive engineering to accomplish the above, this is not the Matrix I’ve been waiting for.
Switching form factors — for example, from desktop PCs to laptops, from laptops to mobile devices, or from mobile devices to VR glasses — is a very big deal. These switches often make an experience more lightweight, more portable, less expensive, and thus more accessible and ubiquitous. But — when we make a form factor switch — we also need to switch the metaphors we use in designing experiences for those new devices.
Early computers ran programs. Later, we used our machines to browse websites. Then our mobile devices evolved to run apps. You might ask: aren’t these just technical distinctions between different ways of running the same code? From an engineering perspective, yes — but to end-users these metaphors are, in fact, quite different.
For example, let’s examine one of the most-used mobile apps ever: Uber. With Uber, I open the app, request a car, and I’m done. Behind the scenes, of course, my phone’s GPS has determined my location, sent it to the Uber servers, summoned the nearest driver, sent the request to their phone, and booked a one-time, just-in-time transaction. I don’t spend time in Uber the way I might in a program like, say, Microsoft Excel.
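To make that distinction concrete, here is a toy sketch of the one-shot, just-in-time transaction (in Python; the names and matching logic are invented for illustration and are not Uber’s actual API):

```python
# A hypothetical, toy model of the one-shot "app" interaction described above.
# None of this is Uber's real API; all names here are invented for illustration.

from dataclasses import dataclass

@dataclass
class RideRequest:
    lat: float
    lon: float

@dataclass
class Driver:
    name: str
    lat: float
    lon: float

def nearest_driver(request: RideRequest, drivers: list[Driver]) -> Driver:
    # The server matches the rider to the closest available driver.
    return min(drivers, key=lambda d: (d.lat - request.lat) ** 2 + (d.lon - request.lon) ** 2)

def hail_ride(request: RideRequest, drivers: list[Driver]) -> str:
    # One just-in-time transaction: match, book, done. No session to "live inside."
    driver = nearest_driver(request, drivers)
    return f"{driver.name} is on the way."

drivers = [Driver("Ana", 37.78, -122.41), Driver("Ben", 37.70, -122.45)]
print(hail_ride(RideRequest(lat=37.77, lon=-122.42), drivers))
```

The whole interaction is a single function call, not a workspace you inhabit. That is the “app” metaphor in miniature.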
What does this have to do with the Metaverse? Simply this: when we transition form factors, it’s most often best to drop all the legacy metaphors and embrace what can uniquely be done with the new system. My iPhone is a GPS-enabled, high-quality digital camera with an onboard computer — making it exceptionally good not only at hailing a ride but at taking and sharing photographs, firing off quick texts and notes to friends, finding the nearest coffee shop, or getting directions to a cocktail bar. It fully embraces its concepts of location, photos, identity, and real-time, always-on connection. Now, imagine if one tried to implement Uber with laptop-native metaphors like files, folders, hard drives, and documents.
This is where I think many initial discussions around the Metaverse fall down. Talking about sitting at an “office” and looking at a “screen” in the Metaverse is like proposing that you should summon your Uber driver by uploading a new document to the Uber ride-request repository. It sounds silly because the device has no need for those old metaphors.
What might a Metaverse look like that embraces its unique capabilities? Let’s start by naming a few of them:
Immersiveness. No other form factor — with the possible exception of the cinema screen at your local movie theater — surrounds you with an experience quite like virtual reality headsets.
Navigability. Our programs and websites allow us to navigate through a very small and fixed set of menus, buttons, hyperlinks, and (sometimes) voice commands. But in a virtual world? You can “go” anywhere you like, whenever you like.
Point-of-View Flexibility. Whether you are looking at a computer screen, an iPad, or an iPhone, physically you’re always in front of a lighted rectangle, and that spatial relationship never changes. But if you feel you’re inside of an immersive world, what other perspectives open up?
If we take the above — immersiveness, navigability, and fresh points-of-view — and combine them, where might this lead?
First, we can do away with all of our previous metaphors. Whether on the living room wall or the office desk, there is no need for screens. In fact, we need not even talk about location. We can replace this core idea of “I am in front of” with “we are inside of.” And that leads to something entirely new, because: to disconnect from the Screen means we can disconnect from the Self.
Up until now, because of the physical constraint of always having a screen in front of our eyes, a key design criterion has been managing the human-to-machine interaction. But if we remove that constraint — and, with virtual reality headsets, we can — we can move from human-(to-machine)-to-human interactions to human-with-human interactions.
With our new virtual reality headsets, the spatial metaphors describing our interactions can open up dramatically. We can immerse ourselves in new experiences in which I dissolves into we — here dissolves into everywhere and nowhere — and our differences dissolve into our similarities.
Let’s imagine how that might work. You don a VR headset and, instead of “you” acting out “your” avatar living inside of “your” virtual home — you find yourself instead, instantly, part of an avatar governed by a collective. For example, imagine being part of a giant blue whale swimming, slowly, through the deep, cold Pacific. But this whale isn’t just you: tens of thousands of others are, at the same time, also part of this whale.
Individually, you’re less like the “brain” inside this whale and more like a few neurons. You have influence but not control. So you must learn, from others: what is our collective goal? What are our whale’s desires? What is it looking for, searching for? Those desires are both up to you and not up to you — as you are no more in charge of this particular avatar than your fingertip is in charge of your body.
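To give that “neurons, not brain” idea one concrete shape, here is a minimal sketch (in Python; every name and number is invented for illustration) of how a shared avatar might blend thousands of individual nudges into a single motion, so that no one participant can steer it alone:

```python
# A minimal sketch of "influence but not control": each participant nudges
# the shared whale, but only the aggregate of all nudges steers it.
# Every name and number here is invented for illustration.

import math
import random

def collective_heading(nudges: list[float]) -> float:
    """Blend thousands of individual headings (in radians) into one.

    Averaged on the unit circle, so that headings of 359 and 1 degrees
    blend to 0 degrees, not 180.
    """
    x = sum(math.cos(a) for a in nudges)
    y = sum(math.sin(a) for a in nudges)
    return math.atan2(y, x)

# 10,000 participants, each with a slightly different idea of where to swim:
nudges = [random.gauss(0.8, 0.5) for _ in range(10_000)]

# One more voice barely moves the result; the collective decides.
print(f"whale heading:      {collective_heading(nudges):.3f} rad")
print(f"with your nudge:    {collective_heading(nudges + [math.pi]):.3f} rad")
```

Your one input among ten thousand shifts the heading only imperceptibly, which is exactly the point: influence, not control.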
But why do this? Just because it would make a cool art project? Perhaps. But I think there are many more applications these new Metaverse metaphors can unlock. For example: if we — eight billion souls and growing on this Earth — are looking for new ways to solve our climate change problems, why don’t we create virtual environments where we “feel” the tension of eight billion individuals? Alternatively, why don’t we shift our point of view so that we may “become” the greenhouse gas layer itself, and experience — in a felt sense — how we are shifting around the planet? Or why don’t we create collective avatars where we — all of us — become the Pacific Ocean, or the Amazon rainforest, or the Antarctic, and feel what it’s like for a miles-long chunk of our ice shelf to fall into the ocean?
If we participate in these experiences as a whole, together — perhaps we can leverage the Metaverse to start making decisions, as a whole, together.
Years ago, many people wrote that the promise of virtual reality was that we could use the technology to create Empathy Machines. I propose we — all of us — recenter our thinking around the Metaverse and work to create a set of systems that allow us to leverage these new metaphors to expand our sense of who “we” are.
This is the Matrix I want. How do we go about building it?