Meta-morphosis: how real is an immersive world?

What might happen to us when the boundary between online and offline begins to blur? And how might our psychological, physical and cyber safety change?

Written by Sarah Vloothuis

Senior Manager External Communications

Losing oneself in a book or a film is such a simple pleasure. Emotionally driven but easy to detach from, enjoyed with intention and concluded naturally. Both are still as popular as ever, but they now compete with a digital world that is far from simple. And as the metaverse develops into a truly usable space – however that might look – the competition will not be just for our emotional immersion, but for our entire person. So, what does this mean for how we understand ourselves? And our perception of the world around us? Does safety in the metaverse mean the same as safety in the real world? And if not, why not? In this third exploration into the potential of the metaverse, we ask: how real is an immersive world?

For most of us who remember a time before smartphones, there’s always a mild sense of horror at the idea of a life lived entirely online, but that’s an accepted fact for Gen Z, who wouldn’t recognise a world of pre-smartphone anonymity. As it currently exists, the internet is a chaotic place, where universal governance and legal frameworks still don’t quite have the teeth they do in the real world. However, most of us have a natural sense of delineation and very clear psychological boundaries between what is online and offline. In fact, nowhere does this show more than in the very different and out-of-character ways in which people conduct themselves when there is the perceived ‘protection’ of the internet between them and others.

But when the boundary between online and offline begins to blur, what happens? “In the real world we’re continually receiving cues from our external environment, which then feed back into our inner sense of who we are and what we stand for,” explains psychologist Joy Palfery. “In the online environment you can be quite limited by what’s available to express, but the availability of the cues you get back is also limited.” What Joy means is that in the first stages of the metaverse, the social cues we give and receive will be dictated by the sophistication of our avatars or digital representations. We won’t necessarily be able to understand what is socially acceptable and what is not, particularly as the rules are still unclear and evolving. The meeting of cultures also plays a part in this. “Other people you interact with are perhaps drawing on very different social constructs to you. It’ll be more chaotic, and you may not get familiar feedback. So, that’s potentially quite disruptive to your sense of self.”

When we connect in the metaverse, will we access the same social cues as we do in real life? If not, how will we bond with others?

How do we cope with that? “Right now, we see people respond to online complexity by reverting to some quite tribal mindsets, which give a sense of certainty,” says Joy. We have already seen how any sense of discomfort plays out in our current online world. When we feel like we don’t fully understand the world we occupy, it drives us to seek out people who share our values. We want the safety of familiarity and, when we have it, it can all too quickly become a digital echo chamber, where algorithms amplify content that appears to resonate with us. “So, then everything becomes very binary,” Joy explains. “And you fall into this tribal thinking – ‘my group says this, so I will too.’ To me, that’s people looking for certainty in a world that’s become too complex for them to handle.”

In the metaverse, this difficult complexity could lie in a lack of social boundaries, as well as the way in which personal data points will form an essential part of your identity while you are there. Consider how we currently move from website to website, using our Apple, Google, Meta or Amazon profiles to make the user experience easier, and how easy it is to use these to take actions which impact our real lives – such as online shopping, or registering for services. In the metaverse, this convenience and the data it carries from place to place will also inform your new ‘lived’ experience. From a psychological perspective, this means that disassociating your real-world self from your digital identity, while difficult enough already, will become impossible.

But why is this a problem? Self-exploration is a natural and normal part of human development that is particularly important for young people. However, because of this ‘connected convenience’ there is likely to be no way of safely separating different aspects of your developing identity in order to test them out in a risk-free fashion. Everything you are, say and do in a metaverse reality may be too connected. Let’s say, for example, that a young person is seeking to understand their gender or sexuality in the metaverse (which, in theory, could offer a really safe place to do so). This exploration will contribute to their dataset going forward. And their experience of the metaverse will reflect anything they have said or done while following this path, even if it’s not who they subsequently wish to be.

Now consider how this would play out if a young person were to explore something that is not a healthy and positive rite of passage – such as a negative self-image and diet culture. “This challenges the idea of psychological safety,” stresses Joy. “Compartmentalising in this way had once been a psychologically safe way to process different parts of our identity, but it can suddenly become very unsafe if there’s ‘leakage’ between each identity. There is the potential for cognitive dissonance.” As we know from many high-profile cases, the algorithmic echo chamber can also normalise some very troubling and dangerous behaviours. And these behaviours do not necessarily just stay online.

"Other people you interact with are perhaps drawing on very different social constructs to you. It’ll be more chaotic, and you may not get familiar feedback.”

Personal safety is, without a doubt, the greatest challenge that the metaverse presents. Because a sense of safety, conceptually, covers so much. How safe are we when everything from the way we look, our vocal inflections and our vital signs are logged and tracked? This already happens through our smart assistants, phones and watches, and we think nothing of it, as we feel a certain safety in the checks, measures and laws which govern the use of this data. Even so, we still regularly see stories around data leaks and breaches. In the metaverse, it’s likely that we will be under even more data scrutiny, with huge volumes of significant personal information amassed. A brilliant and revolutionary outcome of this could be in healthcare, where our vital signs, pupil dilation and tone of voice might be used to assess our emotional wellbeing. But, of course, the same data could also be very useful for marketers – and will we want to opt in/opt out constantly throughout the metaverse in order to prevent them using it? What will the action of informed consent even look like? And doesn’t this defeat the object of the metaverse experience being seamless and interoperable between each space within it?

Equally, what happens if (as happened in Neal Stephenson’s now-infamous Snow Crash, from which the metaverse takes its name) our metaverse avatars or VR/AR headsets become infected with a virus? How could that manifest itself? Can a digital attack result in physical danger? Surely not, given nothing is truly ‘real’? Recently, Palmer Luckey, the co-founder of Oculus (now owned by Meta), claimed to have invented a VR headset that would kill you if you were killed in a game while wearing it. He made the point that it was “just a piece of office art, a thought-provoking reminder of unexplored avenues in game design.” But the shockwaves it created in the media gave everyone pause for thought. Physical danger is a very real risk in the metaverse, explains Quentyn Taylor, Canon’s Senior Director of Information Security & Global Response. As in The Matrix, what happens within it could have very real outcomes. “People are wearing haptic gloves and suits, which at the moment are relatively coarse, and they have safety interlocks to stop you from becoming injured. But as products develop, these safety interlocks will be soft-settable, and some people will turn them off. Who will be the first person to be seriously injured from their haptic suit when someone hits them in the metaverse? Is that assault? I don’t know. These are questions that are going to have to be answered.” Even in the absence of haptics, there are already serious discussions being had around boundaries, consent and their criminal definitions in the metaverse.

How real is virtual reality? Could it be possible to be hurt in a future metaverse? And if so, where does the liability for injury lie?

These discussions always start in the same place: what is real? And what is not? These are not just philosophical questions. They have very real legal ramifications for the way we conduct ourselves in the metaverse. How do we protect ourselves and what we perceive to be ours when, to all intents and purposes, both are simply data? “What do you own?” asks Quentyn. “And this is an interesting point, because people are going to realise that you actually don’t own anything. You license things now.” In law, this opens a Pandora’s box of complexity, where physical and digital ownership overlap. “Is it fraudulent to unlock some digital goods that you already have?” he asks, citing the example of a computer game where you have paid for a copy, but certain elements are ‘locked’ unless the player wins the right to access them. Is it fraud to unlock them yourself? And such questions of ownership – and of the legal rights to the things we pay for and use in the metaverse – can also be a source of great anxiety for humans, who form strong attachments to the things that are dear to them.

You see, even if from a legal standpoint we fundamentally own nothing, the emotional response to these things remains. “The physiological responses to violation are the same online as in reality,” explains Joy. “If a child is in Minecraft and somebody deliberately destroys their construction, that child experiences loss in exactly the same way they would if someone in their real-life classroom stole their toy.” And much like the real world, we certainly won’t be immune to all kinds of crimes: the ‘theft’ of digital assets such as NFTs, copyright infringement, identity theft and even more sophisticated forms of trolling and harassment. The difficulty lies in waiting for the law to catch up. “There is a whole raft of laws that are going to have to be updated to take account of the way the metaverse and other similar online worlds work,” says Quentyn.

"Who will be the first person to be seriously injured or bruised from their haptic suit when someone hits them in the metaverse? Is that assault? I don’t know. These are questions that are going to have to be answered.”

However, even in the absence of any crime, trust could be a big ask in the metaverse. Much like today, how we present our lives online isn’t always in alignment with absolute reality. How many of us have given our profile photos a flattering little tweak here and there? Or even edited and filtered them out of the realms of reality? In the metaverse, could this be amplified to the point where no one even expects you to resemble who you are in real life? Surely communication of any kind will be extremely difficult under such fundamental inauthenticity? After all, if every interaction is effectively based on the understanding that no one is who they purport to be, what does that do for forming important relationships? Between an educator and a student? Between colleagues? Or new friends? And considering the current rate at which artificial intelligence is advancing, how will you know that you’re even interacting with another human? “It could be quite psychologically damaging if you have a total mistrust of everyone,” says Joy. “That doesn’t sound like a happy recipe for a fundamentally sociable species.”

A constant and underlying lack of trust, as well as the knowledge that you yourself are presenting a façade, is a problem for psychological safety, which in turn impacts our ability to be creative, to problem-solve – in effect, to do the things that humans have evolved to be really good at. Certainly, it feels like it would be safer if our metaverse avatars could be true representations of our physical selves, as they are in Canon USA’s Kokomo technology, which brings fully accurate avatars to VR video calling. Is the solution to feeling psychologically safe as simple as ruling that we must all be our authentic selves, wherever we are?

This, paradoxically, brings us full circle. Back to the place where an enormously complicated and potentially scary metaverse might offer many uniquely safe spaces for exploration and a means to safely understand ourselves better. Where we might experiment with how we want the world to perceive us and the ways we feel most comfortable being. Perhaps what is necessary is not to ‘lose ourselves’ in an immersive metaverse, but to dial up the clarity and make it, and us, radically honest. There will, of course, be risks – as there are everywhere – but personal authenticity, both on and offline, feels like an essential starting point for good mental health, trust, communication and psychological safety in a world that looks set to increase in ‘phygital’ fluidity.

