How will Facebook keep its metaverse safe for users?

Hannah Murphy in San Francisco
The man leading Facebook’s push into the metaverse has told employees he wants its virtual worlds to have “almost Disney levels of safety”, but also acknowledged that moderating how users speak and behave “at any meaningful scale is practically impossible”.
In an internal memo from March seen by the Financial Times, Andrew Bosworth, who has been steering a $10 billion-a-year budget to build “the metaverse”, warned that virtual reality can often be a “toxic environment”, especially for women and minorities.
He added that this would be an “existential threat” to Facebook’s ambitious plans if it turned off “mainstream customers from the medium entirely”.
The memo sets out the enormous challenge facing Facebook, which has a history of failing to stop harmful content from reaching its users, as it tries to create an immersive digital realm where people will log on as 3D avatars to socialise, game, shop and work.
Bosworth, who will take over as Facebook’s chief technology officer next year, sketched out ways in which the company can try to tackle the issue, but experts warned that monitoring billions of interactions in real time will require significant effort and may not even be possible. Reality Labs, the division headed by Bosworth, currently has no head of safety role.
“Facebook can’t moderate their existing platform. How can they moderate one that is enormously more complex and dynamic?” said John Egan, chief executive of forecasting group L’Atelier BNP Paribas.

The shift from checking text, images and video to policing a live 3D world will be dramatic, said Brittan Heller, a technology lawyer at Foley Hoag. “In 3D, it’s not content that you’re trying to govern, it’s behaviour,” she said. “They’re going to have to build a whole new type of moderation system.”
Facebook’s current plan is to give people tools to report bad behaviour and block users they do not wish to interact with.
A 2020 safety video for Horizon Worlds, a Facebook-developed virtual reality social game, says that Facebook will constantly record what is happening in the metaverse, but that this information will be stored locally on a user’s virtual reality headset.
If a user then reports bad behaviour, several minutes of footage will be sent to Facebook’s human reviewers to assess. Bosworth said in his memo that in many ways, this is “better” than real life in terms of safety because there will always be a record to check.
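In rough outline, the reporting flow described above amounts to a rolling buffer kept on the device and flushed to reviewers only when a report is filed. The sketch below is a speculative illustration of that design; the class names, window length and data format are assumptions, not details Facebook has published.

```python
from collections import deque
from dataclasses import dataclass
import time

# Illustrative sketch (not Meta's code): the headset continuously records into
# a rolling local buffer; nothing leaves the device unless a report is filed,
# at which point the last few minutes are exported for human review.

BUFFER_SECONDS = 300  # assumes "several minutes" means roughly five


@dataclass
class Frame:
    timestamp: float
    payload: bytes  # audio/positional data captured locally by the headset


class LocalRollingRecorder:
    def __init__(self, window: int = BUFFER_SECONDS):
        self.window = window
        self.frames = deque()

    def record(self, payload: bytes) -> None:
        now = time.time()
        self.frames.append(Frame(now, payload))
        # Continuously discard anything older than the window, so the stored
        # record stays local and bounded.
        while self.frames and now - self.frames[0].timestamp > self.window:
            self.frames.popleft()

    def export_for_report(self) -> list:
        # Called only when the user files a report: the buffered minutes are
        # packaged and sent to human reviewers.
        return list(self.frames)
```

One consequence of this design, and arguably what Bosworth means by “better” than real life, is that any incident inside the window leaves a reviewable record without requiring constant server-side surveillance.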
A user can also enter a “personal safety zone” to step away from their virtual surroundings, draw a personal “bubble” to protect their space against other users, or request an invisible safety specialist to monitor a dicey situation.
Bosworth claimed in his memo that Facebook should lean on its existing community rules, which for example permit cursing in general but not at a specific person, but also have “a stronger bias towards enforcement along some sort of spectrum of warning, successively longer suspensions, and ultimately expulsion from multi-user spaces”.
He suggested that because users would have a single account with Meta, Facebook’s new holding company, they could be blocked across different platforms, even if they had multiple virtual avatars.
“The theory here has to be that we can move the culture so that in the long term we aren’t actually having to take those enforcement actions too often,” he added.
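A minimal sketch of that enforcement ladder, assuming violations are counted against the single Meta account rather than any individual avatar (the specific steps and durations below are invented for illustration):

```python
from enum import Enum, auto

# Hypothetical escalation ladder: warning, successively longer suspensions,
# then expulsion from multi-user spaces, as the memo describes.

class Action(Enum):
    WARNING = auto()
    SUSPEND_ONE_DAY = auto()
    SUSPEND_ONE_WEEK = auto()
    EXPULSION = auto()

LADDER = [Action.WARNING, Action.SUSPEND_ONE_DAY,
          Action.SUSPEND_ONE_WEEK, Action.EXPULSION]

# Violations keyed to the account, not the avatar, so enforcement follows
# the person across platforms and virtual identities.
violations = {}

def enforce(account_id: str) -> Action:
    count = violations.get(account_id, 0)
    violations[account_id] = count + 1
    # Repeat offenders bottom out at the final rung: expulsion.
    return LADDER[min(count, len(LADDER) - 1)]
```

Tying the count to the account rather than the avatar is what would let a single block apply across Facebook's different platforms.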
He acknowledged, however, that bullying and toxic behaviour can be exacerbated by the immersive nature of virtual reality. This was highlighted in a 2019 study by researchers in Facebook’s Oculus division, who found more than a fifth of their 422 respondents had reported an “uncomfortable experience” in VR.
“The psychological impact on humans is much greater,” said Kavya Pearlman, chief executive of the XR Safety Initiative, a non-profit focused on developing safety standards for VR, augmented and mixed reality. Users would retain what happens to them in the metaverse as if it happened in real life, she added.

Safety experts argue that the measures Facebook has laid out so far to tackle unwanted behaviour are reactive, only providing support once harm has been caused.
Instead, Facebook could proactively wield emerging artificial intelligence technologies including monitoring speech or text for keywords or scanning signals of abnormal activities, such as one adult repeatedly approaching children or making certain gestures.
“These filters are going to be extremely important,” said Mike Pinkerton, chief operating officer of moderation outsourcing group ModSquad.
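As a toy illustration only: proactive filters of this kind might pair a keyword screen over transcribed speech with simple behavioural heuristics. The term list, event format and threshold below are assumptions, and a production system working in real time across millions of sessions would be vastly more complex.

```python
# Hypothetical proactive filters, per the approaches described above.

FLAGGED_TERMS = {"flagged_term_a", "flagged_term_b"}  # placeholder list
APPROACH_LIMIT = 5  # assumed threshold for "repeatedly approaching"

def keyword_hit(transcript: str) -> bool:
    # Screen transcribed speech or text chat for flagged keywords.
    return not FLAGGED_TERMS.isdisjoint(transcript.lower().split())

def abnormal_approach(events: list) -> bool:
    # events: e.g. [{"actor": "user_123", "target_is_minor": True}, ...]
    # Flags one user repeatedly approaching accounts marked as minors.
    return sum(1 for e in events if e.get("target_is_minor")) >= APPROACH_LIMIT
```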
But AI remains ineffective across Facebook’s current platforms, according to the company’s own internal assessments. One notable example came in early 2019, when Facebook was criticised for failing to contain the spread of live-streamed footage of the Christchurch terror attacks.
Facebook told the Financial Times that it was “exploring how best to use AI” in Horizon Worlds, adding that it was “not built yet”. 

Beyond moderating live chat and interactions, Facebook may also have to devise a set of standards for the creators and developers that build on its metaverse platform, which it has said will be open and interoperable with other services.
Ethan Zuckerman, director of the Institute for Digital Public Infrastructure at the University of Massachusetts Amherst, said that in order to prevent spamming or harassment, the company could consider a review process for developers similar to Apple App Store requirements.
However, such vetting could “massively slow down and really take away from” the open creator process that Mark Zuckerberg has put forward, he added.
In his memo, Bosworth said the company should set a baseline of standards for third-party VR developers, but that it was “a mistake” to hold them to the same standard as its own apps.
“I think there is an opportunity within VR for consumers to seek out and establish a ‘public square’ where expression is more highly valued than safety if they so choose,” he added. It is unclear if this approach would apply to developers building in its future metaverse, on top of those building apps for its current Oculus headset.
A Facebook spokesperson said the company was discussing the metaverse now to ensure that safety and privacy controls were effective in keeping people safe. “This won’t be the job of any one company alone. It will require collaboration across industry and with experts, governments and regulators to get it right.”
Copyright The Financial Times Limited 2021. All rights reserved.


Comments

"Disney levels of security" — no thanks, that sounds lame. I'll stick with GTA online.
It seems like the METAverse will be MEGA creepy and add MINI value to my life thanks to its DISNEY levels of SECURITY.

Thanks but no thanks, I'm not interested.
If they have Disney levels of safety, it'll be a Mickey Mouse metaverse.
Dystopian 
"If a user then reports bad behaviour, several minutes of footage will be sent to Facebook’s human reviewers to assess. Bosworth said in his memo that in many ways, this is “better” than real life in terms of safety because there will always be a record to check."

This will be the biggest issue. How to balance holding people to account, while allowing people to say what they want without it being pushed through an algo to sell them toothpaste. Honestly, considering he is already miscommunicating how to keep people safe, it doesn't bode well. You need to sell this product to Gen Z, and last I checked they are all leaving your platforms because you keep tracking all their data and using it against them.
People are leaving the app because it isn't cool anymore, not because their data is being tracked.

Otherwise they would have concerns about all pouring into TikTok, which does the exact same thing and, worse, lets the CCP dive into the data whenever they want.
"How will Facebook keep its metaverse safe for users?"

They won’t, of course!


"Betteridge's law of headlines:

"Any headline that ends in a question mark can be answered by the word no." 

I don't believe that one needs to be a technophobe or over 40 to have a deep intuition that this is probably going to do far more harm than good. I will be accused of the slippery slope fallacy, but it's not a real fallacy. One day we will be plugging chips in our brains, and pure dystopia will have arrived.
"Elon Musk's Neuralink could transition from implanting chips in monkeys to humans within the year"

They won't, as long as they are making a profit. That, at least, is what previous evidence would suggest...