The head of metaverse development at Meta (formerly known as Facebook) has reportedly told employees in an internal memo that the company is targeting levels of metaverse safety on par with Disney. Ever since CEO Mark Zuckerberg announced plans to build a metaverse, a more immersive version of the internet with AR and VR experiences, privacy advocates and security experts have raised alarms over those plans, warning that Facebook's poor record with user data and privacy could carry over into the metaverse. Zuckerberg has said on multiple occasions that the metaverse will be a group effort involving more than just Meta, and a number of its executives have also called for early policy drafts to regulate it.

The objective appears noble at first, but the company has yet to fix glaring problems such as pathetic moderation, rampant harassment and discriminatory policy enforcement on Facebook and Instagram. Despite all those failures, it is already hurrying into the metaverse without addressing them. Earlier this month, a leaked internal document revealed that Facebook was aware of a widespread plagiarism problem on its platform, but the company chose to ignore the suggested fixes because taking a proactive approach would land it in legal hot water.

Related: Facebook Dissolved Team That Revealed Its Platform Addiction Problem

Now, the Financial Times has obtained an internal memo telling employees that the company wants "almost Disney levels of safety" for the metaverse. The memo was written by Andrew Bosworth, head of Reality Labs at Meta and its soon-to-be chief technology officer. It also warned that the metaverse could be a "toxic environment" for women and people from minority communities. The company is no stranger to that problem: leaked internal research revealed that it knew Instagram was a toxic hellscape for teens battling mental health and body image issues, yet it ignored those red flags for a while in favor of growth. Bosworth added in his memo that excluding "mainstream customers from the medium entirely" would pose an "existential threat" to the company.

Facebook Is No Disney With Its Myriad Problems


In the memo, reportedly written in March this year, Bosworth said that moderating what people say and how they behave in the metaverse "at any meaningful scale is practically impossible." The Meta executive referenced Masnick's Impossibility Theorem, which argues that content moderation cannot be done well at scale. Combine that with Zuckerberg's well-known insistence on keeping Facebook a platform where freedom of speech is preserved for all sides of a dialogue, and plenty of contradictions will surface as the company develops the metaverse and its governing policies. Facebook whistleblower Frances Haugen recently made a similar point, noting that Facebook repeatedly prioritized profits over the safety of its audience.

But history is only one half of the problem, because building moderation technology for the metaverse is a considerable challenge in itself. Meta is already struggling (read: failing miserably) to moderate text, photos and videos on Facebook and Instagram. It was recently reported that Facebook willingly allowed COVID-19 misinformation to spread in regions like India, where it devotes negligible resources to moderation and where hate speech and propaganda are pervasive. With the metaverse rendering people as 3D avatars in real time and creating a more immersive internet with its own thriving ecosystem, the moderation challenge ahead is simply colossal, and from multiple standpoints Meta doesn't appear ready for it.

Next: Is it time to Delete Facebook?

Sources: Financial Times, TechDirt