Facebook has divulged details of its new artificial intelligence (AI) moderation system and claims it’s even better at uncovering bad posts than its predecessor. The social network has long faced challenges policing content on its platform, a situation only exacerbated by the size of its user base and the sheer number of languages it supports. The more people who join Facebook, the more content it has to monitor, across an ever-wider range of tongues.

Over the years, the Meta-owned firm has had to tackle everything from revenge porn and COVID-19 misinformation to election interference and genocide in Myanmar. But just as each problem is different, so often are the skills needed to identify and remove the associated content. For that, the company relies on an army of human moderators who sift through reports of potentially offensive posts. Still, at Facebook's scale, human review alone can't keep up, so the company leans on AI tools as a frontline defense. Those programs are trained on a plethora of sample data (images, videos, text), so the system learns what constitutes bad content and can flag it for removal.
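To make the conventional approach concrete, here is a minimal sketch of a supervised text classifier built with scikit-learn. The example posts and labels are invented for illustration, and production systems train on vastly larger datasets, but the principle is the same: the model only learns from the labeled examples it is given, which is why gathering enough training data takes time.

```python
# Minimal sketch of a conventional supervised text classifier, the kind of
# model that needs many labeled examples before it works well. The training
# posts below are invented toy data, not real moderation data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labeled examples: 1 = violating, 0 = benign.
posts = [
    "buy followers cheap, click this link now",
    "miracle cure, doctors hate this trick",
    "had a great hike this weekend",
    "recipe for my grandmother's soup",
]
labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(posts, labels)

# Score a new post; in practice a threshold decides whether it gets
# removed automatically or routed to a human moderator.
print(model.predict_proba(["click here for a free miracle cure"])[0][1])
```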

Related: Drugs On Instagram Are Easy To Find, Even For Teens

Facebook has introduced a new AI system dubbed Few-Shot Learner, which it says is more efficient at uncovering troubling material. As Wired notes, the Menlo Park-based social platform claims Few-Shot Learner can work with a much smaller training set, allowing it to get to grips with new kinds of violating content in about six weeks, down from the roughly six months it previously took. Facebook says it began deploying Few-Shot Learner earlier this year, and that it already handles posts that discourage others from getting COVID-19 vaccines, a rule that came into effect in September.
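Wired doesn't describe Few-Shot Learner's internals in reproducible detail, but the general idea of few-shot classification can be sketched with off-the-shelf tools: embed a handful of labeled examples with a pretrained model, then classify new posts by similarity to each class's average embedding. The model name and example posts below are assumptions for illustration, not Facebook's system.

```python
# Few-shot classification sketch: a pretrained sentence encoder plus a
# handful of labeled examples per class, instead of thousands. This is a
# public analogue, not Facebook's Few-Shot Learner; the model choice and
# example posts are assumptions made for illustration.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # small public model

# Just a few labeled examples per class ("few-shot").
examples = {
    "violating": ["vaccines are a plot, refuse the shot",
                  "do not let them inject you"],
    "benign":    ["got my booster today, arm is a bit sore",
                  "the clinic is open 9 to 5 on weekdays"],
}

# One prototype vector per class: the mean of its example embeddings.
prototypes = {label: encoder.encode(texts).mean(axis=0)
              for label, texts in examples.items()}

def classify(post: str) -> str:
    """Assign the class whose prototype is most similar (cosine)."""
    v = encoder.encode([post])[0]
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return max(prototypes, key=lambda label: cos(v, prototypes[label]))

print(classify("don't get vaccinated, it's dangerous"))  # likely "violating"
```

Because the heavy lifting is done by the pretrained encoder, adding a new policy category only requires writing a few example posts, which is consistent with the weeks-instead-of-months turnaround Facebook describes.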

Better Rule Interpretation

Cornelia Carapcea, a Facebook AI product manager, says it’s not just the ability to work with smaller training sets that makes Few-Shot Learner so useful; it can also better interpret the platform’s rules. Typically, human moderators have to cross-reference a handbook that covers thousands of rules and is constantly being tweaked to cover more and more edge cases. It was partly because of this complexity that Facebook created an Oversight Board to review contentious enforcement decisions. Carapcea claims Few-Shot Learner can identify violations using only broad, plain-language prompts.
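The "broad prompts" idea resembles zero-shot, entailment-style classification, where a policy is stated in plain language and a model judges whether a post matches it. A public analogue can be sketched with Hugging Face's zero-shot pipeline; this is an illustration of the general technique, not Facebook's internal system.

```python
# Sketch of prompt-style moderation using a public zero-shot classifier:
# the candidate "labels" are broad, plain-language policy descriptions
# rather than categories learned from thousands of labeled examples.
# Illustrative only; this public model is not Facebook's Few-Shot Learner.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

policy_labels = [
    "discourages COVID-19 vaccination",
    "ordinary conversation",
]

result = classifier("skip the jab, it does more harm than good",
                    candidate_labels=policy_labels)
print(result["labels"][0], result["scores"][0])  # top policy match + score
```

The appeal of this style is that updating a policy means rewriting a sentence rather than relabeling a dataset, which maps onto Carapcea's point about moderators' ever-growing rule handbook.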

Still, despite all the promise, Facebook’s ability to moderate content across non-English-speaking cultures has often been a key point of criticism. In the early fall of 2021, a former Facebook employee leaked internal research documents and highlighted the company's global oversight weaknesses to US and UK lawmakers. Now the social network is trying to address that criticism by revealing that Few-Shot Learner works in 100 languages. That’s still far short of the hundreds used on the platform, but it’s a sign the company is in a better position than it was. If the firm keeps iterating on the tool, it might go some way toward mending its damaged reputation.

Next: Facebook Whistleblower Says Zuckerberg Should Resign as Meta CEO

Source: Wired