Facebook is being inundated by fake accounts and violent content more than ever. But the company wants you to know that it’s doing a better job of combatting the problem even as it admitted that the job is difficult and often imperfect.
Facebook revealed the details on Thursday in a so-called transparency report covering October 2018 to March 2019. It came as the company faces intense criticism for its policing of harmful content, a problem that it has struggled to contain for years.
“Whenever you’re drawing a line around what content is acceptable, there will always be some people who think you got it wrong,” CEO Mark Zuckerberg said on a conference call with reporters after the report’s release. “But overall, I’m proud of the diligence and thoughtfulness that goes into this process.”
He also addressed critics who want to break up Facebook because of its size, its struggles to police objectionable content on its service, and its string of privacy missteps.
Here are five key takeaways:
Breaking Up Facebook
Government officials and some business people, including a former co-founder, are increasingly calling for Facebook to be broken up. Critics claim the company has too much power and is threatening democracy.
But on Thursday, Zuckerberg, unsurprisingly, argued that breaking up the company would solve nothing. He called any suggestion that Facebook is the dominant player in its space "a little stretch," never mind that the company dominates social media. Instead, Zuckerberg said that Facebook accounts for less than 10% of the global online ad market. On that basis, Facebook is merely a piker, or at least, that's what he wants the world to believe.
Zuckerberg then argued that breaking up Facebook would only make fighting misinformation and harmful content more difficult. Any smaller company spawned from a breakup, he suggested, would spend less on policing its service, implying that its leaders would inevitably choose profits over corporate responsibility.
“The success of this company has allowed us to fund these efforts at a massive level,” Zuckerberg said. “We’re able to do things that are just not possible for other folks to do.”
Attack of the Fake Accounts
In the first quarter, Facebook removed 2.2 billion fake accounts, roughly 1 billion more than during the previous quarter. The number of accounts deleted is so big that it rivals Facebook's 2.4 billion monthly active users.
The company attributed the higher numbers to increased efforts by spammers who use automation to create fake accounts in bulk. Facebook said that nearly all of those accounts were caught within minutes of registration, if not blocked before registration could be completed.
Facebook said that fake accounts made up 5% of its monthly active users over the six-month period that the report covered, indicating that some are still slipping through and are only detected later.
Bullying and Harassment Dilemma
Facebook's use of artificial intelligence to identify and remove harassing and bullying posts is still a work in progress, according to its data. In the first quarter, it was able to proactively remove only 14.1% of such content, down from 21% during the fourth quarter.
The reason? Unlike some other kinds of content, bullying and harassment are difficult for software to identify because they often depend on personal context.
“We expect that for the foreseeable future, our automated systems will not have the ability to detect this content at a scale similar to other policy violations,” Facebook said in the transparency report.
Privacy Versus Safety
Earlier this year, Facebook announced plans to focus more on private communications for its users and encryption. More users want to chat in smaller groups or via direct messaging, Facebook said, than post publicly.
And while encryption and private spaces protect user data, they complicate how Facebook can police its network. It’s unclear whether Facebook will be able to do as good of a job in identifying harmful content considering the new reality.
"We recognize it's going to be harder to find all of the different types of harmful content," Zuckerberg said. "We'll be fighting that battle without one of the very important tools, which is, of course, being able to look at the content itself."
Firearms and Drugs
In terms of drugs, Facebook said it reviewed 900,000 pieces of content in the first quarter, nearly 300,000 more than during the previous quarter. It used software to identify 83.3% of those posts, up from 77.2% the previous quarter.
In terms of firearms, Facebook said it reviewed 670,000 pieces of content in the first quarter, down from 754,000 during the fourth quarter. Software identified 69.9% of the cases, up from 64.9% the previous quarter.