Mark Zuckerberg met with the civil rights groups behind a Facebook ad boycott over how the company handles hate speech and other race-related issues.
The Stop Hate for Profit campaign brought 10 recommendations to Zuckerberg to “provide clear steps that Facebook could take immediately” to increase platform accountability and decency.
They include adding a C-suite executive with civil rights expertise to review policies for bias and hate; removing groups focused on White supremacy; and eliminating the exemption that allows politicians to lie in ads without punishment.
In a Facebook post before the meeting, COO Sheryl Sandberg said the platform is already addressing some of these concerns and will release a “two-year review of our policies” from an independent civil rights audit on Wednesday. She added that Facebook “won’t be making every change” called for in the audit, but “will put more of their proposals into practice soon.”
Jonathan Greenblatt, CEO of the Anti-Defamation League, told CNBC that Sandberg’s post didn’t sit well with him or other members of Stop Hate for Profit.
“If this were a good-faith effort to release the audit, they wouldn’t be shoehorning it in, accelerating its release timed with our meeting today,” Greenblatt said. “If this were a good-faith effort, they would have already given us how they’re responding to our recommendations, rather than spinning with a Facebook post hours before the meeting.”
Sandberg also wrote that Facebook is spending "billions of dollars" on teams and tech to remove hate, including on artificial intelligence that can "remove hateful content at scale." In a recent CNN interview, Facebook VP of Global Affairs and Communications Nick Clegg elaborated on how that system has worked so far.
"We remove about 3 million items of hate speech content per month around the world. Ninety percent of that, by the way, we get to that before anyone reports it to us," Clegg said.
However, experts say this is the norm for most big social media platforms.
"Artificial intelligence is very widely used, and a very significant proportion of the material that is ultimately taken down is initially identified by artificial intelligence-driven technologies," said Paul Barrett, deputy director at the NYU Stern Center for Business and Human Rights and author of the study "Who Moderates The Social Media Giants?"
And even if Facebook’s software flagged 90% of hate speech, that figure doesn’t account for accuracy.
"One should not assume that just because the AI technology is able, fairly effectively, to flag things, that it is also effective enough to actually follow through and take it all down," Barrett said.
Barrett also pointed to the flip side of that statistic: if Facebook's AI flags 90% of the roughly 3 million hate speech items removed each month, the remaining 10%, about 300,000 posts, slips past the software and surfaces only through other channels, such as user reports.
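Barrett's back-of-the-envelope figure follows directly from the numbers Clegg cited. A minimal sketch of the arithmetic, using the 3 million monthly removals and 90% AI flag rate from the interview above:

```python
# Figures from Clegg's CNBC interview: roughly 3 million hate speech
# items removed per month, with 90% flagged by AI before any user report.
monthly_removals = 3_000_000
ai_flag_rate = 0.90

# The remaining share reaches moderators only through other channels,
# such as user reports.
missed_by_ai = monthly_removals * (1 - ai_flag_rate)
print(round(missed_by_ai))  # 300000
```

This only counts content that was eventually removed; as Barrett notes above, it says nothing about hate speech that was never flagged or reported at all.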