
SAN FRANCISCO — Since 2013, Matt Bors has made a living as a left-leaning cartoonist on the internet. His site, The Nib, runs cartoons from him and other contributors that regularly skewer right-wing movements and conservatives with political commentary steeped in irony.
One cartoon in December took aim at the Proud Boys, a far-right extremist group. With tongue planted firmly in cheek, Mr. Bors titled it “Boys Will Be Boys” and depicted a recruitment session where new Proud Boys were trained to be “stabby guys” and to “yell slurs at teenagers” while playing video games.
Days later, Facebook sent Mr. Bors a message saying that it had removed “Boys Will Be Boys” from his Facebook page for “advocating violence” and that he was on probation for violating its content policies.
It wasn’t the first time that Facebook had dinged him. Last year, the company briefly took down another Nib cartoon — an ironic critique of former President Donald J. Trump’s pandemic response, the substance of which supported wearing masks in public — for “spreading misinformation” about the coronavirus. Instagram, which Facebook owns, removed one of his sardonic antiviolence cartoons in 2019 because, the photo-sharing app said, it promoted violence.
In recent years, Facebook has become more proactive about restricting certain kinds of political speech, clamping down on posts about extremist groups and calls for violence. It barred Mr. Trump from posting on its site altogether after he incited a crowd that stormed the U.S. Capitol.
At the same time, misinformation researchers said, Facebook has had trouble identifying the slipperiest and subtlest of political content: satire. While satire and irony are common in everyday speech, the company’s artificial intelligence systems — and even its human moderators — can have difficulty distinguishing them. That’s because such discourse relies on nuance, implication, exaggeration and parody to make a point.
That means Facebook has sometimes misunderstood the intent of political cartoons, leading to takedowns. The company has acknowledged that some of the cartoons it expunged, including those from Mr. Bors, were removed by mistake and later reinstated.
“If social media companies are going to take on the responsibility of finally regulating incitement, conspiracies and hate speech, then they are going to have to develop some literacy around satire,” Mr. Bors, 37, said in an interview.
Conservatives, for their part, have accused Facebook and other internet platforms of suppressing only right-wing views.
In a statement, Facebook did not address whether it has trouble spotting satire. Instead, the company said it made room for satirical content — but only up to a point. Posts about hate groups and extremist content, it said, are allowed only if the posts clearly condemn or neutrally discuss them, because the risk for real-world harm is otherwise too great.
Facebook’s struggles to moderate content across its core social network, Instagram, Messenger and WhatsApp have been well documented. After Russians manipulated the platform before the 2016 presidential election by spreading inflammatory posts, the company recruited thousands of third-party moderators to prevent a recurrence. It also developed sophisticated algorithms to sift through content.
Facebook also created a process so that only verified buyers could purchase political ads, and instituted policies against hate speech to limit posts that contained anti-Semitic or white supremacist content.
Last year, Facebook said it had stopped more than 2.2 million political ad submissions that had not yet been verified and that targeted U.S. users. It also cracked down on the conspiracy group QAnon and the Proud Boys, removed vaccine misinformation, and displayed warnings on more than 150 million pieces of content viewed in the United States that third-party fact checkers debunked.
But satire kept popping up as a blind spot. In 2019 and 2020, Facebook often dealt with far-right misinformation sites that used “satire” claims to protect their presence on the platform, said Emerson T. Brooking, a resident fellow for the Atlantic Council who studies digital platforms. For example, The Babylon Bee, a right-leaning site, frequently trafficked in misinformation under the guise of satire.