China Tries to Counter Xinjiang Backlash With … a Musical?

In one scene, Uyghur women are seen dancing in a rousing Bollywood-style face-off with a group of Uyghur men. In another, a Kazakh man serenades a group of friends with a traditional two-stringed lute while sitting in a yurt.

Welcome to “The Wings of Songs,” a state-backed musical that is the latest addition to China’s propaganda campaign to defend its policies in Xinjiang. The campaign has intensified in recent weeks as Western politicians and rights groups have accused Beijing of subjecting Uyghurs and other Muslim minorities in Xinjiang to forced labor and genocide.

The film, which debuted in Chinese cinemas last week, offers a glimpse of the alternate vision of Xinjiang that China’s ruling Communist Party is pushing to audiences at home and abroad. Far from being oppressed, the musical seems to say, the Uyghurs and other minorities are singing and dancing happily in colorful dress, a flashy take on a tired Chinese stereotype about the region’s minorities that Uyghur rights activists quickly denounced.

“The notion that Uyghurs can sing and dance so therefore there is no genocide — that’s just not going to work,” said Nury Turkel, a Uyghur-American lawyer and senior fellow at the Hudson Institute in Washington. “Genocide can take place in any beautiful place.”

Faced with Western sanctions, the Chinese government has responded with a fresh wave of Xinjiang propaganda across a wide spectrum. The approach ranges from portraying a sanitized, feel-good version of life in Xinjiang — as in the example of the musical — to deploying Chinese officials on social media sites to attack Beijing’s critics. To reinforce its message, the party is emphasizing that its efforts have rooted out the perceived threat of violent terrorism.

In the government’s telling, Xinjiang is now a peaceful place where Han Chinese, the nation’s dominant ethnic group, live in harmony alongside the region’s Muslim ethnic minorities, just like the “seeds of a pomegranate.” It’s a place where the government has successfully emancipated women from the shackles of extremist thinking. And the region’s ethnic minorities are portrayed as grateful for the government’s efforts.

That rosy portrayal is a far cry from the reality on the ground, in which the authorities maintain tight control using a dense network of surveillance cameras and police posts, and have detained many Uyghurs and other Muslims in mass internment camps and prisons. As of Monday, the film had brought in a dismal $109,000 at the box office, according to Maoyan, a company that tracks ticket sales.

The government initially denied the existence of the region’s internment camps. Then officials described the facilities as “boarding schools” in which attendance was completely voluntary.

Now, the government is increasingly adopting a more combative approach, seeking to justify its policies as necessary to combat terrorism and separatism in the region.

Chinese officials and state media have pushed the government’s narrative about its policies in Xinjiang in part by spreading alternative narratives — including disinformation — on American social networks like Twitter and Facebook. This approach reached an all-time high last year, according to a report published last week by researchers at the International Cyber Policy Center of the Australian Strategic Policy Institute, or ASPI.

The social media campaign is centered on Chinese diplomats on Twitter, state-owned media accounts, pro-Communist Party influencers and bots, the institute’s researchers found. The accounts send messages aimed at spreading disinformation about Uyghurs who have spoken out and at smearing researchers, journalists and organizations working on Xinjiang issues.

Anne-Marie Brady, a professor of Chinese politics at the University of Canterbury in New Zealand who was not involved in the ASPI report, called China’s Xinjiang offensive the biggest international propaganda campaign on a single topic that she had seen in her 25 years of researching the Chinese propaganda system.

“It’s shrill and dogmatic, it’s increasingly aggressive,” she said in emailed comments. “And it will keep on going, whether it is effective or not.”

In a statement, Twitter said it had suspended a number of the accounts cited by the ASPI researchers. Facebook said in a statement that it had recently removed a malicious hacker group that had been targeting the Uyghur diaspora. Both companies began labeling the accounts of state-affiliated media outlets last year.

The party has also asserted that it needed to take firm action after a spate of deadly attacks rocked the region some years ago. Critics say that the extent of the violence remains unclear, but also that such unrest did not justify the sweeping, indiscriminate scope of the detentions.

Last week, the government played up a claim that it had uncovered a plot by Uyghur intellectuals to sow ethnic hatred. CGTN, an international arm of China’s state broadcaster, released a documentary on Friday that accused the scholars of writing textbooks that were full of “blood, violence, terrorism and separatism.”

The books had been approved for use in elementary and middle schools in Xinjiang for more than a decade. Then in 2016, shortly before the crackdown started, they were suddenly deemed subversive.

The documentary accuses the intellectuals of having distorted historical facts, citing, for example, the inclusion of a historical photo of Ehmetjan Qasim, a leader of a short-lived independent state in Xinjiang in the late 1940s.

“It’s just absurd,” said Kamalturk Yalqun, whose father, Yalqun Rozi, a prominent Uyghur scholar, was sentenced to 15 years in prison in 2018 for attempted subversion for his involvement with the textbooks. He said the photo of Mr. Rozi shown in the film was the first image of his father he had seen in five years.

“China is just trying to come up with any way they can think of to dehumanize Uyghurs and make these textbooks look like dangerous materials,” he said by phone from Boston. “My father was not an extremist but just a scholar trying to do his job well.”

Amy Chang Chien contributed reporting.


Jack Dorsey says Twitter played a role in U.S. Capitol riot.

Jack Dorsey, Twitter’s chief executive, said during his congressional testimony on Thursday that the site played a role in the storming of the U.S. Capitol on Jan. 6, in what appeared to be the first public acknowledgment by a top social media executive of the influence of the platforms on the riot.

Mr. Dorsey’s answer came after Representative Mike Doyle, Democrat of Pennsylvania, pressed the tech chief executives at a hearing on disinformation to answer “yes” or “no” as to whether their platforms had contributed to the spread of misinformation and the planning of the attack.

Mark Zuckerberg, Facebook’s chief executive, and Sundar Pichai, Google’s chief executive, did not answer with a “yes” or “no.” Mr. Dorsey took a different tack.

“Yes,” he said. “But you also have to take into consideration the broader ecosystem. It’s not just about the technological systems that we use.”

Twitter and Facebook barred Mr. Trump from posting on their platforms after the riot. Their actions suggested that they saw a risk that posts on their sites could incite more violence, but the executives had not previously articulated what role the platforms had played.

Representative Jan Schakowsky, a Democrat of Illinois, later asked Mr. Zuckerberg about remarks that Facebook’s chief operating officer, Sheryl Sandberg, made shortly after the riot. In a January interview with Reuters, Ms. Sandberg said that the planning for the riot had been “largely organized” on other social media platforms and downplayed Facebook’s involvement.

Ms. Schakowsky asked whether Mr. Zuckerberg agreed with Ms. Sandberg’s statement.

Mr. Zuckerberg appeared to walk back Ms. Sandberg’s remarks. “In the comment that Sheryl made what I believe that we were trying to say was, and where I stand behind, is what was widely reported at the time,” he responded.

Mr. Zuckerberg then said: “Certainly there was content on our services. From that perspective, I think there’s further work that we need to do.” He seemed to want to add more before Ms. Schakowsky interrupted him.


Lawmakers Grill Tech C.E.O.s on Capitol Riot, Getting Few Direct Answers

WASHINGTON — Lawmakers grilled the leaders of Facebook, Google and Twitter on Thursday about the connection between online disinformation and the Jan. 6 riot at the Capitol, causing Twitter’s chief executive to publicly admit for the first time that his product had played a role in the events that left five people dead.

When a Democratic lawmaker asked the executives to answer with a “yes” or a “no” whether the platforms bore some responsibility for the misinformation that had contributed to the riot, Jack Dorsey of Twitter said “yes.” Neither Mark Zuckerberg of Facebook nor Sundar Pichai of Google would answer the question directly.

The roughly five-hour hearing before a House committee marked the first time lawmakers directly questioned the chief executives about social media’s role in the January riot. The tech bosses were also peppered with questions about how their companies had helped spread falsehoods about Covid-19 vaccines, enabled racism and hurt children’s mental health.

It was also the first time the executives had testified since President Biden’s inauguration. Tough questioning from lawmakers signaled that scrutiny of Silicon Valley’s business practices would not let up, and could even intensify, with Democrats in the White House and leading both chambers of Congress.

At one point during the hearing, Mr. Dorsey tweeted a single question mark with a poll that had two options: “Yes” or “No.” When asked about his tweet by a lawmaker, he said “yes” was winning.

The January riot at the Capitol has made the issue of disinformation deeply personal for lawmakers. The riot was fueled by false claims from President Donald J. Trump and others that the election had been stolen, which were rampant on social media.

Some of the participants had connections to QAnon and other online conspiracy theories. And prosecutors have said that groups involved in the riot, including the Oath Keepers and the Proud Boys, coordinated some of their actions on social media.

The platforms’ decisions to ban Mr. Trump and some of his associates after the Jan. 6 riot hardened views among conservatives that the companies are left-leaning and inclined to squelch conservative voices.

“We’re all aware of Big Tech’s ever-increasing censorship of conservative voices and their commitment to serve the radical progressive agenda,” said Representative Bob Latta of Ohio, the ranking Republican on the panel’s technology subcommittee.

The company leaders defended their businesses, saying they had invested heavily in hiring content moderators and in technology like artificial intelligence, used to identify and fight disinformation.

Mr. Zuckerberg argued against the notion that his company had a financial incentive to juice its users’ attention by driving them toward more extreme content. He said Facebook didn’t design “algorithms in order to just kind of try to tweak and optimize and get people to spend every last minute on our service.”

He added later in the hearing that election disinformation had also spread in messaging apps, where amplification and algorithms do not aid the spread of false content. He also blamed television and other traditional media for spreading election lies.

The companies showed fissures in their views on regulation. Facebook has vocally supported internet regulations in a major advertising blitz on television and in newspapers. In the hearing, Mr. Zuckerberg suggested specific regulatory reforms to a key legal shield, known as Section 230 of the Communications Decency Act, that has helped Facebook and other Silicon Valley internet giants thrive.

The legal shield protects companies that host and moderate third-party content, and says companies like Google and Twitter are simply intermediaries of their user-generated content. Democrats have argued that with that protection, companies aren’t motivated to remove disinformation. Republicans accuse the companies of using the shield to moderate too much and to take down content that doesn’t represent their political viewpoints.

“I believe that Section 230 would benefit from thoughtful changes to make it work better for people,” Mr. Zuckerberg said in his statement.

He proposed that liability protection for companies be conditional on their ability to fight the spread of certain types of unlawful content. He said platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Reforms, he said, should be different for smaller social networks, which wouldn’t have the same resources as Facebook to meet new requirements.

Mr. Pichai and Mr. Dorsey said they supported transparency requirements for content moderation but stopped short of agreeing with Mr. Zuckerberg’s other ideas. Mr. Dorsey said that it would be very difficult to distinguish a large platform from a smaller one.

Lawmakers did not appear to be won over.

“There’s a lot of smugness among you,” said Representative Bill Johnson, a Republican of Ohio. “There’s this air of untouchable-ness in your responses to many of the tough questions that you’re being asked.”

Kate Conger and Daisuke Wakabayashi contributed reporting.


Is a Big Tech Overhaul Just Around the Corner?

The leaders of Google, Facebook and Twitter testified on Thursday before a House committee in their first appearances on Capitol Hill since the start of the Biden administration. As expected, sparks flew.

The hearing was centered on questions of how to regulate disinformation online, although lawmakers also voiced concerns about the public-health effects of social media and the borderline-monopolistic practices of the largest tech companies.

On the subject of disinformation, Democratic legislators scolded the executives for the role their platforms played in spreading false claims about election fraud before the Capitol riot on Jan. 6. Jack Dorsey, the chief executive of Twitter, admitted that his company had been partly responsible for helping to circulate disinformation and plans for the Capitol attack. “But you also have to take into consideration the broader ecosystem,” he added. Sundar Pichai and Mark Zuckerberg, the top executives at Google and Facebook, avoided answering the question directly.

Lawmakers on both sides of the aisle returned often to the possibility of jettisoning or overhauling Section 230 of the Communications Decency Act, a federal law that for 25 years has granted immunity to tech companies for any harm caused by speech that’s hosted on their platforms.

The United States has a staggering number of guns: 393 million, to be precise, which is more than one per person and about 46 percent of all civilian-owned firearms in the world. As researchers at the Harvard T.H. Chan School of Public Health have put it, “more guns = more homicide” and “more guns = more suicide.”

But when it comes to understanding the causes of America’s political inertia on the issue, the lines of thought become a little more tangled. Some of them are easy to follow: There’s the line about the Senate, of course, which gives large states that favor gun regulation the same number of representatives as small states that don’t. There’s also the line about the National Rifle Association, which some gun control proponents have cast — arguably incorrectly — as the sine qua non of our national deadlock.

But there may be a psychological thread, too. Research has found that after a mass shooting, people who don’t own guns tend to identify the general availability of guns as the culprit. Gun owners, on the other hand, are more likely to blame other factors, such as popular culture or parenting.

Americans who support gun regulations also don’t prioritize the issue at the polls as much as Americans who oppose them, so gun rights advocates tend to win out. Or, in the words of Robert Gebelhoff of The Washington Post, “Gun reform doesn’t happen because Americans don’t want it enough.”




Zuckerberg, Dorsey and Pichai testify about disinformation.

The chief executives of Google, Facebook and Twitter are testifying at the House on Thursday about how disinformation spreads across their platforms, an issue for which the tech companies came under scrutiny during the presidential election and after the Jan. 6 riot at the Capitol.

The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

Republicans, for their part, were expected to press the executives over their handling of an October article in The New York Post about President Biden’s son Hunter.

Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.


How the Death of Taylor Force in Israel Echoes Through the Fight Over Online Speech

WASHINGTON — Stuart Force says he found solace on Facebook after his son was stabbed to death in Israel by a member of the militant group Hamas in 2016. He turned to the site to read hundreds of messages offering condolences on his son’s page.

But only a few months later, Mr. Force had decided that Facebook was partly to blame for the death, because the algorithms that power the social network helped spread Hamas’s content. He joined relatives of other terror victims in suing the company, arguing that its algorithms aided the crimes by regularly amplifying posts that encouraged terrorist attacks.

The legal case ended unsuccessfully last year when the Supreme Court declined to take it up. But arguments about the algorithms’ power have reverberated in Washington, where some members of Congress are citing the case in an intense debate about the law that shields tech companies from liability for content posted by users.

At a House hearing on Thursday about the spread of misinformation with the chief executives of Facebook, Twitter and Google, some lawmakers are expected to focus on how the companies’ algorithms are written to generate revenue by surfacing posts that users are inclined to click on and respond to. And some will argue that the law that protects the social networks from liability, Section 230 of the Communications Decency Act, should be changed to hold the companies responsible when their software turns the services from platforms into accomplices for crimes committed offline.

The Forces were later approached by a litigation group, which had a question: Would the family be willing to sue Facebook?

After Mr. Force spent some time on a Facebook page belonging to Hamas, the family agreed to sue. The lawsuit fit into a broader effort by the Forces to limit the resources and tools available to Palestinian groups. Mr. Force and his wife allied with lawmakers in Washington to pass legislation restricting aid to the Palestinian Authority, which governs part of the West Bank.

Their lawyers argued in an American court that Facebook gave Hamas “a highly developed and sophisticated algorithm that facilitates Hamas’s ability to reach and engage an audience it could not otherwise reach as effectively.” The lawsuit said Facebook’s algorithms had not only amplified posts but aided Hamas by recommending groups, friends and events to users.

The federal district judge, in New York, ruled against the claims, citing Section 230. The lawyers for the Force family appealed to a three-judge panel of the U.S. Court of Appeals for the Second Circuit, and two of the judges ruled entirely for Facebook. The other, Judge Robert Katzmann, wrote a 35-page dissent to part of the ruling, arguing that Facebook’s algorithmic recommendations shouldn’t be covered by the legal protections.

“Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths,” he said.

Late last year, the Supreme Court rejected a call to hear a different case that would have tested the Section 230 shield. In a statement attached to the court’s decision, Justice Clarence Thomas called for the court to consider whether Section 230’s protections had been expanded too far, citing Mr. Force’s lawsuit and Judge Katzmann’s opinion.

Justice Thomas said the court didn’t need to decide in the moment whether to rein in the legal protections. “But in an appropriate case, it behooves us to do so,” he said.

Some lawmakers, lawyers and academics say recognition of the power of social media’s algorithms in determining what people see is long overdue. The platforms usually do not reveal exactly what factors the algorithms use to make decisions and how they are weighed against one another.

“Amplification and automated decision-making systems are creating opportunities for connection that are otherwise not possible,” said Olivier Sylvain, a professor of law at Fordham University, who has made the argument in the context of civil rights. “They’re materially contributing to the content.”

That argument has appeared in a series of lawsuits contending that Facebook should be held responsible for discrimination in housing because its platform allowed advertisements to be targeted according to a user’s race. A draft bill produced by Representative Yvette D. Clarke, Democrat of New York, would strip Section 230 immunity from targeted ads that violated civil rights law.

A bill introduced last year by Representatives Tom Malinowski of New Jersey and Anna G. Eshoo of California, both Democrats, would strip Section 230 protections from social media platforms when their algorithms amplified content that violated some antiterrorism and civil rights laws. The news release announcing the bill, which was reintroduced on Wednesday, cited the Force family’s lawsuit against Facebook. Mr. Malinowski said he had been inspired in part by Judge Katzmann’s dissent.

Critics of the legislation say it may violate the First Amendment and, because there are so many algorithms on the web, could sweep up a wider range of services than lawmakers intend. They also say there’s a more fundamental problem: Regulating algorithmic amplification out of existence wouldn’t eliminate the impulses that drive it.

“There’s a thing you kind of can’t get away from,” said Daphne Keller, the director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, “which is human demand for garbage content.”


How Anti-Asian Activity Online Set the Stage for Real-World Violence

Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus. That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets.

Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, usage of the term “Chinese virus” jumped 650 percent on Twitter; a day later, there was an 800 percent increase in its usage in conservative news articles, the study found.

Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment to an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese.

“There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment.

Misinformation linking the coronavirus to anti-Asian beliefs also rose last year. Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm.

In one example, a Fox News article from April baselessly said that the coronavirus had been created in a lab in the Chinese city of Wuhan and intentionally released, and the story went viral. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.


For Political Cartoonists, the Irony Was That Facebook Didn’t Recognize Irony

SAN FRANCISCO — Since 2013, Matt Bors has made a living as a left-leaning cartoonist on the internet. His site, The Nib, runs cartoons from him and other contributors that regularly skewer right-wing movements and conservatives with political commentary steeped in irony.

One cartoon in December took aim at the Proud Boys, a far-right extremist group. With tongue planted firmly in cheek, Mr. Bors titled it “Boys Will Be Boys” and depicted a recruitment session in which new Proud Boys were trained to be “stabby guys” and to “yell slurs at teenagers” while playing video games.

Days later, Facebook sent Mr. Bors a message saying that it had removed “Boys Will Be Boys” from his Facebook page for “advocating violence” and that he was on probation for violating its content policies.

It wasn’t the first time that Facebook had dinged him. Last year, the company briefly took down another Nib cartoon — an ironic critique of former President Donald J. Trump’s pandemic response, the substance of which supported wearing masks in public — for “spreading misinformation” about the coronavirus. Instagram, which Facebook owns, removed one of his sardonic antiviolence cartoons in 2019 because, the photo-sharing app said, it promoted violence.

Facebook barred Mr. Trump from posting on its site altogether after he incited a crowd that stormed the U.S. Capitol.

At the same time, misinformation researchers said, Facebook has had trouble identifying the slipperiest and subtlest of political content: satire. While satire and irony are common in everyday speech, the company’s artificial intelligence systems — and even its human moderators — can have difficulty distinguishing them. That’s because such discourse relies on nuance, implication, exaggeration and parody to make a point.

That means Facebook has sometimes misunderstood the intent of political cartoons, leading to takedowns. The company has acknowledged that some of the cartoons it expunged — including those from Mr. Bors — were removed by mistake, and it later reinstated them.

“If social media companies are going to take on the responsibility of finally regulating incitement, conspiracies and hate speech, then they are going to have to develop some literacy around satire,” Mr. Bors, 37, said in an interview.

Conservatives, meanwhile, have accused Facebook and other internet platforms of suppressing only right-wing views.

In a statement, Facebook did not address whether it has trouble spotting satire. Instead, the company said it made room for satirical content — but only up to a point. Posts about hate groups and extremist content, it said, are allowed only if the posts clearly condemn or neutrally discuss them, because the risk for real-world harm is otherwise too great.

Facebook’s struggles to moderate content across its core social network, Instagram, Messenger and WhatsApp have been well documented. After Russians manipulated the platform before the 2016 presidential election by spreading inflammatory posts, the company recruited thousands of third-party moderators to prevent a recurrence. It also developed sophisticated algorithms to sift through content.

Facebook also created a process so that only verified buyers could purchase political ads, and instituted policies against hate speech to limit posts that contained anti-Semitic or white supremacist content.

Last year, Facebook said it had stopped more than 2.2 million political ad submissions that had not yet been verified and that targeted U.S. users. It also cracked down on the conspiracy group QAnon and the Proud Boys, removed vaccine misinformation, and displayed warnings on more than 150 million pieces of content viewed in the United States that third-party fact checkers debunked.

But satire kept popping up as a blind spot. In 2019 and 2020, Facebook often dealt with far-right misinformation sites that used “satire” claims to protect their presence on the platform, said Emerson Brooking, a disinformation researcher at the Atlantic Council. For example, The Babylon Bee, a right-leaning site, frequently trafficked in misinformation under the guise of satire.

Mr. Hall, a political cartoonist whose independent work regularly appears in North American and European newspapers, has run into the same problem.

When Prime Minister Benjamin Netanyahu said in 2019 that he would bar two congresswomen — critics of Israel’s treatment of Palestinians — from visiting the country, Mr. Hall drew a cartoon showing a sign affixed to barbed wire that read, in German, “Jews are not welcome here.” He added a line of text addressing Mr. Netanyahu: “Hey Bibi, did you forget something?”

Mr. Hall said his intent was to draw an analogy between how Mr. Netanyahu was treating the U.S. representatives and Nazi Germany. Facebook took the cartoon down shortly after it was posted, saying it violated its standards on hate speech.

“If algorithms are making these decisions based solely upon words that pop up on a feed, then that is not a catalyst for fair or measured decisions when it comes to free speech,” Mr. Hall said.

Adam Zyglis, a nationally syndicated political cartoonist for The Buffalo News, was also caught in Facebook’s cross hairs.

For Mr. Bors, the stakes are high. While he makes money from paid memberships to The Nib and book sales on his personal site, he gets most of his traffic and new readership through Facebook and Instagram.

The takedowns, which have resulted in “strikes” against his Facebook page, could upend that. If he accumulates more strikes, his page could be erased, something that Mr. Bors said would cut 60 percent of his readership.

“Removing someone from social media can end their career these days, so you need a process that distinguishes incitement of violence from a satire of these very groups doing the incitement,” he said.

Mr. Bors said he had also heard from the Proud Boys. A group of them recently organized on the messaging chat app Telegram to mass-report his critical cartoons to Facebook for violating the site’s community standards, he said.

“You just wake up and find you’re in danger of being shut down because white nationalists were triggered by your comic,” he said.

Facebook has sometimes recognized its errors and corrected them after he has made appeals, Mr. Bors said. But the back-and-forth and the potential for expulsion from the site have been frustrating and made him question his work, he said.

“Sometimes I do think about if a joke is worth it, or if it’s going to get us banned,” he said. “The problem with that is, where is the line on that kind of thinking? How will it affect my work in the long run?”

Cade Metz contributed reporting.
