Coco’s Choice: A Charlie Hebdo Cartoonist’s Road Back From Hell

PARIS — For years after the attack on the Charlie Hebdo office, the most unbearable words for Corinne Rey, known as Coco, were, “In your place.” Other people couldn’t put themselves in her place at the satirical magazine. Others couldn’t know what they would have done.

On Jan. 7, 2015, Ms. Rey, a cartoonist, was leaving the magazine’s Paris offices to pick up her 1-year-old daughter from day care when she was confronted by two masked men brandishing assault rifles. They pointed the guns at her head. “Take us to Charlie Hebdo!” they shouted. “You have insulted the Prophet.”

In her recently published graphic novel, “To Draw Again,” Ms. Rey, 38, portrays herself as a small, trembling figure being tracked up the stairs by two immense featureless shapes whose weapons bear down on her. “That is how I saw them,” she said in a recent interview in Paris. “Monsters, dressed in black, huge, with no human trait.”

Chérif and Saïd Kouachi, the terrorists, had a clear objective: to avenge Charlie Hebdo’s publication of cartoons of the Prophet Muhammad by killing its editor, Stéphane Charbonnier, known as Charb, and the staff. They prodded Ms. Rey at gunpoint toward the Charlie office.

Charb was the first to be shot. Ms. Rey hid under a desk. “I heard the shots, the Allahu akbar, and the silence afterward,” she said. “No screams. Not one. I remember the sounds, precisely, of chairs, of people getting up from their chairs, just as they were killed.”

In her book, a way to speak of and transcend the unsayable, Ms. Rey chooses not to portray the terrible scene of prone bodies. Instead there are pages of darkness, as if of dense tangled dark wire, the void left by her dead friends and colleagues.

The terrorists killed a dozen people that day. It is hard to imagine a more brutal confrontation of a free press and the fanatic’s fury. The words of the Kouachi brothers, whom the police killed two days later, fill a page of the book: “We have avenged the Prophet. We have killed Charlie Hebdo.”

“I was left with terrible guilt feelings,” Ms. Rey said in the interview. “I had the impression of making a choice, when really there was none.”

Over 10 pages of “To Draw Again,” she evokes her self-interrogation in a maelstrom of captioned images: “And if I had screamed for help? And if I had tried to flee? And if I had pushed them down the stairs? And if. And if. And if …”

One absurd image, of her kicking her massive assailants in the face, conveys that there was no if, just as at Auschwitz, in Primo Levi’s memorable phrase, there was no why.

The beheading last October of Samuel Paty, a history teacher in a Paris suburb who showed images of the Prophet Muhammad in a class on free speech, affected Ms. Rey deeply — proof that the battle for which her friends’ lives were lost continues in France.

“Paty is somehow a member of Charlie, almost a colleague,” she said. “He wanted to explain what freedom of expression is. Explain that blasphemy is not a crime in France.” Explain freedom of opinion and thought, too. Explain freedom itself.

A middle school in France refused to be named for Mr. Paty for fear of being attacked, she said. “I, too, am sometimes afraid, but I transcend that fear.”

I asked Mr. Fieschi whether Ms. Rey had changed since the devastating day known simply as “7,” much as 9/11 became an American shorthand. “More than change her, I think it revealed her,” he said. “It deepened her. Her simplicity lost its naïveté. She always fought for freedom. She does so even more now.”

Ms. Rey is uncomfortable with the idea of victimhood. She does not want to be seen that way. She has fought to emerge from an unimaginable place. By depicting Coco’s choice in her book, she has helped herself lay that choice to rest.

In 2018, she had another child, a boy. “I am a mother,” she said. “I draw, and that is my passion. Charlie did not die; it lives. I am a little better, even if the absentees around the table are always there.”


Nick Clegg Steers Facebook’s Trump Decision

Facebook wanted Mr. Clegg to help repair its relationships with regulators, political leaders and the media after the Cambridge Analytica scandal, when data improperly pulled from Facebook was used to create voter profiles. Mr. Clegg’s international experience and comfort in five languages — English, Spanish, French, German and Dutch — appealed to the American-centric company.

Friends said Mr. Clegg had initially been reluctant to join Facebook, one of the world’s most polarizing corporations. But he wanted to be back at the center of important political and policy debates. In a memo outlining how he envisioned the role, he argued that it was unsustainable for a private company like Facebook, rather than democratically elected governments, to have so much power, especially on speech-related issues.

“My advice was strongly to go for it,” said Tony Blair, the former British prime minister, whom Mr. Clegg spoke with before taking the job, “because you’re going to be part of one of the most powerful companies in the world at a moment of enormous change in the world, and when technology is at the heart of that change.”

Inside Facebook, where Mr. Zuckerberg leans on a group of friends and early employees for counsel, Mr. Clegg earned the trust of his new boss. At the company’s headquarters, where proximity to Mr. Zuckerberg is power, Mr. Clegg’s desk was placed nearby. He orchestrated a trip through Europe with Mr. Zuckerberg, meeting with European Union leaders in Brussels and President Emmanuel Macron of France in Paris.

Since Mr. Clegg’s arrival, Facebook has shifted some of its policy positions. It now appears more accepting of regulation and higher taxes. He overcame reluctance from Mr. Zuckerberg and others in the company to ban political ads in the weeks before Election Day last year. And he was the main internal supporter for recently announced product changes that give users more control over what posts they see in their Facebook feeds.

“He has a track record of knowing what it’s like to work inside a cabinet that needs to make decisions quickly and move at the speed of a country, or in this case a platform,” said Chris Cox, Facebook’s chief product officer, who worked with Mr. Clegg on the user-control changes.


What Is the Facebook Oversight Board?

An independent panel called the Facebook Oversight Board on Wednesday upheld Facebook’s ban on former President Donald J. Trump, but said the company must review its decision to impose an indefinite suspension.

The company suspended Mr. Trump’s account on Jan. 7, after he used social media accounts to incite a mob of supporters to attack the Capitol a day earlier. The board gave Facebook six months to determine its final decision on Mr. Trump’s account status.

Here are key facts to know about the Facebook Oversight Board and its decision:

The board is a panel of about 20 former political leaders, human rights activists and journalists picked by Facebook to deliberate the company’s content decisions. It began a year ago and is based in London.

In 2018, Facebook’s chief executive, Mark Zuckerberg, conceived the idea of an independent body that would act like a Supreme Court for content decisions. It would give the public a way to appeal Facebook’s removal of content that violates its policies against harmful and hateful posts. Mr. Zuckerberg said neither he nor the company wanted to have the final word on speech.

Both the company and the panel’s members, who are paid, stress that the board is independent. But Facebook funds the board through a $130 million trust, and its top executives played a big role in the board’s formation.

So far the board has issued a handful of decisions on minor takedowns by Facebook. The majority of the rulings have overturned Facebook’s decisions.

Two weeks after Facebook decided to temporarily lock the account of Mr. Trump, the company said it would refer the case to the Oversight Board, effectively punting to outsiders a final decision on the former president.

In a blog post, the company explained that executives had blocked Mr. Trump’s account because he had violated the company’s policies against the incitement of violence and that the deadly storming of the Capitol defied the company’s belief in a peaceful transition of government and the democratic process.

“We look forward to receiving the board’s decision — and we hope, given the clear justification for our actions on Jan. 7, that it will uphold the choices we made,” Nick Clegg, Facebook’s vice president for global affairs, said in the post.

The board could tell Facebook to remove the ban or to keep it. Its ruling could also come with more nuance: the board could say that the ban was appropriate at the time but is no longer necessary, or that it was the wrong decision from the start.

The company then has seven days to put the board’s ruling into effect.

The board takes cases that are referred by Facebook or the public. A panel of five members deliberates each case first, with at least one member based in the region the case comes from.

The members meet to discuss the case and vet public comments. More than 9,000 comments were submitted in the Trump case — so many that the board extended its usual 90-day deadline for a decision. The board bases its rulings on two main criteria: whether Facebook’s action followed the company’s community standards and whether it adhered to human rights law. When the smaller panel reaches a majority, its decision goes to the full board for a vote.

In the Trump case, Facebook also asked the board to give policy recommendations on how to handle the accounts of political leaders. The company doesn’t have to adopt the recommendations.

If Facebook follows its own rules, the ruling is binding: the company has said that all decisions from the oversight board are final, and that even Mr. Zuckerberg couldn’t overturn them. (Mr. Trump was also barred, permanently, from Twitter, where he had some 88 million followers.)

But there is no body that enforces this agreement between Facebook and the board. Facebook has rejected one recommendation by the board that dealt with the takedown of a Covid-19 post. The company says recommendations are different from rulings and are not binding.


Far-Right French Leader Marine Le Pen Acquitted Over ISIS Tweets

PARIS — Marine Le Pen, the French far-right leader, was acquitted on Tuesday in a criminal case involving graphic photographs of acts of violence by the Islamic State that she posted on Twitter in 2015 after comparisons were drawn between the group and her party.

Ms. Le Pen, the head of the National Rally party, was acquitted by a court in Nanterre, a western suburb of Paris. The charge against her — the dissemination of violent messages — carried a sentence of up to three years in prison and a fine of 75,000 euros, about $90,000, but prosecutors had sought a fine of only 5,000 euros.

Rodolphe Bosselut, Ms. Le Pen’s lawyer in the case, said, “The court judged that by publishing the photos, she was exercising her freedom of expression.” He added that the ruling underlined that the posts clearly were not Islamic State propaganda and had an “informative value” instead.

Prosecutors opened their investigation in December 2015, shortly after Ms. Le Pen — furious over a televised interview in which a French journalist compared her party to the Islamic State — posted three pictures on Twitter that showed killings carried out by the group. One showed the body of James Foley, an American journalist who was kidnapped in Syria in 2012 and later beheaded by the group.

Ms. Le Pen deleted that post after criticism from Mr. Foley’s family, but the two other pictures, which showed a man in an orange jumpsuit being run over by a tank and a prisoner being burned alive in a cage, remained online.

“Daesh is THAT!” she wrote, using an Arabic acronym for the Islamic State, which is also known as ISIS.

The pictures — posted just weeks after a string of deadly terrorist attacks in and around Paris — caused outrage in France.

Ms. Le Pen lost to President Emmanuel Macron in the 2017 election in France, and her party has a limited presence in Parliament. But she is still seen as Mr. Macron’s main opponent on the national political scene, and the verdict will most likely help her prospects in presidential elections next year, with early polls suggesting that she will again face Mr. Macron in a runoff.

The killing of a police officer by a radicalized Tunisian man last month in a town southwest of Paris has fueled a resurgent debate about terrorism, security and immigration, all themes that have fed the rise of Ms. Le Pen’s far-right party, despite Mr. Macron’s attempts to court voters on those issues.

The traditional political barriers that long kept the far right from power have appeared increasingly fragile, and Ms. Le Pen has spent years trying to soften her image and pull her party from the extremist fringe into the mainstream.

Unlike other French politicians who have recently been convicted on serious charges like corruption or embezzlement, Ms. Le Pen was prosecuted under a more obscure article in the French penal code that prohibits disseminating messages that are “violent” or that could “seriously harm human dignity” and that could be seen by a minor.

While freedom of expression enjoys robust support in France, the country’s speech laws are often considered more restrictive than those of the United States, with prohibitions on calls to violence and on hate speech.

Ms. Le Pen has called the investigation a political witch hunt aimed at silencing her, arguing that she was being wrongly prosecuted for exercising her free speech, on charges normally meant to protect minors from violent propaganda or pornography.

“The crime is causing harm to human dignity, not its photographic reproduction,” she said during the trial, held in February.

Gilbert Collard, a lawyer and National Rally representative in the European Parliament who had posted pictures of Islamic State violence on the same day as Ms. Le Pen, was also acquitted of the charges against him on Tuesday.

The court’s verdict on Ms. Le Pen comes amid an increasingly heated political climate in France, ahead of regional elections this June and the presidential elections scheduled for next year.


Is an Activist’s Pricey House News? Facebook Alone Decides.

The Post’s editorial board wrote that Facebook and other social media companies “claim to be ‘neutral’ and that they aren’t making editorial decisions in a cynical bid to stave off regulation or legal accountability that threatens their profits. But they do act as publishers — just very bad ones.”

Of course, it takes one to know one. The Post, always a mix of strong local news, great gossip and spun-up conservative politics, is making a bid for the title of worst newspaper in America right now. It has run a string of scary stories about Covid vaccines, the highlight of which was a headline linking vaccines to herpes, part of a broader attempt to extend its digital reach. Great stuff, if you’re mining for traffic in anti-vax Telegram groups. The piece on the Black Lives Matter activist that Facebook blocked was pretty weak, too. It insinuated, without evidence, that her wealth was ill-gotten, and mostly just sneered at how “the self-described Marxist last month purchased a $1.4 million home.”

But then, you’ve probably hate-read a story about a person you disliked buying an expensive house. When Lachlan Murdoch, the co-chairman of The Post’s parent company, bought the most expensive house in Los Angeles, for instance, it received wide and occasionally sneering coverage. Maybe Mr. Murdoch didn’t know he could get the stories deleted by Facebook.

Facebook doesn’t keep a central register of news articles it expunges on these grounds, or a count of how many it has blocked, though the service did block a Daily Mail article about the Black Lives Matter activist’s real estate as well. It also regularly deletes offending posts by individuals, including photos of the home of the Fox News star Tucker Carlson, a Facebook employee said.

What Facebook’s clash with The Post really revealed — and what surprised me — is that the platform does not defer, at all, to news organizations on questions of news judgment. A decision by The Post, or The New York Times, that someone’s personal wealth is newsworthy carries no weight in the company’s opaque enforcement mechanisms. Nor, Facebook’s lawyer said, does a more nebulous and reasonable human judgment that the country has felt on edge for the last year and that a Black activist’s concern for her own safety was justified. (The activist didn’t respond to my inquiry but, in an Instagram post, called the reporting on her personal finances “doxxing” and a “tactic of terror.”)

The point of Facebook’s bureaucracy is to replace human judgment with a kind of strict corporate law. “The policy in this case prioritizes safety and privacy, and this enforcement shows how difficult these trade-offs can be,” the company’s vice president for communications, Tucker Bounds, said. “To help us understand if our policies are in the right place, we are referring the policy to the Oversight Board.”

The board is a promising kind of supercourt that has yet to set much meaningful policy. So this rule could eventually change. (Get your stories deleted while you can!)


Hong Kong Sentences Jimmy Lai, Other Pro-Democracy Figures, to Prison

HONG KONG — Jimmy Lai, a pro-democracy media figure, and several of Hong Kong’s most prominent opposition campaigners were sentenced on Friday to prison terms of eight to 18 months for holding an unauthorized peaceful protest.

Supporters of the defendants say the prosecutions are the latest sign of the fundamental transformation that Beijing has sought to impose on Hong Kong. Until recently, the city had long been a bastion of free speech. Now, the sentences send an unmistakable message that activism carries severe risks for even the most internationally recognized opposition figures.

The court sentenced Mr. Lai, 73, a media tycoon who founded Apple Daily, an aggressively pro-democracy newspaper, to 12 months in prison. Martin Lee, an 82-year-old lawyer, often called Hong Kong’s “father of democracy,” was handed a suspended 11-month prison term, meaning he would avoid being put behind bars if he is not convicted of another crime in the next two years.

Beijing has also overhauled Hong Kong’s electoral system to cement the pro-Beijing establishment’s grip on power. Protests have been largely barred during the pandemic, and self-censorship in the media and arts, which are under intense official pressure, is a growing concern.

Over a period of months in 2019, hundreds of thousands of people joined antigovernment demonstrations in one of the greatest challenges to the Communist Party in decades. The sentences imposed on Friday, added to the measures already taken against dissent, are likely to chill participation in such protests in the future.

“It’s very clear that the approach has changed radically, not just by courts and police,” said Sharron Fast, a media law lecturer at the University of Hong Kong. “The emphasis is on deterrence; the emphasis is on punishment. And with large-scale assemblies, the risk is very high.”

The case centered on a march on Aug. 18, 2019, that followed a gathering in Victoria Park on Hong Kong Island. The rally in the park had been permitted by the police, but the authorities, citing the violence at earlier protests, had not approved plans for demonstrators to march about two miles to government headquarters afterward.

Mr. Lee has spent decades pressing for democracy in Hong Kong, traveling the world, including on many trips to Washington, to lobby for that cause. Such internationally focused activism is now banned under the national security law.

Mr. Lai, the media mogul, was smuggled into Hong Kong from mainland China as a child and worked his way up from factory laborer to clothing company tycoon. He then put his wealth into crusading, tabloid-style publications that have been sharply critical of the authorities in Beijing and Hong Kong.

Mr. Lai also faces a fraud case and charges of collusion with a foreign country under the security law for allegedly calling for sanctions against Hong Kong. In a separate hearing on Friday, prosecutors added two more national security charges, accusing Mr. Lai of conspiracy to commit subversion and obstructing justice.

In the illegal assembly case, the court rejected defense arguments that the procession after the rally was necessary to help protesters safely clear out of the crowded park, or that potential imprisonment for a nonviolent march would infringe on the rights to free speech and assembly that have traditionally been protected in Hong Kong.

The Aug. 18 demonstration itself was peaceful, though other protests that year did devolve into widespread violence.

Mr. Lai, in a letter this week to his colleagues at Apple Daily, told them to be careful because “freedom of speech is dangerous work now.”

“The situation in Hong Kong is becoming more and more chilling,” he wrote. “The era is falling apart before us, and it is therefore time for us to stand with our heads high.”


Is a Big Tech Overhaul Just Around the Corner?

The leaders of Google, Facebook and Twitter testified on Thursday before a House committee in their first appearances on Capitol Hill since the start of the Biden administration. As expected, sparks flew.

The hearing was centered on questions of how to regulate disinformation online, although lawmakers also voiced concerns about the public-health effects of social media and the borderline-monopolistic practices of the largest tech companies.

On the subject of disinformation, Democratic legislators scolded the executives for the role their platforms played in spreading false claims about election fraud before the Capitol riot on Jan. 6. Jack Dorsey, the chief executive of Twitter, admitted that his company had been partly responsible for helping to circulate disinformation and plans for the Capitol attack. “But you also have to take into consideration the broader ecosystem,” he added. Sundar Pichai and Mark Zuckerberg, the top executives at Google and Facebook, avoided answering the question directly.

Lawmakers on both sides of the aisle returned often to the possibility of jettisoning or overhauling Section 230 of the Communications Decency Act, a federal law that for 25 years has granted immunity to tech companies for any harm caused by speech that’s hosted on their platforms.

The United States has 393 million civilian-owned guns — more than one per person, and about 46 percent of all civilian-owned firearms in the world. As researchers at the Harvard T.H. Chan School of Public Health have put it, “more guns = more homicide” and “more guns = more suicide.”

But when it comes to understanding the causes of America’s political inertia on the issue, the lines of thought become a little more tangled. Some of them are easy to follow: There’s the line about the Senate, of course, which gives large states that favor gun regulation the same number of representatives as small states that don’t. There’s also the line about the National Rifle Association, which some gun control proponents have cast — arguably incorrectly — as the sine qua non of our national deadlock.

But there may be a psychological thread, too. Research has found that after a mass shooting, people who don’t own guns tend to identify the general availability of guns as the culprit. Gun owners, on the other hand, are more likely to blame other factors, such as popular culture or parenting.

Americans who support gun regulations also don’t prioritize the issue at the polls as much as Americans who oppose them, so gun rights advocates tend to win out. Or, in the words of Robert Gebelhoff of The Washington Post, “Gun reform doesn’t happen because Americans don’t want it enough.”




Zuckerberg, Dorsey and Pichai testify about disinformation.

The chief executives of Google, Facebook and Twitter are testifying before a House committee on Thursday about how disinformation spreads across their platforms, an issue for which the tech companies were scrutinized during the presidential election and after the Jan. 6 riot at the Capitol.

The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

Republicans, for their part, were expected to press the executives over accusations that the platforms censor conservative views, including Twitter’s decision to block an October article in The New York Post about President Biden’s son Hunter.

Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.


How the Death of Taylor Force in Israel Echoes Through the Fight Over Online Speech

WASHINGTON — Stuart Force says he found solace on Facebook after his son was stabbed to death in Israel by a member of the militant group Hamas in 2016. He turned to the site to read hundreds of messages offering condolences on his son’s page.

But only a few months later, Mr. Force had decided that Facebook was partly to blame for the death, because the algorithms that power the social network helped spread Hamas’s content. He joined relatives of other terror victims in suing the company, arguing that its algorithms aided the crimes by regularly amplifying posts that encouraged terrorist attacks.

The legal case ended unsuccessfully last year when the Supreme Court declined to take it up. But arguments about the algorithms’ power have reverberated in Washington, where some members of Congress are citing the case in an intense debate about the law that shields tech companies from liability for content posted by users.

At a House hearing on Thursday about the spread of misinformation with the chief executives of Facebook, Twitter and Google, some lawmakers are expected to focus on how the companies’ algorithms are written to generate revenue by surfacing posts that users are inclined to click on and respond to. And some will argue that the law that protects the social networks from liability, Section 230 of the Communications Decency Act, should be changed to hold the companies responsible when their software turns the services from platforms into accomplices for crimes committed offline.

The family was approached by a litigation group, which had a question: Would the Force family be willing to sue Facebook?

After Mr. Force spent some time on a Facebook page belonging to Hamas, the family agreed to sue. The lawsuit fit into a broader effort by the Forces to limit the resources and tools available to Palestinian groups. Mr. Force and his wife allied with lawmakers in Washington to pass legislation restricting aid to the Palestinian Authority, which governs part of the West Bank.

Their lawyers argued in an American court that Facebook gave Hamas “a highly developed and sophisticated algorithm that facilitates Hamas’s ability to reach and engage an audience it could not otherwise reach as effectively.” The lawsuit said Facebook’s algorithms had not only amplified posts but aided Hamas by recommending groups, friends and events to users.

The federal district judge, in New York, ruled against the claims, citing Section 230. The lawyers for the Force family appealed to a three-judge panel of the U.S. Court of Appeals for the Second Circuit, and two of the judges ruled entirely for Facebook. The other, Judge Robert Katzmann, wrote a 35-page dissent to part of the ruling, arguing that Facebook’s algorithmic recommendations shouldn’t be covered by the legal protections.

“Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths,” he said.

Late last year, the Supreme Court rejected a call to hear a different case that would have tested the Section 230 shield. In a statement attached to the court’s decision, Justice Clarence Thomas called for the court to consider whether Section 230’s protections had been expanded too far, citing Mr. Force’s lawsuit and Judge Katzmann’s opinion.

Justice Thomas said the court didn’t need to decide in the moment whether to rein in the legal protections. “But in an appropriate case, it behooves us to do so,” he said.

Some lawmakers, lawyers and academics say recognition of the power of social media’s algorithms in determining what people see is long overdue. The platforms usually do not reveal exactly what factors the algorithms use to make decisions and how they are weighed against one another.

“Amplification and automated decision-making systems are creating opportunities for connection that are otherwise not possible,” said Olivier Sylvain, a professor of law at Fordham University, who has made the argument in the context of civil rights. “They’re materially contributing to the content.”

That argument has appeared in a series of lawsuits that contend Facebook should be responsible for discrimination in housing when its platform could target advertisements according to a user’s race. A draft bill produced by Representative Yvette D. Clarke, Democrat of New York, would strip Section 230 immunity from targeted ads that violated civil rights law.

A bill introduced last year by Representatives Tom Malinowski of New Jersey and Anna G. Eshoo of California, both Democrats, would strip Section 230 protections from social media platforms when their algorithms amplified content that violated some antiterrorism and civil rights laws. The news release announcing the bill, which was reintroduced on Wednesday, cited the Force family’s lawsuit against Facebook. Mr. Malinowski said he had been inspired in part by Judge Katzmann’s dissent.

Critics of the legislation say it may violate the First Amendment and, because there are so many algorithms on the web, could sweep up a wider range of services than lawmakers intend. They also say there’s a more fundamental problem: Regulating algorithmic amplification out of existence wouldn’t eliminate the impulses that drive it.

“There’s a thing you kind of can’t get away from,” said Daphne Keller, the director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, “which is human demand for garbage content.”
