Some of the protests did devolve into widespread violence.

Jimmy Lai, the jailed founder of Apple Daily, in a letter this week to his colleagues at the paper, told them to be careful because “freedom of speech is dangerous work now.”

“The situation in Hong Kong is becoming more and more chilling,” he wrote. “The era is falling apart before us, and it is therefore time for us to stand with our heads high.”


Is a Big Tech Overhaul Just Around the Corner?

The leaders of Google, Facebook and Twitter testified on Thursday before a House committee in their first appearances on Capitol Hill since the start of the Biden administration. As expected, sparks flew.

The hearing was centered on questions of how to regulate disinformation online, although lawmakers also voiced concerns about the public-health effects of social media and the borderline-monopolistic practices of the largest tech companies.

On the subject of disinformation, Democratic legislators scolded the executives for the role their platforms played in spreading false claims about election fraud before the Capitol riot on Jan. 6. Jack Dorsey, the chief executive of Twitter, admitted that his company had been partly responsible for helping to circulate disinformation and plans for the Capitol attack. “But you also have to take into consideration the broader ecosystem,” he added. Sundar Pichai and Mark Zuckerberg, the top executives at Google and Facebook, avoided answering the question directly.

Lawmakers on both sides of the aisle returned often to the possibility of jettisoning or overhauling Section 230 of the Communications Decency Act, a federal law that for 25 years has granted immunity to tech companies for any harm caused by speech that’s hosted on their platforms.

How many guns do Americans own? 393 million, to be precise, which is more than one per person and about 46 percent of all civilian-owned firearms in the world. As researchers at the Harvard T.H. Chan School of Public Health have put it, “more guns = more homicide” and “more guns = more suicide.”

But when it comes to understanding the causes of America’s political inertia on the issue, the lines of thought become a little more tangled. Some of them are easy to follow: There’s the line about the Senate, of course, which gives large states that favor gun regulation the same number of representatives as small states that don’t. There’s also the line about the National Rifle Association, which some gun control proponents have cast — arguably incorrectly — as the sine qua non of our national deadlock.

But there may be a psychological thread, too. Research has found that after a mass shooting, people who don’t own guns tend to identify the general availability of guns as the culprit. Gun owners, on the other hand, are more likely to blame other factors, such as popular culture or parenting.

Americans who support gun regulations also don’t prioritize the issue at the polls as much as Americans who oppose them, so gun rights advocates tend to win out. Or, in the words of Robert Gebelhoff of The Washington Post, “Gun reform doesn’t happen because Americans don’t want it enough.”



Zuckerberg, Dorsey and Pichai testify about disinformation.

The chief executives of Google, Facebook and Twitter are testifying before a House committee on Thursday about how disinformation spreads across their platforms, an issue for which the companies drew scrutiny during the presidential election and after the Jan. 6 riot at the Capitol.

The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

Republicans, for their part, are expected to press the executives on claims that the platforms censor conservative voices, including Twitter’s decision to limit the spread of an October article in The New York Post about President Biden’s son Hunter.

Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.
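The mechanism lawmakers are describing can be reduced to a simple idea: rank what users see by predicted engagement. The Python sketch below is purely illustrative; the signal names and weights are invented for this example and do not describe any company’s actual system.

```python
# Purely illustrative sketch of engagement-based feed ranking.
# All signals and weights are hypothetical, not any platform's real code.

from dataclasses import dataclass


@dataclass
class Post:
    text: str
    predicted_clicks: float    # model's estimated click probability
    predicted_comments: float  # estimated probability of a comment
    predicted_shares: float    # estimated probability of a share


def engagement_score(post: Post) -> float:
    # Hypothetical weights: signals that keep users interacting longer
    # (comments, shares) count for more than a passive click.
    return (1.0 * post.predicted_clicks
            + 3.0 * post.predicted_comments
            + 5.0 * post.predicted_shares)


def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed surfaces the highest-scoring posts first, which is how
    # divisive or salacious material can be amplified if it reliably
    # draws reactions.
    return sorted(posts, key=engagement_score, reverse=True)
```

Under this kind of objective, a post that reliably provokes comments and shares outranks a calmer one regardless of its accuracy, which is the dynamic critics say ties the business model to the spread of disinformation.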

Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. They want to strip those protections in cases where a company’s algorithms amplify certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.


How the Death of Taylor Force in Israel Echoes Through the Fight Over Online Speech

WASHINGTON — Stuart Force says he found solace on Facebook after his son was stabbed to death in Israel by a member of the militant group Hamas in 2016. He turned to the site to read hundreds of messages offering condolences on his son’s page.

But a few months later, Mr. Force decided that Facebook was partly to blame for his son’s death, because the algorithms that power the social network had helped spread Hamas’s content. He joined relatives of other terrorism victims in suing the company, arguing that its algorithms aided the crimes by regularly amplifying posts that encouraged terrorist attacks.

The legal case ended unsuccessfully last year when the Supreme Court declined to take it up. But arguments about the algorithms’ power have reverberated in Washington, where some members of Congress are citing the case in an intense debate about the law that shields tech companies from liability for content posted by users.

At a House hearing on Thursday about the spread of misinformation with the chief executives of Facebook, Twitter and Google, some lawmakers are expected to focus on how the companies’ algorithms are written to generate revenue by surfacing posts that users are inclined to click on and respond to. And some will argue that the law that protects the social networks from liability, Section 230 of the Communications Decency Act, should be changed to hold the companies responsible when their software turns the services from platforms into accomplices for crimes committed offline.

The family was soon contacted by a litigation group, which had a question: Would the Force family be willing to sue Facebook?

After Mr. Force spent some time on a Facebook page belonging to Hamas, the family agreed to sue. The lawsuit fit into a broader effort by the Forces to limit the resources and tools available to Palestinian groups. Mr. Force and his wife allied with lawmakers in Washington to pass legislation restricting aid to the Palestinian Authority, which governs part of the West Bank.

Their lawyers argued in an American court that Facebook gave Hamas “a highly developed and sophisticated algorithm that facilitates Hamas’s ability to reach and engage an audience it could not otherwise reach as effectively.” The lawsuit said Facebook’s algorithms had not only amplified posts but aided Hamas by recommending groups, friends and events to users.

The federal district judge, in New York, ruled against the claims, citing Section 230. The lawyers for the Force family appealed to a three-judge panel of the U.S. Court of Appeals for the Second Circuit, and two of the judges ruled entirely for Facebook. The other, Judge Robert Katzmann, wrote a 35-page dissent to part of the ruling, arguing that Facebook’s algorithmic recommendations shouldn’t be covered by the legal protections.

“Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths,” he said.

Late last year, the Supreme Court rejected a call to hear a different case that would have tested the Section 230 shield. In a statement attached to the court’s decision, Justice Clarence Thomas called for the court to consider whether Section 230’s protections had been expanded too far, citing Mr. Force’s lawsuit and Judge Katzmann’s opinion.

Justice Thomas said the court didn’t need to decide in the moment whether to rein in the legal protections. “But in an appropriate case, it behooves us to do so,” he said.

Some lawmakers, lawyers and academics say recognition of the power of social media’s algorithms in determining what people see is long overdue. The platforms usually do not reveal exactly what factors the algorithms use to make decisions and how they are weighed against one another.

“Amplification and automated decision-making systems are creating opportunities for connection that are otherwise not possible,” said Olivier Sylvain, a professor of law at Fordham University, who has made the argument in the context of civil rights. “They’re materially contributing to the content.”

That argument has appeared in a series of lawsuits contending that Facebook should be held responsible for housing discrimination because its platform allowed advertisements to be targeted according to a user’s race. A draft bill produced by Representative Yvette D. Clarke, Democrat of New York, would strip Section 230 immunity from targeted ads that violate civil rights law.

A bill introduced last year by Representatives Tom Malinowski of New Jersey and Anna G. Eshoo of California, both Democrats, would strip Section 230 protections from social media platforms when their algorithms amplified content that violated some antiterrorism and civil rights laws. The news release announcing the bill, which was reintroduced on Wednesday, cited the Force family’s lawsuit against Facebook. Mr. Malinowski said he had been inspired in part by Judge Katzmann’s dissent.

Critics of the legislation say it may violate the First Amendment and, because there are so many algorithms on the web, could sweep up a wider range of services than lawmakers intend. They also say there’s a more fundamental problem: Regulating algorithmic amplification out of existence wouldn’t eliminate the impulses that drive it.

“There’s a thing you kind of can’t get away from,” said Daphne Keller, the director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, “which is human demand for garbage content.”


Tech’s Legal Shield Appears Likely to Survive as Congress Focuses on Details

WASHINGTON — Former President Donald J. Trump called multiple times for repealing the law that shields tech companies from legal responsibility over what people post. President Biden, as a candidate, said the law should be “revoked.”

But the lawmakers aiming to weaken the law have started to agree on a different approach. They are increasingly focused on eliminating protections for specific kinds of content rather than making wholesale changes to the law or eliminating it entirely.

That still leaves them with a question that has potentially wide-ranging consequences: What, exactly, should lawmakers cut?

One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.

The companies themselves have said they are open to trimming the law, an effort to shape changes they see as increasingly likely to happen. Facebook and Google, the owner of YouTube, have signaled that they are willing to work with lawmakers on changing the law, and some smaller companies recently formed a lobbying group to shape any changes.

A December op-ed that was co-written by Bruce Reed, Mr. Biden’s deputy chief of staff, said that “platforms should be held accountable for any content that generates revenue.” The op-ed also said that while carving out specific types of content was a start, lawmakers would do well to consider giving platforms the entire liability shield only on the condition that they properly moderate content.

Supporters of Section 230 say even small changes could hurt vulnerable people. They point to a 2018 anti-trafficking law, which sex workers say made it harder to vet potential clients online after some of the services they used shut down for fear of new legal liability. Instead, sex workers have said, they must now risk meeting clients in person without first using the internet to ascertain their intentions at a safe distance.

Senator Ron Wyden, the Oregon Democrat who co-wrote Section 230 while in the House, said measures meant to address disinformation on the right could be used against other political groups in the future.

“If you remember 9/11, and you had all these knee-jerk reactions to those horrible tragedies,” he said. “I think it would be a huge mistake to use the disgusting, nauseating attacks on the Capitol as a vehicle to suppress free speech.”

Industry officials say carve-outs to the law could nonetheless be extremely difficult to carry out.

“I appreciate that some policymakers are trying to be more specific about what they don’t like online,” said Kate Tummarello, the executive director of Engine, an advocacy group for small companies. “But there’s no universe in which platforms, especially small platforms, will automatically know when and where illegal speech is happening on their site.”

The issue may take center stage when the chief executives of Google, Facebook and Twitter testify late this month before the House Energy and Commerce Committee, which has been examining the future of the law.

“I think it’s going to be a huge issue,” said Representative Cathy McMorris Rodgers of Washington, the committee’s top Republican. “Section 230 is really driving it.”
