Lawmakers Grill Tech C.E.O.s on Capitol Riot, Getting Few Direct Answers

WASHINGTON — Lawmakers grilled the leaders of Facebook, Google and Twitter on Thursday about the connection between online disinformation and the Jan. 6 riot at the Capitol, causing Twitter’s chief executive to publicly admit for the first time that his product had played a role in the events that left five people dead.

When a Democratic lawmaker asked the executives to answer with a “yes” or a “no” whether the platforms bore some responsibility for the misinformation that had contributed to the riot, Jack Dorsey of Twitter said “yes.” Neither Mark Zuckerberg of Facebook nor Sundar Pichai of Google would answer the question directly.

The roughly five-hour hearing before a House committee marked the first time lawmakers directly questioned the chief executives regarding social media’s role in the January riot. The tech bosses were also peppered with questions about how their companies helped spread falsehoods around Covid-19 vaccines, enable racism and hurt children’s mental health.

It was also the first time the executives had testified since President Biden’s inauguration. Tough questioning from lawmakers signaled that scrutiny of Silicon Valley’s business practices would not let up, and could even intensify, with Democrats in the White House and leading both chambers of Congress.

During the hearing, Mr. Dorsey tweeted a single question mark with a poll that had two options: “Yes” or “No.” When asked about his tweet by a lawmaker, he said “yes” was winning.

The January riot at the Capitol has made the issue of disinformation deeply personal for lawmakers. The riot was fueled by false claims from President Donald J. Trump and others that the election had been stolen, which were rampant on social media.

Some of the participants had connections to QAnon and other online conspiracy theories. And prosecutors have said that groups involved in the riot, including the Oath Keepers and the Proud Boys, coordinated some of their actions on social media.

The companies moved to ban Mr. Trump and his associates after the Jan. 6 riot. The bans hardened views among conservatives that the companies are left-leaning and inclined to squelch conservative voices.

“We’re all aware of Big Tech’s ever-increasing censorship of conservative voices and their commitment to serve the radical progressive agenda,” said Representative Bob Latta of Ohio, the ranking Republican on the panel’s technology subcommittee.

The company leaders defended their businesses, saying they had invested heavily in hiring content moderators and in technology like artificial intelligence, used to identify and fight disinformation.

Mr. Zuckerberg argued against the notion that his company had a financial incentive to juice its users’ attention by driving them toward more extreme content. He said Facebook didn’t design “algorithms in order to just kind of try to tweak and optimize and get people to spend every last minute on our service.”

He added later in the hearing that election disinformation had also spread in messaging apps, where amplification and algorithms don’t aid the spread of false content. He also blamed television and other traditional media for spreading election lies.

The companies showed fissures in their view on regulations. Facebook has vocally supported internet regulations in a major advertising blitz on television and in newspapers. In the hearing, Mr. Zuckerberg suggested specific regulatory reforms to a key legal shield, known as Section 230 of the Communications Decency Act, that has helped Facebook and other Silicon Valley internet giants thrive.

The legal shield protects companies that host and moderate third-party content, and says companies like Google and Twitter are simply intermediaries of their user-generated content. Democrats have argued that with that protection, companies aren’t motivated to remove disinformation. Republicans accuse the companies of using the shield to moderate too much and to take down content that doesn’t represent their political viewpoints.

“I believe that Section 230 would benefit from thoughtful changes to make it work better for people,” Mr. Zuckerberg said in his statement.

He proposed that liability protection for companies be conditional on their ability to fight the spread of certain types of unlawful content. He said platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Reforms, he said, should be different for smaller social networks, which wouldn’t have the same resources as Facebook to meet new requirements.

Mr. Pichai and Mr. Dorsey said they supported transparency requirements for content moderation but stopped short of agreeing with Mr. Zuckerberg’s other ideas. Mr. Dorsey said that it would be very difficult to distinguish a large platform from a smaller one.

Lawmakers did not appear to be won over.

“There’s a lot of smugness among you,” said Representative Bill Johnson, a Republican of Ohio. “There’s this air of untouchable-ness in your responses to many of the tough questions that you’re being asked.”

Kate Conger and Daisuke Wakabayashi contributed reporting.


Is a Big Tech Overhaul Just Around the Corner?

The leaders of Google, Facebook and Twitter testified on Thursday before a House committee in their first appearances on Capitol Hill since the start of the Biden administration. As expected, sparks flew.

The hearing was centered on questions of how to regulate disinformation online, although lawmakers also voiced concerns about the public-health effects of social media and the borderline-monopolistic practices of the largest tech companies.

On the subject of disinformation, Democratic legislators scolded the executives for the role their platforms played in spreading false claims about election fraud before the Capitol riot on Jan. 6. Jack Dorsey, the chief executive of Twitter, admitted that his company had been partly responsible for helping to circulate disinformation and plans for the Capitol attack. “But you also have to take into consideration the broader ecosystem,” he added. Sundar Pichai and Mark Zuckerberg, the top executives at Google and Facebook, avoided answering the question directly.

Lawmakers on both sides of the aisle returned often to the possibility of jettisoning or overhauling Section 230 of the Communications Decency Act, a federal law that for 25 years has granted immunity to tech companies for any harm caused by speech that’s hosted on their platforms.

The United States has more guns than people: 393 million, to be precise, which is more than one per person and about 46 percent of all civilian-owned firearms in the world. As researchers at the Harvard T.H. Chan School of Public Health have put it, “more guns = more homicide” and “more guns = more suicide.”

But when it comes to understanding the causes of America’s political inertia on the issue, the lines of thought become a little more tangled. Some of them are easy to follow: There’s the line about the Senate, of course, which gives large states that favor gun regulation the same number of representatives as small states that don’t. There’s also the line about the National Rifle Association, which some gun control proponents have cast — arguably incorrectly — as the sine qua non of our national deadlock.

But there may be a psychological thread, too. Research has found that after a mass shooting, people who don’t own guns tend to identify the general availability of guns as the culprit. Gun owners, on the other hand, are more likely to blame other factors, such as popular culture or parenting.

Americans who support gun regulations also don’t prioritize the issue at the polls as much as Americans who oppose them, so gun rights advocates tend to win out. Or, in the words of Robert Gebelhoff of The Washington Post, “Gun reform doesn’t happen because Americans don’t want it enough.”


Is there anything you think we’re missing? Anything you want to see more of? We’d love to hear from you. Email us at onpolitics@nytimes.com.


Zuckerberg, Dorsey and Pichai testify about disinformation.

The chief executives of Google, Facebook and Twitter are testifying at the House on Thursday about how disinformation spreads across their platforms, an issue that the tech companies were scrutinized for during the presidential election and after the Jan. 6 riot at the Capitol.

The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for the lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

Republicans, for their part, were expected to press the executives about accusations of anti-conservative bias, including Twitter’s decision to limit the spread of an October article in The New York Post about President Biden’s son Hunter.

Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.


Penny Stocks Are the Latest Trading Mania

Of all the trading manias in recent months — Bitcoin, SPACs, meme stocks, nonfungible tokens — the latest has a long history of fraud and scandal. That’s right, penny stocks are booming, according to The Times’s Matt Phillips, who visited the “low-rent district of Wall Street.”

There were 1.9 trillion transactions last month on the over-the-counter markets, where such stocks trade, according to the industry regulator Finra. That’s up more than 2,000 percent from a year earlier, driven in large part by the surge in retail trading — enabled by commission-free trading from online brokerages — that has also stoked the frenzy for shares in GameStop and other speculative assets.

The Federal Reserve left interest rates at rock-bottom levels, despite improving economic growth forecasts. But the Upshot’s Neil Irwin notes that it may become harder for Jay Powell, the Fed chair, to wave away criticism from those who think monetary policy is too loose.

The I.R.S. delays the tax filing deadline. Americans have until May 17 to file their federal income taxes, a delay meant to help people cope with the pandemic’s economic upheaval and account for changes from the rescue plan.

Credit Suisse said it would separate its asset-management division, replace the unit’s chief and suspend bonuses over its role in financing Greensill Capital, the supply-chain financing lender that collapsed this month.

Gasoline may have hit its peak. Global demand may never return to pre-pandemic levels, the International Energy Agency said, as more electric vehicles hit the roads and transportation habits change. Use may rise for a bit in places like China and India, but overall consumption in industrialized economies will fall by 2023.

Senate confirms President Biden’s top trade official. Katherine Tai will become the U.S. trade representative. She is a prominent critic of China’s trade practices, signaling that the White House won’t completely walk back the Trump administration’s tough stance. Top U.S. officials are to meet their Chinese counterparts for the first time today, at a summit meeting in Alaska.

Google said today that it planned to invest $7 billion in offices and data centers in 19 U.S. states, making it the latest tech giant to expand its footprint while other companies retrench in a commercial real estate market roiled by the pandemic. Google’s C.E.O., Sundar Pichai, shared the plans in a blog post, saying that the move would create 10,000 jobs at the company this year. (Alphabet, Google’s parent company, employed around 135,000 people at the end of 2020.)

Google is expanding across the country. The plan includes investments in data centers in places like Nebraska, South Carolina and Texas. The company recently opened its first office in Minnesota and an operations center in Mississippi. It will open its first office in Houston this year.

“Coming together in person to collaborate and build community is core to Google’s culture,” Mr. Pichai wrote. Google was one of the first companies to tell employees to work from home, and it expects workers to begin returning to offices in September. When that happens, it will test a “flexible workweek,” with employees spending at least three days a week in the office.

A congressional hearing focused on the relationship between brokers like Robinhood and market makers like Citadel Securities.


SPACs have already raised more money this year than in all of 2020, setting a record for blank-check deal volume. More than $84 billion has been raised by 264 SPACs to date, according to Dealogic, compared with $83 billion raised by 256 acquisition vehicles last year.

One company said it was cooperating with an S.E.C. inquiry, after a short seller accused it of misleading investors about its business prospects.


Known as “Crypto Mom,” Hester Peirce has been raising the profile of cryptocurrencies and blockchain technology since being appointed an S.E.C. commissioner in 2018. On “Blockchain Policy Matters,” an online show by the Blockchain Association, a trade group, Ms. Peirce described her hopes for innovation and regulation of the crypto world. DealBook got a preview of the show, which posts today.

Bitcoin E.T.F.s have begun trading in Canada, she noted.

She welcomes Gary Gensler, the blockchain professor, as the agency’s next chief. President Biden’s pick to lead the S.E.C. has lectured on cryptocurrency and blockchain at M.I.T. since 2018. Ms. Peirce said she was “hopeful” that he will help the agency think “in a more sophisticated way.” She added that Mr. Gensler has “more inclination to regulate” than she does, but that she believes he can provide the regulatory clarity on crypto she has sought.

Blockchain technology could address the issues raised by meme-stock mania. That includes “concerns around settlement times, tracking where shares are, and who owns what shares when,” Ms. Peirce said. Distributed ledger technology like blockchain could eliminate common failure points in the financial system rather than centralizing them, she said, adding: “I hope that a lot of that innovation happens in the private sector as opposed to us taking it over as a securities regulator.”


We’d like your feedback! Please email thoughts and suggestions to dealbook@nytimes.com.


Google reduces some app-store fees amid criticism.

Google is cutting in half its commission on developers’ first $1 million in app sales, following a similar move by Apple that is aimed at appeasing developers and regulators who accuse the companies of abusing their dominance of the smartphone industry.

Google said that starting July 1, it would take 15 percent of the first $1 million developers take in from certain app sales, down from 30 percent. Google will still charge 30 percent after the first $1 million.

Apple last year said it was halving its app-store commission to 15 percent on companies that earn less than $1 million a year in app sales.

The dual actions reverse years of resistance by the companies to changing their app-store commissions, which have become important to their growth.

Rivals have intensified their criticism of the rates, saying they are artificially high because the companies have a duopoly on the distribution of mobile apps. Regulators around the world have begun investigating the commissions as part of larger antitrust probes, and lawmakers in several states are considering bills that would make it more difficult for Apple and Google to impose the fees.

Apple has been largely the focus of the criticism because it forces developers to use its app store to reach iPhone users. Google’s Android software allows users to download apps outside of its flagship app store, called the Play Store. Still, Android is the dominant smartphone operating system around the world, underpinning roughly 85 percent of the world’s smartphones, according to IDC, a market research firm.

Google said its change would halve the app-store fees for 99 percent of Android developers. But while Apple’s and Google’s moves have earned them positive headlines, they will likely have little impact on the companies’ bottom lines, because most of their app-store revenues come from larger developers.

Apple’s new policy, for instance, will affect roughly 98 percent of the companies that pay Apple a commission, but those developers accounted for less than 5 percent of App Store revenues last year, according to estimates from Sensor Tower, an app analytics firm.


Who Is Making Sure the A.I. Machines Aren’t Racist?

Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.

Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.

The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.

But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.

Afterward, Dr. Gebru described the problem in a widely shared post, warning of the consequences of the field’s homogeneity, “especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”

The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.

She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.

In December, Dr. Gebru said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.

“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”

As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.

Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.

In 2015, Jacky Alciné discovered that Google Photos had labeled photos of him and a friend, who are both Black, as “gorillas.” Like talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.

Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)

As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”

In 2018, Joy Buolamwini, a researcher at the M.I.T. Media Lab, published a study showing that facial recognition services, including Microsoft’s, were far more likely to misidentify women and people with darker skin. The study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.

Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places that use facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed and takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.

Ms. Buolamwini began to collaborate with Deborah Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.

Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces. According to the study, the service mistook women for men 19 percent of the time and darker-skinned women for men 31 percent of the time. For lighter-skinned males, the error rate was zero.

Amazon disputed the study’s findings, criticizing both the research and a New York Times article that described it.

In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.

Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.

Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.

Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.

After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.

Cade Metz is a technology correspondent at The Times and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World,” from which this article is adapted.


Microsoft takes aim at Google as it supports bill to give news publishers more leverage over Big Tech.

Lawmakers on Friday debated an antitrust bill that would give news publishers collective bargaining power with online platforms like Facebook and Google, putting the spotlight on a proposal aimed at chipping away at the power of Big Tech.

At a hearing held by the House antitrust subcommittee, Microsoft’s president, Brad Smith, emerged as a leading industry voice in favor of the law. He took a divergent path from his tech counterparts, pointing to an imbalance in power between publishers and tech platforms. Newspaper ad revenue plummeted to $14.3 billion in 2018 from $49.4 billion in 2005, he said, while ad revenue at Google jumped to $116 billion from $6.1 billion.

“Even though news helps fuel search engines, news organizations frequently are uncompensated or, at best, undercompensated for its use,” Mr. Smith said. “The problems that beset journalism today are caused in part by a fundamental lack of competition in the search and ad tech markets that are controlled by Google.”

The hearing was the second in a series planned by the subcommittee to set the stage for the creation of stronger antitrust laws. In October, the subcommittee, led by Representative David Cicilline, Democrat of Rhode Island, released the results of a 16-month investigation into the power of Amazon, Apple, Facebook and Google. The report accused the companies of monopoly behavior.

This week, the committee’s two top leaders, Mr. Cicilline and Representative Ken Buck, Republican of Colorado, introduced the Journalism and Competition Preservation Act. The bill aims to give smaller news publishers the ability to band together to bargain with online platforms for higher fees for distributing their content. The bill was also introduced in the Senate by Senator Amy Klobuchar, a Democrat of Minnesota and the chairwoman of that chamber’s antitrust subcommittee.

Global concern is growing over the decline of local news organizations, which have become dependent on online platforms for distribution of their content. Australia recently proposed a law allowing news publishers to bargain with Google and Facebook, and lawmakers in Canada and Britain are considering similar steps.

Mr. Cicilline said, “While I do not view this legislation as a substitute for more meaningful competition online — including structural remedies to address the underlying problems in the market — it is clear that we must do something in the short term to save trustworthy journalism before it is lost forever.”

Google, though not a witness at the hearing, issued a statement in response to Mr. Smith’s planned testimony, defending its business practices and disparaging the motives of Microsoft, whose Bing search engine runs a very distant second to Google.

“Unfortunately, as competition in these areas intensifies, they are reverting to their familiar playbook of attacking rivals and lobbying for regulations that benefit their own interests,” wrote Kent Walker, the senior vice president of policy for Google.


Russia Says It Is Slowing Access to Twitter

MOSCOW — The Russian government said on Wednesday that it was slowing access to Twitter, accusing the social network of failing to remove illegal content and signaling that the Kremlin is escalating its offensive against American internet companies that have long provided a haven for freedom of expression.

Russia’s telecommunications regulator said it was reducing the speed at which Twitter loaded for internet users in Russia, though it was not immediately clear how noticeable the move would be. The regulator, Roskomnadzor, accused Twitter of failing for years to remove posts about illegal drug use or child pornography or messages “pushing minors toward suicide.”

“With the aim of protecting Russian citizens and forcing the internet service to follow the law on the territory of the Russian Federation, centralized reactive measures have been taken against Twitter starting March 10, 2021 — specifically, the initial throttling of the service’s speeds, in accordance with the regulations,” the regulator said in a statement.

“If the internet service Twitter continues to ignore the demands of the law, measures against it will continue in accordance with the regulations, up to and including blocking it,” it added.

Even as he has tightened his grip on most other avenues of public life, President Vladimir V. Putin has allowed the internet to remain essentially free.

Twitter — and to a much greater extent, Facebook’s Instagram and Google’s YouTube — have given Russians ways to speak, report and organize openly even though the Kremlin controls the television airwaves.

Those social networks, along with Chinese-owned TikTok, played a pivotal role in the anti-Kremlin protests that accompanied the return and imprisonment of the opposition leader Aleksei A. Navalny this year. Mr. Navalny has some 2.5 million Twitter followers, and his investigation published in January into a purported secret palace of Mr. Putin was viewed more than 100 million times on YouTube.

Russian officials claim that Silicon Valley companies discriminate against Russians by blocking some pro-Kremlin accounts while handing a megaphone to the Kremlin’s critics. They have also said that social networks have refused to remove content drawing children into the unauthorized protests in support of Mr. Navalny.

In recent weeks, the Kremlin has led an intensifying drumbeat criticizing American internet companies, painting them as corrupting foreign forces.

The internet, Mr. Putin said this month, must respect “the moral laws of the society in which we live — otherwise, this society will be destroyed from the inside.”

Twitter has a small user base in Russia, though it is popular among journalists, politicians and opposition activists. A report last year estimated the service had 690,000 active users in Russia, meaning that any public backlash over the move is likely to be far smaller than if the Kremlin imposed similar limits for Instagram or YouTube.


Tech’s Legal Shield Appears Likely to Survive as Congress Focuses on Details

WASHINGTON — Former President Donald J. Trump called multiple times for repealing the law that shields tech companies from legal responsibility over what people post. President Biden, as a candidate, said the law should be “revoked.”

But the lawmakers aiming to weaken the law have started to agree on a different approach. They are increasingly focused on eliminating protections for specific kinds of content rather than making wholesale changes to the law or eliminating it entirely.

That has still left them a question with potentially wide-ranging outcomes: What, exactly, should lawmakers cut?

One bill introduced last month would strip the protections from content the companies are paid to distribute, like ads, among other categories. A different proposal, expected to be reintroduced from the last congressional session, would allow people to sue when a platform amplified content linked to terrorism. And another that is likely to return would exempt content from the law only when a platform failed to follow a court’s order to take it down.

Tech companies themselves appear open to trimming the law, in an effort to shape changes they see as increasingly likely to happen. Facebook and Google, the owner of YouTube, have signaled that they are willing to work with lawmakers changing the law, and some smaller companies recently formed a lobbying group to weigh in on any changes.

A December op-ed co-written by Bruce Reed, Mr. Biden’s deputy chief of staff, said that “platforms should be held accountable for any content that generates revenue.” The op-ed also said that while carving out specific types of content was a start, lawmakers would do well to consider giving platforms the entire liability shield only on the condition that they properly moderate content.

Supporters of Section 230 say even small changes could hurt vulnerable people. They point to the 2018 anti-trafficking bill, which sex workers say made it harder to vet potential clients online after some of the services they used closed, fearing new legal liability. Instead, sex workers have said they must now risk meeting with clients in person without using the internet to ascertain their intentions at a safe distance.

Senator Ron Wyden, the Oregon Democrat who co-wrote Section 230 while in the House, said measures meant to address disinformation on the right could be used against other political groups in the future.

“If you remember 9/11, and you had all these knee-jerk reactions to those horrible tragedies,” he said. “I think it would be a huge mistake to use the disgusting, nauseating attacks on the Capitol as a vehicle to suppress free speech.”

Industry officials say carve-outs to the law could nonetheless be extremely difficult to carry out.

“I appreciate that some policymakers are trying to be more specific about what they don’t like online,” said Kate Tummarello, the executive director of Engine, an advocacy group for small companies. “But there’s no universe in which platforms, especially small platforms, will automatically know when and where illegal speech is happening on their site.”

The issue may take center stage when the chief executives of Google, Facebook and Twitter testify late this month before the House Energy and Commerce Committee, which has been examining the future of the law.

“I think it’s going to be a huge issue,” said Representative Cathy McMorris Rodgers of Washington, the committee’s top Republican. “Section 230 is really driving it.”
