TikTok’s design makes it a breeding ground for misinformation, the researchers found. They wrote that videos could easily be manipulated and republished on the platform and showcased alongside stolen or original content. Pseudonyms are common; parody and comedy videos are easily misinterpreted as fact; popularity affects the visibility of comments; and data about publication time and other details are not clearly displayed on the mobile app.
(The Shorenstein Center researchers noted, however, that TikTok is less vulnerable to so-called brigading, in which groups coordinate to make a post spread widely, than platforms like Twitter or Facebook.)
During the first quarter of 2022, TikTok said, more than 60 percent of videos containing harmful misinformation were viewed by users before being removed. Last year, a group of behavioral scientists who had worked with TikTok said that an effort to attach warnings to posts with unsubstantiated content had reduced sharing by 24 percent but had limited views by only 5 percent.
Researchers said that misinformation would continue to thrive on TikTok as long as the platform refused to release data about the origins of its videos or share insight into its algorithms. Last month, TikTok said it would offer some access to a version of its application programming interface, or A.P.I., this year, but it would not say whether it would do so before the midterms.
Filippo Menczer, an informatics and computer science professor and the director of the Observatory on Social Media at Indiana University, said he had proposed research collaborations to TikTok and had been told, “Absolutely not.”
“At least with Facebook and Twitter, there is some level of transparency, but, in the case of TikTok, we have no clue,” he said. “Without resources, without being able to access data, we don’t know who gets suspended, what content gets taken down, whether they act on reports or what the criteria are. It’s completely opaque, and we cannot independently assess anything.”
SAN FRANCISCO — For years, Twitter was a runner-up social media company. It never grew to the size and scale of a Facebook or an Instagram. It simply muddled along.
Then, Elon Musk, a power user of the service, stormed in. He offered $44 billion to buy Twitter and declared that the company could perform far better if he were in charge. He disparaged Twitter’s executives, ridiculed its content policies, complained about the product and confused its more than 7,000 employees with his pronouncements. As Mr. Musk revealed the company’s lack of business and financial prospects, Twitter’s stock plunged more than 30 percent.
Now, as Mr. Musk, a billionaire, tries to back out of the blockbuster deal, he is inexorably leaving Twitter worse off than it was when he said he would buy it. With each needling tweet and public taunt, Mr. Musk has eroded trust in the social media company, walloped employee morale, spooked potential advertisers, emphasized its financial difficulties and spread misinformation about how Twitter operates.
Twitter is set to sue Mr. Musk as soon as this week to force a completion of the deal. The court battle is likely to be protracted and immense, involving months of expensive litigation and high-stakes negotiations by elite lawyers. A resolution is far from certain — Twitter might win, but, if it loses, Mr. Musk could walk away by paying a breakup fee. Or the two sides could renegotiate or settle.
On Monday, the damage that Mr. Musk, 51, has inflicted was evident. Twitter’s stock plunged more than 11 percent to one of its lowest points since 2020 as investors anticipated the coming legal battle. Since Twitter accepted Mr. Musk’s acquisition offer, on April 25, its stock has lost over a third of its value as investors have grown increasingly skeptical that the deal would get done on the agreed terms. (In contrast, the tech-heavy Nasdaq index was down about 12.5 percent in the same period.)
Twitter declined to comment on Monday. In a letter to Mr. Musk’s lawyers on Sunday, the company’s lawyers said that his move to terminate the deal was “invalid and wrongful” and that Mr. Musk “knowingly, intentionally, willfully and materially breached” his agreement to buy the firm. Twitter would continue to provide information to Mr. Musk and to work to close the transaction, the letter added.
Mr. Musk, who has cited the number of fake accounts on Twitter’s platform as the reason that he cannot buy the company, tweeted a picture of himself laughing at the situation.
Twitter’s board considered Mr. Musk’s offer the best it could obtain, suggesting it saw no way to reach that price on its own.
Parag Agrawal, Twitter’s chief executive, said in a memo to employees in May that the company had not lived up to its business and financial goals. To address the issues, he pushed out the heads of product and revenue, instituted a hiring slowdown and began an effort to attract new users and diversify into e-commerce. In April, the company stopped providing a forward-looking financial outlook to investors, pending the acquisition.
That trajectory is unlikely to change as uncertainty over the deal discomfits advertisers, the main source of Twitter’s revenue.
“Twitter will have trouble in the near future reassuring skittish advertisers and their users that they’re going to be stable,” said Angelo Carusone, the president of the watchdog group Media Matters for America.
In an implicit dig at Twitter’s top executives, Mr. Musk said he could have done far better with the company. In a presentation to investors in May, he said he planned to quintuple the company’s revenue to $26.4 billion by 2028 and to reach 931 million users that same year, up from 217 million at the end of last year.
Mr. Musk moved to terminate the deal in a letter filed with the Securities and Exchange Commission on Friday. The company’s “declining business prospects and financial outlook” had given him pause, his lawyers wrote, especially considering Twitter’s recent “financial performance and revised outlook” on the fiscal year ahead.
Mr. Musk, who has more than 100 million followers on Twitter, has also jackhammered the product, saying it is not as attractive as other apps. He has repeatedly claimed, without evidence, that Twitter is overrun with more inauthentic accounts than it has disclosed; such accounts can be automated to pump out toxic or false content. (The company has said fewer than 5 percent of the accounts on its platform are fake.)
His barbs about fake accounts have weakened trust in Twitter, just as the company prepares to moderate heated political discussions about an upcoming election in Brazil and the midterm elections this fall in the United States, misinformation experts said.
In another criticism of Twitter and the way it supervises content, Mr. Musk vowed to unwind the company’s moderation policies in the name of free speech. In May, he said he would “reverse the permanent ban” of former President Donald J. Trump from Twitter, allowing Mr. Trump back on the social network. That riled up right-wing users, who have long accused the company of censoring them, and renewed questions about how Twitter should handle debates over the limits of free speech.
Inside the company, employee morale has been battered, leading to infighting and attrition, according to six current and former employees.
Some of those who remain said they were relieved that Mr. Musk seemed to have decided against owning the company. Others shared nihilistic memes on the company’s Slack or openly criticized Twitter’s board and executives for entertaining Mr. Musk’s offer in the first place, according to internal messages viewed by The New York Times. The mood among executives was one of grim determination, two people with knowledge of their thinking said.
One employee illustrated the mood with a cartoon that showed a shattered company that had been bumped off a shelf by Mr. Musk’s careless elbow. The caption: “You break it, you buy it!”
Ryan Mac and Isabella Simonetti contributed reporting.
To fight disinformation, California lawmakers are advancing a bill that would force social media companies to divulge their process for removing false, hateful or extremist material from their platforms. Texas lawmakers, by contrast, want to ban the largest of the companies — Facebook, Twitter and YouTube — from removing posts because of political points of view.
In Washington, the state attorney general persuaded a court to fine a nonprofit and its lawyer $28,000 for filing a baseless legal challenge to the 2020 governor’s race. In Alabama, lawmakers want to allow people to seek financial damages from social media platforms that shut down their accounts for having posted false content.
In the absence of significant action on disinformation at the federal level, officials in state after state are taking aim at the sources of disinformation and the platforms that propagate them — only they are doing so from starkly divergent ideological positions. In this deeply polarized era, even the fight for truth breaks along partisan lines.
The dueling approaches reflect a nation increasingly divided over a variety of issues — including abortion, guns, the environment — and along geographic lines.
A federal appeals court blocked a similar law in Florida that would have fined social media companies as much as $250,000 a day if they blocked political candidates from their platforms, which have become essential tools of modern campaigning. Other states with Republican-controlled legislatures have proposed similar measures, including Alabama, Mississippi, South Carolina, West Virginia, Ohio, Indiana, Iowa and Alaska.
Alabama’s attorney general, Steve Marshall, has created an online portal through which residents can complain that their access to social media has been restricted: alabamaag.gov/Censored. In a written response to questions, he said that social media platforms stepped up efforts to restrict content during the pandemic and the presidential election of 2020.
“During this period (and continuing to present day), social media platforms abandoned all pretense of promoting free speech — a principle on which they sold themselves to users — and openly and arrogantly proclaimed themselves the Ministry of Truth,” he wrote. “Suddenly, any viewpoint that deviated in the slightest from the prevailing orthodoxy was censored.”
Much of the activity on the state level today has been animated by the fraudulent assertion that Mr. Trump, and not President Biden, won the 2020 presidential election. Although disproved repeatedly, the claim has been cited by Republicans to introduce dozens of bills that would clamp down on absentee or mail-in voting in the states they control.
One memoirist and Republican nominee for Senate railed against social media giants, saying they stifled news about the foreign business dealings of Hunter Biden, the president’s son.
massacre at a supermarket in Buffalo in May.
Connecticut plans to spend nearly $2 million on marketing to share factual information about voting and to create a position for an expert to root out misinformation narratives about voting before they go viral. A similar effort to create a disinformation board at the Department of Homeland Security provoked a political fury before its work was suspended in May pending an internal review.
In California, the State Senate is moving forward with legislation that would require social media companies to disclose their policies regarding hate speech, disinformation, extremism, harassment and foreign political interference. (The legislation would not compel them to restrict content.) Another bill would allow civil lawsuits against large social media platforms like TikTok and Meta’s Facebook and Instagram if their products were proven to have addicted children.
“All of these different challenges that we’re facing have a common thread, and the common thread is the power of social media to amplify really problematic content,” said Assemblyman Jesse Gabriel of California, a Democrat, who sponsored the legislation to require greater transparency from social media platforms. “That has significant consequences both online and in physical spaces.”
It seems unlikely that the flurry of legislative activity will have a significant impact before this fall’s elections; social media companies will have no single response acceptable to both sides when accusations of disinformation inevitably arise.
“Any election cycle brings intense new content challenges for platforms, but the November midterms seem likely to be particularly explosive,” said Matt Perault, a director of the Center on Technology Policy at the University of North Carolina. “With abortion, guns, democratic participation at the forefront of voters’ minds, platforms will face intense challenges in moderating speech. It’s likely that neither side will be satisfied by the decisions platforms make.”
Mark Zuckerberg, Facebook’s chief executive, made securing the 2020 U.S. election a top priority. He met regularly with an election team, which included more than 300 people from across his company, to prevent misinformation from spreading on the social network. He asked civil rights leaders for advice on upholding voter rights.
The core election team at Facebook, which was renamed Meta last year, has since been dispersed. Roughly 60 people are now focused primarily on elections, while others split their time on other projects. They meet with another executive, not Mr. Zuckerberg. And the chief executive has not talked recently with civil rights groups, even as some have asked him to pay more attention to the midterm elections in November.
Safeguarding elections is no longer Mr. Zuckerberg’s top concern, said four Meta employees with knowledge of the situation. Instead, he is focused on transforming his company into a provider of the immersive world of the metaverse, which he sees as the next frontier of growth, said the people, who were not authorized to speak publicly.
Congressional hearings on the Jan. 6 Capitol riot have underlined how precarious elections can be. And dozens of political candidates are running this November on the false premise that former President Donald J. Trump was robbed of the 2020 election, with social media platforms continuing to be a key way to reach American voters.
“2000 Mules,” a film that falsely claims the 2020 election was stolen from Mr. Trump, was widely shared on Facebook and Instagram, garnering more than 430,000 interactions, according to an analysis by The New York Times. In posts about the film, commenters said they expected election fraud this year and warned against using mail-in voting and electronic voting machines.
Twitter’s election efforts have been complicated by its pending $44 billion sale to Elon Musk, three employees with knowledge of the situation said. Mr. Musk has suggested that he wants fewer rules about what can and cannot be posted on the service.
Meta, which barred Mr. Trump from its platforms after the riot at the U.S. Capitol on Jan. 6, 2021, has worked over the years to limit political falsehoods on its sites. Tom Reynolds, a Meta spokesman, said the company had “taken a comprehensive approach to how elections play out on our platforms since before the U.S. 2020 elections and through the dozens of global elections since then.”
In Brazil, which holds a presidential election in October, the president has recently raised doubts about the country’s electoral process. Latvia, Bosnia and Slovenia are also holding elections in October.
“People in the U.S. are almost certainly getting the Rolls-Royce treatment when it comes to any integrity on any platform, especially for U.S. elections,” said Sahar Massachi, the executive director of the think tank Integrity Institute and a former Facebook employee. “And so however bad it is here, think about how much worse it is everywhere else.”
Facebook’s role in potentially distorting elections became evident after 2016, when Russian operatives used the site to spread inflammatory content and divide American voters in the U.S. presidential election. In 2018, Mr. Zuckerberg testified before Congress that election security was his top priority.
The company stepped up enforcement ahead of the 2020 vote, banning QAnon conspiracy theory posts and groups in October 2020.
Around the same time, Mr. Zuckerberg and his wife, Priscilla Chan, donated $400 million to local governments to fund poll workers, pay for rental fees for polling places, provide personal protective equipment and cover other administrative costs.
The week before the November 2020 election, Meta also froze all political advertising to limit the spread of falsehoods.
But while there were successes — the company kept foreign election interference off the platform — it struggled with how to handle Mr. Trump, who used his Facebook account to amplify false claims of voter fraud. After the Jan. 6 riot, Facebook barred Mr. Trump from posting. He is eligible for reinstatement in January.
Frances Haugen, a Facebook employee turned whistle-blower, filed complaints with the Securities and Exchange Commission accusing the company of removing election safety features too soon after the 2020 election. Facebook made growth and engagement its priorities over security, she said.
Mr. Zuckerberg no longer meets weekly with those focused on election security, said the four employees, though he receives their reports. Instead, they meet with Nick Clegg, Meta’s president of global affairs.
Several civil rights groups said they had noticed Meta’s shift in priorities. Mr. Zuckerberg isn’t involved in discussions with them as he once was, nor are other top Meta executives, they said.
“I’m concerned,” said Derrick Johnson, president of the National Association for the Advancement of Colored People, who talked with Mr. Zuckerberg and Sheryl Sandberg, Meta’s chief operating officer, ahead of the 2020 election. “It appears to be out of sight, out of mind.” (Ms. Sandberg has announced that she will leave Meta this fall.)
Civil rights and public advocacy groups wrote a letter to Mr. Zuckerberg and the chief executives of YouTube, Twitter, Snap and other platforms. They called for them to take down posts about the lie that Mr. Trump won the 2020 election and to slow the spread of election misinformation before the midterms.
Yosef Getachew, a director at the nonprofit public advocacy organization Common Cause, whose group studied 2020 election misinformation on social media, said the companies had not responded.
“The Big Lie is front and center in the midterms with so many candidates using it to pre-emptively declare that the 2022 election will be stolen,” he said, pointing to recent tweets from politicians in Michigan and Arizona who falsely said dead people cast votes for Democrats. “Now is not the time to stop enforcing against the Big Lie.”
Ahead of the 2020 elections, Connecticut confronted a bevy of falsehoods about voting that swirled around online. One, widely viewed on Facebook, wrongly said absentee ballots had been sent to dead people. On Twitter, users spread a false post that a tractor-trailer carrying ballots had crashed on Interstate 95, sending thousands of voter slips into the air and across the highway.
Concerned about a similar deluge of unfounded rumors and lies around this year’s midterm elections, the state plans to spend nearly $2 million on marketing to share factual information about voting, and to create its first-ever position for an expert in combating misinformation. With a salary of $150,000, the person is expected to comb fringe sites like 4chan, far-right social networks like Gettr and Rumble, and mainstream social media sites to root out early misinformation narratives about voting before they go viral, and then urge the companies to remove or flag the posts that contain false information.
“We have to have situational awareness by looking into all the incoming threats to the integrity of elections,” said Scott Bates, Connecticut’s deputy secretary of the state. “Misinformation can erode people’s confidence in elections, and we view that as a critical threat to the democratic process.”
Connecticut joins a handful of states preparing to fight an onslaught of rumors and lies about this year’s elections.
In an ABC/Ipsos poll from January, only 20 percent of respondents said they were “very confident” in the integrity of the election system and 39 percent said they felt “somewhat confident.” Numerous Republican candidates have embraced former President Donald J. Trump’s falsehoods about the 2020 election, campaigning — often successfully — on the untrue claim that it was stolen from him.
Some conservatives and civil rights groups are almost certain to complain that the efforts to limit misinformation could restrict free speech. Florida, led by Republicans, has enacted legislation limiting the kind of social media moderation that sites like Facebook, YouTube and Twitter can do, with supporters saying the sites constrict conservative voices. (A U.S. appeals court recently blocked most aspects of the law.) On the federal level, the Department of Homeland Security recently paused the work of an advisory board on disinformation after a barrage of criticism from conservative lawmakers and free speech advocates that the group could suppress speech.
“State and local governments are well situated to reduce harms from dis- and misinformation by providing timely, accurate and trustworthy information,” said Rachel Goodman, a lawyer at Protect Democracy, a nonpartisan advocacy group. “But in order to maintain that trust, they must make clear that they are not engaging in any kind of censorship or surveillance that would raise constitutional concerns.”
Connecticut and Colorado officials said that the problem of misinformation had only worsened since 2020 and that without a more concerted push to counteract it, even more voters could lose faith in the integrity of elections. They also said they feared for the safety of some election workers.
“We are seeing a threat atmosphere unlike anything this country has seen before,” said Jena Griswold, the secretary of state of Colorado. Ms. Griswold, a Democrat who is up for re-election this fall, has received threats for upholding 2020 election results and refuting Mr. Trump’s false claims of fraudulent voting in the state.
Other secretaries of state, who head the office typically charged with overseeing elections, have received similar pushback. In Georgia, Brad Raffensperger, a Republican who certified President Biden’s win in the state, has faced fierce criticism laced with false claims about the 2020 election.
In his primary race this year, Mr. Raffensperger batted down misinformation that there were 66,000 underage voters, 2,400 unregistered voters and more than 10,350 dead people who cast ballots in the presidential election. None of the claims are true. He won his primary last week.
Colorado is redeploying a misinformation team that the state created for the 2020 election. The team is composed of three election security experts who monitor the internet for misinformation and then report it to federal law enforcement.
Ms. Griswold will oversee the team, called the Rapid Response Election Security Cyber Unit. It looks only for election-related misinformation on issues like absentee voting, polling locations and eligibility, she said.
“Facts still exist, and lies are being used to chip away at our fundamental freedoms,” Ms. Griswold said.
Connecticut officials said the state’s goal was to patrol the internet for election falsehoods. On May 7, the Connecticut Legislature approved $2 million for internet, TV, mail and radio education campaigns on the election process, and to hire an election information security officer.
Officials said they would prefer candidates fluent in both English and Spanish, to address the spread of misinformation in both languages. The officer would track down viral misinformation posts on Facebook, Instagram, Twitter and YouTube, and look for emerging narratives and memes, especially on fringe social media platforms and the dark web.
“We know we can’t boil the ocean, but we have to figure out where the threat is coming from, and before it metastasizes,” Mr. Bates said.
Twitter, which went public in 2013, has also had a tumultuous corporate history. It has repeatedly dealt with board dysfunction and drama with its founders, and was courted by other interested buyers in the past, including Disney and Salesforce. In 2020, the activist investment firm Elliott Management took a stake in Twitter and called for Jack Dorsey, one of its founders, to resign as chief executive. Mr. Dorsey stepped down last year.
“This company is very much undermonetized, especially compared to other platforms and competitors like Facebook,” said Pinar Yildirim, a professor of marketing at the University of Pennsylvania Wharton School of Business. “If you look at it from a point of pure business value, there’s definitely room for improvement.”
In a statement, Bret Taylor, Twitter’s chairman, said that the board had “conducted a thoughtful and comprehensive process” on Mr. Musk’s bid and that the deal would “deliver a substantial cash premium” for shareholders.
Regulators are unlikely to seriously challenge the transaction, former antitrust officials said, since the government most commonly intervenes to stop a deal when a company is buying a competitor.
The deal came together in a matter of weeks. Mr. Musk, who also leads the electric carmaker Tesla and the rocket maker SpaceX, began buying shares of Twitter in January and disclosed this month that he had amassed a stake of more than 9 percent.
That immediately set off a guessing game over what Mr. Musk planned to do with the platform. Twitter’s executives initially welcomed him to the board of directors, but he reversed course within days and instead began a bid to buy the company outright.
Any agreement initially appeared unlikely because the entrepreneur did not say how he would finance the deal. Twitter’s executives appeared skeptical, too, given that it was difficult to discern how much Mr. Musk might be jesting. In 2018, for example, he tweeted that he planned to take Tesla private and inaccurately claimed that he had “funding secured” for such a deal.
“And almost without exception, these influencers feel that they have been wronged by mainstream society in some way,” Mr. Brooking added.
Dr. Malone earned a medical degree from Northwestern University in 1991, and for the next decade taught pathology at the University of California, Davis, and the University of Maryland. He then turned to biotech start-ups and consulting. His résumé says he was “instrumental” in securing early-stage approval for research on the Ebola vaccine by the pharmaceutical company Merck in the mid-2010s. He also worked on repurposing drugs to treat Zika.
In extended interviews at his home over two days, Dr. Malone said he was repeatedly not recognized for his contributions over the course of his career, his voice low and grave as he recounted perceived slights by the institutions he had worked for. His wife, Dr. Jill Glasspool Malone, paced the room and pulled up articles on her laptop that she said supported his complaints.
The example he points to more frequently is from his time at the Salk Institute for Biological Studies in San Diego. While there, he performed experiments that showed how human cells could absorb an mRNA cocktail and produce proteins from it. Those experiments, he says, make him the inventor of mRNA vaccine technology.
“I was there,” Dr. Malone said. “I wrote all the invention.”
What the mainstream media did instead, he said, was give credit for the mRNA vaccines to the scientists Katalin Kariko and Drew Weissman, because there “is a concerted campaign to get them the Nobel Prize” by Pfizer and BioNTech, where Dr. Kariko is a senior vice president, as well as the University of Pennsylvania, where Dr. Weissman leads a laboratory researching vaccines and infectious diseases.
But at the time he was conducting those experiments, it was not known how to protect the fragile RNA from the immune system’s attack, scientists say. Former colleagues said they had watched in astonishment as Dr. Malone began posting on social media about why he deserved to win the Nobel Prize.
The idea that he is the inventor of mRNA vaccines is “a totally false claim,” said Dr. Gyula Acsadi, a pediatrician in Connecticut who along with Dr. Malone and five others wrote a widely cited paper in 1990 showing that injecting RNA into muscle could produce proteins. (The Pfizer and Moderna vaccines work by injecting RNA into arm muscles that produce copies of the “spike protein” found on the outside of the coronavirus. The human immune system identifies that protein, attacks it and then remembers how to defeat it.)
Meta, which owns Facebook and Instagram, took an unusual step last week: It suspended some of the quality controls that ensure that posts from users in Russia, Ukraine and other Eastern European countries meet its rules.
Under the change, Meta temporarily stopped tracking whether its workers who monitor Facebook and Instagram posts from those areas were accurately enforcing its content guidelines, six people with knowledge of the situation said. That’s because the workers could not keep up with shifting rules about what kinds of posts were allowed about the war in Ukraine, they said.
Meta has made more than half a dozen content policy revisions since Russia invaded Ukraine last month. The company has permitted posts about the conflict that it would normally have taken down — including some calling for the death of President Vladimir V. Putin of Russia and violence against Russian soldiers — before changing its mind or drawing up new guidelines, the people said.
The result has been internal confusion, especially among the content moderators who patrol Facebook and Instagram for text and images with gore, hate speech and incitements to violence. Meta has sometimes shifted its rules on a daily basis, causing whiplash, said the people, who were not authorized to speak publicly.
Externally, Meta has contended with pressure from Russian and Ukrainian authorities over the information battle about the conflict. And internally, it has dealt with discontent about its decisions, including from Russian employees concerned for their safety and Ukrainian workers who want the company to be tougher on Kremlin-affiliated organizations online, three people said.
Meta has weathered international strife before — including the genocide of a Muslim minority in Myanmar last decade and skirmishes between India and Pakistan — with varying degrees of success. Now the largest conflict on the European continent since World War II has become a litmus test of whether the company has learned to police its platforms during major global crises — and so far, it appears to remain a work in progress.
“All the ingredients of the Russia-Ukraine conflict have been around for a long time: the calls for violence, the disinformation, the propaganda from state media,” said David Kaye, a law professor at the University of California, Irvine, and a former special rapporteur to the United Nations. “What I find mystifying was that they didn’t have a game plan to deal with it.”
Dani Lever, a Meta spokeswoman, declined to directly address how the company was handling content decisions and employee concerns during the war.
After Russia invaded Ukraine, Meta said it established a round-the-clock special operations team staffed by employees who are native Russian and Ukrainian speakers. It also updated its products to aid civilians in the war, including features that direct Ukrainians toward reliable, verified information to locate housing and refugee assistance.
Mark Zuckerberg, Meta’s chief executive, and Sheryl Sandberg, the chief operating officer, have been directly involved in the response to the war, said two people with knowledge of the efforts. But as Mr. Zuckerberg focuses on transforming Meta into a company that will lead the digital worlds of the so-called metaverse, many responsibilities around the conflict have fallen — at least publicly — to Nick Clegg, the president for global affairs.
Mr. Clegg announced that Meta would restrict access within the European Union to the pages of Russia Today and Sputnik, which are Russian state-controlled media, following requests by Ukraine and other European governments. Russia retaliated by cutting off access to Facebook inside the country, claiming the company discriminated against Russian media, and then blocking Instagram.
This month, President Volodymyr Zelensky of Ukraine praised Meta for moving quickly to limit Russian war propaganda on its platforms. Meta also acted rapidly to remove an edited “deepfake” video from its platforms that falsely featured Mr. Zelensky yielding to Russian forces.
Meta also allowed a group called the Ukrainian Legion to run ads on its platforms this month to recruit “foreigners” for the Ukrainian army, a violation of international law. It later removed the ads — which were shown to people in the United States, Ireland, Germany and elsewhere — because the group may have misrepresented ties to the Ukrainian government, according to Meta.
Internally, Meta had also started changing its content policies to deal with the fast-moving nature of posts about the war. The company has long forbidden posts that might incite violence. But on Feb. 26, two days after Russia invaded Ukraine, Meta informed its content moderators — who are typically contractors — that it would allow calls for the death of Mr. Putin and “calls for violence against Russians and Russian soldiers in the context of the Ukraine invasion,” according to the policy changes, which were reviewed by The New York Times.
Reuters reported on Meta’s shifts with a headline that suggested that posts calling for violence against all Russians would be tolerated. In response, Russian authorities labeled Meta’s activities as “extremist.”
Shortly thereafter, Meta reversed course and said it would not let its users call for the deaths of heads of state.
“Circumstances in Ukraine are fast moving,” Mr. Clegg wrote in an internal memo that was reviewed by The Times and first reported by Bloomberg. “We try to think through all the consequences, and we keep our guidance under constant review because the context is always evolving.”
Meta amended other policies. This month, it made a temporary exception to its hate speech guidelines so users could post about the “removal of Russians” and “explicit exclusion against Russians” in 12 Eastern European countries, according to internal documents. But within a week, Meta tweaked the rule to note that it should be applied only to users in Ukraine.
The constant adjustments left moderators who oversee users in Central and Eastern European countries confused, the six people with knowledge of the situation said.
The policy changes were onerous because moderators were generally given less than 90 seconds to decide whether images of dead bodies, videos of limbs being blown off, or outright calls to violence violated Meta’s rules, they said. In some instances, they added, moderators were shown posts about the war in Chechen, Kazakh or Kyrgyz, despite not knowing those languages.
Ms. Lever declined to comment on whether Meta had hired content moderators who specialize in those languages.
At a company meeting this month, some employees asked why Meta had waited so long to take action against Russia Today and Sputnik, said two people who attended. Russian state activity was at the center of Facebook’s failure to protect the 2016 U.S. presidential election, they said, and it didn’t make sense that those outlets had continued to operate on Meta’s platforms.
While Meta has no employees in Russia, the company held a separate meeting this month for workers with Russian connections. Those employees said they were concerned that Moscow’s actions against the company would affect them, according to an internal document.
In discussions on Meta’s internal forums, which were viewed by The Times, some Russian employees said they had erased their place of work from their online profiles. Others wondered what would happen if they worked in the company’s offices in places with extradition treaties to Russia and “what kind of risks will be associated with working at Meta not just for us but our families.”
Ms. Lever said Meta’s “hearts go out to all of our employees who are affected by the war in Ukraine, and our teams are working to make sure they and their families have the support they need.”
At a separate company meeting this month, some employees voiced unhappiness with the changes to the speech policies during the war, according to an internal poll. Some asked if the new rules were necessary, calling the changes “a slippery slope” that were “being used as proof that Westerners hate Russians.”
Others asked about the effect on Meta’s business. “Will Russian ban affect our revenue for the quarter? Future quarters?” read one question. “What’s our recovery strategy?”
In the days after Russia’s invasion of Ukraine, thousands of Twitter accounts shared messages of support for Vladimir V. Putin, the Russian president.
They tried to deflect criticism of the war by comparing it to conflicts instigated by Western countries. Their commentary — along with tweets from other users who condemned it — made the hashtag #IStandWithPutin trend on Twitter in several regions around the world.
While some of the accounts said they were based in Nigeria and South Africa, the majority of those with a declared location on Twitter claimed to be from India and targeted their messages to other Indian users, researchers said.
India was evacuating nearly 20,000 of its citizens who were in the country when Russia’s invasion began. Hundreds of Indian students remained stuck amid heavy shelling at the time. India’s prime minister, Narendra Modi, who has avoided condemning Russia, appealed to Mr. Putin and his Ukrainian counterpart, President Volodymyr Zelensky, for help.
Russia’s local embassy used Twitter to instruct Indian media outlets not to use the word “war” but to refer to the conflict instead as a “special military operation,” as media outlets in Russia have been forced by law to do. Some Indian Twitter users responded by mocking the embassy, while others chastised local media outlets as so inept that they needed instruction from Russia.
Pro-Russian sentiment has taken hold in right-wing circles in the United States; misinformation spreading within Russia claims that Ukrainians staged bombings or bombed their own neighborhoods; and myths about Ukrainian fortitude have gone viral across social media platforms. But in India and other countries where social media users joined the hashtag, pro-Russian narratives have focused on ethnonationalism and Western hypocrisy over the war, themes that have resonated with social media users.
“There were dense clusters of communities engaging with it, many of which were based in India or based in Pakistan,” said Marc Owen Jones, an assistant professor of Middle East studies and digital humanities at Hamad Bin Khalifa University who analyzed the accounts using #IStandWithPutin.
It was not clear whether the accounts promoting pro-Putin messages in India were authentic, although Dr. Jones said some of the most popular ones engaged in suspicious behavior, like using stock photos as profile pictures or racking up likes and retweets despite having few followers.
Twitter addressed the pro-Putin accounts in a blog post this month. “These accounts represent a wide range of attempts to manipulate the service — including opportunistic, financially motivated spam — and we don’t currently believe they represent a specific, coordinated campaign associated with a government actor.”
But some of the accounts in India most likely belonged to real people, Dr. Jones said. “If you can get enough people spreading a message, then real people will join in,” he said. “It becomes hard to sort the organic behavior from the inorganic because it’s a mesh.”
In India, some right-wing groups have advanced similar messages. An organization called the Hindu Sena marched in support of Russia this month in the heart of India’s capital. Carrying Russian flags ordered for the occasion as well as saffron ones often flown by Hindu nationalists, participants were led by the group’s president, Vishnu Gupta.
Over 300 activists chanted, “Russia you fight on, we are with you” and “Long live the friendship of India and Russia.”
“Russia has always stood by India and is its best friend. While America supports Pakistan and does not want any Asian power to rise,” Mr. Gupta said in an interview. “We don’t believe in war. But now that it’s happening, India must go with Russia. We must make our position clear.”
Russia’s embassy in India has also used Twitter and Facebook to promote conspiracy theories about biological research labs in Ukraine and to pressure the Indian media.
India has balanced its relationships with Russia, its largest supplier of weapons, and Ukraine by abstaining from voting against Russia at the United Nations. India has also sent medical supplies to Ukraine. It has been looking for ways to maintain its trade relations with Russia despite the sanctions imposed on Russia by many Western countries.
But public sentiment about the war could pressure local politicians to choose a side, experts said.
“It’s a major, major flashpoint for a truly global competition for information,” Mr. Brookie said. “It’s an inflection point where a number of countries — not just Russia but the United States, its allies and partners, as well as China — are positioning themselves.”
When Victoria Nuland, an under secretary of state, was questioned in the Senate this month over whether Ukraine had biological weapons, she said laboratories in the country had materials that could be dangerous if they fell into Russian hands. Jack Posobiec, a far-right commentator, insinuated on his March 9 podcast that Ms. Nuland’s answer bolstered the conspiracy theory.
“Everybody needs to come clean about what was going on in those labs, because I guarantee you the Russians are about to put all of it onto the world stage,” said Mr. Posobiec, who did not respond to calls seeking comment.
Russian officials also latched on to Ms. Nuland’s comments. “The nervous reaction confirms that Russia’s allegations are grounded,” the country’s official account for the Ministry of Foreign Affairs posted on Twitter.
Beyond the bioweapons conspiracy theory, Joseph Jordan, a white nationalist podcaster who goes by the pseudonym Eric Striker, repeated Russia’s claim that a pregnant woman who was injured in the bombing of a Ukrainian maternity hospital had faked her injuries. In his Telegram channel, Mr. Jordan told his 15,000 followers that the hospital photos had been “staged.” He did not respond to a request for comment.
Some Russians have publicly commented on what appears to be common ground with far-right Americans. Last week on the Russian state-backed news program “60 Minutes,” which is not connected to the CBS show of the same name, the host, Olga Skabeeva, addressed the country’s strengthening ties with Mr. Carlson.
“Our acquaintance, the host of Fox News Tucker Carlson, obviously has his own interests,” she said, airing several clips of Mr. Carlson’s show where he suggested the United States had pushed for conflict in Ukraine. “But lately, more and more often, they’re in tune with our own.”