But by the early 2010s, it had grown into a global water cooler where millions of people went to make sense of the world around them. Its rapid-fire, 140-character bursts made it a valuable tool for those wanting to steer a conversation, attract attention to a cause or simply peer into the kaleidoscope of human thought.
On any given day, Twitter was the place to: talk about the news, complain about airline food, flirt with strangers, announce an earthquake, yell at your senator, cheer for your sports teams, post nudes, make dumb jokes, ruin your own reputation, ruin somebody else’s reputation, document police brutality, argue about anime, fall for a cryptocurrency scam, start a music career, procrastinate, follow the stock market, issue a public apology, share scientific papers, discuss “Game of Thrones,” find skillet chicken recipes.
And while it was never the biggest social media platform, or the most profitable, Twitter did seem to level the playing field in a way other apps didn’t.
But as Twitter and other social networks grew, powerful people found that these apps could help them extend their power in new ways. Authoritarians discovered they could use them to crack down on dissent. Extremists learned they could stir up hateful mobs to drive women and people of color offline. Celebrities and influencers realized that the crazier you acted, the more attention you got, and dialed up their behavior accordingly. A foundational belief of social media’s pioneers — that simply giving people the tools to express themselves would create a fairer and more connected society — began to look hopelessly naïve.
And when Donald J. Trump rode a wave of retweets to the White House in 2016, and used his Twitter account as president to spread conspiracy theories, wage culture wars, undermine public health and threaten nuclear war, the idea that the app was a gift to the downtrodden became even harder to argue.
Since 2016, Twitter has tried to clean up its mess, putting into effect new rules on misinformation and hate speech and barring some high-profile trolls. Those changes made the platform safer and less chaotic, but they also alienated users who were uncomfortable with how powerful Twitter itself had become.
These users chafed at the company’s content moderation decisions, like the one made to permanently suspend Mr. Trump’s account after the Jan. 6, 2021, insurrection. They accused the platform’s leaders of bowing to a censorious mob. And some users grew nostalgic for the messier, more freewheeling Twitter they’d loved.
On the morning of July 8, former President Donald J. Trump took to Truth Social, a social media platform he founded with people close to him, to claim that he had in fact won the 2020 presidential vote in Wisconsin, despite all evidence to the contrary.
Barely 8,000 people shared that missive on Truth Social, a far cry from the hundreds of thousands of responses his posts on Facebook and Twitter had regularly generated before those services suspended his megaphones after the deadly riot on Capitol Hill on Jan. 6, 2021.
And yet Mr. Trump’s baseless claim pulsed through the public consciousness anyway. It jumped from his app to other social media platforms — not to mention podcasts, talk radio or television.
Within 48 hours of Mr. Trump’s post, more than one million people saw his claim on at least a dozen other sites. It appeared on Facebook and Twitter, from which he has been banished, but also on YouTube, Gab, Parler and Telegram, according to an analysis by The New York Times.
Such claims of a stolen election have gone mainstream among Republican Party members, driving state and county officials to impose new restrictions on casting ballots, often based on mere conspiracy theories percolating in right-wing media.
Voters must now sift through not only an ever-growing torrent of lies and falsehoods about candidates and their policies, but also information on when and where to vote. Officials appointed or elected in the name of fighting voter fraud have put themselves in the position to refuse to certify outcomes that are not to their liking.
TikTok has become a primary battleground in today’s fight against disinformation. A report last month by NewsGuard, an organization that tracks the problem online, showed that nearly 20 percent of videos presented as search results on TikTok contained false or misleading information on topics such as school shootings and Russia’s war in Ukraine.
Major social media platforms have continued to amplify “election denialism” in ways that undermined trust in the democratic system.
Another challenge is the proliferation of alternative platforms for those falsehoods and even more extreme views.
A new survey by the Pew Research Center found that 15 percent of prominent accounts on seven such alternative platforms had previously been banished from others like Twitter and Facebook.
The F.B.I. raid on Mar-a-Lago thrust Mr. Trump’s latest pronouncements into the eye of the political storm once again.
A study of Truth Social by Media Matters for America, a left-leaning media monitoring group, examined how the platform had become a home for some of the most fringe conspiracy theories. Mr. Trump, who began posting on the platform in April, has increasingly amplified content from QAnon, the online conspiracy theory movement.
He has shared posts from QAnon accounts more than 130 times. QAnon believers promote a vast and complex conspiracy that centers on Mr. Trump as a leader battling a cabal of Democratic Party pedophiles. Echoes of such views reverberated through Republican election campaigns across the country during this year’s primaries.
Nina Jankowicz, a disinformation expert, said the nation’s social and political divisions had churned the waves of disinformation.
The controversies over how best to respond to the Covid-19 pandemic deepened distrust of government and medical experts, especially among conservatives. Mr. Trump’s refusal to accept the outcome of the 2020 election led to, but did not end with, the Capitol Hill violence.
“They should have brought us together,” Ms. Jankowicz said, referring to the pandemic and the riots. “I thought perhaps they could be kind of this convening power, but they were not.”
Social media companies are now using artificial intelligence to detect hate speech online.
Throughout the last decade, the U.S. has seen immense growth in frequent internet usage: one-third of Americans say they are online constantly, and nine out of ten say they go online several times a week, according to a March 2021 Pew Research Center poll. That surge in activity has helped people stay more connected to one another, but it has also allowed hate speech to proliferate and reach wider audiences. One fix that social media companies and other online networks have relied on is artificial intelligence, with varying degrees of success.
For companies with giant user bases, like Meta, artificial intelligence is a key, if not necessary, tool for detecting hate speech: there are too many users and too many pieces of violative content for the thousands of human content moderators the company already employs to review on their own. AI can help shoulder that burden, scaling up or down as new waves of users and content arrive.
Facebook, for instance, has seen massive growth – from 400 million users in the early 2010s, to more than two billion by the end of the decade. Between January and March 2022, Meta took action on more than 15 million pieces of hate speech content on Facebook. Roughly 95% of that was detected proactively by Facebook with the help of AI.
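To make “proactive” detection concrete, here is a minimal, illustrative sketch of how a text classifier can score posts as they are created and route likely violations to human reviewers before anyone reports them. It is a toy under stated assumptions, not Meta’s production system: the tiny training set, the triage function and the 0.5 threshold are invented for the example, and real systems rely on far larger models, many languages and layers of human review.

```python
# Toy sketch of "proactive" hate-speech detection: a text classifier scores each
# post at creation time and flags likely policy violations before any user report.
# Illustrative only: the training examples, threshold and function names below
# are hypothetical, not any platform's real system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled examples (1 = violates hate-speech policy, 0 = benign).
train_texts = [
    "those people are vermin and should disappear",
    "we do not want their kind living here",
    "had a great time at the farmers market today",
    "congrats to the team on a well deserved win",
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

def triage(post: str, threshold: float = 0.5) -> str:
    """Route a new post: auto-flag it for human review if the model is confident enough."""
    score = model.predict_proba([post])[0][1]  # estimated probability of a violation
    return "send to human moderator" if score >= threshold else "leave up"

if __name__ == "__main__":
    print(triage("we do not want their kind around here"))
    print(triage("lovely weather for a bike ride today"))
```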
That combination of AI and human moderators can still let huge misinformation themes fall through the cracks. Paul Barrett, deputy director of NYU’s Stern Center for Business and Human Rights, found that every day, 3 million Facebook posts are flagged for review by 15,000 Facebook content moderators. That works out to a ratio of roughly one moderator for every 160,000 users.
“If you have a volume of that nature, those humans, those people are going to have an enormous burden of making decisions on hundreds of discrete items each work day,” Barrett said.
Another issue: AI designed to root out hate speech is primarily trained on text and still images. That means video content, especially live video, is much harder to flag automatically as possible hate speech.
Zeve Sanderson is the founding executive director of NYU’s Center for Social Media and Politics.
“Live video is incredibly difficult to moderate because it’s live you know, we’ve seen this unfortunately recently with some tragic shootings where, you know, people have used live video in order to spread, you know, sort of content related to that. And even though actually platforms have been relatively quick to respond to that, we’ve seen copies of those videos spread. So it’s not just the original video, but also the ability to just sort of to record it and then share it in other forms. So, so live is extraordinarily challenging,” Sanderson said.
And many AI systems are not robust enough to detect that hate speech in real time. Extremism researcher Linda Schiegl told Newsy that this has become a problem in online multiplayer games, where players can use voice chat to spread hateful ideologies.
“It’s really difficult for automatic detection to pick stuff up because if you’re you’re talking about weapons or you’re talking about sort of how are we going to, I don’t know, a take on this school or whatever it could be in the game. And so artificial intelligence or automatic detection is really difficult in gaming spaces. And so it would have to be something that is more sophisticated than that or done by hand, which is really difficult, I think, even for these companies,” Schiegl said.
Video games, including those targeted at children, have been the subject of a new phenomenon in extremism.
World of Warcraft, Fortnite, Roblox, Rocket League — these immensely popular games, many with child audiences, have been the subject of a new phenomenon in extremism.
Some critics say online gaming and adjacent chat-based platforms are being used to expose and possibly recruit users to potentially dangerous ideologies.
What remains to be seen is how widespread the problem really is.
According to the Anti-Defamation League, three-fifths of gamers between the ages of 13 and 17 reported experiencing harassment while playing online games. Minority and LGBTQ+ gamers reported higher levels of harassment than their counterparts. Keeping up with and detecting those instances of hate speech is not as easy as it sounds, and it is not always clear if and when someone exposed to extremist ideology actually recognizes it.
Some experts said the games can be used to foster relationships through things like voice chat and in other private conversations.
Others believe extremist ideas can be normalized through the lens of gaming, though there is a lack of conclusive research on the topic.
All of the experts Newsy spoke with said it would take a mixture of civil, governmental and platform-level enforcement to deal with potential extremism in gaming.
To fight disinformation, California lawmakers are advancing a bill that would force social media companies to divulge their process for removing false, hateful or extremist material from their platforms. Texas lawmakers, by contrast, want to ban the largest of the companies — Facebook, Twitter and YouTube — from removing posts because of political points of view.
In Washington, the state attorney general persuaded a court to fine a nonprofit and its lawyer $28,000 for filing a baseless legal challenge to the 2020 governor’s race. In Alabama, lawmakers want to allow people to seek financial damages from social media platforms that shut down their accounts for having posted false content.
In the absence of significant action on disinformation at the federal level, officials in state after state are taking aim at the sources of disinformation and the platforms that propagate them — only they are doing so from starkly divergent ideological positions. In this deeply polarized era, even the fight for truth breaks along partisan lines.
The divergence reflects a nation increasingly divided over a variety of issues — including abortion, guns and the environment — and along geographic lines.
A federal court has blocked a similar law in Florida that would have fined social media companies as much as $250,000 a day if they blocked political candidates from their platforms, which have become essential tools of modern campaigning. Other states with Republican-controlled legislatures have proposed similar measures, including Alabama, Mississippi, South Carolina, West Virginia, Ohio, Indiana, Iowa and Alaska.
Alabama’s attorney general, Steve Marshall, has created an online portal through which residents can complain that their access to social media has been restricted: alabamaag.gov/Censored. In a written response to questions, he said that social media platforms stepped up efforts to restrict content during the pandemic and the presidential election of 2020.
“During this period (and continuing to present day), social media platforms abandoned all pretense of promoting free speech — a principle on which they sold themselves to users — and openly and arrogantly proclaimed themselves the Ministry of Truth,” he wrote. “Suddenly, any viewpoint that deviated in the slightest from the prevailing orthodoxy was censored.”
Much of the activity on the state level today has been animated by the fraudulent assertion that Mr. Trump, and not President Biden, won the 2020 presidential election. Although disproved repeatedly, the claim has been cited by Republicans to introduce dozens of bills that would clamp down on absentee or mail-in voting in the states they control.
In Ohio, J.D. Vance, the memoirist and Republican nominee for Senate, railed against social media giants, saying they stifled news about the foreign business dealings of Hunter Biden, the president’s son.
massacre at a supermarket in Buffalo in May.
Connecticut plans to spend nearly $2 million on marketing to share factual information about voting and to create a position for an expert to root out misinformation narratives about voting before they go viral. A similar effort to create a disinformation board at the Department of Homeland Security provoked a political fury before its work was suspended in May pending an internal review.
In California, the State Senate is moving forward with legislation that would require social media companies to disclose their policies regarding hate speech, disinformation, extremism, harassment and foreign political interference. (The legislation would not compel them to restrict content.) Another bill would allow civil lawsuits against large social media platforms like TikTok and Meta’s Facebook and Instagram if their products were proven to have addicted children.
“All of these different challenges that we’re facing have a common thread, and the common thread is the power of social media to amplify really problematic content,” said Assemblyman Jesse Gabriel of California, a Democrat, who sponsored the legislation to require greater transparency from social media platforms. “That has significant consequences both online and in physical spaces.”
It seems unlikely that the flurry of legislative activity will have a significant impact before this fall’s elections; social media companies will have no single response acceptable to both sides when accusations of disinformation inevitably arise.
“Any election cycle brings intense new content challenges for platforms, but the November midterms seem likely to be particularly explosive,” said Matt Perault, a director of the Center on Technology Policy at the University of North Carolina. “With abortion, guns, democratic participation at the forefront of voters’ minds, platforms will face intense challenges in moderating speech. It’s likely that neither side will be satisfied by the decisions platforms make.”
Elon Musk had a plan to buy Twitter and undo its content moderation policies. On Tuesday, just a day after reaching his $44 billion deal to buy the company, Mr. Musk was already at work on his agenda. He tweeted that past moderation decisions by a top Twitter lawyer were “obviously incredibly inappropriate.” Later, he shared a meme mocking the lawyer, sparking a torrent of attacks from other Twitter users.
Mr. Musk’s personal critique was a rough reminder of what faces employees who create and enforce Twitter’s complex content moderation policies. His vision for the company would take it right back to where it started, employees said, and force Twitter to relive the last decade.
Twitter executives who created the rules said they had once held views about online speech that were similar to Mr. Musk’s. They believed Twitter’s policies should be limited, mimicking local laws. But more than a decade of grappling with violence, harassment and election tampering changed their minds. Now, many executives at Twitter and other social media companies view their content moderation policies as essential safeguards to protect speech.
The question is whether Mr. Musk, too, will change his mind when confronted with the darkest corners of Twitter.
In Twitter’s early years, its leaders believed that the tweets must flow. That meant Twitter did little to moderate the conversations on its platform.
Twitter’s founders took their cues from Blogger, the publishing platform, owned by Google, that several of them had helped build. They believed that any reprehensible content would be countered or drowned out by other users, said three employees who worked at Twitter during that time.
“There’s a certain amount of idealistic zeal that you have: ‘If people just embrace it as a platform of self-expression, amazing things will happen,’” said Jason Goldman, who was on Twitter’s founding team and served on its board of directors. “That mission is valuable, but it blinds you to think certain bad things that happen are bugs rather than equally weighted uses of the platform.”
The company typically removed content only if it contained spam, or violated American laws forbidding child exploitation and other criminal acts.
In 2008, Twitter hired Del Harvey, its 25th employee and the first person it assigned the challenge of moderating content full time. The Arab Spring protests started in 2010, and Twitter became a megaphone for activists, reinforcing many employees’ belief that good speech would win out online. But Twitter’s power as a tool for harassment became clear in 2014 when it became the epicenter of Gamergate, a mass harassment campaign that flooded women in the video game industry with death and rape threats.
Russian operatives created 2,700 fake Twitter profiles and used them to sow discord about the upcoming presidential election between Mr. Trump and Hillary Clinton.
The profiles went undiscovered for months, while complaints about harassment continued. In 2017, Jack Dorsey, the chief executive at the time, declared that policy enforcement would become the company’s top priority. Later that year, women boycotted Twitter during the #MeToo movement, and Mr. Dorsey acknowledged the company was “still not doing enough.”
He announced a list of content that the company would no longer tolerate: nude images shared without the consent of the person pictured, hate symbols and tweets that glorified violence.
Twitter later barred high-profile figures like the conspiracy theorist Alex Jones from its service because they repeatedly violated its policies.
The next year, Twitter rolled out new policies that were intended to prevent the spread of misinformation in future elections, banning tweets that could dissuade people from voting or mislead them about how to do so. Mr. Dorsey banned all forms of political advertising, but often left difficult moderation decisions to Vijaya Gadde, the company’s top legal and policy executive.
In Europe, lawmakers have approved landmark legislation called the Digital Services Act, which requires social media platforms like Twitter to more aggressively police their services for hate speech, misinformation and illicit content.
The new law will require Twitter and other social media companies with more than 45 million users in the European Union to conduct annual risk assessments about the spread of harmful content on their platforms and outline plans to combat the problem. If they are not seen as doing enough, the companies can be fined up to 6 percent of their global revenue, or even be banned from the European Union for repeat offenses.
Inside Twitter, frustrations have mounted over Mr. Musk’s moderation plans, and some employees have wondered if he would really halt their work during such a critical moment, when they are set to begin moderating tweets about elections in Brazil and another national election in the United States.
For months, former President Donald J. Trump has promoted Truth Social, the soon-to-be-released flagship app of his fledgling social media company, as a platform where free speech can thrive without the constraints imposed by Big Tech.
At least seven other social media companies have promised to do the same.
Gettr, a right-wing alternative to Twitter founded last year by a former adviser to Mr. Trump, bills itself as a haven from censorship. That’s similar to Parler — essentially another Twitter clone backed by Rebekah Mercer, a big donor to the Republican Party. MeWe and CloutHub are similar to Facebook, but with the pitch that they promote speech without restraint.
Truth Social was supposed to go live on Presidents’ Day, but the start date was recently pushed to March, though a limited test version was unveiled recently. A full rollout could be hampered by a regulatory investigation into a proposed merger of its parent company, the Trump Media & Technology Group, with a publicly traded blank-check company.
If and when it does open its doors, Mr. Trump’s app will be the newest — and most conspicuous — entrant in the tightly packed universe of social media companies that have cropped up in recent years, promising to build a parallel internet after Twitter, Facebook, Google and other mainstream platforms began to crack down on hate speech.
By comparison, there are 211 million daily active users on Twitter who see ads.
Many people who claim to crave a social network that caters to their political cause often aren’t ready to abandon Twitter or Facebook, said Weiai Xu, an assistant professor of communications at the University of Massachusetts-Amherst. So the big platforms remain important vehicles for “partisan users” to get their messages out, Mr. Xu said.
Gettr, Parler and Rumble have relied on Twitter to announce the signing of a new right-wing personality or influencer. Parler, for instance, used Twitter to post a link to an announcement that Melania Trump, the former first lady, was making its platform her “social media home.”
Alternative social media companies mainly thrive off politics, said Mark Weinstein, the founder of MeWe, a platform with 20 million registered users that has positioned itself as an option to Facebook.
MeWe makes money from certain subscription services. His start-up has raised $24 million from 100 investors.
But since political causes drive the most engagement for alternative social media, most other platforms are quick to embrace such opportunities. This month, CloutHub, which has just four million registered users, said its platform could be used to raise money for the protesting truckers of Ottawa.
Mr. Trump wasn’t far behind. “Facebook and Big Tech are seeking to destroy the Freedom Convoy of Truckers,” he said in a statement. (Meta, the parent company of Facebook, said it removed several groups associated with the convoy for violating its rules.)
Trump Media, Mr. Trump added, would let the truckers “communicate freely on Truth Social when we launch — coming very soon!”
Of all the alt-tech sites, Mr. Trump’s venture may have the best chance of success if it launches, not just because of the former president’s star power but also because of its financial heft. In September, Trump Media agreed to merge with Digital World Acquisition, a blank-check or special purpose acquisition company that raised $300 million. The two entities have raised $1 billion from 36 investors in a private placement.
But none of that money can be tapped until regulators wrap up their inquiry into whether Digital World flouted securities regulations in planning its merger with Trump Media. In the meantime, Trump Media, currently valued at more than $10 billion based on Digital World’s stock price, is trying to hire people to build its platform.
Rumble, the video platform, has attracted backers including Peter Thiel, the billionaire investor and Trump supporter, and the venture fund of Mr. Thiel’s protégé J.D. Vance, who is running for a Senate seat from Ohio.
Rumble is also planning to go public through a merger with a special-purpose acquisition company. SPACs are shell companies created solely for the purpose of merging with an operating entity. The deal, arranged by the Wall Street firm Cantor Fitzgerald, will give Rumble $400 million in cash and a $2.1 billion valuation.
The site said in January that it had 39 million monthly active users, up from two million two years ago. It has struck various content deals, including one to provide video and streaming services to Truth Social. Representatives for Rumble did not respond to requests for comment.
Parler’s growth has stalled since Apple and Google removed it from their app stores and Amazon cut off web services after the riot, according to SensorTower, a digital analytics company.
Parler also removed John Matze, one of its founders, from his position as chief executive. Mr. Matze has said he was dismissed after a dispute with Ms. Mercer — the daughter of a wealthy hedge fund executive who is Parler’s main backer — over how to deal with extreme content posted on the platform.
Christina Cravens, a spokeswoman for Parler, said the company had always “prohibited violent and inciting content” and had invested in “content moderation best practices.”
Moderating content will also be a challenge for Truth Social, whose main star, Mr. Trump, has not been able to post messages since early 2021, when Twitter and Facebook kicked him off their platforms for inciting violence tied to the outcome of the 2020 presidential election.
With Mr. Trump as its main poster, it was unclear if Truth Social would grow past subscribers who sign up simply to read the former president’s missives, Mr. Matze said.
“Trump is building a community that will fight for something or whatever he stands for that day,” he said. “This is not social media for friends and family to share pictures.”
On a recent episode of his podcast, Rick Wiles, a pastor and self-described “citizen reporter,” endorsed a conspiracy theory: that Covid-19 vaccines were the product of a “global coup d’état by the most evil cabal of people in the history of mankind.”
“It’s an egg that hatches into a synthetic parasite and grows inside your body,” Mr. Wiles said on his Oct. 13 episode. “This is like a sci-fi nightmare, and it’s happening in front of us.”
Mr. Wiles belongs to a group of hosts who have made false or misleading statements about Covid-19 and effective treatments for it. Like many of them, he has access to much of his listening audience because his show appears on a platform provided by a large media corporation.
Mr. Wiles’s podcast is available through iHeart Media, an audio company based in San Antonio that says it reaches nine out of 10 Americans each month. Spotify and Apple are other major companies that provide significant audio platforms for hosts who have shared similar views with their listeners about Covid-19 and vaccination efforts, or have had guests on their shows who promoted such notions.
The vaccines have been shown to protect people against the coronavirus for long periods and have significantly reduced the spread of Covid-19. As the global death toll related to Covid-19 exceeds five million — and at a time when more than 40 percent of Americans are not fully vaccinated — iHeart, Spotify, Apple and many smaller audio companies have done little to rein in what radio hosts and podcasters say about the virus and vaccination efforts.
“There’s really no curb on it,” said Jason Loviglio, an associate professor of media and communication studies at the University of Maryland, Baltimore County. “There’s no real mechanism to push back, other than advertisers boycotting and corporate executives saying we need a culture change.”
Audio industry executives appear less likely than their counterparts in social media to try to check dangerous speech. TruNews, a conservative Christian media outlet founded by Mr. Wiles, who used the phrase “Jew coup” to describe efforts to impeach former President Donald J. Trump, has been banned by YouTube. His podcast remains available on iHeart.
Asked about his false statements concerning Covid-19 vaccines, Mr. Wiles described pandemic mitigation efforts as “global communism.” “If the Needle Nazis win, freedom is over for generations, maybe forever,” he said in an email.
The reach of radio shows and podcasts is great, especially among young people: A recent survey from the National Research Group, a consulting firm, found that 60 percent of listeners under 40 get their news primarily through audio, a type of media they say they trust more than print or video.
Marc Bernier, a Florida talk radio host who died of Covid-19 in August, had repeated the unfounded claim that “45,000 people have died from taking the vaccine.” In his final Twitter post, on July 30, Mr. Bernier accused the government of “acting like Nazis” for encouraging Covid-19 vaccines.
Jimmy DeYoung Sr., whose program was available on iHeart, Apple and Spotify, died of Covid-19 complications after making his show a venue for false or misleading statements about vaccines. One of his frequent guests was Sam Rohrer, a former Pennsylvania state representative who likened the promotion of Covid-19 vaccines to Nazi tactics and made a sweeping false statement. “This is not a vaccine, by definition,” Mr. Rohrer said on an April episode. “It is a permanent altering of my immune system, which God created to handle the kinds of things that are coming that way.” Mr. DeYoung thanked his guest for his “insight.” Mr. DeYoung died four months later.
The scientist who led the Marek’s disease research has said his research has been “misinterpreted” by anti-vaccine activists. He added that Covid-19 vaccines have been found to reduce transmission substantially, whereas chickens inoculated with the Marek’s disease vaccine were still able to transmit the disease. Mr. Sexton did not reply to a request for comment.
iHeart, which distributes more than 600 podcasts and operates a vast online archive of audio programs, has rules for the podcasters on its platform prohibiting them from making statements that incite hate, promote Nazi propaganda or are defamatory. It would not say whether it has a policy concerning false statements on Covid-19 or vaccination efforts.
Apple’s content guidelines for podcasts prohibit “content that may lead to harmful or dangerous outcomes, or content that is obscene or gratuitous.” Apple did not reply to requests for comment for this article.
Spotify, which says its podcast platform has 299 million monthly listeners, prohibits hate speech in its guidelines. In a response to inquiries, the company said in a written statement that it also prohibits content “that promotes dangerous false or dangerous deceptive content about Covid-19, which may cause offline harm and/or pose a direct threat to public health.” The company added that it had removed content that violated its policies. But the episode featuring Mr. DeYoung’s conversation with Mr. Rohrer was still available on Spotify.
Dawn Ostroff, Spotify’s content and advertising business officer, said at a conference last month that the company was making “very aggressive moves” to invest more in content moderation. “There’s a difference between the content that we make and the content that we license and the content that’s on the platform,” she said, “but our policies are the same no matter what type of content is on our platform. We will not allow any content that infringes or that in any way is inaccurate.”
The audio industry has not drawn the same scrutiny as large social media companies, whose executives have been questioned in congressional hearings about the platforms’ role in spreading false or misleading information.
The social media giants have made efforts over the last year to stop the flow of false reports related to the pandemic. In September, YouTube said it was banning the accounts of several prominent anti-vaccine activists. It also removes or de-emphasizes content it deems to be misinformation or close to it. Late last year, Twitter announced that it would remove posts and ads with false claims about coronavirus vaccines. Facebook followed suit in February, saying it would remove false claims about vaccines generally.
now there’s podcasting.”
The Federal Communications Commission, which grants licenses to companies using the public airwaves, has oversight over radio operators, but not podcasts or online audio, which do not make use of the public airwaves.
The F.C.C. is barred from violating American citizens’ right to free speech. When it takes action against a media company over programming, it is typically in response to complaints about content considered obscene or indecent, as when it fined a Virginia television station in 2015 for a newscast that included a segment on a pornographic film star.
In a statement, an F.C.C. spokesman said the agency “reviews all complaints and determines what is actionable under the Constitution and the law.” The statement added that the main responsibility for what goes on the air lies with radio station owners, saying that “broadcast licensees have a duty to act in the public interest.”
The world of talk radio and podcasting is huge, and anti-vaccine sentiment is a small part of it. iHeart offers an educational podcast series about Covid-19 vaccines, and Spotify created a hub for podcasts about Covid-19 from news outlets including ABC and Bloomberg.
At least one radio host went on the air this year to describe his decision to get vaccinated and to encourage his listeners to do the same.
Recently, he expressed his eagerness to get a booster shot and mentioned that he had picked up a new nickname: “The Vaxxinator.”
SAN FRANCISCO — In 2019, Facebook researchers began a new study of one of the social network’s foundational features: the Like button.
They examined what people would do if Facebook removed the distinct thumbs-up icon and other emoji reactions from posts on its photo-sharing app Instagram, according to company documents. The buttons had sometimes caused Instagram’s youngest users “stress and anxiety,” the researchers found, especially if posts didn’t get enough Likes from friends.
But the researchers discovered that when the Like button was hidden, users interacted less with posts and ads. At the same time, it did not alleviate teenagers’ social anxiety and young users did not share more photos, as the company thought they might, leading to a mixed bag of results.
Mark Zuckerberg, Facebook’s chief executive, and other managers discussed hiding the Like button for more Instagram users, according to the documents. In the end, a larger test was rolled out in just a limited capacity to “build a positive press narrative” around Instagram.
As Facebook has faced scrutiny over misinformation, privacy and hate speech, a central issue has been whether the basic way that the platform works has been at fault — essentially, the features that have made Facebook be Facebook.
Apart from the Like button, Facebook has scrutinized its share button, which lets users instantly spread content posted by other people; its groups feature, which is used to form digital communities; and other tools that define how more than 3.5 billion people behave and interact online. The research, laid out in thousands of pages of internal documents, underlines how the company has repeatedly grappled with what it has created.
What researchers found was often far from positive. Time and again, they determined that people misused key features or that those features amplified toxic content, among other effects. In an August 2019 internal memo, several researchers said it was Facebook’s “core product mechanics” — meaning the basics of how the product functioned — that had let misinformation and hate speech flourish on the site.
“The mechanics of our platform are not neutral,” they concluded.
Facebook has made some changes, such as letting people hide posts they do not want to see and turning off political group recommendations to reduce the spread of misinformation.
But the core way that Facebook operates — a network where information can spread rapidly and where people can accumulate friends and followers and Likes — ultimately remains largely unchanged.
Many significant modifications to the social network were blocked in the service of growth and keeping users engaged, some current and former executives said. Facebook is valued at more than $900 billion.
“There’s a gap between the fact that you can have pretty open conversations inside of Facebook as an employee,” said Brian Boland, a Facebook vice president who left last year. “Actually getting change done can be much harder.”
The company documents are part of the Facebook Papers, a cache provided to the Securities and Exchange Commission and to Congress by a lawyer representing Frances Haugen, a former Facebook employee who has become a whistle-blower. Ms. Haugen earlier gave the documents to The Wall Street Journal. This month, a congressional staff member supplied the redacted disclosures to more than a dozen other news organizations, including The New York Times.
In a statement, Andy Stone, a Facebook spokesman, criticized articles based on the documents, saying that they were built on a “false premise.”
“Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie,” he said. He said Facebook had invested $13 billion and hired more than 40,000 people to keep people safe, adding that the company has called “for updated regulations where democratic governments set industry standards to which we can all adhere.”
In a post this month, Mr. Zuckerberg said it was “deeply illogical” that the company would give priority to harmful content because Facebook’s advertisers don’t want to buy ads on a platform that spreads hate and misinformation.
“At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted,” he wrote.
The Foundations of Success
When Mr. Zuckerberg founded Facebook 17 years ago in his Harvard University dorm room, the site’s mission was to connect people on college campuses and bring them into digital groups with common interests and locations.
Growth exploded in 2006 when Facebook introduced the News Feed, a central stream of photos, videos and status updates posted by people’s friends. Over time, the company added more features to keep people interested in spending time on the platform.
In 2009, Facebook introduced the Like button. The tiny thumbs-up symbol, a simple indicator of people’s preferences, became one of the social network’s most important features. The company allowed other websites to adopt the Like button so users could share their interests back to their Facebook profiles.
That gave Facebook insight into people’s activities and sentiments outside of its own site, so it could better target them with advertising. Likes also signified what users wanted to see more of in their News Feeds so people would spend more time on Facebook.
Facebook also added the groups feature, where people join private communication channels to talk about specific interests, and pages, which allowed businesses and celebrities to amass large fan bases and broadcast messages to those followers.
Adam Mosseri, the head of Instagram, has said that research on users’ well-being led to investments in anti-bullying measures on Instagram.
Yet Facebook cannot simply tweak itself so that it becomes a healthier social network when so many problems trace back to core features, said Jane Lytvynenko, a senior fellow at the Harvard Kennedy Shorenstein Center, who studies social networks and misinformation.
“When we talk about the Like button, the share button, the News Feed and their power, we’re essentially talking about the infrastructure that the network is built on top of,” she said. “The crux of the problem here is the infrastructure itself.”
Self-Examination
As Facebook’s researchers dug into how its products worked, the worrisome results piled up.
In a July 2019 study of groups, researchers traced how members in those communities could be targeted with misinformation. The starting point, the researchers said, were people known as “invite whales,” who sent invitations out to others to join a private group.
These people were effective at getting thousands to join new groups so that the communities ballooned almost overnight, the study said. Then the invite whales could spam the groups with posts promoting ethnic violence or other harmful content, according to the study.
Another 2019 report looked at how some people accrued large followings on their Facebook pages, often using posts about cute animals and other innocuous topics. But once a page had grown to tens of thousands of followers, the founders sold it. The buyers then used the pages to show followers misinformation or politically divisive content, according to the study.
As researchers studied the Like button, executives considered hiding the feature on Facebook as well, according to the documents. In September 2019, it removed Likes from users’ Facebook posts in a small experiment in Australia.
The company wanted to see if the change would reduce pressure and social comparison among users. That, in turn, might encourage people to post more frequently to the network.
But people did not share more posts after the Like button was removed. Facebook chose not to roll the test out more broadly, noting, “Like counts are extremely low on the long list of problems we need to solve.”
Last year, company researchers also evaluated the share button. In a September 2020 study, a researcher wrote that the button and so-called reshare aggregation units in the News Feed, which are automatically generated clusters of posts that have already been shared by people’s friends, were “designed to attract attention and encourage engagement.”
But left unchecked, the features could “serve to amplify bad content and sources,” such as bullying and borderline nudity posts, the researcher said.
That’s because the features made people less hesitant to share posts, videos and messages with one another. In fact, users were three times more likely to share any kind of content from the reshare aggregation units, the researcher said.
One post that spread widely this way was an undated message from an account called “The Angry Patriot.” The post notified users that people protesting police brutality were “targeting a police station” in Portland, Ore. After it was shared through reshare aggregation units, hundreds of hate-filled comments flooded in. It was an example of “hate bait,” the researcher said.
A common thread in the documents was how Facebook employees argued for changes in how the social network worked and often blamed executives for standing in the way.
In an August 2020 internal post, a Facebook researcher criticized the recommendation system that suggests pages and groups for people to follow and said it can “very quickly lead users down the path to conspiracy theories and groups.”
“Out of fears over potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity harms,” the researcher wrote. “During the time that we’ve hesitated, I’ve seen folks from my hometown go further and further down the rabbit hole” of conspiracy theory movements like QAnon and anti-vaccination and Covid-19 conspiracies.
The researcher added, “It has been painful to observe.”
On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.
For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.
The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.
Internal documents show how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections. They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.
The documents describe similar problems elsewhere. In Myanmar, where Facebook has long struggled to contain hate speech, the military seized power in a violent coup. Facebook said that after the coup, it implemented a special policy to remove praise and support of violence in the country, and later banned the Myanmar military from Facebook and Instagram.
In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.
Facebook has invested significantly in technology to find hate speech in various languages, including Hindi and Bengali, two of the most widely used languages, Mr. Stone said. He added that Facebook reduced the amount of hate speech that people see globally by half this year.
Soon after the test account was created, a suicide bombing in the disputed border region of Kashmir set off a round of violence and a spike in accusations, misinformation and conspiracies between Indian and Pakistani nationals.
After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.
Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined.
After the researcher shared her case study with co-workers, her colleagues commented on the posted report that they were concerned about misinformation about the upcoming elections in India.
Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.
The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed. It also noted how Facebook had created a “political white list to limit P.R. risk,” essentially a list of politicians who received a special exemption from fact-checking.
The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.
In a separate report produced after the elections, Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.
A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.
In the internal document, called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on Facebook.
The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” and misinformation claiming that the Quran, the holy book of Islam, calls for men to rape their female family members.
Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist group with close ties to India’s ruling Bharatiya Janata Party, or B.J.P. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.
Facebook knew that such harmful posts proliferated on its platform, the report indicated, and it needed to improve its “classifiers,” which are automated systems that can detect and remove posts containing violent and inciting language. Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.
Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.
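As a rough illustration of why limited language coverage leaves content that “is never flagged or actioned,” the sketch below shows an automated pipeline that can only score posts written in languages it has classifiers for; everything else passes through unscanned unless a user report sends it to a human reviewer. The language list, the scoring stub and the moderate function are hypothetical placeholders, not Facebook’s actual systems.

```python
# Illustrative sketch (not Facebook's code): posts in languages without a trained
# classifier are never scored automatically, so violations in those languages
# surface only if users report them to human reviewers.
SUPPORTED_LANGUAGES = {"en", "es", "fr", "de", "pt"}  # hypothetical coverage list

def toy_violation_score(text: str) -> float:
    """Stand-in for a trained hate-speech classifier; returns a made-up probability."""
    return 0.9 if "dehumanizing" in text.lower() else 0.1

def moderate(post_text: str, language_code: str) -> str:
    """Decide what happens to a post based on language coverage and the toy score."""
    if language_code not in SUPPORTED_LANGUAGES:
        # No model for this language: the post is never flagged or actioned
        # unless a user report routes it to a human reviewer.
        return "not scanned (relies on user reports)"
    score = toy_violation_score(post_text)
    return "flagged for human review" if score >= 0.8 else "left up"

if __name__ == "__main__":
    print(moderate("a post with dehumanizing language", "en"))       # flagged for human review
    print(moderate("the same post in an uncovered language", "xx"))  # not scanned (relies on user reports)
```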
Five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group linked with the B.J.P., to publish posts containing anti-Muslim narratives on the platform.
Facebook is considering designating the group as a dangerous organization because it is “inciting religious violence” on the platform, the document showed. But it has not yet done so.
“Join the group and help to run the group; increase the number of members of the group, friends,” said one post seeking recruits on Facebook to spread Bajrang Dal’s messages. “Fight for truth and justice until the unjust are destroyed.”