Mr. Gregory said a similar phenomenon had occurred with social media posts claiming various liquids turned at-home coronavirus tests positive.

On Dec. 23, 2020, a YouTube video showed coronavirus tests turning positive after being exposed to kiwi, orange and berry juices. It collected more than 102,000 views. The same month, a video producing the same results with Coca-Cola was posted on YouTube, collecting 16,800 views.

One year later, a spate of similar videos with the same theme appeared on TikTok and Instagram.

For Ms. Koltai, the re-emergence of false narratives even after social media companies labeled them a year earlier shows the power of misinformation to “thrive when it can latch on to a current event.”

“That is how narratives can peak at different times,” she said.

Tucker Carlson’s ‘Patriot Purge’ Special Leads Two Fox News Contributors to Quit

For his part, Mr. Goldberg said he has been thinking about William F. Buckley, the late founder of National Review, who saw as part of his mission “imposing seriousness on conservative arguments” and purging some extreme fringe groups, including the John Birch Society, from the right.

“Whether it’s ‘Patriot Purge’ or anti-vax stuff, I don’t want it in my name, and I want to call it out and criticize it,” Mr. Goldberg said. “I don’t want to feel like I am betraying a trust that I had by being a Fox News contributor. And I also don’t want to be accused of not really pulling the punches. And then this was just an untenable tension for me.”

Now, their views have put them outside the current Republican mainstream, or at least outside what mainstream right-wing institutions and politicians are willing to say out loud. But while in recent years both appeared occasionally on the evening show “Special Report” and on “Fox News Sunday,” which the network classifies as news, it’s been years since they were welcome on Fox’s prime time, and Mr. Goldberg clashed bitterly with the prime-time host Sean Hannity in 2016. (Mr. Hayes and Mr. Goldberg emailed their readers Sunday to announce their departure.)

Despite the former contributors’ hopes, Fox’s programming has hewed to Mr. Trump’s line, as have its personnel moves. The network, for instance, fired the veteran political editor who accurately projected Mr. Biden’s victory in the key state of Arizona on election night, and has hired the former Trump White House press secretary Kayleigh McEnany.

Mr. Hayes and Mr. Goldberg are the first members of Fox’s payroll to resign over “Patriot Purge,” but others have signaled their unhappiness. Geraldo Rivera, a Fox News correspondent since 2001, captured the difficulty of internal dissent at the network when he voiced cautious criticism of Mr. Carlson and “Patriot Purge” to my colleague Michael Grynbaum. “I worry that — and I’m probably going to get in trouble for this — but I’m wondering how much is done to provoke, rather than illuminate,” he said.

On air, two programs with smaller audiences than Mr. Carlson’s scrambled after his special to rebut the false theories presented in “Patriot Purge.” “Special Report” called in a former C.I.A. officer on Oct. 29 to debunk “false flag” theories. And on “Fox News Sunday,” Chris Wallace turned the same question over to one of Mr. Trump’s few foes in the Republican congressional delegation, Representative Liz Cheney of Wyoming.

Mr. Carlson called Mr. Hayes’s and Mr. Goldberg’s resignations “great news” in a telephone interview on Sunday. “Our viewers will be grateful.”

Covid-19 Misinformation Goes Unchecked on Radio and Podcasts

On a recent episode of his podcast, Rick Wiles, a pastor and self-described “citizen reporter,” endorsed a conspiracy theory: that Covid-19 vaccines were the product of a “global coup d’état by the most evil cabal of people in the history of mankind.”

“It’s an egg that hatches into a synthetic parasite and grows inside your body,” Mr. Wiles said on his Oct. 13 episode. “This is like a sci-fi nightmare, and it’s happening in front of us.”

Mr. Wiles belongs to a group of hosts who have made false or misleading statements about Covid-19 and effective treatments for it. Like many of them, he has access to much of his listening audience because his show appears on a platform provided by a large media corporation.

Mr. Wiles’s podcast is available through iHeartMedia, an audio company based in San Antonio that says it reaches nine out of 10 Americans each month. Spotify and Apple also provide major audio platforms to hosts who have shared similar views about Covid-19 and vaccination efforts with their listeners, or who have hosted guests promoting such notions.

The vaccines have been shown to protect people against the coronavirus for long periods and have significantly reduced the spread of Covid-19. As the global death toll related to Covid-19 exceeds five million — and at a time when more than 40 percent of Americans are not fully vaccinated — iHeart, Spotify, Apple and many smaller audio companies have done little to rein in what radio hosts and podcasters say about the virus and vaccination efforts.

“There’s really no curb on it,” said Jason Loviglio, an associate professor of media and communication studies at the University of Maryland, Baltimore County. “There’s no real mechanism to push back, other than advertisers boycotting and corporate executives saying we need a culture change.”

Audio industry executives appear less likely than their counterparts in social media to try to check dangerous speech. TruNews, a conservative Christian media outlet founded by Mr. Wiles, who used the phrase “Jew coup” to describe efforts to impeach former President Donald J. Trump, has been banned by YouTube. His podcast remains available on iHeart.

Asked about his false statements concerning Covid-19 vaccines, Mr. Wiles described pandemic mitigation efforts as “global communism.” “If the Needle Nazis win, freedom is over for generations, maybe forever,” he said in an email.

The reach of radio shows and podcasts is great, especially among young people: A recent survey from the National Research Group, a consulting firm, found that 60 percent of listeners under 40 get their news primarily through audio, a type of media they say they trust more than print or video.

Marc Bernier, a radio host in Daytona Beach, Fla., died of Covid-19 complications in August after making the unfounded claim that “45,000 people have died from taking the vaccine.” In his final Twitter post, on July 30, Mr. Bernier accused the government of “acting like Nazis” for encouraging Covid-19 vaccines.

Jimmy DeYoung Sr., whose program was available on iHeart, Apple and Spotify, died of Covid-19 complications after making his show a venue for false or misleading statements about vaccines. One of his frequent guests was Sam Rohrer, a former Pennsylvania state representative who likened the promotion of Covid-19 vaccines to Nazi tactics and made a sweeping false statement. “This is not a vaccine, by definition,” Mr. Rohrer said on an April episode. “It is a permanent altering of my immune system, which God created to handle the kinds of things that are coming that way.” Mr. DeYoung thanked his guest for his “insight.” Mr. DeYoung died four months later.

The scientist who conducted the Marek’s disease research has said his work has been “misinterpreted” by anti-vaccine activists. He added that Covid-19 vaccines have been found to reduce transmission substantially, whereas chickens inoculated with the Marek’s disease vaccine were still able to transmit that disease. Mr. Sexton, a host who cited the research on his show, did not reply to a request for comment.

iHeart, which offers more than 600 podcasts and operates a vast online archive of audio programs, has rules for the podcasters on its platform prohibiting them from making statements that incite hate, promote Nazi propaganda or are defamatory. It would not say whether it has a policy concerning false statements on Covid-19 or vaccination efforts.

Apple’s content guidelines for podcasts prohibit “content that may lead to harmful or dangerous outcomes, or content that is obscene or gratuitous.” Apple did not reply to requests for comment for this article.

Spotify, which says its podcast platform has 299 million monthly listeners, prohibits hate speech in its guidelines. In response to inquiries, the company said in a written statement that it also prohibits content “that promotes dangerous false or dangerous deceptive content about Covid-19, which may cause offline harm and/or pose a direct threat to public health.” The company added that it had removed content that violated its policies. But the episode featuring Mr. DeYoung’s conversation with Mr. Rohrer was still available on Spotify.

Dawn Ostroff, Spotify’s chief content and advertising business officer, said at a conference last month that the company was making “very aggressive moves” to invest more in content moderation. “There’s a difference between the content that we make and the content that we license and the content that’s on the platform,” she said, “but our policies are the same no matter what type of content is on our platform. We will not allow any content that infringes or that in any way is inaccurate.”

The audio industry has not drawn the same scrutiny as large social media companies, whose executives have been questioned in congressional hearings about the platforms’ role in spreading false or misleading information.

The social media giants have made efforts over the last year to stop the flow of false reports related to the pandemic. In September, YouTube said it was banning the accounts of several prominent anti-vaccine activists. It also removes or de-emphasizes content it deems to be misinformation or close to it. Late last year, Twitter announced that it would remove posts and ads with false claims about coronavirus vaccines. Facebook followed suit in February, saying it would remove false claims about vaccines generally.

now there’s podcasting.”

The Federal Communications Commission, which grants licenses to companies using the public airwaves, has oversight over radio operators, but not podcasts or online audio, which do not make use of the public airwaves.

The F.C.C. is barred from violating American citizens’ right to free speech. When it takes action against a media company over programming, it is typically in response to complaints about content considered obscene or indecent, as when it fined a Virginia television station in 2015 for a newscast that included a segment on a pornographic film star.

In a statement, an F.C.C. spokesman said the agency “reviews all complaints and determines what is actionable under the Constitution and the law.” The statement added that the main responsibility for what goes on the air lies with radio station owners, saying that “broadcast licensees have a duty to act in the public interest.”

The world of talk radio and podcasting is huge, and anti-vaccine sentiment is a small part of it. iHeart offers an educational podcast series about Covid-19 vaccines, and Spotify created a hub for podcasts about Covid-19 from news outlets including ABC and Bloomberg.

At least one host went on the air this year, describing his decision to get vaccinated and encouraging his listeners to do the same.

Recently, he expressed his eagerness to get a booster shot and mentioned that he had picked up a new nickname: “The Vaxxinator.”

Trump Allies Help Bolsonaro Sow Doubt in Brazil’s Elections

BRASÍLIA — The conference hall was packed, with a crowd of more than 1,000 cheering attacks on the press, liberals and political correctness. There was Donald Trump Jr. warning that the Chinese could meddle in the election, a Tennessee congressman who voted against certifying the 2020 vote, and the president complaining about voter fraud.

In many ways, the September gathering looked like just another CPAC, the conservative political conference. But it was happening in Brazil, most of it was in Portuguese and the president at the lectern was Jair Bolsonaro, the country’s right-wing leader.

Fresh from their assault on the results of the 2020 U.S. presidential election, former President Donald J. Trump and his allies are exporting their strategy to Latin America’s largest democracy, working to support Mr. Bolsonaro’s bid for re-election next year — and helping sow doubt in the electoral process in the event that he loses.

Those allies include Stephen K. Bannon, Mr. Trump’s former chief strategist, and Mike Lindell, the pillow executive being sued for defaming voting-machine makers.

Independent academics, Brazil’s electoral officials and the U.S. government have all said that there has not been fraud in Brazil’s elections. Eduardo Bolsonaro, a Brazilian congressman and the president’s son, has insisted there was. “I can’t prove — they say — that I have fraud,” he said in South Dakota. “So, OK, you can’t prove that you don’t.”

Mr. Trump’s circle has cozied up to other far-right leaders, including in Hungary, Poland and the Philippines, and tried to boost rising nationalist politicians elsewhere. But the ties are the strongest, and the stakes perhaps the highest, in Brazil.

WhatsApp groups for Bolsonaro supporters recently began circulating the trailer for a new series from Fox News host Tucker Carlson that sympathizes with the Jan. 6 Capitol riot, Mr. Nemer said. The United States, which has been a democracy for 245 years, withstood that attack. Brazil passed its constitution in 1988 after two decades under a military dictatorship.

American officials have advised President Bolsonaro to respect the democratic process.

In October, 64 members of Congress asked President Biden for a reset in the United States’ relationship with Brazil, citing President Bolsonaro’s pursuit of policies that threaten democratic rule. In response, Brazil’s ambassador to the United States defended President Bolsonaro, saying debate over election security is normal in democracies. “Brazil is and will continue to be one of the world’s freest countries,” he said.

At home, President Bolsonaro is increasingly embattled. Unemployment and inflation have risen. He has been operating without a political party for two years. And Brazil’s Supreme Court and Congress are pressing investigations into him, his sons and his allies.

Late last month, a Brazilian congressional panel recommended that President Bolsonaro be charged with “crimes against humanity,” asserting that he intentionally let the coronavirus tear through Brazil in a bid for herd immunity. The panel blamed his administration for more than 100,000 deaths.

Minutes after the panel voted, Mr. Trump issued his endorsement. “Brazil is lucky to have a man such as Jair Bolsonaro working for them,” he said in a statement. “He is a great president and will never let the people of his great country down!”

The affinity between the two leaders was instant.

“They say he’s the Donald Trump of South America,” Mr. Trump said in 2019. “I like him.”

To many others, Mr. Bolsonaro was alarming. As a congressman and candidate, he had waxed poetic about Brazil’s military dictatorship, which tortured its political rivals. He said he would be incapable of loving a gay son. And he said a rival congresswoman was too ugly to be raped.

Three months into his term, President Bolsonaro went to Washington. At his welcome dinner, the Brazilian embassy sat him next to Mr. Bannon. At the White House later, Mr. Trump and Mr. Bolsonaro made deals that would allow the Brazilian government to spend more with the U.S. defense industry and American companies to launch rockets from Brazil.

Mr. Bannon announced that Eduardo Bolsonaro would represent South America in The Movement, a right-wing, nationalist group that Mr. Bannon envisioned taking over the Western world. In the news release, Eduardo Bolsonaro said they would “reclaim sovereignty from progressive globalist elitist forces.”

The two governments signed pacts to increase commerce. American investors plowed billions of dollars into Brazilian companies. And Brazil spent more on American imports, including fuel, plastics and aircraft.

Now a new class of companies is salivating over Brazil: conservative social networks.

Gettr and Parler, two Twitter clones, have grown rapidly in Brazil by promising a hands-off approach to people who believe Silicon Valley is censoring conservative voices. One of their highest-profile recruits is President Bolsonaro.

Gettr is partly funded by Guo Wengui, an exiled Chinese billionaire who is close to Mr. Bannon. (When Mr. Bannon was arrested on fraud charges, he was on Mr. Guo’s yacht.) Parler is funded by Rebekah Mercer, the American conservative megadonor who was Mr. Bannon’s previous benefactor.

Companies like Gettr and Parler could prove critical to President Bolsonaro. Like Mr. Trump, he built his political movement with social media. But now Facebook, YouTube and Twitter are more aggressively policing hate speech and misinformation. They blocked Mr. Trump and have started cracking down on President Bolsonaro. Last month, YouTube suspended his channel for a week after he falsely suggested coronavirus vaccines could cause AIDS.

In response, President Bolsonaro has tried to ban the companies from removing certain posts and accounts, but his policy was overturned. Now he has been directing his supporters to follow him elsewhere, including on Gettr, Parler and Telegram, a messaging app based in Dubai.

He will likely soon have another option. Last month, Mr. Trump announced he was starting his own social network. The company financing his new venture is partly led by Luiz Philippe de Orleans e Bragança, a Brazilian congressman and Bolsonaro ally.

Eduardo Bolsonaro said the rioters’ efforts were weak. “If it were organized, they would have taken the Capitol and made demands,” he said.

The day after the riot, President Bolsonaro warned that Brazil was “going to have a worse problem” if it didn’t change its own electoral systems, which rely on voting machines without paper backups. (Last week, he suddenly changed his tune after announcing that he would have Brazil’s armed forces monitor the election.)

Diego Aranha, a Brazilian computer scientist who studies the country’s election systems, said that Brazil’s system does make elections more vulnerable to attacks — but that there has been no evidence of fraud.

“Bolsonaro turned a technical point into a political weapon,” he said.

President Bolsonaro’s American allies have helped spread his claims.

At the CPAC in Brazil, Donald Trump Jr. told the audience that if they didn’t think the Chinese were aiming to undermine their election, “you haven’t been watching.” Mr. Bannon has called President Bolsonaro’s likely opponent, former President Luiz Inácio Lula da Silva, a “transnational, Marxist criminal” and “the most dangerous leftist in the world.” Mr. da Silva served 18 months in prison but his corruption charges were later tossed out by a Supreme Court justice.

Eduardo Bolsonaro’s slide show detailing claims of Brazilian voter fraud, delivered in South Dakota, was broadcast by One America News, a conservative cable network that reaches 35 million U.S. households. It was also translated into Portuguese and viewed nearly 600,000 times on YouTube and Facebook.

In September, President Bolsonaro called his supporters into the streets to protest his enemies in the Supreme Court and on the left.

The weekend before, just down the road from the presidential palace, Mr. Bolsonaro’s closest allies gathered at CPAC. Eduardo Bolsonaro and the American Conservative Union, the Republican lobbying group that runs CPAC, organized the event. Eduardo Bolsonaro’s political committee mostly financed it. Tickets sold out.

At the rally, President Bolsonaro delivered a fiery speech. Then he flew to São Paulo, where he cited the detainment of Jason Miller, Gettr’s chief executive, as evidence of judicial overreach. He told the crowd he would no longer recognize decisions from a Supreme Court judge.

He then turned to the election.

“We have three alternatives for me: Prison, death or victory,” he said. “Tell the bastards I’ll never be arrested.”

Leonardo Coelho and Kenneth P. Vogel contributed reporting.

In Romania, Hard-Hit by Covid, Doctors Fight Vaccine Refusal

COPACENI, Romania — As a new wave of the coronavirus pandemic crashed over Eastern Europe last month, devastating unvaccinated populations, an Orthodox Church bishop in southern Romania offered solace to his flock: “Don’t be fooled by what you see on TV — don’t be scared of Covid.”

Most important, Bishop Ambrose of Giurgiu told worshipers in this small Romanian town on Oct. 14, “don’t rush to get vaccinated.”

The bishop is now under criminal investigation by the police for spreading dangerous disinformation, but his anti-vaccine clarion call, echoed by prominent politicians, influential voices on the internet and many others, helps explain why Romania has in recent weeks reported the world’s highest per capita death rate from Covid-19.

On Tuesday, nearly 600 Romanians died, the most during the pandemic. The country’s death rate relative to population is almost seven times as high as the United States’, and almost 17 times as high as Germany’s.

Romania has Europe’s second-lowest vaccination rate; around 44 percent of adults have had at least one dose, ahead of only Bulgaria, at 29 percent. Overall, the European Union stands at 81 percent, with several countries above 90 percent. Complicating matters, Romania has been without a government since last month, when a centrist coalition unraveled.

As Dr. Streinu-Cercel, an infectious disease doctor in Bucharest, spoke, a 66-year-old Covid patient, Nicu Paul, gasped for breath on a bed nearby. His wife, Maria, also suffering severe pulmonary problems from Covid, lay in the next bed. Mr. Paul said he had worked for 40 years as an ambulance driver and never gotten sick — “God saved me,” he said — so he decided against vaccination because “there are so many rumors about the vaccine that I did not know what to believe.”

Romania began vaccinating its citizens last December and put the program under the military, the country’s most respected institution, according to opinion polls. The second most trusted institution, however, is the Orthodox church, which has sent mixed signals on vaccines, with Patriarch Daniel in Bucharest telling people to make up their own minds and listen to doctors, while many local clerics and some influential bishops denounced vaccines as the Devil’s work.

Col. Valeriu Gheorghita, the army physician who coordinates Romania’s vaccination campaign, said he had been shocked and mystified by the reach of anti-vaccination sentiment. “They really believe that vaccines are not the proper way to stop Covid,” he said, adding that this was despite the fact that “more than 90 percent of deaths are unvaccinated people.” Old people, the most vulnerable demographic, have been the hardest to convince, he said, with only 25 percent of people over 80 vaccinated.

In central Bucharest, huge signs display photographs of gravely ill patients in hospitals as part of a campaign to jolt people back to reality. “They are suffocating. They beg. They regret,” reads a caption.

Dr. Streinu-Cercel said she was uneasy with trying to reach people by scaring them. “We should be talking about science, not fear,” she said, but “fear is the only thing that got the attention of the general population.”

Distrust of just about everyone and everything is so deep, she said, that some of her patients “are gasping for breath but tell me that Covid does not exist.”

“It is very difficult when so many people are denying all reality,” she added.

At a vaccination center at her hospital, only a trickle of people pass through most days, though vaccines are free and increasingly necessary following new rules requiring vaccination certificates to enter many public buildings.

One of those getting vaccinated was Norica Gheorghe, 82. She said she had held off for months on getting a shot but decided to go ahead this past week after seeing reports that nearly 600 had died in one day. “My hair stood on end when I saw this number, and I decided that I should get vaccinated,” she said.

At the start of the pandemic in 2020, Covid disinformation in Romania mostly followed themes that found traction in many other countries, according to Alina Bargaoanu, a Bucharest communications professor who tracks disinformation, with people spreading wild conspiracy theories under fake names on Facebook and other social media.

But as the pandemic dragged on, she added, this largely fake virtual phenomenon morphed into a political movement driven by real people like Diana Sosoaca, an elected member of Romania’s upper house of Parliament. Ms. Sosoaca led a protest in the north of the country that blocked the opening of a vaccination center, denouncing the pandemic as “the biggest lie of the century,” and organized anti-mask rallies in Bucharest. Videos of her antics have attracted millions of views.

Ms. Bargaoanu, the disinformation researcher, said she suspected a Russian hand in spreading alarm over vaccines, but conceded that many of the most popular anti-vaccination conspiracy theories originate in the United States, making them particularly hard to debunk because “Romania is a very pro-American country.”

Colonel Gheorghita has taken to social media to rebut the more outlandish falsehoods, and he has also met with Christian, Jewish and Muslim leaders to ask them not to fan the flames of disinformation. “They don’t have a duty to recommend vaccination but they do have a duty not to recommend against it,” he said.

The Orthodox church is particularly important because of its strong influence in rural areas, where vaccination rates are half those in cities like Bucharest, where more than 80 percent of adults have received at least one shot.

In Copaceni, a rural community south of Bucharest, workers at a small clinic offering vaccines said they were appalled by Bishop Ambrose’s anti-vaccine tirades.

“I am fighting to get people vaccinated every day, and then he comes along and tells them not to bother,” said Balota Hajnalka, a doctor running the clinic.

Boryana Dzhambazova contributed reporting from Sofia, Bulgaria, and Anton Troianovski from Moscow.

Facebook Debates What to Do With Its Like and Share Buttons

SAN FRANCISCO — In 2019, Facebook researchers began a new study of one of the social network’s foundational features: the Like button.

They examined what people would do if Facebook removed the distinct thumbs-up icon and other emoji reactions from posts on its photo-sharing app Instagram, according to company documents. The buttons had sometimes caused Instagram’s youngest users “stress and anxiety,” the researchers found, especially if posts didn’t get enough Likes from friends.

But the researchers discovered that when the Like button was hidden, users interacted less with posts and ads. At the same time, it did not alleviate teenagers’ social anxiety and young users did not share more photos, as the company thought they might, leading to a mixed bag of results.

Mark Zuckerberg, Facebook’s chief executive, and other managers discussed hiding the Like button for more Instagram users, according to the documents. In the end, a larger test was rolled out in just a limited capacity to “build a positive press narrative” around Instagram.

As Facebook has come under scrutiny over misinformation, privacy and hate speech, a central issue has been whether the basic way that the platform works has been at fault — essentially, the features that have made Facebook be Facebook.

Apart from the Like button, Facebook has scrutinized its share button, which lets users instantly spread content posted by other people; its groups feature, which is used to form digital communities; and other tools that define how more than 3.5 billion people behave and interact online. The research, laid out in thousands of pages of internal documents, underlines how the company has repeatedly grappled with what it has created.

What researchers found was often far from positive. Time and again, they determined that people misused key features or that those features amplified toxic content, among other effects. In an August 2019 internal memo, several researchers said it was Facebook’s “core product mechanics” — meaning the basics of how the product functioned — that had let misinformation and hate speech flourish on the site.

“The mechanics of our platform are not neutral,” they concluded.

Facebook has made some changes, including letting people hide posts they do not want to see and turning off political group recommendations to reduce the spread of misinformation.

But the core way that Facebook operates — a network where information can spread rapidly and where people can accumulate friends and followers and Likes — ultimately remains largely unchanged.

Many significant modifications to the social network were blocked in the service of growth and keeping users engaged, some current and former executives said. Facebook is valued at more than $900 billion.

“There’s a gap between the fact that you can have pretty open conversations inside of Facebook as an employee,” said Brian Boland, a Facebook vice president who left last year. “Actually getting change done can be much harder.”

The company documents are part of the Facebook Papers, a cache provided to the Securities and Exchange Commission and to Congress by a lawyer representing Frances Haugen, a former Facebook employee who has become a whistle-blower. Ms. Haugen earlier gave the documents to The Wall Street Journal. This month, a congressional staff member supplied the redacted disclosures to more than a dozen other news organizations, including The New York Times.

In a statement, Andy Stone, a Facebook spokesman, criticized articles based on the documents, saying that they were built on a “false premise.”

“Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie,” he said. He said Facebook had invested $13 billion and hired more than 40,000 people to keep people safe, adding that the company has called “for updated regulations where democratic governments set industry standards to which we can all adhere.”

In a post this month, Mr. Zuckerberg said it was “deeply illogical” that the company would give priority to harmful content, because Facebook’s advertisers don’t want to buy ads on a platform that spreads hate and misinformation.

“At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted,” he wrote.

When Mr. Zuckerberg founded Facebook 17 years ago in his Harvard University dorm room, the site’s mission was to connect people on college campuses and bring them into digital groups with common interests and locations.

Growth exploded in 2006 when Facebook introduced the News Feed, a central stream of photos, videos and status updates posted by people’s friends. Over time, the company added more features to keep people interested in spending time on the platform.

In 2009, Facebook introduced the Like button. The tiny thumbs-up symbol, a simple indicator of people’s preferences, became one of the social network’s most important features. The company allowed other websites to adopt the Like button so users could share their interests back to their Facebook profiles.

That gave Facebook insight into people’s activities and sentiments outside of its own site, so it could better target them with advertising. Likes also signified what users wanted to see more of in their News Feeds so people would spend more time on Facebook.

Facebook also added the groups feature, where people join private communication channels to talk about specific interests, and pages, which allowed businesses and celebrities to amass large fan bases and broadcast messages to those followers.

Adam Mosseri, the head of Instagram, has said that research on users’ well-being led to investments in anti-bullying measures on Instagram.

Yet Facebook cannot simply tweak itself so that it becomes a healthier social network when so many problems trace back to core features, said Jane Lytvynenko, a senior fellow at the Harvard Kennedy Shorenstein Center, who studies social networks and misinformation.

“When we talk about the Like button, the share button, the News Feed and their power, we’re essentially talking about the infrastructure that the network is built on top of,” she said. “The crux of the problem here is the infrastructure itself.”

As Facebook’s researchers dug into how its products worked, the worrisome results piled up.

In a July 2019 study of groups, researchers traced how members in those communities could be targeted with misinformation. The starting point, the researchers said, were people known as “invite whales,” who sent invitations out to others to join a private group.

These people were effective at getting thousands to join new groups so that the communities ballooned almost overnight, the study said. Then the invite whales could spam the groups with posts promoting ethnic violence or other harmful content, according to the study.

Another 2019 report looked at how some people accrued large followings on their Facebook pages, often using posts about cute animals and other innocuous topics. But once a page had grown to tens of thousands of followers, the founders sold it. The buyers then used the pages to show followers misinformation or politically divisive content, according to the study.

As researchers studied the Like button, executives considered hiding the feature on Facebook as well, according to the documents. In September 2019, the company removed Likes from users’ Facebook posts in a small experiment in Australia.

The company wanted to see if the change would reduce pressure and social comparison among users. That, in turn, might encourage people to post more frequently to the network.

But people did not share more posts after the Like button was removed. Facebook chose not to roll the test out more broadly, noting, “Like counts are extremely low on the long list of problems we need to solve.”

Last year, company researchers also evaluated the share button. In a September 2020 study, a researcher wrote that the button and so-called reshare aggregation units in the News Feed, which are automatically generated clusters of posts that have already been shared by people’s friends, were “designed to attract attention and encourage engagement.”

But left unchecked, the features could “serve to amplify bad content and sources,” such as bullying and borderline nudity posts, the researcher said.

That’s because the features made people less hesitant to share posts, videos and messages with one another. In fact, users were three times more likely to share any kind of content from the reshare aggregation units, the researcher said.
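
For readers unfamiliar with the mechanism, a reshare aggregation unit can be thought of as a grouping step over friends’ recent shares. The sketch below illustrates that idea only; the data shapes, the `min_shares` cutoff and the function name are assumptions for illustration, not Facebook’s actual code.

```python
from collections import defaultdict

def reshare_aggregation_units(friend_shares, min_shares=2):
    """Group friends' reshares of the same post into one feed unit.

    `friend_shares` is assumed to be an iterable of (friend_id, post_id)
    pairs drawn from one user's network. Any post reshared by at least
    `min_shares` friends becomes a single cluster, which is what makes
    the unit feel socially endorsed and frictionless to reshare onward.
    """
    by_post = defaultdict(list)
    for friend_id, post_id in friend_shares:
        by_post[post_id].append(friend_id)
    return [
        {"post_id": post_id, "shared_by": friends}
        for post_id, friends in by_post.items()
        if len(friends) >= min_shares
    ]
```

The design choice the researcher flagged is visible even in this toy version: whatever clears the cutoff is surfaced to many more people at once, with a built-in signal of social endorsement attached.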

One post that spread widely this way was an undated message from an account called “The Angry Patriot.” The post notified users that people protesting police brutality were “targeting a police station” in Portland, Ore. After it was shared through reshare aggregation units, hundreds of hate-filled comments flooded in. It was an example of “hate bait,” the researcher said.

A common thread in the documents was how Facebook employees argued for changes in how the social network worked and often blamed executives for standing in the way.

In an August 2020 internal post, a Facebook researcher criticized the recommendation system that suggests pages and groups for people to follow and said it can “very quickly lead users down the path to conspiracy theories and groups.”

“Out of fears over potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity harms,” the researcher wrote. “During the time that we’ve hesitated, I’ve seen folks from my hometown go further and further down the rabbit hole” of conspiracy theory movements like QAnon and anti-vaccination and Covid-19 conspiracies.

The researcher added, “It has been painful to observe.”

Reporting was contributed by Davey Alba, Sheera Frenkel, Cecilia Kang and Ryan Mac.

In India, Facebook Struggles to Combat Misinformation and Hate Speech

On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.
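
The method amounts to a recommendation-driven audit: accept everything the platform suggests and record where it leads. Below is a minimal sketch of that loop; the `client` wrapper and its method names are hypothetical stand-ins, not Facebook’s real interfaces.

```python
def follow_all_recommendations(client, days=21):
    """Accept every recommendation served to a fresh test account.

    `client` is a hypothetical wrapper around the platform under audit;
    each recommendation is assumed to expose a `kind` and a `target_id`.
    Returns a log of everything the account was steered toward, for
    later manual review.
    """
    log = []
    for day in range(days):
        for rec in client.fetch_recommendations():
            if rec.kind == "group":
                client.join_group(rec.target_id)
            elif rec.kind == "video":
                client.watch_video(rec.target_id)
            elif rec.kind == "page":
                client.follow_page(rec.target_id)
            log.append((day, rec.kind, rec.target_id))
    return log
```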

The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

The documents show how bots and fake accounts tied to the country’s ruling party and to opposition figures were wreaking havoc on national elections. They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.

In Myanmar, the military staged a violent coup early this year. Facebook said that after the coup, it implemented a special policy to remove praise and support of violence in the country, and later banned the Myanmar military from Facebook and Instagram.

In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.

Facebook has invested significantly in technology to find hate speech in various languages, including Hindi and Bengali, two of the most widely used languages, Mr. Stone said. He added that Facebook reduced the amount of hate speech that people see globally by half this year.

During the researcher’s test, a suicide bombing in the disputed border region of Kashmir set off a round of violence and a spike in accusations, misinformation and conspiracy theories between Indian and Pakistani nationals.

After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.

Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined.

After the researcher shared her case study with co-workers, colleagues commented on the posted report that they were concerned about misinformation ahead of the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed. It also noted how Facebook had created a “political white list to limit P.R. risk,” essentially a list of politicians who received a special exemption from fact-checking.

The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate report produced after the elections, Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.

A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.

In the internal document, called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on Facebook.

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” and misinformation claiming that the Quran, the holy book of Islam, calls for men to rape their female family members.

Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist group with close ties to India’s ruling Bharatiya Janata Party, or B.J.P. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.

Facebook knew that such harmful posts proliferated on its platform, the report indicated, and it needed to improve its “classifiers,” which are automated systems that can detect and remove posts containing violent and inciting language. Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.

Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.
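
In outline, a “classifier” of this kind is a scoring model gated by a per-language threshold, and a language without enough training data effectively has no gate at all. Here is a minimal sketch of that gap, with the model interface, language codes and thresholds all assumed for illustration rather than taken from Facebook’s systems.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    score: float     # model's estimate that the post violates policy
    actioned: bool   # True if the post is removed or demoted automatically

# Hypothetical per-language thresholds. None stands in for a language
# without enough training data: content there is "never flagged or
# actioned" automatically, mirroring the gap the report describes.
THRESHOLDS = {"en": 0.85, "hi": None, "bn": None}

def moderate(post_text, lang, model):
    score = model.score(post_text)  # assumed interface: returns 0.0-1.0
    threshold = THRESHOLDS.get(lang)
    if threshold is None:
        # No reliable classifier for this language, so nothing is
        # actioned unless a human reviewer intervenes.
        return Verdict(score=score, actioned=False)
    return Verdict(score=score, actioned=score >= threshold)
```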

Five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group linked with the B.J.P., to publish posts containing anti-Muslim narratives on the platform.

Facebook is considering designating the group as a dangerous organization because it is “inciting religious violence” on the platform, the document showed. But it has not yet done so.

“Join the group and help to run the group; increase the number of members of the group, friends,” said one post seeking recruits on Facebook to spread Bajrang Dal’s messages. “Fight for truth and justice until the unjust are destroyed.”

Ryan Mac, Cecilia Kang and Mike Isaac contributed reporting.

What Happened When Facebook Employees Warned About Election Misinformation

WHAT HAPPENED

1. From Wednesday through Saturday there was a lot of content circulating which implied fraud in the election, at around 10% of all civic content and 1-2% of all US VPVs. There was also a fringe of incitement to violence.

2. There were dozens of employees monitoring this, and FB launched ~15 measures prior to the election, and another ~15 in the days afterwards. Most of the measures made existing processes more aggressive: e.g. by lowering thresholds, by making penalties more severe, or expanding eligibility for existing measures. Some measures were qualitative: reclassifying certain types of content as violating, which had not been before.

3. I would guess these measures reduced prevalence of violating content by at least 2X. However they had collateral damage (removing and demoting non-violating content), and the episode caused noticeable resentment by Republican Facebook users who feel they are being unfairly targeted.
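
Point 2’s description of “making existing processes more aggressive” corresponds to a familiar configuration pattern, sometimes called break-glass measures: the same enforcement pipeline rerun with tightened parameters. A minimal sketch follows, in which every key and number is an illustrative assumption, not Facebook’s actual configuration.

```python
# Baseline enforcement configuration (illustrative values only).
BASELINE = {
    "demotion_threshold": 0.90,    # classifier score needed to demote a post
    "strike_penalty_days": 1,      # suspension length per violation
    "eligible_surfaces": ["feed"], # where the measures apply
}

# Election-period variant: lowered thresholds, more severe penalties
# and expanded eligibility, as the memo describes.
ELECTION_MEASURES = {
    **BASELINE,
    "demotion_threshold": 0.70,               # lower bar catches more content
    "strike_penalty_days": 7,                 # harsher penalties
    "eligible_surfaces": ["feed", "groups"],  # wider eligibility
}
```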

Inside Facebook’s Push to Defend Its Image

The changes have involved Facebook executives from its marketing, communications, policy and integrity teams. Alex Schultz, a 14-year company veteran who was named chief marketing officer last year, has also been influential in the image reshaping effort, said five people who worked with him. But at least one of the decisions was driven by Mr. Zuckerberg, and all were approved by him, three of the people said.

Joe Osborne, a Facebook spokesman, denied that the company had changed its approach.

“People deserve to know the steps we’re taking to address the different issues facing our company — and we’re going to share those steps widely,” he said in a statement.

For years, Facebook executives have chafed at how their company appeared to receive more scrutiny than Google and Twitter, said current and former employees. They attributed that attention to Facebook’s leaving itself more exposed with its apologies and providing access to internal data, the people said.

So in January, executives held a virtual meeting and broached the idea of a more aggressive defense, one attendee said. The group discussed using the News Feed to promote positive news about the company, as well as running ads that linked to favorable articles about Facebook. They also debated how to define a pro-Facebook story, two participants said.

That same month, the communications team discussed ways for executives to be less conciliatory when responding to crises and decided there would be less apologizing, said two people with knowledge of the plan.

Mr. Zuckerberg, who had become intertwined with policy issues including the 2020 election, also wanted to recast himself as an innovator, the people said. In January, the communications team circulated a document with a strategy for distancing Mr. Zuckerberg from scandals, partly by focusing his Facebook posts and media appearances on new products, they said.

The Information, a tech news site, previously reported on the document.

The impact was immediate. On Jan. 11, Sheryl Sandberg, Facebook’s chief operating officer — and not Mr. Zuckerberg — told Reuters that the storming of the U.S. Capitol a week earlier had little to do with Facebook. In July, when President Biden said the social network was “killing people” by spreading Covid-19 misinformation, Guy Rosen, Facebook’s vice president for integrity, disputed the characterization in a blog post and pointed out that the White House had missed its coronavirus vaccination goals.

How Facebook Relies on Accenture to Scrub Toxic Content

In 2010, Accenture scored an accounting contract with Facebook. By 2012, that had expanded to include a deal for moderating content, particularly outside the United States.

That year, Facebook sent employees to Manila and Warsaw to train Accenture workers to sort through posts, two former Facebook employees involved with the trip said. Accenture’s workers were taught to use a Facebook software system and the platform’s guidelines for leaving content up, taking it down or escalating it for review.

What started as a few dozen Accenture moderators grew rapidly.

By 2015, Accenture’s office in the San Francisco Bay Area had set up a team, code-named Honey Badger, just for Facebook’s needs, former employees said. Accenture went from providing about 300 workers in 2015 to about 3,000 in 2016. They are a mix of full-time employees and contractors, depending on the location and task.

The firm soon parlayed its work with Facebook into moderation contracts with YouTube, Twitter, Pinterest and others, executives said. (The digital content moderation industry is projected to reach $8.8 billion next year, according to Everest Group, roughly double the 2020 total.) Facebook also gave Accenture contracts in areas like checking for fake or duplicate user accounts and monitoring celebrity and brand accounts to ensure they were not flooded with abuse.

After federal authorities discovered in 2016 that Russian operatives had used Facebook to spread divisive posts to American voters for the presidential election, the company ramped up the number of moderators. It said it would hire more than 3,000 people — on top of the 4,500 it already had — to police the platform.

“If we’re going to build a safe community, we need to respond quickly,” Mr. Zuckerberg said in a 2017 post.

The next year, Facebook hired Arun Chandra, a former Hewlett Packard Enterprise executive, as vice president of scaled operations to help oversee the relationship with Accenture and others. His division is overseen by Ms. Sandberg.
