The European Union has approved landmark legislation called the Digital Services Act, which requires social media platforms like Twitter to more aggressively police their services for hate speech, misinformation and illicit content.

The new law will require Twitter and other social media companies with more than 45 million users in the European Union to conduct annual risk assessments about the spread of harmful content on their platforms and outline plans to combat the problem. If they are not seen as doing enough, the companies can be fined up to 6 percent of their global revenue, or even be banned from the European Union for repeat offenses.

Inside Twitter, frustrations have mounted over Mr. Musk’s moderation plans, and some employees have wondered if he would really halt their work at such a critical moment, when they are set to begin moderating tweets about elections in Brazil and a national election in the United States.

Adam Satariano contributed reporting.

How War in Ukraine Roiled Facebook and Instagram

Meta, which owns Facebook and Instagram, took an unusual step last week: It suspended some of the quality controls that ensure that posts from users in Russia, Ukraine and other Eastern European countries meet its rules.

Under the change, Meta temporarily stopped tracking whether its workers who monitor Facebook and Instagram posts from those areas were accurately enforcing its content guidelines, six people with knowledge of the situation said. That’s because the workers could not keep up with shifting rules about what kinds of posts were allowed about the war in Ukraine, they said.

Meta has made more than half a dozen content policy revisions since Russia invaded Ukraine last month. The company has permitted posts about the conflict that it would normally have taken down — including some calling for the death of President Vladimir V. Putin of Russia and violence against Russian soldiers — before changing its mind or drawing up new guidelines, the people said.

The result has been internal confusion, especially among the content moderators who patrol Facebook and Instagram for text and images with gore, hate speech and incitements to violence. Meta has sometimes shifted its rules on a daily basis, causing whiplash, said the people, who were not authorized to speak publicly.

Externally, Meta has contended with pressure from Russian and Ukrainian authorities over the information battle about the conflict. And internally, it has dealt with discontent about its decisions, including from Russian employees concerned for their safety and Ukrainian workers who want the company to be tougher on Kremlin-affiliated organizations online, three people said.

Meta has weathered international strife before — including the genocide of a Muslim minority in Myanmar last decade and skirmishes between India and Pakistan — with varying degrees of success. Now the largest conflict on the European continent since World War II has become a litmus test of whether the company has learned to police its platforms during major global crises — and so far, it appears to remain a work in progress.

“All the ingredients of the Russia-Ukraine conflict have been around for a long time: the calls for violence, the disinformation, the propaganda from state media,” said David Kaye, a law professor at the University of California, Irvine, and a former special rapporteur to the United Nations. “What I find mystifying was that they didn’t have a game plan to deal with it.”

Dani Lever, a Meta spokeswoman, declined to directly address how the company was handling content decisions and employee concerns during the war.

After Russia invaded Ukraine, Meta said it established a round-the-clock special operations team staffed by employees who are native Russian and Ukrainian speakers. It also updated its products to aid civilians in the war, including features that direct Ukrainians toward reliable, verified information to locate housing and refugee assistance.

Mark Zuckerberg, Meta’s chief executive, and Sheryl Sandberg, the chief operating officer, have been directly involved in the response to the war, said two people with knowledge of the efforts. But as Mr. Zuckerberg focuses on transforming Meta into a company that will lead the digital worlds of the so-called metaverse, many responsibilities around the conflict have fallen — at least publicly — to Nick Clegg, the president for global affairs.

Mr. Clegg announced that Meta would restrict access within the European Union to the pages of Russia Today and Sputnik, which are Russian state-controlled media, following requests by Ukraine and other European governments. Russia retaliated by cutting off access to Facebook inside the country, claiming the company discriminated against Russian media, and then blocking Instagram.

This month, President Volodymyr Zelensky of Ukraine praised Meta for moving quickly to limit Russian war propaganda on its platforms. Meta also acted rapidly to remove an edited “deepfake” video from its platforms that falsely featured Mr. Zelensky yielding to Russian forces.

Meta also allowed a group called the Ukrainian Legion to run ads on its platforms this month to recruit “foreigners” for the Ukrainian army, a violation of international laws. It later removed the ads — which were shown to people in the United States, Ireland, Germany and elsewhere — because the group may have misrepresented ties to the Ukrainian government, according to Meta.

Internally, Meta had also started changing its content policies to deal with the fast-moving nature of posts about the war. The company has long forbidden posts that might incite violence. But on Feb. 26, two days after Russia invaded Ukraine, Meta informed its content moderators — who are typically contractors — that it would allow calls for the death of Mr. Putin and “calls for violence against Russians and Russian soldiers in the context of the Ukraine invasion,” according to the policy changes, which were reviewed by The New York Times.

Reuters reported on Meta’s shifts with a headline that suggested that posts calling for violence against all Russians would be tolerated. In response, Russian authorities labeled Meta’s activities as “extremist.”

Shortly thereafter, Meta reversed course and said it would not let its users call for the deaths of heads of state.

“Circumstances in Ukraine are fast moving,” Mr. Clegg wrote in an internal memo that was reviewed by The Times and first reported by Bloomberg. “We try to think through all the consequences, and we keep our guidance under constant review because the context is always evolving.”

Meta amended other policies. This month, it made a temporary exception to its hate speech guidelines so users could post about the “removal of Russians” and “explicit exclusion against Russians” in 12 Eastern European countries, according to internal documents. But within a week, Meta tweaked the rule to note that it should be applied only to users in Ukraine.

The constant adjustments left moderators who oversee users in Central and Eastern European countries confused, the six people with knowledge of the situation said.

The policy changes were onerous because moderators were generally given less than 90 seconds to decide whether images of dead bodies, videos of limbs being blown off, or outright calls to violence violated Meta’s rules, they said. In some instances, they added, moderators were shown posts about the war in Chechen, Kazakh or Kyrgyz, despite not knowing those languages.

Ms. Lever declined to comment on whether Meta had hired content moderators who specialize in those languages.

At a company meeting this month, some employees asked why Meta had not moved sooner to take action against Russia Today and Sputnik, said two people who attended. Russian state activity was at the center of Facebook’s failure to protect the 2016 U.S. presidential election, they said, and it didn’t make sense that those outlets had continued to operate on Meta’s platforms.

While Meta has no employees in Russia, the company held a separate meeting this month for workers with Russian connections. Those employees said they were concerned that Moscow’s actions against the company would affect them, according to an internal document.

In discussions on Meta’s internal forums, which were viewed by The Times, some Russian employees said they had erased their place of work from their online profiles. Others wondered what would happen if they worked in the company’s offices in places with extradition treaties to Russia and “what kind of risks will be associated with working at Meta not just for us but our families.”

Ms. Lever said Meta’s “hearts go out to all of our employees who are affected by the war in Ukraine, and our teams are working to make sure they and their families have the support they need.”

At a separate company meeting this month, some employees voiced unhappiness with the changes to the speech policies during the war, according to an internal poll. Some asked if the new rules were necessary, calling the changes “a slippery slope” that were “being used as proof that Westerners hate Russians.”

Others asked about the effect on Meta’s business. “Will Russian ban affect our revenue for the quarter? Future quarters?” read one question. “What’s our recovery strategy?”

Trump’s Truth Social Is Poised to Join a Crowded Field

For months, former President Donald J. Trump has promoted Truth Social, the soon-to-be-released flagship app of his fledgling social media company, as a platform where free speech can thrive without the constraints imposed by Big Tech.

At least seven other social media companies have promised to do the same.

Gettr, a right-wing alternative to Twitter founded last year by a former adviser to Mr. Trump, bills itself as a haven from censorship. That’s similar to Parler — essentially another Twitter clone backed by Rebekah Mercer, a big donor to the Republican Party. MeWe and CloutHub are similar to Facebook, but with the pitch that they promote speech without restraint.

Truth Social was supposed to go live on Presidents’ Day, but the start date was recently pushed to March, though a limited test version was unveiled recently. A full rollout could be hampered by a regulatory investigation into a proposed merger of its parent company, the Trump Media & Technology Group, with a publicly traded blank-check company.

If and when it does open its doors, Mr. Trump’s app will be the newest — and most conspicuous — entrant in the tightly packed universe of social media companies that have cropped up in recent years, promising to build a parallel internet after Twitter, Facebook, Google and other mainstream platforms began to crack down on hate speech.

None of these alternative platforms comes close in reach to the mainstream services; there are 211 million daily active users on Twitter who see ads.

Many people who claim to crave a social network that caters to their political cause aren’t ready to abandon Twitter or Facebook, said Weiai Xu, an assistant professor of communications at the University of Massachusetts-Amherst. So the big platforms remain important vehicles for “partisan users” to get their messages out, Mr. Xu said.

Gettr, Parler and Rumble have relied on Twitter to announce the signing of a new right-wing personality or influencer. Parler, for instance, used Twitter to post a link to an announcement that Melania Trump, the former first lady, was making its platform her “social media home.”

Alternative social media companies mainly thrive off politics, said Mark Weinstein, the founder of MeWe, a platform with 20 million registered users that has positioned itself as an alternative to Facebook.

Mr. Weinstein said MeWe makes money through certain subscription services. His start-up has raised $24 million from 100 investors.

But since political causes drive the most engagement for alternative social media, most other platforms are quick to embrace such opportunities. This month, CloutHub, which has just four million registered users, said its platform could be used to raise money for the protesting truckers of Ottawa.

Mr. Trump wasn’t far behind. “Facebook and Big Tech are seeking to destroy the Freedom Convoy of Truckers,” he said in a statement. (Meta, the parent company of Facebook, said it removed several groups associated with the convoy for violating its rules.)

Trump Media, Mr. Trump added, would let the truckers “communicate freely on Truth Social when we launch — coming very soon!”

Of all the alt-tech sites, Mr. Trump’s venture may have the best chance of success if it launches, not just because of the former president’s star power but also because of its financial heft. In September, Trump Media agreed to merge with Digital World Acquisition, a blank-check or special purpose acquisition company that raised $300 million. The two entities have raised $1 billion from 36 investors in a private placement.

But none of that money can be tapped until regulators wrap up their inquiry into whether Digital World flouted securities regulations in planning its merger with Trump Media. In the meantime, Trump Media, currently valued at more than $10 billion based on Digital World’s stock price, is trying to hire people to build its platform.

Rumble, a video site popular with conservatives, has financial backing that includes Peter Thiel, the billionaire technology investor and Trump supporter, and the venture fund of Mr. Thiel’s protégé J.D. Vance, who is running for a Senate seat from Ohio.

Rumble is also planning to go public through a merger with a special-purpose acquisition company. SPACs are shell companies created solely for the purpose of merging with an operating entity. The deal, arranged by the Wall Street firm Cantor Fitzgerald, will give Rumble $400 million in cash and a $2.1 billion valuation.

The site said in January that it had 39 million monthly active users, up from two million two years ago. It has struck various content deals, including one to provide video and streaming services to Truth Social. Representatives for Rumble did not respond to requests for comment.

Parler’s downloads slowed after Apple and Google removed it from their app stores and Amazon cut off web services after the Jan. 6 riot, according to SensorTower, a digital analytics company.

Parler also removed John Matze, one of its founders, from his position as chief executive. Mr. Matze has said he was dismissed after a dispute with Ms. Mercer — the daughter of a wealthy hedge fund executive who is Parler’s main backer — over how to deal with extreme content posted on the platform.

Christina Cravens, a spokeswoman for Parler, said the company had always “prohibited violent and inciting content” and had invested in “content moderation best practices.”

Moderating content will also be a challenge for Truth Social, whose main star, Mr. Trump, has not been able to post messages since early 2021, when Twitter and Facebook kicked him off their platforms for inciting violence tied to the outcome of the 2020 presidential election.

With Mr. Trump as its main poster, it was unclear if Truth Social would grow past subscribers who sign up simply to read the former president’s missives, Mr. Matze said.

“Trump is building a community that will fight for something or whatever he stands for that day,” he said. “This is not social media for friends and family to share pictures.”

As Officials Look Away, Hate Speech in India Nears Dangerous Levels

HARIDWAR, India — The police officer arrived at the Hindu temple here with a warning to the monks: Don’t repeat your hate speech.

Ten days earlier, before a packed audience and thousands watching online, the monks had called for violence against the country’s minority Muslims. Their speeches, in one of India’s holiest cities, promoted a genocidal campaign to “kill two million of them” and urged an ethnic cleansing of the kind that targeted Rohingya Muslims in Myanmar.

When videos of the event provoked national outrage, the police came. The saffron-clad preachers questioned whether the officer could be objective.

Yati Narsinghanand, the event’s firebrand organizer known for his violent rhetoric, assuaged their concerns.

One senior official warned that “inciting people against each other is a crime against the nation” without making a specific reference to Haridwar. Junior members of Mr. Modi’s party attended the event, and the monks have often posted pictures with senior leaders.

“You have persons giving hate speech, actually calling for genocide of an entire group, and we find reluctance of the authorities to book these people,” Rohinton Fali Nariman, a recently retired Indian Supreme Court judge, said in a public lecture. “Unfortunately, the other higher echelons of the ruling party are not only being silent on hate speech, but almost endorsing it.”

The hateful rhetoric has increasingly emboldened vigilante groups.

Vigilantes have beaten people accused of disrespecting cows, considered holy by some Hindus; dragged couples out of trains, cafes and homes on suspicion that Hindu women might be seduced by Muslim men; and barged into religious gatherings where they suspect people are being converted.

Myanmar was an example of how the easy dissemination of misinformation and hate speech on social media prepares the ground for violence. The difference in India, he said, is that it would be the mobs taking action instead of the military.

“You have to stop it now,” he said, “because once the mobs take over it could really turn deadly.”


The Dasna Devi temple in Uttar Pradesh state, where Mr. Narsinghanand is the chief priest, is peppered with signs that call to prepare for a “dharm yudh,” or religious war. One calls on “Hindus, my lions” to value their weapons “just the way dedicated wives value their husbands.”

The temple’s main sign prohibits Muslims from entering.

Mr. Modi’s party relies on a vast network of volunteers to mobilize voters and secure victories.

When he was chief minister of Gujarat, Mr. Modi saw firsthand how unchecked communal tensions could turn into bloodletting.

In 2002, a train fire killed 59 Hindu pilgrims. Although the cause was disputed, violent mobs, in response, targeted the Muslim community, leaving more than 1,000 people dead, many burned alive.

Rights organizations and opposition leaders accused Mr. Modi of looking the other way. He rejected the allegations as political attacks.

At the Haridwar event, attendees took an oath to turn India into a Hindu state, even if it meant killing for it.

The police arrested Mr. Narsinghanand on Jan. 15, and he was charged in court with hate speech.

“He said nothing wrong,” said Swami Amritanand, an organizer of the Haridwar event. “We are doing what America is doing, we are doing what Britain is doing.”

Mr. Amritanand said the call for arms was justified because “within the next 10 to 12 years there will be a horrible war that will play out in India.”

Late last month, the monks again sounded a violent call to create a Hindu state, this time at an event hundreds of miles away from Haridwar in Uttar Pradesh. They threatened violence — referencing a bombing of India’s assembly — if Mr. Narsinghanand was not released.

Ms. Pandey described their actions as defensive. “We must prepare to protect ourselves,” she said.

To the Haridwar police, the event in Uttar Pradesh did not count as a repeat offense. Rakendra Singh Kathait, the senior police officer in Haridwar, said Mr. Narsinghanand was in jail because he had acted again in the city; others like Ms. Pandey got a warning.

“If she goes and says it from Kolkata, it doesn’t count as repeat here,” Mr. Kathait said.

The New Political Cry in South Korea: ‘Out With Man Haters’

SEOUL — They have shown up whenever women rallied against sexual violence and gender biases in South Korea. Dozens of young men, mostly dressed in black, taunted the protesters, squealing and chanting, “Thud! Thud!” to imitate the noise they said the “ugly feminist pigs” made when they walked.

“Out with man haters!” they shouted. “Feminism is a mental illness!”

On the streets, such rallies would be easy to dismiss as the extreme rhetoric of a fringe group. But the anti-feminist sentiments are being amplified online, finding a vast audience that is increasingly imposing its agenda on South Korean society and politics.

These male activists have targeted anything that smacks of feminism, forcing a university to cancel a lecture by a woman they accused of spreading misandry. They have vilified prominent women, criticizing An San, a three-time gold medalist in the Tokyo Olympics, for her short haircut.

They have threatened businesses with boycotts, prompting companies to pull advertisements with the image of pinching fingers they said ridiculed the size of male genitalia. And they have taken aim at the government for promoting a feminist agenda, eliciting promises from rival presidential candidates to reform the country’s 20-year-old Ministry of Gender Equality and Family.

The anger is fueled in part by economic anxiety, including runaway housing prices, a lack of jobs and a widening income gap.

One of the most active anti-feminist groups, Man on Solidarity, runs a YouTube channel with 450,000 subscribers. To its members, feminists equal man haters.

Its motto once read, “Till the day all feminists are exterminated!”

The backlash against feminism in South Korea may seem bewildering.

South Korea has the highest gender wage gap among wealthy countries. Fewer than one-fifth of its national lawmakers are women. Women make up only 5.2 percent of the board members of publicly listed businesses, compared with 28 percent in the United States.

And yet, most young men in the country argue that it is men, not women, in South Korea who feel threatened and marginalized. Among South Korean men in their 20s, nearly 79 percent said they were victims of serious gender discrimination, according to a poll in May.

“There is a culture of misogyny in male-dominant online communities, depicting feminists as radical misandrists and spreading fear of feminists,” said Kim Ju-hee, 26, a nurse who has organized protests denouncing anti-feminists.

The wave of anti-feminism in South Korea shares many incendiary taglines with right-wing populist movements in the West. Women who argue for abortion rights are labeled “destroyers of family.” Feminists are not champions of gender equality, but “female supremacists.”

In South Korea, “women” and “feminists” are two of the most common targets of online hate speech, according to the country’s National Human Rights Commission.

For decades, South Korean families preferred sons, and sex-selective abortions were common.

Young men complain that they alone must perform mandatory military service. But many women drop out of the work force after giving birth, and much of the domestic work falls to them.

“What more do you want? We gave you your own space in the subway, bus, parking lot,” the male rapper San E writes in his 2018 song “Feminist,” which has a cult following among young anti-feminists. “Oh girls don’t need a prince! Then pay half for the house when we marry.”

The gender wars have infused the South Korean presidential race, largely seen as a contest for young voters. With the virulent anti-feminist voice surging, no major candidate is speaking out for women’s rights, once such a popular cause that President Moon Jae-in called himself a “feminist” when he campaigned about five years ago.

It is hard to tell how many young men support the kind of extremely provocative​ and often theatrical​ activism championed by groups like Man on Solidarity. Its firebrand leader, Mr. Bae, showed up at a recent feminist rally​​ dressed as the Joker from “Batman” comics and toting a toy water gun. He followed female protesters around, pretending to, as he put it, “kill flies.”

Tens of thousands of fans have watched his stunts livestreamed online, sending in cash donations. During one online talk-fest in August, Mr. Bae raised nine million won ($7,580) in three minutes.

In recent years, South Korean women have campaigned to legalize abortion and started one of the most powerful #MeToo campaigns in Asia.

Lee Hyo-lin, 29, said that “feminist” has become such a dirty word that women who wear their hair short or carry a novel by a feminist writer risk ostracism. When she was a member of a K-pop group, she said that male colleagues routinely commented on her body, jeering that she “gave up being a woman” when she gained weight.

“The #MeToo problem is part of being a woman in South Korea,” she said. “Now we want to speak out, but they want us to shut up. It’s so frustrating.”

On the other side of the culture war are young men with a litany of grievances — concerns that are endlessly regurgitated by male-dominated forums. They have fixated, in particular, on limited cases of false accusations, as a way to give credence to a broader anti-feminist agenda.

Son Sol-bin, a used-furniture seller, was 29 when his former girlfriend accused him of rape and kidnapping in 2018. Online trolls called for his castration, he said. His mother found closed-circuit TV footage proving that the alleged crimes never took place.

“The feminist influence has left the system so biased against men that the police took a woman’s testimony and a mere drop of her tears as enough evidence to land an innocent man in jail,” said Mr. Son, who spent eight months in jail before he was cleared. “I think the country has gone crazy.”

As Mr. Son fought back tears during a recent anti-feminist rally, other young men chanted: “Be strong! We are with you!”

Mass Detentions of Civilians Fan ‘Climate of Fear’ in Ethiopia

NAIROBI, Kenya — The family was startled awake by a loud bang in the middle of the night on the gate of their home on the outskirts of Addis Ababa, the capital of Ethiopia.

Police officers barged in without a warrant, ransacking the living room and looking under the beds. They seized three members of the family, among them a 76-year-old, one-legged amputee yanked from bed while his sons begged to go in his place.

“They showed him no mercy even after he cried, ‘I am disabled and diabetic,’” said the man’s nephew, Kirubel, who would give only his first name for fear of reprisals.

The family is among hundreds, and perhaps thousands, of Ethiopians belonging to the Tigrayan ethnic group who have been rounded up and detained in the capital and beyond in recent weeks.

The detentions have accelerated since Tigrayan rebel forces routed the Ethiopian army in Tigray, swept south, recently captured two strategic towns and threatened to advance toward the capital.

On Nov. 2, the government declared a state of emergency, and the resulting roundups have swept up anyone of Tigrayan descent, many of whom had no ties to the rebels or even affinity for them. They were not just young men and women, but also mothers with children and the elderly, according to human rights advocates and interviews with nearly a dozen family members and friends of detainees.

They have been seized off the streets, in their homes and even in workplaces — including banks, schools and shopping centers — and taken to overcrowded cells in police stations and detention facilities.

Tigrayans have been targeted by the police based on a mix of hints: their surnames, details listed on identification cards and driver’s licenses, even the way they speak Amharic, the national language of Ethiopia.

One human rights organization criticized the emergency law on Tuesday through a spokeswoman. “Its provisions are extremely broad, with vague prohibitions going as far as encompassing ‘indirect moral’ support for what the government has labeled ‘terrorist groups.’”

The ethnically motivated detentions come amid a significant rise in online hate speech, which is only adding fuel to the civil war tearing apart Africa’s second-most populous nation. Reports of massacres, ethnic cleansing and widespread sexual assault by all sides in the conflict have undermined the vision of Ethiopian unity that Mr. Abiy, the prime minister and a Nobel Peace Prize laureate, promised when he rose to power more than three years ago.

The war between Ethiopian federal forces and their allies and Tigrayan rebel fighters has left thousands of people dead, at least 400,000 living in famine-like conditions and millions displaced. It risks engulfing the whole of Ethiopia and the wider Horn of Africa.

Mr. Abiy’s determination to prosecute the war seems to have been only hardened by economic threats from the Biden administration, which has imposed sanctions on his military allies in neighboring Eritrea and suspended Ethiopia from duty-free access to the U.S. market.

Secretary of State Antony J. Blinken, who is traveling to Kenya, Nigeria and Senegal this week, has expressed worry that Ethiopia could “implode.”

Government officials vowed to defend the capital “with our blood” even as African and Western envoys sought to broker a cease-fire.

Police officials have defended the arrests, saying they were seizing supporters of the Tigray People’s Liberation Front, the country’s former dominant party, which Ethiopia now classifies as a terrorist organization.

Activists, however, say the state of emergency provisions are so nebulous that they give security officials unfettered latitude. The provisions allow for the search of any person’s home or their arrest without a warrant “upon reasonable suspicion” that they cooperate with terrorist groups.

Laetitia Bader, the Horn of Africa director at Human Rights Watch, said “the state of emergency is legitimizing and legalizing unlawful practices” and creating “a real climate of fear.”

Many ethnic Tigrayans say they now fear leaving home. Almost all those who agreed to be interviewed declined to be identified by name for fear that they might be arrested or face retaliation.

The conflict erupted a year ago when Mr. Abiy began a military campaign in the country’s northern Tigray region, hoping to vanquish the Tigray People’s Liberation Front, his most troublesome political foe.

In Addis Ababa, security officers have demanded that landlords identify Tigrayan tenants. In one secondary school, a teacher said four Tigrayan teachers had been taken into custody as they ate lunch after officers arrived with a letter from the intelligence service containing their names.

A merchant in Addis Ababa, 38, was picked up by security officers after he opened his mobile phone accessories shop. A nearby shop owner phoned that news to the seized merchant’s wife, who said she left their two children with a neighbor and rushed to the shop — only to find it closed and her husband gone.

After a three-day search, the wife said, she found her husband in a crowded Addis Ababa detention facility with no proper bedding or food.

In Addis Ababa, rights groups say, police stations are so full of detainees that the authorities have moved the overflow to heavily guarded makeshift facilities, among them youth recreation centers, warehouses and one major prison. With no access to lawyers, some relatives of detainees say they will not approach these facilities, fearful they could be arrested too.

Critics, including a former Facebook employee turned whistle-blower, have long accused Facebook of failing to moderate hate speech and incitement. With pressure mounting, Facebook this month deleted a post by Mr. Abiy urging citizens to “bury” the Tigray People’s Liberation Front.

Twitter also disabled its Trends section in Ethiopia, citing “the risks of coordination that could incite violence or cause harm.”

Timnit Gebru, an Ethiopian-born American computer scientist who spotted and reported some of the posts on Facebook, said the measures were insufficient and amounted to “a game of whack-a-mole.”

For now, many Tigrayans worry that it’s only a matter of time before they are seized. One businessman, who paid a $400 bribe for his release, said officers had told him they would come for him again.

It’s a fate Kirubel said he worried about as his disabled uncle and cousins remained detained.

“My children worry that I will not come back when I leave the house,” he said. “Everyone is afraid.”

Employees of The New York Times contributed reporting from Addis Ababa, Ethiopia.

Covid-19 Misinformation Goes Unchecked on Radio and Podcasts

On a recent episode of his podcast, Rick Wiles, a pastor and self-described “citizen reporter,” endorsed a conspiracy theory: that Covid-19 vaccines were the product of a “global coup d’état by the most evil cabal of people in the history of mankind.”

“It’s an egg that hatches into a synthetic parasite and grows inside your body,” Mr. Wiles said on his Oct. 13 episode. “This is like a sci-fi nightmare, and it’s happening in front of us.”

Mr. Wiles belongs to a group of hosts who have made false or misleading statements about Covid-19 and effective treatments for it. Like many of them, he has access to much of his listening audience because his show appears on a platform provided by a large media corporation.

Mr. Wiles’s podcast is available through iHeart Media, an audio company based in San Antonio that says it reaches nine out of 10 Americans each month. Spotify and Apple also provide major audio platforms for hosts who have shared similar views with their listeners about Covid-19 and vaccination efforts, or who have had guests on their shows promote such notions.

Studies have shown that the vaccines protect people against the coronavirus for long periods and have significantly reduced the spread of Covid-19. As the global death toll related to Covid-19 exceeds five million — and at a time when more than 40 percent of Americans are not fully vaccinated — iHeart, Spotify, Apple and many smaller audio companies have done little to rein in what radio hosts and podcasters say about the virus and vaccination efforts.

“There’s really no curb on it,” said Jason Loviglio, an associate professor of media and communication studies at the University of Maryland, Baltimore County. “There’s no real mechanism to push back, other than advertisers boycotting and corporate executives saying we need a culture change.”

Audio industry executives appear less likely than their counterparts in social media to try to check dangerous speech. TruNews, a conservative Christian media outlet founded by Mr. Wiles, who used the phrase “Jew coup” to describe efforts to impeach former President Donald J. Trump, has been banned by YouTube. His podcast remains available on iHeart.

Asked about his false statements concerning Covid-19 vaccines, Mr. Wiles described pandemic mitigation efforts as “global communism.” “If the Needle Nazis win, freedom is over for generations, maybe forever,” he said in an email.

The reach of radio shows and podcasts is great, especially among young people: A recent survey from the National Research Group, a consulting firm, found that 60 percent of listeners under 40 get their news primarily through audio, a type of media they say they trust more than print or video.

unfounded claim that “45,000 people have died from taking the vaccine.” In his final Twitter post, on July 30, Mr. Bernier accused the government of “acting like Nazis” for encouraging Covid-19 vaccines.

Jimmy DeYoung Sr., whose program was available on iHeart, Apple and Spotify, died of Covid-19 complications after making his show a venue for false or misleading statements about vaccines. One of his frequent guests was Sam Rohrer, a former Pennsylvania state representative who likened the promotion of Covid-19 vaccines to Nazi tactics and made a sweeping false statement. “This is not a vaccine, by definition,” Mr. Rohrer said on an April episode. “It is a permanent altering of my immune system, which God created to handle the kinds of things that are coming that way.” Mr. DeYoung thanked his guest for his “insight.” Mr. DeYoung died four months later.

has said his research has been “misinterpreted” by anti-vaccine activists. He added that Covid-19 vaccines have been found to reduce transmissions substantially, whereas chickens inoculated with the Marek’s disease vaccine were still able to transmit the disease. Mr. Sexton did not reply to a request for comment.

iHeart — which distributes more than 600 podcasts and operates a vast online archive of audio programs — has rules for the podcasters on its platform prohibiting them from making statements that incite hate, promote Nazi propaganda or are defamatory. It would not say whether it has a policy concerning false statements on Covid-19 or vaccination efforts.

Apple’s content guidelines for podcasts prohibit “content that may lead to harmful or dangerous outcomes, or content that is obscene or gratuitous.” Apple did not reply to requests for comment for this article.

Spotify, which says its podcast platform has 299 million monthly listeners, prohibits hate speech in its guidelines. In response to inquiries, the company said in a written statement that it also prohibits content “that promotes dangerous false or dangerous deceptive content about Covid-19, which may cause offline harm and/or pose a direct threat to public health.” The company added that it had removed content that violated its policies. But the episode featuring Mr. DeYoung’s conversation with Mr. Rohrer was still available on Spotify.

Dawn Ostroff, Spotify’s content and advertising business officer, said at a conference last month that the company was making “very aggressive moves” to invest more in content moderation. “There’s a difference between the content that we make and the content that we license and the content that’s on the platform,” she said, “but our policies are the same no matter what type of content is on our platform. We will not allow any content that infringes or that in any way is inaccurate.”

The audio industry has not drawn the same scrutiny as large social media companies, whose executives have been questioned in congressional hearings about the platforms’ role in spreading false or misleading information.

The social media giants have made efforts over the last year to stop the flow of false reports related to the pandemic. In September, YouTube said it was banning the accounts of several prominent anti-vaccine activists. It also removes or de-emphasizes content it deems to be misinformation or close to it. Late last year, Twitter announced that it would remove posts and ads with false claims about coronavirus vaccines. Facebook followed suit in February, saying it would remove false claims about vaccines generally.

now there’s podcasting.”

The Federal Communications Commission, which grants licenses to companies using the public airwaves, has oversight over radio operators, but not podcasts or online audio, which do not make use of the public airwaves.

The F.C.C. is barred from violating American citizens’ right to free speech. When it takes action against a media company over programming, it is typically in response to complaints about content considered obscene or indecent, as when it fined a Virginia television station in 2015 for a newscast that included a segment on a pornographic film star.

In a statement, an F.C.C. spokesman said the agency “reviews all complaints and determines what is actionable under the Constitution and the law.” It added that the main responsibility for what goes on the air lies with radio station owners, saying that “broadcast licensees have a duty to act in the public interest.”

The world of talk radio and podcasting is huge, and anti-vaccine sentiment is a small part of it. iHeart offers an educational podcast series about Covid-19 vaccines, and Spotify created a hub for podcasts about Covid-19 from news outlets including ABC and Bloomberg.

on the air this year, describing his decision to get vaccinated and encouraging his listeners to do the same.

Recently, he expressed his eagerness to get a booster shot and mentioned that he had picked up a new nickname: “The Vaxxinator.”


Trump Allies Help Bolsonaro Sow Doubt in Brazil’s Elections

BRASÍLIA — The conference hall was packed, with a crowd of more than 1,000 cheering attacks on the press, liberals and political correctness. There was Donald Trump Jr. warning that the Chinese could meddle in the election, a Tennessee congressman who voted against certifying the 2020 vote, and the president complaining about voter fraud.

In many ways, the September gathering looked like just another CPAC, the conservative political conference. But it was happening in Brazil, most of it was in Portuguese and the president at the lectern was Jair Bolsonaro, the country’s right-wing leader.

Fresh from their assault on the results of the 2020 U.S. presidential election, former President Donald J. Trump and his allies are exporting their strategy to Latin America’s largest democracy, working to support Mr. Bolsonaro’s bid for re-election next year — and helping sow doubt in the electoral process in the event that he loses.

pillow executive being sued for defaming voting-machine makers.

academics, Brazil’s electoral officials and the U.S. government have all said that there has been no fraud in Brazil’s elections. Eduardo Bolsonaro has insisted there was. “I can’t prove — they say — that I have fraud,” he said in South Dakota. “So, OK, you can’t prove that you don’t.”

Mr. Trump’s circle has cozied up to other far-right leaders, including in Hungary, Poland and the Philippines, and tried to boost rising nationalist politicians elsewhere. But the ties are the strongest, and the stakes perhaps the highest, in Brazil.

WhatsApp groups for Bolsonaro supporters recently began circulating the trailer for a new series from Fox News host Tucker Carlson that sympathizes with the Jan. 6 Capitol riot, Mr. Nemer said. The United States, which has been a democracy for 245 years, withstood that attack. Brazil passed its constitution in 1988 after two decades under a military dictatorship.

advised President Bolsonaro to respect the democratic process.

In October, 64 members of Congress asked President Biden for a reset in the United States’ relationship with Brazil, citing President Bolsonaro’s pursuit of policies that threaten democratic rule. In response, Brazil’s ambassador to the United States defended President Bolsonaro, saying debate over election security is normal in democracies. “Brazil is and will continue to be one of the world’s freest countries,” he said.

Unemployment and inflation have risen. Mr. Bolsonaro has been operating without a political party for two years. And Brazil’s Supreme Court and Congress are closing in on investigations into him, his sons and his allies.

Late last month, a Brazilian congressional panel recommended that President Bolsonaro be charged with “crimes against humanity,” asserting that he intentionally let the coronavirus tear through Brazil in a bid for herd immunity. The panel blamed his administration for more than 100,000 deaths.

Minutes after the panel voted, Mr. Trump issued his endorsement. “Brazil is lucky to have a man such as Jair Bolsonaro working for them,” he said in a statement. “He is a great president and will never let the people of his great country down!”

“They say he’s the Donald Trump of South America,” Mr. Trump said in 2019. “I like him.”

To many others, Mr. Bolsonaro was alarming. As a congressman and candidate, he had waxed poetic about Brazil’s military dictatorship, which tortured its political rivals. He said he would be incapable of loving a gay son. And he said a rival congresswoman was too ugly to be raped.

Three months into his term, President Bolsonaro went to Washington. At his welcome dinner, the Brazilian embassy sat him next to Mr. Bannon. At the White House later, Mr. Trump and Mr. Bolsonaro made deals that would allow the Brazilian government to spend more with the U.S. defense industry and American companies to launch rockets from Brazil.

Mr. Bannon announced Eduardo Bolsonaro would represent South America in The Movement, a right-wing, nationalist group that he envisioned taking over the Western world. In the news release, Eduardo Bolsonaro said they would “reclaim sovereignty from progressive globalist elitist forces.”

pacts to increase commerce. American investors plowed billions of dollars into Brazilian companies. And Brazil spent more on American imports, including fuel, plastics and aircraft.

Now a new class of companies is salivating over Brazil: conservative social networks.

Gettr and Parler, two Twitter clones, have grown rapidly in Brazil by promising a hands-off approach to people who believe Silicon Valley is censoring conservative voices. One of their highest-profile recruits is President Bolsonaro.

Gettr is partly funded by Guo Wengui, an exiled Chinese billionaire who is close with Mr. Bannon. (When Mr. Bannon was arrested on fraud charges, he was on Mr. Guo’s yacht.) Parler is funded by Rebekah Mercer, the American conservative megadonor who was Mr. Bannon’s previous benefactor.

Companies like Gettr and Parler could prove critical to President Bolsonaro. Like Mr. Trump, he built his political movement with social media. But now Facebook, YouTube and Twitter are more aggressively policing hate speech and misinformation. They blocked Mr. Trump and have started cracking down on President Bolsonaro. Last month, YouTube suspended his channel for a week after he falsely suggested coronavirus vaccines could cause AIDS.

In response, President Bolsonaro has tried to ban the companies from removing certain posts and accounts, but his policy was overturned. Now he has been directing his supporters to follow him elsewhere, including on Gettr, Parler and Telegram, a messaging app based in Dubai.

He will likely soon have another option. Last month, Mr. Trump announced he was starting his own social network. The company financing his new venture is partly led by Luiz Philippe de Orleans e Bragança, a Brazilian congressman and Bolsonaro ally.

said the rioters’ efforts were weak. “If it were organized, they would have taken the Capitol and made demands,” he said.

The day after the riot, President Bolsonaro warned that Brazil was “going to have a worse problem” if it didn’t change its own electoral systems, which rely on voting machines without paper backups. (Last week, he suddenly changed his tune after announcing that he would have Brazil’s armed forces monitor the election.)

Diego Aranha, a Brazilian computer scientist who studies the country’s election systems, said that Brazil’s system does make elections more vulnerable to attacks — but that there has been no evidence of fraud.

“Bolsonaro turned a technical point into a political weapon,” he said.

President Bolsonaro’s American allies have helped spread his claims.

At the CPAC in Brazil, Donald Trump Jr. told the audience that if they didn’t think the Chinese were aiming to undermine their election, “you haven’t been watching.” Mr. Bannon has called President Bolsonaro’s likely opponent, former President Luiz Inácio Lula da Silva, a “transnational, Marxist criminal” and “the most dangerous leftist in the world.” Mr. da Silva served 18 months in prison but his corruption charges were later tossed out by a Supreme Court justice.

Eduardo Bolsonaro’s slide show detailing claims of Brazilian voter fraud, delivered in South Dakota, was broadcast by One America News, a conservative cable network that reaches 35 million U.S. households. It was also translated into Portuguese and viewed nearly 600,000 times on YouTube and Facebook.

protest his enemies in the Supreme Court and on the left.

The weekend before, just down the road from the presidential palace, Mr. Bolsonaro’s closest allies gathered at CPAC. Eduardo Bolsonaro and the American Conservative Union, the Republican lobbying group that runs CPAC, organized the event. Eduardo Bolsonaro’s political committee mostly financed it. Tickets sold out.

a fiery speech. Then he flew to São Paulo, where he used Mr. Miller’s detainment as evidence of judicial overreach. He told the crowd he would no longer recognize decisions from a Supreme Court judge.

He then turned to the election.

“We have three alternatives for me: Prison, death or victory,” he said. “Tell the bastards I’ll never be arrested.”

Leonardo Coelho and Kenneth P. Vogel contributed reporting.


Facebook Debates What to Do With Its Like and Share Buttons

SAN FRANCISCO — In 2019, Facebook researchers began a new study of one of the social network’s foundational features: the Like button.

They examined what people would do if Facebook removed the distinct thumbs-up icon and other emoji reactions from posts on its photo-sharing app Instagram, according to company documents. The buttons had sometimes caused Instagram’s youngest users “stress and anxiety,” the researchers found, especially if posts didn’t get enough Likes from friends.

But the researchers discovered that when the Like button was hidden, users interacted less with posts and ads. At the same time, hiding the button did not alleviate teenagers’ social anxiety, and young users did not share more photos as the company had thought they might — a mixed bag of results.

Mark Zuckerberg, Facebook’s chief executive, and other managers discussed hiding the Like button for more Instagram users, according to the documents. In the end, the test was expanded, but only in a limited capacity, to “build a positive press narrative” around Instagram.

Amid scrutiny over misinformation, privacy and hate speech, a central issue has been whether the basic way that the platform works has been at fault — essentially, the features that have made Facebook be Facebook.

Apart from the Like button, Facebook has scrutinized its share button, which lets users instantly spread content posted by other people; its groups feature, which is used to form digital communities; and other tools that define how more than 3.5 billion people behave and interact online. The research, laid out in thousands of pages of internal documents, underlines how the company has repeatedly grappled with what it has created.

What researchers found was often far from positive. Time and again, they determined that people misused key features or that those features amplified toxic content, among other effects. In an August 2019 internal memo, several researchers said it was Facebook’s “core product mechanics” — meaning the basics of how the product functioned — that had let misinformation and hate speech flourish on the site.

“The mechanics of our platform are not neutral,” they concluded.

Facebook has tried some fixes, such as letting users hide posts they do not want to see and turning off political group recommendations to reduce the spread of misinformation.

But the core way that Facebook operates — a network where information can spread rapidly and where people can accumulate friends and followers and Likes — ultimately remains largely unchanged.

Many significant modifications to the social network were blocked in the service of growth and keeping users engaged, some current and former executives said. Facebook is valued at more than $900 billion.

“There’s a gap between the fact that you can have pretty open conversations inside of Facebook as an employee,” said Brian Boland, a Facebook vice president who left last year. “Actually getting change done can be much harder.”

The company documents are part of the Facebook Papers, a cache provided to the Securities and Exchange Commission and to Congress by a lawyer representing Frances Haugen, a former Facebook employee who has become a whistle-blower. Ms. Haugen earlier gave the documents to The Wall Street Journal. This month, a congressional staff member supplied the redacted disclosures to more than a dozen other news organizations, including The New York Times.

In a statement, Andy Stone, a Facebook spokesman, criticized articles based on the documents, saying that they were built on a “false premise.”

“Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie,” he said. He said Facebook had invested $13 billion and hired more than 40,000 people to keep people safe, adding that the company has called “for updated regulations where democratic governments set industry standards to which we can all adhere.”

In a post this month, Mr. Zuckerberg said it was “deeply illogical” that the company would give priority to harmful content because Facebook’s advertisers don’t want to buy ads on a platform that spreads hate and misinformation.

“At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted,” he wrote.

When Mr. Zuckerberg founded Facebook 17 years ago in his Harvard University dorm room, the site’s mission was to connect people on college campuses and bring them into digital groups with common interests and locations.

Growth exploded in 2006 when Facebook introduced the News Feed, a central stream of photos, videos and status updates posted by people’s friends. Over time, the company added more features to keep people interested in spending time on the platform.

In 2009, Facebook introduced the Like button. The tiny thumbs-up symbol, a simple indicator of people’s preferences, became one of the social network’s most important features. The company allowed other websites to adopt the Like button so users could share their interests back to their Facebook profiles.

That gave Facebook insight into people’s activities and sentiments outside of its own site, so it could better target them with advertising. Likes also signified what users wanted to see more of in their News Feeds so people would spend more time on Facebook.

Facebook also added the groups feature, where people join private communication channels to talk about specific interests, and pages, which allowed businesses and celebrities to amass large fan bases and broadcast messages to those followers.

Adam Mosseri, the head of Instagram, has said that research on users’ well-being led to investments in anti-bullying measures on Instagram.

Yet Facebook cannot simply tweak itself so that it becomes a healthier social network when so many problems trace back to core features, said Jane Lytvynenko, a senior fellow at the Harvard Kennedy Shorenstein Center, who studies social networks and misinformation.

“When we talk about the Like button, the share button, the News Feed and their power, we’re essentially talking about the infrastructure that the network is built on top of,” she said. “The crux of the problem here is the infrastructure itself.”

As Facebook’s researchers dug into how its products worked, the worrisome results piled up.

In a July 2019 study of groups, researchers traced how members in those communities could be targeted with misinformation. The starting point, the researchers said, were people known as “invite whales,” who sent invitations out to others to join a private group.

These people were effective at getting thousands to join new groups so that the communities ballooned almost overnight, the study said. Then the invite whales could spam the groups with posts promoting ethnic violence or other harmful content, according to the study.

Another 2019 report looked at how some people accrued large followings on their Facebook pages, often using posts about cute animals and other innocuous topics. But once a page had grown to tens of thousands of followers, the founders sold it. The buyers then used the pages to show followers misinformation or politically divisive content, according to the study.

As researchers studied the Like button, executives considered hiding the feature on Facebook as well, according to the documents. In September 2019, the company removed Likes from users’ Facebook posts in a small experiment in Australia.

The company wanted to see if the change would reduce pressure and social comparison among users. That, in turn, might encourage people to post more frequently to the network.

But people did not share more posts after the Like button was removed. Facebook chose not to roll the test out more broadly, noting, “Like counts are extremely low on the long list of problems we need to solve.”

Last year, company researchers also evaluated the share button. In a September 2020 study, a researcher wrote that the button and so-called reshare aggregation units in the News Feed, which are automatically generated clusters of posts that have already been shared by people’s friends, were “designed to attract attention and encourage engagement.”

But left unchecked, the features could “serve to amplify bad content and sources,” such as bullying and borderline nudity posts, the researcher said.

That’s because the features made people less hesitant to share posts, videos and messages with one another. In fact, users were three times more likely to share any kind of content from the reshare aggregation units, the researcher said.

One post that spread widely this way was an undated message from an account called “The Angry Patriot.” The post notified users that people protesting police brutality were “targeting a police station” in Portland, Ore. After it was shared through reshare aggregation units, hundreds of hate-filled comments flooded in. It was an example of “hate bait,” the researcher said.

A common thread in the documents was how Facebook employees argued for changes in how the social network worked and often blamed executives for standing in the way.

In an August 2020 internal post, a Facebook researcher criticized the recommendation system that suggests pages and groups for people to follow and said it can “very quickly lead users down the path to conspiracy theories and groups.”

“Out of fears over potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity harms,” the researcher wrote. “During the time that we’ve hesitated, I’ve seen folks from my hometown go further and further down the rabbit hole” of conspiracy theory movements like QAnon and anti-vaccination and Covid-19 conspiracies.

The researcher added, “It has been painful to observe.”

Reporting was contributed by Davey Alba, Sheera Frenkel, Cecilia Kang and Ryan Mac.


In India, Facebook Struggles to Combat Misinformation and Hate Speech

On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: Follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.

The result was an inundation of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

The documents show how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections. They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.

a violent coup in the country. Facebook said that after the coup, it implemented a special policy to remove praise and support of violence in the country, and later banned the Myanmar military from Facebook and Instagram.

In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.

Facebook has invested significantly in technology to find hate speech in various languages, including Hindi and Bengali, two of the most widely used languages, Mr. Stone said. He added that Facebook reduced the amount of hate speech that people see globally by half this year.

While the test account was active, a suicide bombing in the disputed border region of Kashmir set off a round of violence and a spike in accusations, misinformation and conspiracies between Indian and Pakistani nationals.

After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, she noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.

Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined.

After the researcher shared her case study with co-workers, colleagues commented on the report, saying they were concerned about misinformation ahead of the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed. It also noted how Facebook had created a “political white list to limit P.R. risk,” essentially a list of politicians who received a special exemption from fact-checking.

The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate report produced after the elections, Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.

A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.

In the internal document, called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on Facebook.

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” and misinformation claiming that the Quran, the holy book of Islam, calls for men to rape their female family members.

Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist group with close ties to India’s ruling Bharatiya Janata Party, or B.J.P. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.

Facebook knew that such harmful posts proliferated on its platform, the report indicated, and it needed to improve its “classifiers,” which are automated systems that can detect and remove posts containing violent and inciting language. Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.

Of India’s 22 officially recognized languages, Facebook said it has trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the content targeting Muslims “is never flagged or actioned,” the Facebook report said.

Five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group linked with the B.J.P., to publish posts containing anti-Muslim narratives on the platform.

Facebook is considering designating the group as a dangerous organization because it is “inciting religious violence” on the platform, the document showed. But it has not yet done so.

“Join the group and help to run the group; increase the number of members of the group, friends,” said one post seeking recruits on Facebook to spread Bajrang Dal’s messages. “Fight for truth and justice until the unjust are destroyed.”

Ryan Mac, Cecilia Kang and Mike Isaac contributed reporting.
