Graphic posts, including a meme showing the beheading of a Pakistani national and dead bodies wrapped in white sheets on the ground, circulated in the groups she joined.

After the researcher shared her case study with co-workers, colleagues who commented on the posted report said they were concerned about misinformation ahead of the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called Indian Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed. It also noted how Facebook had created a “political white list to limit P.R. risk,” essentially a list of politicians who received a special exemption from fact-checking.

The study did not note the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots — or fake accounts — linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate report produced after the elections, Facebook found that over 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account had amassed more than 30 million impressions.

A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.

In the internal document, called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages “replete with inflammatory and misleading anti-Muslim content” on Facebook.

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” and misinformation claiming that the Quran, the holy book of Islam, calls for men to rape their female family members.

Much of the material circulated around Facebook groups promoting Rashtriya Swayamsevak Sangh, an Indian right-wing and nationalist group with close ties to India’s ruling Bharatiya Janata Party, or B.J.P. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslim populations from India and promoting a Muslim population control law.

Facebook knew that such harmful posts proliferated on its platform, the report indicated, and it needed to improve its “classifiers,” which are automated systems that can detect and remove posts containing violent and inciting language. Facebook also hesitated to designate R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.

Of India’s 22 officially recognized languages, Facebook said it had trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the material targeting Muslims “is never flagged or actioned,” the Facebook report said.

Five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by Bajrang Dal, an extremist group linked with the B.J.P., to publish posts containing anti-Muslim narratives on the platform.

Facebook is considering designating the group as a dangerous organization because it is “inciting religious violence” on the platform, the document showed. But it has not yet done so.

“Join the group and help to run the group; increase the number of members of the group, friends,” said one post seeking recruits on Facebook to spread Bajrang Dal’s messages. “Fight for truth and justice until the unjust are destroyed.”

Ryan Mac, Cecilia Kang and Mike Isaac contributed reporting.

Going to the Moon via the Cloud

Before the widespread availability of this kind of computing, organizations built expensive prototypes to test their designs. “We actually went and built a full-scale prototype, and ran it to the end of life before we deployed it in the field,” said Brandon Haugh, a core design engineer, referring to a nuclear reactor he worked on with the U.S. Navy. “That was a 20-year, multibillion-dollar test.”

Today, Mr. Haugh is the director of modeling and simulation at the California-based nuclear engineering start-up Kairos Power, where he hones the design for affordable and safe reactors that Kairos hopes will help speed the world’s transition to clean energy.

Nuclear energy has long been regarded as one of the best options for zero-carbon electricity production — except for its prohibitive cost. But Kairos Power’s advanced reactors are being designed to produce power at costs that are competitive with natural gas.

“The democratization of high-performance computing has now come all the way down to the start-up, enabling companies like ours to rapidly iterate and move from concept to field deployment in record time,” Mr. Haugh said.

But high-performance computing in the cloud has also created new challenges.

In the last few years, there has been a proliferation of custom computer chips purpose-built for specific types of mathematical problems. Similarly, there are now different types of memory and networking configurations within high-performance computing. And the different cloud providers have different specializations; one may be better at computational fluid dynamics while another is better at structural analysis.

The challenge, then, is picking the right configuration and getting the capacity when you need it — because demand has risen sharply. And while scientists and engineers are experts in their own domains, they aren’t necessarily experts in server configurations, processors and the like.

This has given rise to a new kind of specialization — experts in high-performance cloud computing — and new cross-cloud platforms that act as one-stop shops where companies can pick the right combination of software and hardware. Rescale, which works closely with all the major cloud providers, is the dominant company in this field. It matches computing problems from businesses like Firefly and Kairos with the right cloud provider, delivering computing that scientists and engineers can use to solve problems faster or at the lowest possible cost.

Racist Computer Engineering Words: ‘Master,’ ‘Slave’ and the Fight Over Offensive Terms

Anyone who joined a video call during the pandemic probably has a global volunteer organization called the Internet Engineering Task Force to thank for making the technology work.

The group, which helped create the technical foundations of the internet, designed the language that allows most video to run smoothly online. It made it possible for someone with a Gmail account to communicate with a friend who uses Yahoo, and for shoppers to safely enter their credit card information on e-commerce sites.

Now the organization is tackling an even thornier issue: getting rid of computer engineering terms that evoke racist history, like “master” and “slave” and “whitelist” and “blacklist.”

But what started as an earnest proposal has stalled as members of the task force have debated the history of slavery and the prevalence of racism in tech. Some companies and tech organizations have forged ahead anyway, raising the possibility that important technical terms will have different meanings to different people — a troubling proposition for an engineering world that needs broad agreement so technologies work together.

Amid the Black Lives Matter protests, engineers at social media platforms, coding groups and international standards bodies re-examined their code and asked themselves: Was it racist? Some of their databases were called “masters” and were surrounded by “slaves,” which received information from the masters and answered queries on their behalf, preventing them from being overwhelmed. Others used “whitelists” and “blacklists” to filter content.
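That naming convention is baked into the commands database administrators actually type. In MySQL, a widely used database, the older replication syntax reads roughly like this (a minimal sketch; the host and user names are hypothetical placeholders):

    -- Legacy MySQL replication syntax: point a subordinate server
    -- at the primary one, then start copying its data.
    CHANGE MASTER TO
        MASTER_HOST = 'db-primary.example.com',
        MASTER_USER = 'repl';
    START SLAVE;        -- begin pulling updates from the "master"
    SHOW SLAVE STATUS;  -- check the health of the replication link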

“Exclusionary language is harmful,” read a statement about the draft from Ms. Knodel and Mr. ten Oever.

A month later, two alternative proposals emerged. One came from Keith Moore, an I.E.T.F. contributor who initially backed Ms. Knodel’s draft before creating his own. His draft cautioned that fighting over language could bottleneck the group’s work and argued for minimizing disruption.

The other came from Bron Gondwana, the chief executive of the email company Fastmail, who said he had been motivated by the acid debate on the mailing list.

“I could see that there was no way we would reach a happy consensus,” he said. “So I tried to thread the needle.”

Mr. Gondwana suggested that the group should follow the tech industry’s example and avoid terms that would distract from technical advances.

Last month, the task force said it would create a new group to consider the three drafts and decide how to proceed, and members involved in the discussion appeared to favor Mr. Gondwana’s approach. Lars Eggert, the organization’s chair and the technical director for networking at the company NetApp, said he hoped guidance on terminology would be issued by the end of the year.

MySQL, a type of database software, chose “source” and “replica” as replacements for “master” and “slave.” GitHub, the code repository owned by Microsoft, opted for “main” instead of “master.”
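The change is visible in the commands themselves. In recent MySQL releases the renamed replication syntax looks like the sketch below, and a Git user can rename a repository’s default branch with a single command (again, the host, user and branch setup are illustrative):

    -- Renamed MySQL replication syntax in recent releases:
    CHANGE REPLICATION SOURCE TO
        SOURCE_HOST = 'db-primary.example.com',
        SOURCE_USER = 'repl';
    START REPLICA;
    SHOW REPLICA STATUS;

    # Renaming an existing Git branch from "master" to "main,"
    # then making "main" the default for newly created repositories:
    git branch -m master main
    git config --global init.defaultBranch main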

Twitter also changed its terms after Regynald Augustin, an engineer at the company, came across the word “slave” in Twitter’s code and advocated change.

But while the industry abandons objectionable terms, there is no consensus about which new words to use. Without guidance from the Internet Engineering Task Force or another standards body, engineers decide on their own. The World Wide Web Consortium, which sets guidelines for the web, updated its style guide last summer to “strongly encourage” members to avoid terms like “master” and “slave,” and the IEEE, an organization that sets standards for chips and other computing hardware, is weighing a similar change.

Other tech workers are trying to solve the problem by forming a clearinghouse for ideas about changing language. That effort, the Inclusive Naming Initiative, aims to provide guidance to standards bodies and companies that want to change their terminology but don’t know where to begin. The group got together while working on an open-source software project, Kubernetes, which like the I.E.T.F. accepts contributions from volunteers. Like many others in tech, it began the debate over terminology last summer.

“We saw this blank space,” said Priyanka Sharma, the general manager of the Cloud Native Computing Foundation, a nonprofit that manages Kubernetes. Ms. Sharma worked with several other Kubernetes contributors, including Stephen Augustus and Celeste Horgan, to create a rubric that suggests alternative words and guides people through the process of making changes without causing systems to break. Several major tech companies, including IBM and Cisco, have signed on to follow the guidance.

Two of the drafts were later marked as expired in an email to task force participants, while a third remained up.

“We build consensus the hard way, so to speak, but in the end the consensus is usually stronger because people feel their opinions were reflected,” Mr. Eggert said. “I wish we could be faster, but on topics like this one that are controversial, it’s better to be slower.”
