ALONG THE EASTERN POLAND BORDER — The father had walked in circles in the rain-drenched Polish forest, cradling his sick daughter, delirious after three days with barely any food or water as temperatures dipped toward freezing. He was soaked, shivering and facing a terrible choice.
His daughter, 2, has cerebral palsy and epilepsy. He had wrapped her in a thin coat to protect her from the cold, and she needed urgent medical attention. The father, an Iraqi Kurd who gave his name as Karwan, had guided his family across the border from Belarus but was now in a forested area patrolled by Polish soldiers and border guards.
The choice for the father was pitiless: seeking medical help would mean a return to Belarus and the end of his family’s desperate journey to Europe.
“I can call for an ambulance for you, but border guards will come with it,” Piotr Bystrianin, a Polish activist who arrived to help, told the family, who said they wanted to request asylum in Poland. He had found them after hours of searching in the dark, alerted to their whereabouts by a locator pin sent by cellphone.
The family was caught in a geopolitical fight between Belarus and Poland that has escalated into a man-made humanitarian disaster for Europe. At least five people who crossed illegally into Poland have died in recent weeks, some of hypothermia and exhaustion, according to Polish officials, and three nearly drowned in a Polish swamp.
Aleksandr G. Lukashenko of Belarus is using migrants to punish the European Union for imposing sanctions on him for cracking down hard after a disputed election last year. The migrants — some fleeing poverty in Africa and elsewhere and others escaping war in countries like Afghanistan and Iraq — are allowed to enter Belarus, and then encouraged to cross over into Poland, a member of the European Union, with hopes of dispersing across the region.
Poland’s right-wing government, determined to keep out refugees and economic migrants, has flooded the eastern border area with security agents, while keeping out prying eyes by declaring it an emergency exclusion zone off limits to all but residents.
in an interview that it was “harmful” for the government to suggest that “every refugee is a terrorist or a sex offender,” adding: “We cannot accept that people die in front of our eyes.”
In a detailed report, Amnesty International last week documented how Polish border guards had held 32 Afghan asylum seekers in “horrendous conditions for weeks” and then pushed them back over the border into Belarus in violation of international law. In a separate report, the Helsinki Foundation for Human Rights said that “Poland is conducting mass illegal pushbacks at its border with Belarus.”
Some officials are pushing back against the government’s policy. Poland’s deputy commissioner for human rights denounced the treatment of asylum seekers as a “scandal” that shows “the darkest possible image of Poland.”
The European Union also imposed sanctions on Belarus for forcing down a passenger jet carrying a Belarusian dissident. Mr. Lukashenko’s government initially steered the migrants toward Lithuania, but directed them south to the Polish border after Lithuania erected a fence.
Both Lithuania and Poland have reinforced their borders, laying coils of razor wire and fortifying existing barriers, borrowing anti-migrant methods pioneered by Hungary at the height of Europe’s migrant crisis in 2015.
The European Union, loath to see a repeat of that crisis and another surge of support for populist, anti-immigration politicians, has mostly supported the efforts of Poland and Lithuania to keep out people trying to enter from Belarus.
report on the briefing: “He raped a cow and wanted to get into Poland? Details on migrants at the border.”
But the picture turned out to be a still from a zoophilia pornography movie available on the internet, and involved a horse, not a cow.
Poland has taken in hundreds of asylum seekers airlifted from Afghanistan since the Taliban took power in August, but hostility to migrants sneaking across the border has been a constant feature of Poland’s ruling Law and Justice party. In 2015, ahead of elections that brought it to power, its leader said they carried “all sorts of parasites and protozoa.”
Mr. Bystrianin, of the aid group Fundacja Ocalenie, waited patiently for the distraught family to make their decision.
Worried that his ailing daughter and others in the group might not survive, Karwan decided it would be best to seek medical help. Two ambulances arrived and, as he had been warned, border guards came, too.
Four family members were taken to the hospital, and six others to the border to be forced back into Belarus. Mr. Bystrianin and a fellow activist, Dorota Nowok, in the area to provide food and clothing, were fined for entering a restricted zone.
Monika Pronczuk contributed reporting from Brussels, and Anatol Magdziarz from Warsaw.
In 2010, Accenture scored an accounting contract with Facebook. By 2012, that had expanded to include a deal for moderating content, particularly outside the United States.
That year, Facebook sent employees to Manila and Warsaw to train Accenture workers to sort through posts, two former Facebook employees involved with the trip said. Accenture’s workers were taught to use a Facebook software system and the platform’s guidelines for leaving content up, taking it down or escalating it for review.
What started as a few dozen Accenture moderators grew rapidly.
By 2015, Accenture’s office in the San Francisco Bay Area had set up a team, code-named Honey Badger, just for Facebook’s needs, former employees said. Accenture went from providing about 300 workers in 2015 to about 3,000 in 2016. They are a mix of full-time employees and contractors, depending on the location and task.
The firm soon parlayed its work with Facebook into moderation contracts with YouTube, Twitter, Pinterest and others, executives said. (The digital content moderation industry is projected to reach $8.8 billion next year, according to Everest Group, roughly double the 2020 total.) Facebook also gave Accenture contracts in areas like checking for fake or duplicate user accounts and monitoring celebrity and brand accounts to ensure they were not flooded with abuse.
After federal authorities discovered in 2016 that Russian operatives had used Facebook to spread divisive posts to American voters for the presidential election, the company ramped up the number of moderators. It said it would hire more than 3,000 people — on top of the 4,500 it already had — to police the platform.
“If we’re going to build a safe community, we need to respond quickly,” Mr. Zuckerberg said in a 2017 post.
The next year, Facebook hired Arun Chandra, a former Hewlett Packard Enterprise executive, as vice president of scaled operations to help oversee the relationship with Accenture and others. His division is overseen by Ms. Sandberg.
More than 1,500 workers for the video game maker Activision Blizzard walked out from their jobs this week. Thousands signed a letter rebuking their employer. And even as the chief executive apologized, current and former employees said they would not stop raising a ruckus.
Shay Stein, who used to work at Activision, said it was “heartbreaking.” Lisa Welch, a former vice president, said she felt “profound disappointment.” Others took to Twitter or waved signs outside one of the company’s offices on Wednesday to share their anger.
Activision, known for its hugely popular Call of Duty, World of Warcraft and StarCraft gaming franchises, has been thrown into an uproar over workplace behavior issues. The upheaval stems from an explosive lawsuit that California’s Department of Fair Employment and Housing filed on July 20, accusing the $65 billion company of fostering a “frat boy workplace culture” in which men joked about rape and women were routinely harassed and paid less than their male colleagues.
Activision publicly criticized the agency’s two-year investigation and allegations as “irresponsible behavior from unaccountable state bureaucrats.” But its dismissive tone angered employees, who called out the company for trying to sweep away what they said were heinous problems that had been ignored for too long.
While the #MeToo movement has swept through industries like Hollywood, restaurants and the media, the male-dominated video game sector has long stood out for its openly toxic behavior and lack of change. In 2014, feminist critics of the industry faced death threats in what became known as Gamergate. Executives at the gaming companies Riot Games and Ubisoft have also been accused of misconduct.
Now the actions at Activision may signal a new phase, where a critical mass of the industry’s own workers are indicating they will no longer tolerate such behavior.
“This could mean some real accountability for companies that aren’t taking care of their workers and are creating inequitable work environments where women and gender minorities are kept at the margins and abused,” said Carly Kocurek, an associate professor at the Illinois Institute of Technology who studies gender in gaming.
She said California’s lawsuit and the fallout at Activision were a “big deal” for an industry that had traditionally shrugged off claims of sexism and harassment. Other gaming companies are most likely watching the situation, she added, and considering whether they need to address their own cultures.
The lawsuit spared little detail. Many of the misconduct accusations focused on a division called Blizzard, which the company merged with through a deal with Vivendi Games in 2008.
The lawsuit accused Activision of being “a breeding ground for harassment and discrimination against women.” Employees engaged in “cube crawls” in which they got drunk and acted inappropriately toward women at work cubicles, the lawsuit said.
In one case, a female employee died by suicide during a business trip because of the sexual relationship she had been having with her male supervisor, the lawsuit said. Before her death, male colleagues had shared an explicit photo of the woman, according to the lawsuit.
Employees reacted furiously. An open letter addressed to Activision’s leaders calling for them to take the accusations more seriously and “demonstrate compassion” for victims attracted more than 3,000 signatures from current and former employees by Wednesday. The company has nearly 10,000 employees.
“We no longer trust that our leaders will place employee safety above their own interests,” the letter said, calling Ms. Townsend’s remarks “unacceptable.”
Bobby Kotick, Activision’s chief executive, whose $155 million pay package makes him one of the country’s highest-paid executives, added that the company would beef up the team that investigated reported misconduct, fire managers who were found to have impeded investigations and remove in-game content that had been flagged as inappropriate.
Employees said it was not enough.
“We will not return to silence; we will not be placated by the same processes that led us to this point,” organizers of the walkout said in a public statement. They declined to be identified out of fear of reprisal.
LONDON — Russia is increasingly pressuring Google, Twitter and Facebook to fall in line with Kremlin internet crackdown orders or risk restrictions inside the country, as more governments around the world challenge the companies’ principles on online freedom.
Russia’s internet regulator, Roskomnadzor, recently ramped up its demands for the Silicon Valley companies to remove online content that it deems illegal or restore pro-Kremlin material that had been blocked. The warnings have come at least weekly since services from Facebook, Twitter and Google were used as tools for anti-Kremlin protests in January. If the companies do not comply, the regulator has said, they face fines or access to their products may be throttled.
The latest clashes flared up this week, when Roskomnadzor told Google on Monday to block thousands of unspecified pieces of illegal content or it would slow access to the company’s services. On Tuesday, a Russian court fined Google 6 million rubles, or about $81,000, for not taking down another piece of content.
The authorities have also ordered the companies to store all data on Russian users within the country by July 1 or face fines. In March, the authorities had made it harder for people to see and send posts on Twitter after the company did not take down content that the government considered illegal. Twitter has since removed roughly 6,000 posts to comply with the orders, according to Roskomnadzor. The regulator has threatened similar penalties against Facebook.
In India, the police visited Twitter’s offices in New Delhi in a show of force. No employees were present, but India’s governing party has become increasingly upset with the perception that Twitter has sided with its critics during the coronavirus pandemic.
In Myanmar, Poland, Turkey and elsewhere, leaders are also tightening internet controls. In Belarus, President Aleksandr G. Lukashenko this week signed a law banning livestreams from unauthorized protests.
“All of these policies will have the effect of creating a fractured internet, where people have different access to different content,” said Jillian York, an internet censorship expert with the Electronic Frontier Foundation in Berlin.
The struggle over online speech in Russia has important ramifications because the internet companies have been seen as shields from government censors. The latest actions are a major shift in the country, where the internet, unlike television, had largely remained open despite President Vladimir V. Putin’s tight grip on society.
Russia has also been building a “sovereign internet,” a legal and technical system to block access to certain websites and fence off parts of the Russian internet from the rest of the world.
In an interview this week with Kommersant, a leading Russian newspaper, Andrey Lipov, the head of Roskomnadzor, said slowing down access to internet services was a way to force the companies to comply with Russian laws and takedown orders. Mr. Lipov said blocking their services altogether was not the goal.
Google declined to discuss the situation in Russia and said it received government requests from around the world, which it discloses in its transparency reports.
Facebook also would not discuss Russia, but said it restricted content that violated local laws or its terms of service. “We always strive to preserve voice for the greatest number of people,” a spokeswoman said.
Twitter said in a statement that it took down content flagged by the Russian authorities that violated its policies or local laws.
The platforms were used to organize protests in support of the opposition leader Alexei A. Navalny after his arrest in January. The demonstrations were the biggest shows of dissent against Mr. Putin in years.
“This mobilization was happening online,” Ms. Zlobina said.
The Russian government has portrayed the tech industry as part of a foreign campaign to meddle in domestic affairs. The authorities have accused the companies of blocking pro-Kremlin online accounts while boosting the opposition, and said the platforms were also havens for child pornography and drug sales.
Twitter became the first major test of Russia’s censorship technology in March when access to its service was slowed down, according to researchers at the University of Michigan.
To resolve the conflict, a Twitter executive met at least twice with Russian officials, according to the company and Roskomnadzor. The government, which had threatened to ban Twitter entirely, said the company had eventually complied with 91 percent of its takedown requests.
Other internet companies have also been affected. Last month, TikTok, the popular social media platform owned by the Chinese company ByteDance, was fined 2.6 million rubles, or about $35,000, for not removing posts seen as encouraging minors to participate in illegal demonstrations. TikTok did not respond to a request for comment.
The fines are small, but larger penalties loom. The Russian government can increase fines to as much as 10 percent of a company’s revenue for repeat offenses, and, perhaps more important, authorities can disrupt their services.
Perhaps the biggest target has been Google. YouTube has been a key outlet for government critics such as Mr. Navalny to share information and organize. Unlike Facebook and Twitter, Google has employees in Russia. (The company would not say how many.)
In addition to this week’s warning, Russia has demanded that Google lift restrictions that limit the availability of some content from state media outlets like Sputnik and Russia Today outside Russia.
Russia’s antitrust regulator is also investigating Google over YouTube’s policies for blocking videos.
Google is trying to use the courts to fight some actions by the Russian government. Last month, it sued Roskomnadzor to fight an order to remove 12 YouTube videos related to opposition protests. In another case, the company appealed a ruling ordering YouTube to reinstate videos from Tsargrad, a nationalist online TV channel, which Google had taken down over what it said were violations of American sanctions.
Joanna Szymanska, a senior program officer for Article 19, an internet freedom group, said Google’s recent lawsuit to fight the YouTube takedown orders would influence what other countries did in the future, even if the company was likely to lose in court. Ms. Szymanska, who is based in Poland, called on the tech companies to be more transparent about what content they were being asked to delete, and what orders they were complying with.
“The Russian example will be used elsewhere if it works well,” she said.
Adam Satariano reported from London and Oleg Matsnev from Moscow. Anton Troianovski contributed reporting from Moscow.
On Chinese iPhones, Apple forbids apps about the Dalai Lama while hosting those from the Chinese paramilitary group accused of detaining and abusing Uyghurs, an ethnic minority group in China.
The company has also helped China spread its view of the world. Chinese iPhones censor the emoji of the Taiwanese flag, and their maps suggest Taiwan is part of China. For a time, simply typing the word “Taiwan” could make an iPhone crash, according to Patrick Wardle, a former hacker at the National Security Agency.
Sometimes, Mr. Shoemaker said, he was awakened in the middle of the night with demands from the Chinese government to remove an app. If the app appeared to mention the banned topics, he would remove it, but he would send more complicated cases to senior executives, including Mr. Cue and Mr. Schiller.
Apple resisted an order from the Chinese government in 2012 to remove The Times’s apps. But five years later, it ultimately did. Mr. Cook approved the decision, according to two people with knowledge of the matter who spoke on the condition of anonymity.
Apple recently began disclosing how often governments demand that it remove apps. In the two years ending June 2020, the most recent data available, Apple said it approved 91 percent of the Chinese government’s app-takedown requests, removing 1,217 apps.
In every other country combined over that period, Apple approved 40 percent of requests, removing 253 apps. Apple said that most of the apps it removed for the Chinese government were related to gambling or pornography or were operating without a government license, such as loan services and livestreaming apps.
Yet a Times analysis of Chinese app data suggests those disclosures represent a fraction of the apps that Apple has blocked in China. Since 2017, roughly 55,000 active apps have disappeared from Apple’s App Store in China, according to a Times analysis of data compiled by Sensor Tower, an app data firm. Most of those apps have remained available in other countries.
One of his screens is to avoid any investment prohibited by Islam, which includes banks, insurance, tobacco, alcohol and pornography. But Mr. Salam said he also looked at the companies themselves. So he can invest in Islamic banks, for instance, or in beverage makers like Monster, which may have been excluded under a broad screen.
He said he also looked for companies that did not have excessive debt, because debt is acceptable only in cases of necessity. He steers away from companies that are cash rich, because there is a prohibition on trading in them when more than 45 percent of their balance sheet is in cash. A company like Apple holds lots of cash, but not enough to violate the prohibition; still, it is a number the fund monitors.
He also screens companies that earn a small portion of their revenue from forbidden sources, like an airline that sells alcohol. In that case, the fund will look to see if the company gets less than 5 percent of its revenue from something that is prohibited.
“In an ideal world, we’d be buying something that is 100 percent compliant, but that’s just not possible,” Mr. Salam said.
To add diversification, Saturna has recently added the Islamic equivalent of a fixed-income fund, which invests in the market for sukuk, which are bondlike instruments. Instead of earning interest on the bonds, investors receive a lease payment from the sukuk. For example, if an airline like Emirates needs a new plane, it can borrow the money from the sukuk market and the obligation is structured as a lease of that plane to the sukuk.
Saturna’s oldest fund, the Amana Income Fund, has a five-year return of 13 percent, compared with more than 17 percent for the S&P 500. But the Amana Growth Fund has a five-year return of 21 percent. The sukuk fund has just reached its five-year mark, returning just over 3 percent.
“The difference between Islamic and non-Islamic investors is not in what they’re looking for but in what products are available to them,” Mr. Salam said.
PARIS — Marine Le Pen, the French far-right leader, was acquitted on Tuesday in a criminal case involving graphic photographs of acts of violence by the Islamic State that she posted on Twitter in 2015 after comparisons were drawn between the group and her party.
Ms. Le Pen, the head of the National Rally party, was acquitted by a court in Nanterre, a western suburb of Paris. The charge against her — the dissemination of violent messages — carried a sentence of up to three years in prison and a fine of 75,000 euros, about $90,000, but prosecutors had sought only a fine of 5,000 euros.
Rodolphe Bosselut, Ms. Le Pen’s lawyer in the case, said, “The court judged that by publishing the photos, she was exercising her freedom of expression.” He added that the ruling underlined that the posts clearly were not Islamic State propaganda and had an “informative value” instead.
Prosecutors opened their investigation in December 2015, shortly after Ms. Le Pen — furious over a televised interview in which a French journalist compared her party to the Islamic State — posted three pictures on Twitter that showed killings carried out by the group. One showed the body of James Foley, an American journalist who was kidnapped in Syria in 2012 and later beheaded by the group.
Ms. Le Pen deleted that post after criticism from Mr. Foley’s family, but the two other pictures, which showed a man in an orange jumpsuit being run over by a tank and a prisoner being burned alive in a cage, remained online.
“Daesh is THAT!” she wrote, using an Arabic acronym for the Islamic State, which is also known as ISIS.
The pictures — posted just weeks after a string of deadly terrorist attacks in and around Paris — caused outrage in France.
Ms. Le Pen lost to President Emmanuel Macron in the 2017 election in France, and her party has a limited presence in Parliament. But she is still seen as Mr. Macron’s main opponent on the national political scene, and the verdict will most likely help her prospects in presidential elections next year, with early polls suggesting that she will again face Mr. Macron in a runoff.
The killing of a police officer by a radicalized Tunisian man last month in a town southwest of Paris has fueled a resurgent debate about terrorism, security and immigration, all themes that have fed the rise of Ms. Le Pen’s far-right party, despite Mr. Macron’s attempts to court voters on those issues.
appeared increasingly fragile, and Ms. Le Pen has spent years trying to soften her image and pull her party from the extremist fringe into the mainstream.
Unlike other French politicians who have recently been convicted on serious charges like corruption or embezzlement, Ms. Le Pen was prosecuted under a more obscure article in the French penal code that prohibits disseminating messages that are “violent” or that could “seriously harm human dignity” and that could be seen by a minor.
While freedom of expression enjoys robust support in France, the country’s speech laws, which prohibit calls to violence and hate speech, are often considered more restrictive than those in the United States.
Ms. Le Pen has called the investigation a political witch hunt aimed at silencing her, arguing that she was being wrongly prosecuted for exercising her free speech, on charges normally meant to protect minors from violent propaganda or pornography.
“The crime is causing harm to human dignity, not its photographic reproduction,” she said during the trial, held in February.
Gilbert Collard, a lawyer and National Rally representative in the European Parliament who had also posted pictures of Islamic State violence on the same day as Ms. Le Pen did, was acquitted of the charges against him on Tuesday, too.
The court’s verdict on Ms. Le Pen comes amid an increasingly heated political climate in France, ahead of the presidential elections scheduled for next year but also regional elections this June.
BERLIN — German prosecutors have broken up an online platform for sharing images and videos showing the sexual abuse of children, mostly boys, that had an international following of more than 400,000 members, they said on Monday.
The site, named “Boystown,” had been around since at least June 2019 and included forums where members from around the globe exchanged images and videos showing children, including toddlers, being sexually abused. In addition to the forums, the site had chat rooms where members could connect with one another in various languages.
German federal prosecutors described it as “one of the largest child pornography sites operating on the dark net” in a statement they released on Monday announcing the arrest in mid-April of three German men who managed the site and a fourth who had posted thousands of images to it.
“This investigative success has a clear message: Those who prey on the weakest are not safe anywhere,” Germany’s interior minister, Horst Seehofer, said on Monday. “We are holding perpetrators accountable and doing what is humanly possible to protect children from such repugnant crimes.”
Though investigators have broken up several sophisticated networks, tens of thousands of new cases of abuse are reported to the authorities each year. Last week, Parliament passed a law toughening sentences for those convicted of the sexual exploitation or abuse of children.
The accused administrators of the “Boystown” site, aged 40 and 49, were arrested after raids on their homes in Paderborn and Munich, the prosecutors said. A third man accused of being an administrator, 58, was living in the Concepción region of Paraguay, where he has been detained awaiting extradition.
The former professional soccer player Christoph Metzelder was handed a 10-month suspended sentence after he was convicted of 26 counts of possessing and sharing photos of girls younger than 10 being severely sexually abused. Mr. Metzelder confessed to some of the charges and apologized to the victims, which the judge said she took into consideration in lessening his punishment.
But many Germans, including some of Mr. Metzelder’s former teammates, protested that the punishment was too lenient.
“I don’t see how that is supposed to act as a deterrent,” Lukas Podolski, who was a member of the 2014 team that won the soccer World Cup for Germany, told the Bild newspaper. “Whoever commits sins against children must be punished with the full weight of the law.”
SEOUL — A South Korean man was sentenced to 34 years in prison on Thursday as part of the country’s crackdown on an infamous network of online chat rooms that lured young women, including minors, with promises of high-paying jobs before forcing them into pornography.
The man, Moon Hyeong-wook, opened one of the first such sites in 2015, prosecutors said. Mr. Moon, 25, operated a clandestine members-only chat room under the nickname “GodGod” on the Telegram messenger app, offering more than 3,700 clips of illicit pornography, they said.
Mr. Moon, an architecture major who was expelled from his college after his arrest last year, was one of the most notorious of the hundreds of people the police have arrested in the course of their investigation. Another chat room operator, a man named Cho Joo-bin, was sentenced to 40 years in prison last November.
“The accused inflicted irreparable damage on his victims through his anti-society crime that undermined human dignity,” the presiding judge, Cho Soon-pyo, said of Mr. Moon in his ruling on Thursday. The trial took place in a district court in the city of Andong in central South Korea.
Mr. Moon was indicted in June on charges of forcing 21 young women, including minors, into making sexually explicit videos between 2017 and early last year.
He approached young women looking for high-paying jobs through social media platforms, then lured them into making sexually explicit videos, promising big payouts, prosecutors said. He also hacked into the online accounts of women who uploaded sexually explicit content, pretending to be a police officer investigating pornography.
Once he got hold of the images and personal data, he used them to blackmail the women, threatening to send the clips to their parents unless the victims supplied more footage, prosecutors said.
Prosecutors demanded a life sentence for Mr. Moon.
Last December, the police said that in their investigation of the online chat rooms that served as avenues for sexual exploitation and pornographic distribution, they had looked into 3,500 suspects, most of them men in their 20s or teenagers. They arrested 245 of them.
The police also identified 1,100 victims.
The scandal, known in South Korea as “the Nth Room Case,” caused outrage over the cruel exploitation of the young women. Women’s rights groups picketed courthouses where chat room operators were on trial, accusing judges of condoning sex crimes by handing down what they considered light punishments.
On Thursday, outside the Andong courthouse, advocates held a rally demanding the maximum punishment for Mr. Moon.
In recent years, the South Korean police began cracking down on sexually explicit file-sharing websites as part of international efforts to fight child pornography. As smartphones proliferated, they soon realized that much of the illegal trade was migrating to online chat rooms on messaging services like Telegram.
The police said they had trouble tracking down customers of the online chat rooms because they often used cryptocurrency payments to avoid being caught.
Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.
Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.
The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.
But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.
especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”
The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.
She teamed with Margaret Mitchell, who was building a group inside Google dedicated to "ethical A.I." Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a "sea of dudes" problem: she estimated that over the previous five years she had worked with hundreds of men and about 10 women.
Dr. Gebru said she had been fired after criticizing Google's approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google's search engine and other services.
“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”
As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.
Their departures became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. Others saw an old problem — part technological and part sociological — finally breaking into the open.
In 2015, Jacky Alciné, a software engineer, discovered that Google Photos had labeled pictures of him and a friend, both of whom are Black, as "gorillas." Like other A.I. technologies, including talking digital assistants and conversational "chatbots," Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data.
Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)
As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”
The study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft's chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people's rights, and he made a public call for government regulation.
Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places using facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.
Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.
Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces. According to the study, the service mistook women for men 19 percent of the time and misidentified darker-skinned women as men 31 percent of the time. For lighter-skinned males, the error rate was zero.
Amazon pushed back, disputing both the study and a New York Times article that described it.
In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.
Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.
The End at Google
Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.
Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.
After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.
Cade Metz is a technology correspondent at The Times and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World,” from which this article is adapted.