The change affects more than a third of Facebook’s daily users who had facial recognition turned on for their accounts, according to the company. That meant they received alerts when new photos or videos of them were uploaded to the social network. The feature had also been used to flag accounts that might be impersonating someone else and was incorporated into software that described photos to blind users.
“Making this change required us to weigh the instances where facial recognition can be helpful against the growing concerns about the use of this technology as a whole,” said Jason Grosse, a Meta spokesman.
Although Facebook plans to delete more than one billion facial recognition templates, which are digital scans of facial features, by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace. The company has also not ruled out incorporating facial recognition technology into future products, Mr. Grosse said.
Privacy advocates nonetheless applauded the decision.
“Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology,” said Adam Schwartz, a senior lawyer with the Electronic Frontier Foundation, a civil liberties organization. “Corporate use of face surveillance is very dangerous to people’s privacy.”
Facebook is not the first large technology company to pull back on facial recognition software. Amazon, Microsoft and IBM have paused or ceased selling their facial recognition products to law enforcement in recent years, while expressing concerns about privacy and algorithmic bias and calling for clearer regulation.
Facebook’s facial recognition software has a long and expensive history. When the software was rolled out in Europe in 2011, data protection authorities there said the move was illegal and that the company needed consent to analyze photos of a person and extract the unique pattern of an individual face. In 2015, the technology also led to the filing of a class-action suit in Illinois.
Over the last decade, the Electronic Privacy Information Center, a Washington-based privacy advocacy group, filed two complaints about Facebook’s use of facial recognition with the F.T.C. When the F.T.C. fined Facebook in 2019, it named the site’s confusing privacy settings around facial recognition as one of the reasons for the penalty.
GENEVA — President Biden had three big tasks to accomplish on his first foreign trip since taking office: Convince the allies that America was back, and for good; gather them in common cause against the rising threat of China; and establish some red lines for President Vladimir V. Putin of Russia, whom he called his “worthy adversary.”
He largely accomplished the first, though many European leaders still wonder whether his presidency may yet be just an intermezzo, sandwiched between the Trump era and the election of another America First leader uninterested in the 72-year-old Atlantic alliance.
He made inroads on the second, at least in parts of Europe, where there has been enormous reluctance to think of China first as a threat — economically, technologically and militarily — and only second as an economic partner.
Mr. Biden expressed cautious optimism about finding ways to reach a polite accommodation with Mr. Putin. But it is far from clear that any of the modest initiatives the two men described on Wednesday, after a stiff, three-hour summit meeting on the edge of Lake Geneva, will fundamentally change a bad dynamic.
Mr. Biden’s blunt framing has unsettled some allies, as when he refers to Beijing’s actions against the Uyghur population and other predominantly Muslim ethnic minorities as genocide.
So Mr. Biden toned down his autocracy vs. democracy talk for this trip. And that worked.
Yet while “Biden has gotten words from the Europeans, he hasn’t gotten deeds,” said James M. Lindsay, director of studies at the Council on Foreign Relations. “Settling some trade issues is a very good start. But it’s not how you start, but how you finish, how you translate the sentiments in the communiqués into common policies, and that will be very difficult.”
Mr. Biden carefully choreographed the trip so that he demonstrated the repairs being made to the alliance before going on to meet Mr. Putin. Mr. Biden made clear he wanted to present a unified front to the Russian leader, to demonstrate that in the post-Trump era, the United States and the NATO allies were one.
That allowed Mr. Biden to take a softer tone when he got to Geneva for the summit meeting, where he sought to portray Mr. Putin as an isolated leader who has to worry about his country’s future. When Mr. Biden said in response to a reporter’s question that “I don’t think he’s looking for a Cold War with the United States,” it was a signal that Mr. Biden believes he has leverage that the rest of the world has underappreciated.
Mr. Putin’s economy is “struggling,” he said, and he faces a long border with China at a moment when Beijing is “hellbent” on domination.
“He still, I believe, is concerned about being ‘encircled,’ ” Mr. Biden said. “He still is concerned that we, in fact, are looking to take him down.” But, he added, he didn’t think those security fears “are the driving force as to the kind of relationship he’s looking for with the United States.”
As the first test of Mr. Putin’s willingness to deal with him seriously, he set a review of how to improve “strategic stability,” which he described as controlling the introduction of “new and dangerous and sophisticated weapons that are coming on the scene now that reduce the times of response, that raise the prospects of accidental war.”
It is territory that has been neglected, and if Mr. Biden is successful he may save hundreds of billions of dollars that would otherwise be spent on hypersonic and space weapons, as well as the development of new nuclear delivery systems.
But none of that is likely to deter Mr. Putin in the world of cyberweapons, which are dirt cheap and give him an instrument of power each and every day. Mr. Biden warned during his news conference that “we have significant cyber capability,” and said that while Mr. Putin “doesn’t know exactly what it is,” if the Russians “violate these basic norms, we will respond with cyber.”
The U.S. has had those capabilities for years but has hesitated to use them, for fear that a cyberconflict with Russia might escalate into something much bigger.
But Mr. Biden thinks Mr. Putin is too invested in self-preservation to let it come to that. In the end, he said, just before boarding Air Force One for the flight home, “You have to figure out what the other guy’s self-interest is. Their self-interest. I don’t trust anybody.”
David E. Sanger reported from Geneva and Steven Erlanger from Brussels.
Good morning and happy Sunday. Here’s what you need to know in business and tech news for the week ahead. — Charlotte Cowles
Cryptocurrency had a rough week. Digital currencies saw several ugly crashes, with Bitcoin ending Friday nearly 30 percent below its price a week before. The plunge followed an announcement from China that effectively banned its financial institutions from providing services related to cryptocurrency transactions. (Elon Musk’s sudden about-face on Bitcoin probably didn’t help, either.) The volatility shook some investors’ confidence in crypto, which has ridden a seemingly unstoppable wave of popularity — and gained traction with mainstream investors — over the past year.
Texas, Oklahoma and Indiana joined more than a dozen other states that are ending federal pandemic unemployment benefits early, citing the need to incentivize people to get back to work. The decision will get rid of the $300-a-week supplement that unemployment recipients have been getting since March and were scheduled to receive through September. It will also end all benefits for freelancers, part-timers and those who have been out of work for more than six months. Some lawmakers believe that cutting off benefits will encourage more people to apply for jobs, but that’s not always the case — a persistent lack of child care has also prevented many parents from returning to work.
Working long hours can cause premature death, according to a new study by the World Health Organization. Long hours — also known as overwork — are on the rise and are associated with an estimated 35 percent higher risk of stroke and 17 percent higher risk of heart disease compared with working 35 to 40 hours per week, researchers said.
The Biden administration wants to give the Internal Revenue Service more money to chase down wealthy individuals and companies who cheat on their taxes. As part of the same effort to close tax loopholes, the U.S. Treasury Department is trying to persuade other countries to back a 15 percent global minimum tax rate on big companies. The policy is meant to deter corporations from sheltering their operations in tax havens such as Bermuda and the British Virgin Islands. But a number of governments have been hesitant to sign on for fear that they’ll scare off businesses.
Eyeing the Competition
Congress wants to bolster the United States’ ability to compete with China and is willing to throw money at the problem. The Senate is working on a bill that would invest $120 billion in the nation’s development of cutting-edge technology and manufacturing. Known as the Endless Frontier Act, the legislation would fund new research on a scale that its proponents say has not been seen since the Cold War. In related news, the European Union blocked an investment deal with China on Thursday, citing concerns about the country’s abysmal human rights record.
C.E.O.s in the Hot Seat
Executives from the largest U.S. banks, including JPMorgan, Bank of America and Goldman Sachs, will testify before lawmakers this week about their actions (or lack thereof) to help struggling Americans and small businesses during the pandemic. Democrats on the Senate Banking and House Financial Services committees organized the hearings to scrutinize the banks’ role in lending money to alleviate the financial pressures of the past 15 months. The testimony could affect how lawmakers seek to regulate Wall Street in the coming years.
Elsewhere, shares of one newly public company soared 30 percent in its initial public offering on Wednesday. Amazon indefinitely extended its ban on police usage of its facial recognition software, which has faced ethical criticism. And New York City lifted nearly all of its pandemic restrictions, allowing businesses to welcome customers back at full capacity.
Amazon said Tuesday that it would indefinitely prohibit police departments from using its facial recognition tool, extending a moratorium the company announced last year during nationwide protests over racism and biased policing.
The tool has faced scrutiny from lawmakers and some employees inside Amazon who said they were worried that it led to unfair treatment of African-Americans. Amazon has repeatedly defended the accuracy of its algorithms.
When Amazon announced the pause in June, it did not cite a specific reason for the change. The company said it hoped a year was enough time for Congress to create legislation regulating the ethical use of facial recognition technology. Congress has not banned the technology, or issued any significant regulations on it, but some cities have.
The primary suppliers of facial recognition tools to police departments have not been tech giants like Amazon, but smaller outfits that are not household names.
The European Union unveiled strict regulations on Wednesday to govern the use of artificial intelligence, a first-of-its-kind policy that outlines how companies and governments can use a technology seen as one of the most significant, but ethically fraught, scientific breakthroughs in recent memory.
The draft rules would set limits around the use of artificial intelligence in a range of activities, from self-driving cars to hiring decisions, bank lending, school enrollment selections and the scoring of exams. It would also cover the use of artificial intelligence by law enforcement and court systems — areas considered “high risk” because they could threaten people’s safety or fundamental rights.
Some uses would be banned altogether, including live facial recognition in public spaces, though there would be several exemptions for national security and other purposes.
The 108-page policy is an attempt to regulate an emerging technology before it becomes mainstream. The rules have far-reaching implications for major technology companies including Amazon, Google, Facebook and Microsoft that have poured resources into developing artificial intelligence, but also scores of other companies that use the software to develop medicine, underwrite insurance policies, and judge creditworthiness. Governments have used versions of the technology in criminal justice and allocating public services like income support.
Under the rules, providers of manipulated media known as “deepfakes” would have to make clear to users that what they are seeing is computer generated.
For the past decade, the European Union has been the world’s most aggressive watchdog of the technology industry, with its policies often used as blueprints by other nations. The bloc has already enacted the world’s most far-reaching data-privacy regulations, and is debating additional antitrust and content-moderation laws.
The rules are part of a broader effort by governments around the world, each with their own political and policy motivations, to crimp the industry’s power.
In the United States, President Biden has filled his administration with industry critics. Britain is creating a tech regulator to police the industry. India is tightening oversight of social media. China has taken aim at domestic tech giants like Alibaba and Tencent.
The outcomes in the coming years could reshape how the global internet works and how new technologies are used, with people having access to different content, digital services or online freedoms based on where they are located.
Artificial intelligence — where machines are trained to perform jobs and make decisions on their own by studying huge volumes of data — is seen by technologists, business leaders and government officials as one of the world’s most transformative technologies, promising major gains in productivity.
But as the systems become more sophisticated, it can be harder to understand why the software is making a decision, a problem that could get worse as computers become more powerful. Researchers have raised ethical questions about its use, suggesting that it could perpetuate existing biases in society, invade privacy, or result in more jobs being automated.
Release of the draft law by the European Commission, the bloc’s executive body, drew a mixed reaction. Many industry groups expressed relief the regulations were not more stringent, while civil society groups said they should have gone further.
“There has been a lot of discussion over the last few years about what it would mean to regulate A.I., and the fallback option to date has been to do nothing and wait and see what happens,” said Carly Kind, director of the Ada Lovelace Institute in London, which studies the ethical use of artificial intelligence. “This is the first time any country or regional bloc has tried.”
At Google, a researcher who studied the ethical uses of the software said she was fired for criticizing the company’s lack of diversity and the biases built into modern artificial intelligence software. Debates have raged inside Google and other companies about selling the cutting-edge software to governments for military use.
In the United States, the risks of artificial intelligence are also being considered by government authorities.
This week, the Federal Trade Commission warned against the sale of artificial intelligence systems that use racially biased algorithms, or ones that could “deny people employment, housing, credit, insurance, or other benefits.”
Elsewhere, in Massachusetts, and in cities like Oakland, Calif.; Portland, Ore.; and San Francisco, governments have taken steps to restrict police use of facial recognition.
China’s pathway to power is building new networks rather than disrupting old ones. Economists debate when the Chinese will have the world’s largest gross domestic product — perhaps toward the end of this decade — and whether they can meet their other two big national goals: building the world’s most powerful military and dominating the race for key technologies by 2049, the 100th anniversary of Mao’s revolution.
Their power arises not from their relatively small nuclear arsenal or their expanding stockpile of conventional weapons. Instead, it arises from their expanding economic might and how they use their government-subsidized technology to wire nations, be it in Latin America or the Middle East, Africa or Eastern Europe, with 5G wireless networks intended to tie them ever closer to Beijing. It comes from the undersea cables they are spooling around the world so that those networks run on Chinese-owned circuits.
Ultimately, it will come from how they use those networks to make other nations dependent on Chinese technology. Once that happens, the Chinese could export some of their authoritarianism by, for example, selling other nations facial recognition software that has enabled them to clamp down on dissent at home.
Which is why Jake Sullivan, Mr. Biden’s national security adviser, who was with Secretary of State Antony J. Blinken for the meeting with their Chinese counterparts in Anchorage, warned in a series of writings in recent years that it could be a mistake to assume that China plans to prevail by directly taking on the United States military in the Pacific.
“The central premises of this alternative approach would be that economic and technological power is fundamentally more important than traditional military power in establishing global leadership,” he wrote, “and that a physical sphere of influence in East Asia is not a necessary precondition for sustaining such leadership.”
The Trump administration came to similar conclusions, though it did not publish a real strategy for dealing with China until weeks before it left office. Its attempts to strangle Huawei, China’s national champion in telecommunications, and to wrest control of social media apps like TikTok ended up as a disorganized effort that often involved threatening, and angering, allies who were thinking of buying Chinese technology.
Part of the goal of the Alaska meeting was to convince the Chinese that the Biden administration is determined to compete with Beijing across the board, offering competitive technology like semiconductor manufacturing and artificial intelligence, even if that means spending billions on government-led research and development projects, and new industrial partnerships with Europe, India, Japan and Australia.
Hundreds of people gathered for the first lecture at what had become the world’s most important conference on artificial intelligence — row after row of faces. Some were East Asian, a few were Indian, and a few were women. But the vast majority were white men. More than 5,500 people attended the meeting, five years ago in Barcelona, Spain.
Timnit Gebru, then a graduate student at Stanford University, remembers counting only six Black people other than herself, all of whom she knew, all of whom were men.
The homogeneous crowd crystallized for her a glaring issue. The big thinkers of tech say A.I. is the future. It will underpin everything from search engines and email to the software that drives our cars, directs the policing of our streets and helps create our vaccines.
But it is being built in a way that replicates the biases of the almost entirely male, predominantly white work force making it.
The field, she warned, was in a diversity crisis, “especially with the current hype and demand for people in the field,” she wrote. “The people creating the technology are a big part of the system. If many are actively excluded from its creation, this technology will benefit a few while harming a great many.”
The A.I. community buzzed about the mini-manifesto. Soon after, Dr. Gebru helped create a new organization, Black in A.I. After finishing her Ph.D., she was hired by Google.
She teamed with Margaret Mitchell, who was building a group inside Google dedicated to “ethical A.I.” Dr. Mitchell had previously worked in the research lab at Microsoft. She had grabbed attention when she told Bloomberg News in 2016 that A.I. suffered from a “sea of dudes” problem. She estimated that she had worked with hundreds of men over the previous five years and about 10 women.
In December, Dr. Gebru said she had been fired after criticizing Google’s approach to minority hiring and, with a research paper, highlighting the harmful biases in the A.I. systems that underpin Google’s search engine and other services.
“Your life starts getting worse when you start advocating for underrepresented people,” Dr. Gebru said in an email before her firing. “You start making the other leaders upset.”
As Dr. Mitchell defended Dr. Gebru, the company removed her, too. She had searched through her own Google email account for material that would support their position and forwarded emails to another account, which somehow got her into trouble. Google declined to comment for this article.
Their departure became a point of contention for A.I. researchers and other tech workers. Some saw a giant company no longer willing to listen, too eager to get technology out the door without considering its implications. I saw an old problem — part technological and part sociological — finally breaking into the open.
Like many other A.I. technologies of the moment, including talking digital assistants and conversational “chatbots,” Google Photos relied on an A.I. system that learned its skills by analyzing enormous amounts of digital data. In 2015, the service mistakenly labeled photos of Jacky Alciné, a Black software engineer, and a friend as gorillas.
Called a “neural network,” this mathematical system could learn tasks that engineers could never code into a machine on their own. By analyzing thousands of photos of gorillas, it could learn to recognize a gorilla. It was also capable of egregious mistakes. The onus was on engineers to choose the right data when training these mathematical systems. (In this case, the easiest fix was to eliminate “gorilla” as a photo category.)
As a software engineer, Mr. Alciné understood the problem. He compared it to making lasagna. “If you mess up the lasagna ingredients early, the whole thing is ruined,” he said. “It is the same thing with A.I. You have to be very intentional about what you put into it. Otherwise, it is very difficult to undo.”
Conducted by Joy Buolamwini, a researcher at the M.I.T. Media Lab, the study drove a backlash against facial recognition technology and, particularly, its use in law enforcement. Microsoft’s chief legal officer said the company had turned down sales to law enforcement when there was concern the technology could unreasonably infringe on people’s rights, and he made a public call for government regulation.
Twelve months later, Microsoft backed a bill in Washington State that would require notices to be posted in public places using facial recognition and ensure that government agencies obtained a court order when looking for specific people. The bill passed, and it takes effect later this year. The company, which did not respond to a request for comment for this article, did not back other legislation that would have provided stronger protections.
Ms. Buolamwini began to collaborate with Ms. Raji, who moved to M.I.T. They started testing facial recognition technology from a third American tech giant: Amazon. The company had started to market its technology to police departments and government agencies under the name Amazon Rekognition.
Ms. Buolamwini and Ms. Raji published a study showing that an Amazon face service also had trouble identifying the sex of female and darker-skinned faces. According to the study, the service mistook women for men 19 percent of the time and mistook darker-skinned women for men 31 percent of the time. For lighter-skinned males, the error rate was zero.
Amazon disputed the findings, criticizing the study and a New York Times article that described it.
In an open letter, Dr. Mitchell and Dr. Gebru rejected Amazon’s argument and called on it to stop selling to law enforcement. The letter was signed by 25 artificial intelligence researchers from Google, Microsoft and academia.
Last June, Amazon backed down. It announced that it would not let the police use its technology for at least a year, saying it wanted to give Congress time to create rules for the ethical use of the technology. Congress has yet to take up the issue. Amazon declined to comment for this article.
The End at Google
Dr. Gebru and Dr. Mitchell had less success fighting for change inside their own company. Corporate gatekeepers at Google were heading them off with a new review system that had lawyers and even communications staff vetting research papers.
Dr. Gebru’s dismissal in December stemmed, she said, from the company’s treatment of a research paper she wrote alongside six other researchers, including Dr. Mitchell and three others at Google. The paper discussed ways that a new type of language technology, including a system built by Google that underpins its search engine, can show bias against women and people of color.
After she submitted the paper to an academic conference, Dr. Gebru said, a Google manager demanded that she either retract the paper or remove the names of Google employees. She said she would resign if the company could not tell her why it wanted her to retract the paper and answer other concerns.
Cade Metz is a technology correspondent at The Times and the author of “Genius Makers: The Mavericks Who Brought A.I. to Google, Facebook, and the World,” from which this article is adapted.