Less than two years after Google dismissed two researchers who criticized the biases built into artificial intelligence systems, the company has fired a researcher who questioned a paper it published on the abilities of a specialized type of artificial intelligence used in making computer chips.
The researcher, Satrajit Chatterjee, led a team of scientists in challenging the celebrated research paper, which appeared last year in the scientific journal Nature and said computers were able to design certain parts of a computer chip faster and better than human beings.
Dr. Chatterjee, 43, was fired in March, shortly after Google told his team that it would not publish a paper that rebutted some of the claims made in Nature, said four people familiar with the situation who were not permitted to speak openly on the matter. Google confirmed in a written statement that Dr. Chatterjee had been “terminated with cause.”
Google declined to elaborate about Dr. Chatterjee’s dismissal, but it offered a full-throated defense of the research he criticized and of its unwillingness to publish his assessment.
a similar paper a year earlier. Around that time, Google asked Dr. Chatterjee, who has a doctorate in computer science from the University of California, Berkeley, and had worked as a research scientist at Intel, to see if the approach could be sold or licensed to a chip design company, the people familiar with the matter said.
A.I. principles, including upholding high standards of scientific excellence. Soon after, Dr. Chatterjee was informed that he was no longer an employee, the people said.
Ms. Goldie said that Dr. Chatterjee had asked to manage their project in 2019 and that they had declined. When he later criticized it, she said, he could not substantiate his complaints and ignored the evidence they presented in response.
“Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now,” Ms. Goldie said in a written statement.
She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google’s computer data centers.
Laurie M. Burgess, Dr. Chatterjee’s lawyer, said it was disappointing that “certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency.” Ms. Burgess also questioned the leadership of Dr. Dean, who was one of 20 co-authors of the Nature paper.
“Jeff Dean’s actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products,” Ms. Burgess said.
Dr. Dean did not respond to a request for comment.
After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.
The chip maker Nvidia says it has used methods for chip design that are similar to Google’s, but some experts are unsure what Google’s research means for the larger tech industry.
“If this is really working well, it would be a really great thing,” said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the A.I. technology described in Google’s paper. “But it is not clear if it is working.”
Three years after an employee revolt forced Google to abandon work on a Pentagon program that used artificial intelligence, the company is aggressively pursuing a major contract to provide its technology to the military.
The company’s plan to land the potentially lucrative contract, known as the Joint Warfighting Cloud Capability, could raise a furor among its outspoken work force and test the resolve of management to resist employee demands.
In 2018, thousands of Google employees signed a letter protesting the company’s involvement in Project Maven, a military program that uses artificial intelligence to interpret video images and could be used to refine the targeting of drone strikes. Google management relented and agreed not to renew the contract once it expired.
The outcry led Google to create guidelines for the ethical use of artificial intelligence, which prohibit the use of its technology for weapons or surveillance, and hastened a shake-up of its cloud computing business. Now, as Google positions cloud computing as a key part of its future, the bid for the new Pentagon contract could test the boundaries of those A.I. principles, which have set it apart from other tech giants that routinely seek military and intelligence work.
contract with Microsoft that was canceled this summer amid a lengthy legal battle with Amazon. Google did not compete against Microsoft for that contract after the uproar over Project Maven.
The Pentagon’s restart of its cloud computing project has given Google a chance to jump back into the bidding, and the company has raced to prepare a proposal to present to Defense officials, according to four people familiar with the matter who were not authorized to speak publicly. In September, Google’s cloud unit made it a priority, declaring an emergency “Code Yellow,” an internal designation of importance that allowed the company to pull engineers off other assignments and focus them on the military project, two of those people said.
On Tuesday, the Google cloud unit’s chief executive, Thomas Kurian, met with Charles Q. Brown, Jr., the chief of staff of the Air Force, and other top Pentagon officials to make the case for his company, two people said.
Google, in a written statement, said it is “firmly committed to serving our public sector customers” including the Defense Department, and that it “will evaluate any future bid opportunities accordingly.”
The contract replaces the now-scrapped Joint Enterprise Defense Infrastructure, or JEDI, the Pentagon cloud computing contract that was estimated to be worth $10 billion over 10 years. The exact size of the new contract is unknown, although it is half the duration and will be awarded to more than one company, not to a single provider like JEDI.
Project Maven in 2017 and prepared to bid for JEDI. Many Google employees believed Project Maven represented a potentially lethal use of artificial intelligence, and more than 4,000 workers signed a letter demanding that Google withdraw from the project.
Soon after, Google announced a set of ethical principles that would govern its use of artificial intelligence. Google would not allow its A.I. to be used for weapons or surveillance, said Sundar Pichai, its chief executive, but would continue to accept military contracts for cybersecurity and search-and-rescue.
weapons or those that direct injury.”
Lucy Suchman, a professor of anthropology of science and technology at Lancaster University whose research focuses on the use of technology in war, said that with so much money at stake, it is no surprise Google might waver on its commitment.
“It demonstrates the fragility of Google’s commitment to staying outside the major merger that’s happening between the D.O.D. and Silicon Valley,” Ms. Suchman said.
Google’s efforts come as its employees are already pushing the company to cancel a cloud computing contract with the Israeli military, called Project Nimbus, that provides Google’s services to government entities throughout Israel. In an open letter published last month by The Guardian, Google employees called on their employer to cancel the contract.
The Defense Department’s effort to transition to cloud technology has been mired in legal battles. The military operates on outdated computer systems and has spent billions of dollars on modernization. It turned to U.S. internet giants in the hope that the companies could quickly and securely move the Defense Department to the cloud.
awarded the JEDI contract to Microsoft. Amazon sued to block the contract, claiming that Microsoft did not have the technical capabilities to fulfill the military’s needs and that former President Donald J. Trump had improperly influenced the decision because of animosity toward Jeff Bezos, Amazon’s executive chairman and the owner of The Washington Post.
In July, the Defense Department announced that it could no longer wait for the legal fight with Amazon to resolve. It scrapped the JEDI contract and said it would be replaced with the Joint Warfighting Cloud Capability.
The Pentagon also noted that Amazon and Microsoft were the only companies that likely had the technology to meet its needs, but said it would conduct market research before ruling out other competitors. The Defense Department said it planned to reach out to Google, Oracle and IBM.
But Google executives believe they have the capability to compete for the new contract, and the company expects the Defense Department to tell it whether it will qualify to make a bid in the coming weeks, two people familiar with the matter said.
The Defense Department has previously said it hopes to award a contract by April.
“The internet is answering a question that it’s been wrestling with for decades, which is: How is the internet going to pay for itself?” he said.
The fallout may hurt brands that relied on targeted ads to get people to buy their goods. It may also initially hurt tech giants like Facebook — but not for long. Instead, businesses that can no longer track people but still need to advertise are likely to spend more with the largest tech platforms, which still have the most data on consumers.
David Cohen, chief executive of the Interactive Advertising Bureau, a trade group, said the changes would continue to “drive money and attention to Google, Facebook, Twitter.”
The shifts are complicated by Google’s and Apple’s opposing views on how much ad tracking should be dialed back. Apple wants its customers, who pay a premium for its iPhones, to have the right to block tracking entirely. But Google executives have suggested that Apple has turned privacy into a privilege for those who can afford its products.
For many people, that means the internet may start looking different depending on the products they use. On Apple gadgets, ads may be only somewhat relevant to a person’s interests, compared with highly targeted promotions inside Google’s web. Website creators may eventually choose sides, so some sites that work well in Google’s browser might not even load in Apple’s browser, said Brendan Eich, a founder of Brave, the private web browser.
“It will be a tale of two internets,” he said.
Businesses that do not keep up with the changes risk getting run over. Increasingly, media publishers and even apps that show the weather are charging subscription fees, in the same way that Netflix levies a monthly fee for video streaming. Some e-commerce sites are considering raising product prices to keep their revenues up.
Consider Seven Sisters Scones, a mail-order pastry shop in Johns Creek, Ga., which relies on Facebook ads to promote its items. Nate Martin, who leads the bakery’s digital marketing, said that after Apple blocked some ad tracking, its digital marketing campaigns on Facebook became less effective. Because Facebook could no longer get as much data on which customers like baked goods, it was harder for the store to find interested buyers online.
OAKLAND, Calif. — The seeds of a company’s downfall, it is often said in the business world, are sown when everything is going great.
It is hard to argue that things aren’t going great for Google. Revenue and profits are charting new highs every three months. Google’s parent company, Alphabet, is worth $1.6 trillion. Google has rooted itself deeper and deeper into the lives of everyday Americans.
But a restive class of Google executives worry that the company is showing cracks. They say Google’s work force is increasingly outspoken. Personnel problems are spilling into the public. Decisive leadership and big ideas have given way to risk aversion and incrementalism. And some of those executives are leaving and letting everyone know exactly why.
“I keep getting asked why did I leave now? I think the better question is why did I stay for so long?” Noam Bardin, who joined Google in 2013 when the company acquired mapping service Waze, wrote in a blog post two weeks after leaving the company in February.
Sundar Pichai, the company’s affable, low-key chief executive.
Fifteen current and former Google executives, speaking on the condition of anonymity for fear of angering Google and Mr. Pichai, told The New York Times that Google was suffering from many of the pitfalls of a large, maturing company — a paralyzing bureaucracy, a bias toward inaction and a fixation on public perception.
The executives, some of whom regularly interacted with Mr. Pichai, said Google did not move quickly on key business and personnel moves because he chewed over decisions and delayed action. They said that Google continued to be rocked by workplace culture fights, and that Mr. Pichai’s attempts to lower the temperature had the opposite effect — allowing problems to fester while avoiding tough and sometimes unpopular positions.
A Google spokesman said internal surveys about Mr. Pichai’s leadership were positive. The company declined to make Mr. Pichai, 49, available for comment, but it arranged interviews with nine current and former executives to offer a different perspective on his leadership.
“Would I be happier if he made decisions faster? Yes,” said Caesar Sengupta, a former vice president who worked closely with Mr. Pichai during his 15 years at Google. He left in March. “But am I happy that he gets nearly all of his decisions right? Yes.”
a fixture at congressional hearings. Even his critics say he has so far managed to navigate those hearings without ruffling the feathers of lawmakers or providing more ammunition to his company’s foes.
The Google executives complaining about Mr. Pichai’s leadership acknowledge that, and say he is a thoughtful and caring leader. They say Google is more disciplined and organized these days — a bigger, more professionally run company than the one Mr. Pichai inherited six years ago.
challenge Amazon in online commerce a few years ago. Mr. Pichai rejected the idea because he thought Shopify was too expensive, two people familiar with the discussions said.
to select Halimah DeLaine Prado, a longtime deputy in the company’s legal team.
Ms. Prado was at the top of an initial list of candidates provided to Mr. Pichai, who asked to see more names, several people familiar with the search said. The exhaustive search took so long, they said, that it became a running joke among industry headhunters.
Mr. Pichai’s reluctance to take decisive measures on Google’s volatile work force has been noticeable.
vowing to restore lost trust, while continuing to push Google’s view that Dr. Gebru was not fired. But it fell short of an apology, she said, and came across as public-relations pandering to some employees.
David Baker, a former director of engineering at Google’s trust and safety group who resigned in protest of Dr. Gebru’s dismissal, said Google should admit that it had made a mistake instead of trying to save face.
“Google’s lack of courage with its diversity problem is ultimately what evaporated my passion for the job,” said Mr. Baker, who worked at the company for 16 years. “The more secure Google has become financially, the more risk averse it has become.”
Some critiques of Mr. Pichai can be attributed to the challenge of maintaining Google’s outspoken culture among a work force that is far larger than it once was, said the Google executives whom the company asked to speak to The Times.
“I don’t think anyone else could manage these issues as well as Sundar,” said Luiz Barroso, one of the company’s most senior technical executives.
acquire the activity tracker Fitbit, which closed in January, took about a year as Mr. Pichai wrestled with aspects of the deal, including how to integrate the company, its product plans and how it intended to protect user data, said Sameer Samat, a Google vice president. Mr. Samat, who was pushing for the deal, said Mr. Pichai had identified potential problems that he had not fully considered.
“I could see how those multiple discussions could make somebody feel like we’re slow to make decisions,” Mr. Samat said. “The reality is that these are very large decisions.”
WASHINGTON — Lawmakers grilled the leaders of Facebook, Google and Twitter on Thursday about the connection between online disinformation and the Jan. 6 riot at the Capitol, causing Twitter’s chief executive to publicly admit for the first time that his product had played a role in the events that left five people dead.
When a Democratic lawmaker asked the executives to answer with a “yes” or a “no” whether the platforms bore some responsibility for the misinformation that had contributed to the riot, Jack Dorsey of Twitter said “yes.” Neither Mark Zuckerberg of Facebook nor Sundar Pichai of Google would answer the question directly.
The roughly five-hour hearing before a House committee marked the first time lawmakers directly questioned the chief executives regarding social media’s role in the January riot. The tech bosses were also peppered with questions about how their companies helped spread falsehoods around Covid-19 vaccines, enable racism and hurt children’s mental health.
It was also the first time the executives had testified since President Biden’s inauguration. Tough questioning from lawmakers signaled that scrutiny of Silicon Valley’s business practices would not let up, and could even intensify, with Democrats in the White House and leading both chambers of Congress.
During the hearing, Mr. Dorsey tweeted a single question mark with a poll that had two options: “Yes” or “No.” When asked about his tweet by a lawmaker, he said “yes” was winning.
The January riot at the Capitol has made the issue of disinformation deeply personal for lawmakers. The riot was fueled by false claims from President Donald J. Trump and others that the election had been stolen, which were rampant on social media.
Some of the participants had connections to QAnon and other online conspiracy theories. And prosecutors have said that groups involved in the riot, including the Oath Keepers and the Proud Boys, coordinated some of their actions on social media.
ban Mr. Trump and his associates after the Jan. 6 riots. The bans hardened views by conservatives that the companies are left-leaning and are inclined to squelch conservative voices.
“We’re all aware of Big Tech’s ever-increasing censorship of conservative voices and their commitment to serve the radical progressive agenda,” said Representative Bob Latta of Ohio, the ranking Republican on the panel’s technology subcommittee.
The company leaders defended their businesses, saying they had invested heavily in hiring content moderators and in technology like artificial intelligence to identify and fight disinformation.
Mr. Zuckerberg argued against the notion that his company had a financial incentive to juice its users’ attention by driving them toward more extreme content. He said Facebook didn’t design “algorithms in order to just kind of try to tweak and optimize and get people to spend every last minute on our service.”
He added later in the hearing that election disinformation had also spread in messaging apps, where amplification and algorithms do not aid the spread of false content. He also blamed television and other traditional media for spreading election lies.
The companies showed fissures in their view on regulations. Facebook has vocally supported internet regulations in a major advertising blitz on television and in newspapers. In the hearing, Mr. Zuckerberg suggested specific regulatory reforms to a key legal shield, known as Section 230 of the Communications Decency Act, that has helped Facebook and other Silicon Valley internet giants thrive.
The legal shield protects companies that host and moderate third-party content, and says companies like Google and Twitter are simply intermediaries of their user-generated content. Democrats have argued that with that protection, companies aren’t motivated to remove disinformation. Republicans accuse the companies of using the shield to moderate too much and to take down content that doesn’t represent their political viewpoints.
“I believe that Section 230 would benefit from thoughtful changes to make it work better for people,” Mr. Zuckerberg said in the statement.
He proposed that liability protection for companies be conditional on their ability to fight the spread of certain types of unlawful content. He said platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Reforms, he said, should be different for smaller social networks, which wouldn’t have the same resources as Facebook to meet new requirements.
Mr. Pichai and Mr. Dorsey said they supported requirements of transparency in content moderation but stopped short of agreeing with Mr. Zuckerberg’s other ideas. Mr. Dorsey said that it would be very difficult to distinguish a large platform from a smaller one.
Lawmakers did not appear to be won over.
“There’s a lot of smugness among you,” said Representative Bill Johnson, a Republican of Ohio. “There’s this air of untouchable-ness in your responses to many of the tough questions that you’re being asked.”
Kate Conger and Daisuke Wakabayashi contributed reporting.
The leaders of Google, Facebook and Twitter testified on Thursday before a House committee in their first appearances on Capitol Hill since the start of the Biden administration. As expected, sparks flew.
The hearing was centered on questions of how to regulate disinformation online, although lawmakers also voiced concerns about the public-health effects of social media and the borderline-monopolistic practices of the largest tech companies.
On the subject of disinformation, Democratic legislators scolded the executives for the role their platforms played in spreading false claims about election fraud before the Capitol riot on Jan. 6. Jack Dorsey, the chief executive of Twitter, admitted that his company had been partly responsible for helping to circulate disinformation and plans for the Capitol attack. “But you also have to take into consideration the broader ecosystem,” he added. Sundar Pichai and Mark Zuckerberg, the top executives at Google and Facebook, avoided answering the question directly.
Lawmakers on both sides of the aisle returned often to the possibility of jettisoning or overhauling Section 230 of the Communications Decency Act, a federal law that for 25 years has granted immunity to tech companies for any harm caused by speech that’s hosted on their platforms.
393 million, to be precise, which is more than one per person and about 46 percent of all civilian-owned firearms in the world. As researchers at the Harvard T.H. Chan School of Public Health have put it, “more guns = more homicide” and “more guns = more suicide.”
But when it comes to understanding the causes of America’s political inertia on the issue, the lines of thought become a little more tangled. Some of them are easy to follow: There’s the line about the Senate, of course, which gives large states that favor gun regulation the same number of representatives as small states that don’t. There’s also the line about the National Rifle Association, which some gun control proponents have cast — arguably incorrectly — as the sine qua non of our national deadlock.
But there may be a psychological thread, too. Research has found that after a mass shooting, people who don’t own guns tend to identify the general availability of guns as the culprit. Gun owners, on the other hand, are more likely to blame other factors, such as popular culture or parenting.
Americans who support gun regulations also don’t prioritize the issue at the polls as much as Americans who oppose them, so gun rights advocates tend to win out. Or, in the words of Robert Gebelhoff of The Washington Post, “Gun reform doesn’t happen because Americans don’t want it enough.”
The chief executives of Google, Facebook and Twitter are testifying at the House on Thursday about how disinformation spreads across their platforms, an issue that the tech companies were scrutinized for during the presidential election and after the Jan. 6 riot at the Capitol.
The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.
The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.
Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.
October article in The New York Post about President Biden’s son Hunter.
Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.
Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.
“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.
The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.