“Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now,” Ms. Goldie said in a written statement.

She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google’s computer data centers.

Laurie M. Burgess, Dr. Chatterjee’s lawyer, said it was disappointing that “certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency.” Ms. Burgess also questioned the leadership of Dr. Dean, who was one of 20 co-authors of the Nature paper.

“Jeff Dean’s actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products,” Ms. Burgess said.

Dr. Dean did not respond to a request for comment.

After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.

The chip maker Nvidia says it has used methods for chip design that are similar to Google’s, but some experts are unsure what Google’s research means for the larger tech industry.

“If this is really working well, it would be a really great thing,” said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the A.I. technology described in Google’s paper. “But it is not clear if it is working.”


Inside Twitter, Fears That Musk’s Views Will Revisit Past Troubles

Elon Musk had a plan to buy Twitter and undo its content moderation policies. On Tuesday, just a day after reaching his $44 billion deal to buy the company, Mr. Musk was already at work on his agenda. He tweeted that past moderation decisions by a top Twitter lawyer were “obviously incredibly inappropriate.” Later, he shared a meme mocking the lawyer, sparking a torrent of attacks from other Twitter users.

Mr. Musk’s personal critique was a rough reminder of what faces employees who create and enforce Twitter’s complex content moderation policies. His vision for the company would take it right back to where it started, employees said, and force Twitter to relive the last decade.

Twitter executives who created the rules said they had once held views about online speech that were similar to Mr. Musk’s. They believed Twitter’s policies should be limited, mimicking local laws. But more than a decade of grappling with violence, harassment and election tampering changed their minds. Now, many executives at Twitter and other social media companies view their content moderation policies as essential safeguards to protect speech.

The question is whether Mr. Musk, too, will change his mind when confronted with the darkest corners of Twitter.

“The tweets must flow” was the company’s guiding ethos in its early years, and it meant Twitter did little to moderate the conversations on its platform.

Twitter’s founders took their cues from Blogger, the publishing platform, owned by Google, that several of them had helped build. They believed that any reprehensible content would be countered or drowned out by other users, said three employees who worked at Twitter during that time.

“There’s a certain amount of idealistic zeal that you have: ‘If people just embrace it as a platform of self-expression, amazing things will happen,’” said Jason Goldman, who was on Twitter’s founding team and served on its board of directors. “That mission is valuable, but it blinds you to think certain bad things that happen are bugs rather than equally weighted uses of the platform.”

The company typically removed content only if it contained spam, or violated American laws forbidding child exploitation and other criminal acts.

In 2008, Twitter hired Del Harvey, its 25th employee and the first person it assigned the challenge of moderating content full time. The Arab Spring protests started in 2010, and Twitter became a megaphone for activists, reinforcing many employees’ belief that good speech would win out online. But Twitter’s power as a tool for harassment became clear in 2014 when it became the epicenter of Gamergate, a mass harassment campaign that flooded women in the video game industry with death and rape threats.

In the run-up to the 2016 election, Russian operatives created more than 2,700 fake Twitter profiles and used them to sow discord about the upcoming presidential election between Mr. Trump and Hillary Clinton.

The profiles went undiscovered for months, while complaints about harassment continued. In 2017, Jack Dorsey, the chief executive at the time, declared that policy enforcement would become the company’s top priority. Later that year, women boycotted Twitter during the #MeToo movement, and Mr. Dorsey acknowledged the company was “still not doing enough.”

He announced a list of content that the company would no longer tolerate: nude images shared without the consent of the person pictured, hate symbols and tweets that glorified violence.

In 2018, Twitter barred the conspiracy theorist Alex Jones and his site Infowars from its service because they repeatedly violated policies.

The next year, Twitter rolled out new policies that were intended to prevent the spread of misinformation in future elections, banning tweets that could dissuade people from voting or mislead them about how to do so. Mr. Dorsey banned all forms of political advertising, but often left difficult moderation decisions to Vijaya Gadde, the company’s top lawyer.

The European Union recently approved landmark legislation called the Digital Services Act, which requires social media platforms like Twitter to more aggressively police their services for hate speech, misinformation and illicit content.

The new law will require Twitter and other social media companies with more than 45 million users in the European Union to conduct annual risk assessments about the spread of harmful content on their platforms and outline plans to combat the problem. If they are not seen as doing enough, the companies can be fined up to 6 percent of their global revenue, or even be banned from the European Union for repeat offenses.

Inside Twitter, frustrations have mounted over Mr. Musk’s moderation plans, and some employees have wondered if he would really halt their work during such a critical moment, when they are set to begin moderating tweets about elections in Brazil and another national election in the United States.

Adam Satariano contributed reporting.


Russia warns media: don’t report interview with Ukrainian president

Ukraine’s President Volodymyr Zelenskiy addresses the Ukrainian people, as Russia’s attack on Ukraine continues, in Kyiv, Ukraine March 23, 2022. Picture taken March 23, 2022. Ukrainian Presidential Press Service/Handout via REUTERS


LONDON, March 27 (Reuters) – Russia’s communications watchdog told Russian media on Sunday to refrain from reporting an interview with Ukrainian President Volodymyr Zelenskiy and said it had opened an investigation into the outlets that had interviewed the Ukrainian leader.

In a short statement distributed on social media and posted on its website, the watchdog said a host of Russian outlets had conducted an interview with Zelenskiy.

“Roskomnadzor warns the Russian media about the necessity of refraining from publishing this interview,” it said. It did not give a reason for its warning.

Russian prosecutors said they would issue a legal opinion on the statements made in the interview and on the legality of publishing it.

Zelenskiy spoke to several Russian publications.


Reporting by Guy Faulconbridge, Editing by Jane Merriman, William Maclean

