A new survey by the Pew Research Center found that 15 percent of prominent accounts on those seven platforms had previously been banished from others like Twitter and Facebook.

The F.B.I. raid on Mar-a-Lago thrust his latest pronouncements into the eye of the political storm once again.

A study of Truth Social by Media Matters for America, a left-leaning media monitoring group, examined how the platform had become a home for some of the most fringe conspiracy theories. Mr. Trump, who began posting on the platform in April, has increasingly amplified content from QAnon, the online conspiracy theory.

He has shared posts from QAnon accounts more than 130 times. QAnon believers promote a vast and complex conspiracy that centers on Mr. Trump as a leader battling a cabal of Democratic Party pedophiles. Echoes of such views reverberated through Republican election campaigns across the country during this year’s primaries.

Ms. Jankowicz, the disinformation expert, said the nation’s social and political divisions had churned the waves of disinformation.

The controversies over how best to respond to the Covid-19 pandemic deepened distrust of government and medical experts, especially among conservatives. Mr. Trump’s refusal to accept the outcome of the 2020 election led to, but did not end with, the Capitol Hill violence.

“They should have brought us together,” Ms. Jankowicz said, referring to the pandemic and the riots. “I thought perhaps they could be kind of this convening power, but they were not.”


U.S. Warns That Moscow May Intensify Attacks


VERSOIX, Switzerland — The phones ringing in an office near the tranquil shores of Lake Geneva are a constant reminder of the devastation about 1,500 miles away in Ukraine.

The anguished callers are hoping to find any sign of loved ones, including many who went missing weeks ago when a blast killed dozens of Ukrainians at a detention camp controlled by Russia. Fielding the calls — roughly 900 a day — are staff members of the International Committee of the Red Cross, which helps trace people lost in conflicts and disasters across the world.

“She was in the street. I heard the air raid sirens, small explosions, people screaming,” said Mathias Issaev, relating a call from a Ukrainian woman looking for her husband. “Once she got through to us she didn’t want to give up.”

Some callers are thankful to reach anyone who will listen; many are overwhelmed with distress. Call operators like Mr. Issaev make up the front line of the I.C.R.C. Central Tracing Agency, which has worked to reunite people split apart by war for more than 150 years. The job does not end when the fighting stops — it is still following cases dating back to Lebanon’s civil war of the 1970s.

In the Ukraine war, the Red Cross is trying to track around 13,000 individuals — Russian and Ukrainian, soldiers and civilians — in its biggest tracing operation since World War II. But since the explosion last month at the detention camp in Olenivka, a town in eastern Ukraine controlled by Russia, the phone operators have also faced a torrent of abuse. Callers have denounced them as idlers and traitors, or as taking sides in the conflict.

“We’ve encountered a huge amount of hate speech,” said Esperanza Martinez, head of the agency’s Ukraine crisis team. The threatening calls and emails, including death threats, present a new menace to the agency’s humanitarian mission, she said.

The Red Cross operates under the Geneva Conventions as a neutral intermediary between warring parties, who are supposed to provide it with details of their prisoners and allow access. But misconceptions about the agency’s role persist, including a belief that it is supposed to guarantee prisoners’ safety or can force parties to comply with the laws of war.

Red Cross officials visited the Olenivka camp in May to observe prisoners and deliver water tanks. But they have not been able to reach an agreement with Russian authorities to visit it after the explosion, exposing the limitations of the agency’s leverage. Russia and Ukraine blame each other for the blast.

“A lot of what we do is silent,” Ms. Martinez said, adding: “Because of that we are vilified.”

Such explanations provide little consolation for callers like a Ukrainian mother who got through to operator Louis Depuydt. She had seen images on the Telegram social network of her son, a prisoner of war, showing broken teeth, a black eye and other signs of mistreatment.

“She was crying, her voice was trembling, you could feel her panic,” Mr. Depuydt said. “You have to deal with a lot of emotion, a lot of fear, lots of anger.”

The relentless exposure to pain and suffering takes a toll on the operators, even those who handle emailed inquiries. Assigned to an appeal from a woman looking for her daughter, Inna Laschenko of the tracing team teared up.

“Hang in there my beautiful girl, I’m with you. I love you so much,” Ms. Laschenko, a mother of two, began, reading the message aloud. Her voice faltering, she stopped to wipe her eyes. She could only whisper the message’s last words: “Please, help me.”


Artificial Intelligence Is Now Used To Track Down Hate Speech

Social media companies are now using artificial intelligence to detect hate speech online.

Over the last decade, the U.S. has seen immense growth in frequent internet use: one-third of Americans say they are online almost constantly, and nine out of ten say they go online several times a week, according to a March 2021 Pew Research Center poll. That surge in activity has helped people stay more connected to one another, but it has also allowed hate speech to proliferate and reach far wider audiences. One fix that social media companies and other online networks have relied on is artificial intelligence, with varying degrees of success.

For companies with giant user bases, like Meta, artificial intelligence is a key, if not essential, tool for detecting hate speech: there are far too many users and too many pieces of violative content for the thousands of human content moderators the company already employs to review. AI can help alleviate that burden, scaling up or down to fill the gaps as new waves of users arrive.

Facebook, for instance, has seen massive growth – from 400 million users in the early 2010s, to more than two billion by the end of the decade. Between January and March 2022, Meta took action on more than 15 million pieces of hate speech content on Facebook. Roughly 95% of that was detected proactively by Facebook with the help of AI. 
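To make that division of labor concrete, here is a minimal, hypothetical sketch in Python (using scikit-learn) of the kind of "proactive detection" loop described above: a text classifier scores each post and routes likely violations to a human-review queue. The toy training examples, labels and threshold are illustrative assumptions, not Meta's actual system.

```python
# Illustrative sketch only, not Meta's pipeline: a tiny classifier scores posts
# and routes likely policy violations to a human-review queue.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy training data: 1 = violates the hate-speech policy, 0 = benign.
train_texts = [
    "people like that don't deserve to live here",
    "we should get rid of all of them, they are vermin",
    "great game last night, what a comeback",
    "does anyone have a good banana bread recipe?",
]
train_labels = [1, 1, 0, 0]

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(train_texts, train_labels)

REVIEW_THRESHOLD = 0.5  # assumed cutoff: higher-scoring posts go to human moderators

def triage(post: str) -> str:
    """Score a post and decide whether a human moderator needs to look at it."""
    p_violation = classifier.predict_proba([post])[0][1]
    if p_violation >= REVIEW_THRESHOLD:
        return f"send to human review (score={p_violation:.2f})"
    return f"leave up (score={p_violation:.2f})"

for post in ["they are vermin and don't deserve to live here",
             "what a comeback last night"]:
    print(triage(post), "->", post)
```

In practice, production systems rely on far larger models and training sets, but the routing logic, automated scoring first and human judgment for the ambiguous middle, is what lets a fixed pool of moderators keep pace with billions of posts.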

That combination of AI and human moderators can still let huge misinformation themes fall through the cracks. Paul Barrett, deputy director of the NYU Stern Center for Business and Human Rights, found that every day, 3 million Facebook posts are flagged for review by 15,000 Facebook content moderators. The ratio of moderators to users is one to 160,000.

“If you have a volume of that nature, those humans, those people are going to have an enormous burden of making decisions on hundreds of discrete items each work day,” Barrett said. 
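The reported figures imply a heavy per-person workload, which a quick back-of-the-envelope calculation makes explicit (these are the numbers cited above, not an official Meta breakdown):

```python
# Back-of-the-envelope arithmetic using the figures reported above.
flagged_posts_per_day = 3_000_000
moderators = 15_000

posts_per_moderator_per_day = flagged_posts_per_day / moderators
print(posts_per_moderator_per_day)   # 200.0 -- hundreds of decisions per workday

# The 1-to-160,000 moderator-to-user ratio implies a user base of roughly:
print(f"{moderators * 160_000:,}")   # 2,400,000,000
```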

Another issue: AI deployed to root out hate speech is primarily trained on text and still images. That means video content, especially if it's live, is much more difficult to flag automatically as possible hate speech.

Zeve Sanderson is the founding executive director of NYU’s Center for Social Media and Politics. 

“Live video is incredibly difficult to moderate because it’s live you know, we’ve seen this unfortunately recently with some tragic shootings where, you know, people have used live video in order to spread, you know, sort of content related to that. And even though actually platforms have been relatively quick to respond to that, we’ve seen copies of those videos spread. So it’s not just the original video, but also the ability to just sort of to record it and then share it in other forms. So, so live is extraordinarily challenging,” Sanderson said.  

And many AI systems are not robust enough to detect hate speech in real time. Extremism researcher Linda Schiegl told Newsy that this has become a problem in online multiplayer games, where players can use voice chat to spread hateful ideologies or thoughts.

“It’s really difficult for automatic detection to pick stuff up because if you’re you’re talking about weapons or you’re talking about sort of how are we going to, I don’t know, a take on this school or whatever it could be in the game. And so artificial intelligence or automatic detection is really difficult in gaming spaces. And so it would have to be something that is more sophisticated than that or done by hand, which is really difficult, I think, even for these companies,” Schiegl said.  
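A rough illustration of the problem Schiegl describes: a naive keyword filter, the kind of automatic detection that is feasible on noisy, real-time game chat, cannot distinguish ordinary in-game talk about weapons and raids from genuinely threatening speech. The word list and chat lines below are hypothetical.

```python
# Hypothetical sketch: why simple automatic detection struggles in gaming spaces.
FLAGGED_TERMS = {"weapons", "raid", "take on", "attack"}

def naive_flag(message: str) -> bool:
    """Flag a chat message if it contains any term on the watch list."""
    text = message.lower()
    return any(term in text for term in FLAGGED_TERMS)

chat_lines = [
    "grab your weapons, we raid their base at dawn",   # routine in-game planning
    "how are we going to take on this school?",        # ambiguous without context
]
for line in chat_lines:
    print(naive_flag(line), "-", line)

# Both lines are flagged, yet the words alone do not say whether this is a game
# scenario or a real-world threat; that judgment still requires context, and
# usually a human.
```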

Source: newsy.com


Meta Quieter On Election Misinformation As Midterms Loom

By Associated Press
August 5, 2022

Public communication about the company's response to election misinformation on its social media sites has gone decidedly quiet since 2020.

Facebook owner Meta is quietly curtailing some of the safeguards designed to thwart voting misinformation or foreign interference in U.S. elections as the November midterm vote approaches.

It’s a sharp departure from the social media giant’s multibillion-dollar efforts to enhance the accuracy of posts about U.S. elections and regain trust from lawmakers and the public after their outrage over learning the company had exploited people’s data and allowed falsehoods to overrun its site during the 2016 campaign.

The pivot is raising alarm about Meta’s priorities and about how some might exploit the world’s most popular social media platforms to spread misleading claims, launch fake accounts and rile up partisan extremists.

Since last year, Meta has shut down an examination into how falsehoods are amplified in political ads on Facebook by indefinitely banishing the researchers from the site.

CrowdTangle, the online tool that the company offered to hundreds of newsrooms and researchers so they could identify trending posts and misinformation across Facebook or Instagram, is now inoperable on some days.

Public communication about the company’s response to election misinformation has gone decidedly quiet. Between 2018 and 2020, the company released more than 30 statements that laid out specifics about how it would stifle U.S. election misinformation, prevent foreign adversaries from running ads or posts around the vote and subdue divisive hate speech.

Top executives hosted question and answer sessions with reporters about new policies. CEO Mark Zuckerberg wrote Facebook posts promising to take down false voting information and authored opinion articles calling for more regulations to tackle foreign interference in U.S. elections via social media.

But this year Meta has released only a one-page document outlining plans for the fall elections, even as potential threats to the vote remain clear. Several Republican candidates are pushing false claims about the U.S. election across social media. In addition, Russia and China continue to wage aggressive social media propaganda campaigns aimed at furthering political divides among American audiences.

Meta says that elections remain a priority and that policies developed in recent years around election misinformation or foreign interference are now hard-wired into company operations.

The company is continuing many initiatives it developed to limit election misinformation, such as a fact-checking program started in 2016 that enlists the help of news outlets to investigate the veracity of popular falsehoods spreading on Facebook or Instagram.

This month, Meta also rolled out a new feature for political ads that allows the public to search for details about how advertisers target people based on their interests across Facebook and Instagram.

Yet, Meta has stifled other efforts to identify election misinformation on its sites.

It has stopped making improvements to CrowdTangle, a website it offered to newsrooms around the world that provides insights about trending social media posts. Journalists, fact-checkers and researchers used the website to analyze Facebook content, including tracing popular misinformation and who is responsible for it.

That tool is now “dying,” former CrowdTangle CEO Brandon Silverman, who left Meta last year, told the Senate Judiciary Committee this spring.

Republicans routinely accuse Facebook of unfairly censoring conservatives, some of whom have been kicked off for breaking the company’s rules. Democrats, meanwhile, regularly complain the tech company hasn’t gone far enough to curb disinformation.

Meanwhile, the possibility of regulation in the U.S. no longer looms over the company, with lawmakers failing to reach any consensus over what oversight the multibillion-dollar company should be subjected to.

Zuckerberg dived into a massive rebranding and reorganization of Facebook last October, when he changed the company's name to Meta Platforms Inc. He plans to spend years and billions of dollars evolving his social media platforms into a nascent virtual reality construct called the "metaverse" — sort of like the internet brought to life, rendered in 3D.

Additional reporting by The Associated Press.

Source: newsy.com


Alex Jones Ordered To Pay Sandy Hook Parents More Than $4M

By Associated Press
August 4, 2022

This is the first time the Infowars host has been held financially liable for repeatedly claiming the Sandy Hook shooting was a hoax.

A Texas jury on Thursday ordered conspiracy theorist Alex Jones to pay more than $4 million in compensatory damages to the parents of a 6-year-old boy who was killed in the Sandy Hook Elementary School massacre, marking the first time the Infowars host has been held financially liable for repeatedly claiming the deadliest school shooting in U.S. history was a hoax.

The Austin jury must still decide how much the Infowars host must pay in punitive damages to Neil Heslin and Scarlett Lewis, whose son Jesse Lewis was among the 20 children and six educators who were killed in the 2012 attack in Newtown, Connecticut.

The parents had sought at least $150 million in compensation for defamation and intentional infliction of emotional distress. Jones’ attorney asked the jury to limit damages to $8 — one dollar for each of the compensation charges they are considering — and Jones himself said any award over $2 million “would sink us.”

It likely won't be the last judgment against Jones — who was not in the courtroom — over his claims that the attack was staged in an effort to increase gun control. A Connecticut judge has ruled against him in a similar lawsuit brought by other victims' families and an FBI agent who worked on the case.

Outside the courthouse Thursday, the plaintiffs’ attorney Mark Bankston insisted that the $4.11 million amount wasn’t a disappointment, noting it was only part of the damages Jones will have to pay.

The jury returns Friday to hear more evidence about Jones and his company’s finances.

“We aren’t done folks,” Bankston said. “We knew coming into this case it was necessary to shoot for the moon to get to understand we were serious and passionate. After tomorrow, he’s going to owe a lot more.”

The total amount awarded in this case could set a marker for the other lawsuits against Jones and underlines the financial threat he’s facing. It also raises new questions about the ability of Infowars — which has been banned from YouTube, Spotify and Twitter for hate speech — to continue operating, although the company’s finances remain unclear.

Jones, who has portrayed the lawsuit as an attack on his First Amendment rights, conceded during the trial that the attack was “100% real” and that he was wrong to have lied about it. But Heslin and Lewis told jurors that an apology wouldn’t suffice and called on them to make Jones pay for the years of suffering he has put them and other Sandy Hook families through.

The parents testified Tuesday about how they've endured a decade of trauma, inflicted first by the murder of their son and then by what followed: gunshots fired at a home, online and phone threats, and harassment on the street by strangers. They said the threats and harassment were all fueled by Jones and the conspiracy theory he spread to his followers via his website, Infowars.

A forensic psychiatrist testified that the parents suffer from “complex post-traumatic stress disorder” inflicted by ongoing trauma, similar to what might be experienced by a soldier at war or a child abuse victim.

At one point in her testimony, Lewis looked directly at Jones, who was sitting barely 10 feet away.

“It seems so incredible to me that we have to do this — that we have to implore you, to punish you — to get you to stop lying,” Lewis told Jones.

Jones was the only witness to testify in his defense. He came under withering attack from the plaintiffs' attorneys under cross-examination as they reviewed his own video claims about Sandy Hook over the years and accused him of lying and trying to hide evidence, including text messages and emails about the attack. That evidence included internal emails from an Infowars employee saying that "this Sandy Hook stuff is killing us."

At one point, Jones was told that his attorneys had mistakenly sent Bankston the last two years’ worth of texts from Jones’ cellphone. Bankston said in court Thursday that the U.S. House Jan. 6 committee investigating the 2021 attack on the U.S. Capitol has requested the records and that he intends to comply.

And shortly after Jones declared "I don't use email," he was shown one that came from his address, and another from an Infowars business officer telling him that the company had grossed $800,000 selling its products in a single day, which would amount to nearly $300 million in a year.

Jones’ media company Free Speech Systems, which is Infowars’ parent company, filed for bankruptcy during the two-week trial.

Additional reporting by The Associated Press.

Source: newsy.com


Why Are Experts Warning About Extremism In Gaming?

and Brandi Scarber
July 23, 2022

Video games, including those targeted at children, have been the subject of a new phenomenon in extremism.

World of Warcraft, Fortnite, Roblox, Rocket League — these immensely popular games, many with child audiences, have been the subject of a new phenomenon in extremism.  

Some critics say online gaming and adjacent chat-based platforms are being used to expose and possibly recruit users to potentially dangerous ideologies.  

What remains to be seen is how widespread the problem really is. 

According to the Anti-Defamation League, three-fifths of gamers between the ages of 13 and 17 reported experiencing harassment while playing online games. Minority and LGBTQ+ gamers reported higher levels of harassment than their counterparts. Keeping up with and detecting those instances of hate speech is not as easy as it sounds, and it's not always clear if and when someone exposed to extremist ideology recognizes it.

Some experts said the games can be used to foster relationships through things like voice chat and in other private conversations. 

Others believe extremist ideas can be normalized through the lens of gaming, though there is a lack of conclusive research on the topic. 

All of the experts Newsy spoke with said it would take a mixture of civil, governmental and platform enforcement to deal with any potential problem of extremism.

Source: newsy.com


The Fight Over Truth Also Has a Red State, Blue State Divide

To fight disinformation, California lawmakers are advancing a bill that would force social media companies to divulge their process for removing false, hateful or extremist material from their platforms. Texas lawmakers, by contrast, want to ban the largest of the companies — Facebook, Twitter and YouTube — from removing posts because of political points of view.

In Washington, the state attorney general persuaded a court to fine a nonprofit and its lawyer $28,000 for filing a baseless legal challenge to the 2020 governor’s race. In Alabama, lawmakers want to allow people to seek financial damages from social media platforms that shut down their accounts for having posted false content.

In the absence of significant action on disinformation at the federal level, officials in state after state are taking aim at the sources of disinformation and the platforms that propagate them — only they are doing so from starkly divergent ideological positions. In this deeply polarized era, even the fight for truth breaks along partisan lines.

The fight reflects a nation increasingly divided over a variety of issues — including abortion, guns, the environment — and along geographic lines.

A federal appeals court has blocked a similar law in Florida that would have fined social media companies as much as $250,000 a day if they blocked political candidates from their platforms, which have become essential tools of modern campaigning. Other states with Republican-controlled legislatures have proposed similar measures, including Alabama, Mississippi, South Carolina, West Virginia, Ohio, Indiana, Iowa and Alaska.

Alabama’s attorney general, Steve Marshall, has created an online portal through which residents can complain that their access to social media has been restricted: alabamaag.gov/Censored. In a written response to questions, he said that social media platforms stepped up efforts to restrict content during the pandemic and the presidential election of 2020.

“During this period (and continuing to present day), social media platforms abandoned all pretense of promoting free speech — a principle on which they sold themselves to users — and openly and arrogantly proclaimed themselves the Ministry of Truth,” he wrote. “Suddenly, any viewpoint that deviated in the slightest from the prevailing orthodoxy was censored.”

Much of the activity on the state level today has been animated by the fraudulent assertion that Mr. Trump, and not President Biden, won the 2020 presidential election. Although disproved repeatedly, the claim has been cited by Republicans to introduce dozens of bills that would clamp down on absentee or mail-in voting in the states they control.

One memoirist and Republican nominee for Senate railed against social media giants, saying they stifled news about the foreign business dealings of Hunter Biden, the president's son.

Others point to the massacre at a supermarket in Buffalo in May, carried out by a gunman steeped in racist conspiracy theories online.

Connecticut plans to spend nearly $2 million on marketing to share factual information about voting and to create a position for an expert to root out misinformation narratives about voting before they go viral. A similar effort to create a disinformation board at the Department of Homeland Security provoked a political fury before its work was suspended in May pending an internal review.

In California, the State Senate is moving forward with legislation that would require social media companies to disclose their policies regarding hate speech, disinformation, extremism, harassment and foreign political interference. (The legislation would not compel them to restrict content.) Another bill would allow civil lawsuits against large social media platforms like TikTok and Meta’s Facebook and Instagram if their products were proven to have addicted children.

“All of these different challenges that we’re facing have a common thread, and the common thread is the power of social media to amplify really problematic content,” said Assemblyman Jesse Gabriel of California, a Democrat, who sponsored the legislation to require greater transparency from social media platforms. “That has significant consequences both online and in physical spaces.”

It seems unlikely that the flurry of legislative activity will have a significant impact before this fall’s elections; social media companies will have no single response acceptable to both sides when accusations of disinformation inevitably arise.

“Any election cycle brings intense new content challenges for platforms, but the November midterms seem likely to be particularly explosive,” said Matt Perault, a director of the Center on Technology Policy at the University of North Carolina. “With abortion, guns, democratic participation at the forefront of voters’ minds, platforms will face intense challenges in moderating speech. It’s likely that neither side will be satisfied by the decisions platforms make.”


Inside Twitter, Fears That Musk’s Views Will Revisit Past Troubles

Elon Musk had a plan to buy Twitter and undo its content moderation policies. On Tuesday, just a day after reaching his $44 billion deal to buy the company, Mr. Musk was already at work on his agenda. He tweeted that past moderation decisions by a top Twitter lawyer were “obviously incredibly inappropriate.” Later, he shared a meme mocking the lawyer, sparking a torrent of attacks from other Twitter users.

Mr. Musk’s personal critique was a rough reminder of what faces employees who create and enforce Twitter’s complex content moderation policies. His vision for the company would take it right back to where it started, employees said, and force Twitter to relive the last decade.

Twitter executives who created the rules said they had once held views about online speech that were similar to Mr. Musk’s. They believed Twitter’s policies should be limited, mimicking local laws. But more than a decade of grappling with violence, harassment and election tampering changed their minds. Now, many executives at Twitter and other social media companies view their content moderation policies as essential safeguards to protect speech.

The question is whether Mr. Musk, too, will change his mind when confronted with the darkest corners of Twitter.

In its early years, Twitter's leaders embraced a simple mantra: The tweets must flow. That meant Twitter did little to moderate the conversations on its platform.

Twitter’s founders took their cues from Blogger, the publishing platform, owned by Google, that several of them had helped build. They believed that any reprehensible content would be countered or drowned out by other users, said three employees who worked at Twitter during that time.

“There’s a certain amount of idealistic zeal that you have: ‘If people just embrace it as a platform of self-expression, amazing things will happen,’” said Jason Goldman, who was on Twitter’s founding team and served on its board of directors. “That mission is valuable, but it blinds you to think certain bad things that happen are bugs rather than equally weighted uses of the platform.”

The company typically removed content only if it contained spam, or violated American laws forbidding child exploitation and other criminal acts.

In 2008, Twitter hired Del Harvey, its 25th employee and the first person it assigned the challenge of moderating content full time. The Arab Spring protests started in 2010, and Twitter became a megaphone for activists, reinforcing many employees’ belief that good speech would win out online. But Twitter’s power as a tool for harassment became clear in 2014 when it became the epicenter of Gamergate, a mass harassment campaign that flooded women in the video game industry with death and rape threats.

In 2016, Russian operatives created 2,700 fake Twitter profiles and used them to sow discord about the upcoming presidential election between Mr. Trump and Hillary Clinton.

The profiles went undiscovered for months, while complaints about harassment continued. In 2017, Jack Dorsey, the chief executive at the time, declared that policy enforcement would become the company’s top priority. Later that year, women boycotted Twitter during the #MeToo movement, and Mr. Dorsey acknowledged the company was “still not doing enough.”

He announced a list of content that the company would no longer tolerate: nude images shared without the consent of the person pictured, hate symbols and tweets that glorified violence.

In 2018, Twitter barred the conspiracy theorist Alex Jones and his Infowars accounts from its service because they repeatedly violated its policies.

The next year, Twitter rolled out new policies intended to prevent the spread of misinformation in future elections, banning tweets that could dissuade people from voting or mislead them about how to do so. Mr. Dorsey banned all forms of political advertising, but often left difficult moderation decisions to Vijaya Gadde, the company's top lawyer.

This year, European lawmakers approved landmark legislation called the Digital Services Act, which requires social media platforms like Twitter to more aggressively police their services for hate speech, misinformation and illicit content.

The new law will require Twitter and other social media companies with more than 45 million users in the European Union to conduct annual risk assessments about the spread of harmful content on their platforms and outline plans to combat the problem. If they are not seen as doing enough, the companies can be fined up to 6 percent of their global revenue, or even be banned from the European Union for repeat offenses.

Inside Twitter, frustrations have mounted over Mr. Musk’s moderation plans, and some employees have wondered if he would really halt their work during such a critical moment, when they are set to begin moderating tweets about elections in Brazil and another national election in the United States.

Adam Satariano contributed reporting.
