Twitter bans the account of James O’Keefe, the founder of Project Veritas.

Twitter said on Thursday that it had blocked the account of James O’Keefe, the founder of the conservative group Project Veritas.

Mr. O’Keefe’s account, @JamesOKeefeIII, was “permanently suspended for violating the Twitter Rules on platform manipulation and spam,” specifically that users cannot mislead others with fake accounts or “artificially amplify or disrupt conversations” through the use of multiple accounts, a Twitter spokesman said.

In a statement on his website, Mr. O’Keefe said he would file a defamation lawsuit against Twitter on Monday over its claim that he had operated fake accounts.

“This is false, this is defamatory, and they will pay,” the statement said.

“Section 230 may have protected them before, but it will not protect them from me,” Mr. O’Keefe said, referring to a legal liability shield for social media. That shield, part of the federal Communications Decency Act, has become a favorite target of lawmakers in both parties.

Twitter had previously suspended the Project Veritas account, saying it had posted private information. It also temporarily locked Mr. O’Keefe’s account.

My Family’s Global Vaccine Journey

On Feb. 22, Mom texted that she and Dad had booked a March 11 appointment to get their first shots, followed by second doses in April. A day later, she reported that Dad hadn’t pressed the button to confirm the appointment on the online booking system and had lost the slots.

The next week, they texted again: They had walked to a private clinic that was dispensing Sinovac shots. After a short wait, they received the vaccine. On April 2, they told us that they had gotten their second dose of Sinovac and were feeling fine. Mom groused that even though they had an appointment, they “still need to wait for half an hour.”

Our responses were more enthusiastic.

“Great news,” I wrote.

“Yay!” Pui-Ying texted, followed by celebratory emojis.

“Congrats!” Pui Ling said.

Pui-Ying had moved with her family to Malawi in 2016 to work as a doctor and conduct clinical research on children’s health. Resources at the Queen Elizabeth Central Hospital, where she works, were limited. When Madonna’s charity helped finance the construction of a new children’s wing at the hospital, which opened in 2017, it was big news.

Staffing was tight even before the coronavirus, Pui-Ying said. When the pandemic came, the hospital decided on a one-week-on, one-week-off routine to reduce staff exposure to Covid-19 while ensuring that enough medical professionals would be working at all times. Masks, gloves and other protective equipment were scarce.

In pediatrics, Pui-Ying and her colleagues set up a “respiratory zone” for children with Covid-19. It was essentially a two-room ward, with about a dozen beds in the main room. The second room, which was an isolation unit, had space for four children.

A Collapse Foretold: How Brazil’s Covid-19 Outbreak Overwhelmed Hospitals

The virus has killed more than 300,000 people in Brazil, its spread aided by a highly contagious variant, political infighting and distrust of science.


PORTO ALEGRE, Brazil — The patients began arriving at hospitals in Porto Alegre far sicker and younger than before. Funeral homes were experiencing a steady uptick in business, while exhausted doctors and nurses pleaded in February for a lockdown to save lives.

But Sebastião Melo, Porto Alegre’s mayor, argued there was a greater imperative.

“Put your life on the line so that we can save the economy,” Mr. Melo appealed to his constituents in late February.

Now Porto Alegre, a prosperous city in southern Brazil, is at the heart of a stunning breakdown of the country’s health care system — a crisis foretold.

More than a year into the pandemic, deaths in Brazil are at their peak and highly contagious variants of the coronavirus are sweeping the nation, enabled by political dysfunction, widespread complacency and conspiracy theories. The country, whose leader, President Jair Bolsonaro, has played down the threat of the virus, is now reporting more new cases and deaths per day than any other country in the world.

Roughly 125 Brazilians were succumbing to the disease every hour. Health officials in public and private hospitals were scrambling to expand critical care units, stock up on dwindling supplies of oxygen and procure scarce intubation sedatives that were being sold at steep markups.

Intensive care units in Brasília, the capital, and in 16 of Brazil’s 26 states are reporting critical shortages, with fewer than 10 percent of beds available, and contagion is rising in many of them. (When 90 percent of such beds are full, the situation is considered dire.)

Brazil’s public health system was once considered a model for other developing nations, with a reputation for advancing agile and creative solutions to medical crises, including a surge in H.I.V. infections and the outbreak of Zika.

Mr. Melo, who campaigned last year on a promise to lift all pandemic restrictions in the city, said a lockdown would cause people to starve.

Mr. Bolsonaro celebrated setbacks in clinical trials for CoronaVac, the Chinese-made vaccine that Brazil came to rely on heavily, and joked that pharmaceutical companies would not be held responsible if people who got newly developed vaccines turned into alligators.

“The government initially dismissed the threat of the pandemic, then the need for preventive measures, and then goes against science by promoting miracle cures,” said Natália Pasternak, a microbiologist in São Paulo. “That confuses the population, which means people felt safe going out in the street.”

Terezinha Backes, a 63-year-old retired shoemaker living in a municipality on the outskirts of Porto Alegre, had been exceedingly careful over the past year, venturing out only when necessary, said her nephew, Henrique Machado.

But her 44-year-old son, a security guard tasked with taking the temperature of people entering a medical facility, appears to have brought the virus home early this month.

Ms. Backes, who had been in good health, was taken to a hospital on March 13 after she began having trouble breathing. With no beds to spare, she was treated with oxygen and an IV in the hallway of an overflowing wing. She died three days later.

“My aunt was not given the right to fight for her life,” said Mr. Machado, 29, a pharmacist. “She was left in a hallway.”

Mr. Monteiro said he takes the anti-parasite drug ivermectin as a preventive measure. The drug is part of the so-called Covid kit of drugs, which also includes the antibiotic azithromycin and the anti-malaria drug hydroxychloroquine. Mr. Bolsonaro’s health ministry has endorsed their use.

Leading medical experts in Brazil, the United States and Europe have said those drugs are not effective in treating Covid-19, and some can have serious side effects, including kidney failure.

“Lies,” Mr. Monteiro, 63, said about the scientific consensus on the Covid kit. “There are so many lies and myths.”

He said medical professionals had sabotaged Mr. Bolsonaro’s plan to rein in the pandemic by refusing to prescribe those drugs more aggressively at the early stages of illness.

“There was one solution: to listen to the president,” he said. “When people elect a leader it is because they trust him.”

The mistrust and the denials — and the caravans of Bolsonaro supporters blasting their horns outside hospitals to protest pandemic restrictions — are crushing for medical professionals who have lost colleagues to the virus and to suicide in recent months, said Claudia Franco, the president of the nurses union in Rio Grande do Sul.

“People are in such denial,” said Ms. Franco, who has been taking care of Covid-19 patients. “The reality we’re in today is we don’t have enough respirators for everyone, we don’t have oxygen for everyone.”

Ernesto Londoño reported from Porto Alegre. Letícia Casado reported from Brasília.

Jack Dorsey says Twitter played a role in U.S. Capitol riot.

Jack Dorsey, Twitter’s chief executive, said during his congressional testimony on Thursday that the site played a role in the storming of the U.S. Capitol on Jan. 6, in what appeared to be the first public acknowledgment by a top social media executive of the influence of the platforms on the riot.

Mr. Dorsey’s answer came after Representative Mike Doyle, Democrat of Pennsylvania, pressed the tech chief executives at a hearing on disinformation to answer “yes” or “no” as to whether their platforms had contributed to the spread of misinformation and the planning of the attack.

Mark Zuckerberg, Facebook’s chief executive, and Sundar Pichai, Google’s chief executive, did not answer with a “yes” or “no.” Mr. Dorsey took a different tack.

“Yes,” he said. “But you also have to take into consideration the broader ecosystem. It’s not just about the technological systems that we use.”

Twitter and Facebook barred Mr. Trump from posting on their platforms after the riot. Their actions suggested that they saw a risk of more violence being incited by what was posted on their sites, but the executives had not previously articulated what role the platforms had played.

Representative Jan Schakowsky, Democrat of Illinois, later asked Mr. Zuckerberg about remarks that Facebook’s chief operating officer, Sheryl Sandberg, made shortly after the riot. In a January interview with Reuters, Ms. Sandberg said that the planning for the riot had been “largely organized” on other social media platforms, and she played down Facebook’s involvement.

Ms. Schakowsky asked whether Mr. Zuckerberg agreed with Ms. Sandberg’s statement.

Mr. Zuckerberg appeared to walk back Ms. Sandberg’s remarks. “In the comment that Sheryl made what I believe that we were trying to say was, and where I stand behind, is what was widely reported at the time,” he responded.

Mr. Zuckerberg then said: “Certainly there was content on our services. From that perspective, I think there’s further work that we need to do.” He seemed to want to add more before Ms. Schakowsky interrupted him.

Lawmakers Grill Tech C.E.O.s on Capitol Riot, Getting Few Direct Answers

WASHINGTON — Lawmakers grilled the leaders of Facebook, Google and Twitter on Thursday about the connection between online disinformation and the Jan. 6 riot at the Capitol, causing Twitter’s chief executive to publicly admit for the first time that his product had played a role in the events that left five people dead.

When a Democratic lawmaker asked the executives to answer with a “yes” or a “no” whether the platforms bore some responsibility for the misinformation that had contributed to the riot, Jack Dorsey of Twitter said “yes.” Neither Mark Zuckerberg of Facebook nor Sundar Pichai of Google would answer the question directly.

The roughly five-hour hearing before a House committee marked the first time lawmakers directly questioned the chief executives regarding social media’s role in the January riot. The tech bosses were also peppered with questions about how their companies helped spread falsehoods around Covid-19 vaccines, enable racism and hurt children’s mental health.

It was also the first time the executives had testified since President Biden’s inauguration. Tough questioning from lawmakers signaled that scrutiny of Silicon Valley’s business practices would not let up, and could even intensify, with Democrats in the White House and leading both chambers of Congress.

During the hearing, Mr. Dorsey tweeted a single question mark with a poll that had two options: “Yes” or “No.” When asked about his tweet by a lawmaker, he said “yes” was winning.

The January riot at the Capitol has made the issue of disinformation deeply personal for lawmakers. The riot was fueled by false claims from President Donald J. Trump and others that the election had been stolen, which were rampant on social media.

Some of the participants had connections to QAnon and other online conspiracy theories. And prosecutors have said that groups involved in the riot, including the Oath Keepers and the Proud Boys, coordinated some of their actions on social media.

Twitter and Facebook banned Mr. Trump and some of his associates after the Jan. 6 riots. The bans hardened views among conservatives that the companies are left-leaning and inclined to squelch conservative voices.

“We’re all aware of Big Tech’s ever-increasing censorship of conservative voices and their commitment to serve the radical progressive agenda,” said Representative Bob Latta of Ohio, the ranking Republican on the panel’s technology subcommittee.

The company leaders defended their businesses, saying they had invested heavily in hiring content moderators and in technology like artificial intelligence, used to identify and fight disinformation.

Mr. Zuckerberg argued against the notion that his company had a financial incentive to juice its users’ attention by driving them toward more extreme content. He said Facebook didn’t design “algorithms in order to just kind of try to tweak and optimize and get people to spend every last minute on our service.”

He added later in the hearing that election disinformation had also spread in messaging apps, where amplification and algorithms do not aid the spread of false content. He also blamed television and other traditional media for spreading election lies.

The companies showed fissures in their view on regulations. Facebook has vocally supported internet regulations in a major advertising blitz on television and in newspapers. In the hearing, Mr. Zuckerberg suggested specific regulatory reforms to a key legal shield, known as Section 230 of the Communications Decency Act, that has helped Facebook and other Silicon Valley internet giants thrive.

The legal shield protects companies that host and moderate third-party content, and says companies like Google and Twitter are simply intermediaries of their user-generated content. Democrats have argued that with that protection, companies aren’t motivated to remove disinformation. Republicans accuse the companies of using the shield to moderate too much and to take down content that doesn’t represent their political viewpoints.

“I believe that Section 230 would benefit from thoughtful changes to make it work better for people,” Mr. Zuckerberg said in his written statement.

He proposed that liability protection for companies be conditional on their ability to fight the spread of certain types of unlawful content. He said platforms should be required to demonstrate that they have systems in place for identifying unlawful content and removing it. Reforms, he said, should be different for smaller social networks, which wouldn’t have the same resources as Facebook to meet new requirements.

Mr. Pichai and Mr. Dorsey said they supported transparency requirements for content moderation but stopped short of endorsing Mr. Zuckerberg’s other ideas. Mr. Dorsey said it would be very difficult to distinguish a large platform from a smaller one.

Lawmakers did not appear to be won over.

“There’s a lot of smugness among you,” said Representative Bill Johnson, Republican of Ohio. “There’s this air of untouchable-ness in your responses to many of the tough questions that you’re being asked.”

Kate Conger and Daisuke Wakabayashi contributed reporting.

Zuckerberg, Dorsey and Pichai testify about disinformation.

The chief executives of Google, Facebook and Twitter are testifying before the House on Thursday about how disinformation spreads across their platforms, an issue for which the tech companies came under scrutiny during the presidential election and after the Jan. 6 riot at the Capitol.

The hearing, held by the House Energy and Commerce Committee, is the first time that Mark Zuckerberg of Facebook, Jack Dorsey of Twitter and Sundar Pichai of Google are appearing before Congress during the Biden administration. President Biden has indicated that he is likely to be tough on the tech industry. That position, coupled with Democratic control of Congress, has raised liberal hopes that Washington will take steps to rein in Big Tech’s power and reach over the next few years.

The hearing is also the first opportunity since the Jan. 6 Capitol riot for lawmakers to question the three men about the role their companies played in the event. The attack has made the issue of disinformation intensely personal for lawmakers, since those who participated in the riot have been linked to online conspiracy theories like QAnon.

Before the hearing, Democrats signaled in a memo that they were interested in questioning the executives about the Jan. 6 attacks, efforts by the right to undermine the results of the 2020 election and misinformation related to the Covid-19 pandemic.

Republicans, for their part, were expected to press the executives on the platforms’ decisions to limit the spread of an October article in The New York Post about President Biden’s son Hunter.

Lawmakers have debated whether social media platforms’ business models encourage the spread of hate and disinformation by prioritizing content that will elicit user engagement, often by emphasizing salacious or divisive posts.

Some lawmakers will push for changes to Section 230 of the Communications Decency Act, a 1996 law that shields the platforms from lawsuits over their users’ posts. Lawmakers are trying to strip the protections in cases where the companies’ algorithms amplified certain illegal content. Others believe that the spread of disinformation could be stemmed with stronger antitrust laws, since the platforms are by far the major outlets for communicating publicly online.

“By now it’s painfully clear that neither the market nor public pressure will stop social media companies from elevating disinformation and extremism, so we have no choice but to legislate, and now it’s a question of how best to do it,” said Representative Frank Pallone, the New Jersey Democrat who is chairman of the committee.

The tech executives are expected to play up their efforts to limit misinformation and redirect users to more reliable sources of information. They may also entertain the possibility of more regulation, in an effort to shape increasingly likely legislative changes rather than resist them outright.

This Island Nation Had Zero Covid Cases as of June. Now It’s Overwhelmed.

“They’re our family. They’re our friends. They’re our neighbors. They’re our partners,” Scott Morrison, Australia’s prime minister, said last week. “This is in Australia’s interests, and it is in our region’s interests,” he added.

Covax, a global health initiative designed to make access to inoculations more equal, began rolling out doses of vaccines to developing nations last month, and it has said it will deliver 588,000 doses to Papua New Guinea by June.

But in some cases, wealthier nations have failed to honor contracts, reducing the number of doses the initiative can buy, Dr. Tedros Adhanom Ghebreyesus, the director of the World Health Organization, said in a statement last month. He warned that the pandemic would not end until everyone was vaccinated.

“This is not a matter of charity,” he said. “It’s a matter of epidemiology.”

Until then, officials in Papua New Guinea will be left to combat not only the virus itself but also a tide of misinformation about the pathogen and the vaccines, carried largely through social media channels.

“Even for the educated health worker, it’s causing a lot of doubt,” said Dr. Nou, the Port Moresby-based physician, who has conducted a survey of health care workers’ views about the pandemic.

Some public health experts said they worried that the redirection of resources to fight the coronavirus could come at a lethal cost to those with other severe health conditions, such as malaria or tuberculosis. Papua New Guinea has some of the highest rates of tuberculosis in the world.

“It’s not good enough to just respond to Covid and then have someone die of another cause,” said Dr. Suman Majumdar, an infectious diseases specialist at the Burnet Institute, an Australian medical research facility. “We have feared the worst,” he added, “and this is happening.”

How the Death of Taylor Force in Israel Echoes Through the Fight Over Online Speech

WASHINGTON — Stuart Force says he found solace on Facebook after his son was stabbed to death in Israel by a member of the militant group Hamas in 2016. He turned to the site to read hundreds of messages offering condolences on his son’s page.

But only a few months later, Mr. Force had decided that Facebook was partly to blame for the death, because the algorithms that power the social network helped spread Hamas’s content. He joined relatives of other terror victims in suing the company, arguing that its algorithms aided the crimes by regularly amplifying posts that encouraged terrorist attacks.

The legal case ended unsuccessfully last year when the Supreme Court declined to take it up. But arguments about the algorithms’ power have reverberated in Washington, where some members of Congress are citing the case in an intense debate about the law that shields tech companies from liability for content posted by users.

At a House hearing on Thursday about the spread of misinformation with the chief executives of Facebook, Twitter and Google, some lawmakers are expected to focus on how the companies’ algorithms are written to generate revenue by surfacing posts that users are inclined to click on and respond to. And some will argue that the law that protects the social networks from liability, Section 230 of the Communications Decency Act, should be changed to hold the companies responsible when their software turns the services from platforms into accomplices for crimes committed offline.

The family was contacted by a litigation group, which had a question: Would the Force family be willing to sue Facebook?

After Mr. Force spent some time on a Facebook page belonging to Hamas, the family agreed to sue. The lawsuit fit into a broader effort by the Forces to limit the resources and tools available to Palestinian groups. Mr. Force and his wife allied with lawmakers in Washington to pass legislation restricting aid to the Palestinian Authority, which governs part of the West Bank.

Their lawyers argued in an American court that Facebook gave Hamas “a highly developed and sophisticated algorithm that facilitates Hamas’s ability to reach and engage an audience it could not otherwise reach as effectively.” The lawsuit said Facebook’s algorithms had not only amplified posts but aided Hamas by recommending groups, friends and events to users.

The federal district judge, in New York, ruled against the claims, citing Section 230. The lawyers for the Force family appealed to a three-judge panel of the U.S. Court of Appeals for the Second Circuit, and two of the judges ruled entirely for Facebook. The other, Judge Robert Katzmann, wrote a 35-page dissent to part of the ruling, arguing that Facebook’s algorithmic recommendations shouldn’t be covered by the legal protections.

“Mounting evidence suggests that providers designed their algorithms to drive users toward content and people the users agreed with — and that they have done it too well, nudging susceptible souls ever further down dark paths,” he said.

Late last year, the Supreme Court rejected a call to hear a different case that would have tested the Section 230 shield. In a statement attached to the court’s decision, Justice Clarence Thomas called for the court to consider whether Section 230’s protections had been expanded too far, citing Mr. Force’s lawsuit and Judge Katzmann’s opinion.

Justice Thomas said the court didn’t need to decide in the moment whether to rein in the legal protections. “But in an appropriate case, it behooves us to do so,” he said.

Some lawmakers, lawyers and academics say recognition of the power of social media’s algorithms in determining what people see is long overdue. The platforms usually do not reveal exactly what factors the algorithms use to make decisions and how they are weighed against one another.

“Amplification and automated decision-making systems are creating opportunities for connection that are otherwise not possible,” said Olivier Sylvain, a professor of law at Fordham University, who has made the argument in the context of civil rights. “They’re materially contributing to the content.”

That argument has appeared in a series of lawsuits contending that Facebook should be held responsible for housing discrimination because its platform could target advertisements according to a user’s race. A draft bill produced by Representative Yvette D. Clarke, Democrat of New York, would strip Section 230 immunity from targeted ads that violated civil rights law.

A bill introduced last year by Representatives Tom Malinowski of New Jersey and Anna G. Eshoo of California, both Democrats, would strip Section 230 protections from social media platforms when their algorithms amplified content that violated some antiterrorism and civil rights laws. The news release announcing the bill, which was reintroduced on Wednesday, cited the Force family’s lawsuit against Facebook. Mr. Malinowski said he had been inspired in part by Judge Katzmann’s dissent.

Critics of the legislation say it may violate the First Amendment and, because there are so many algorithms on the web, could sweep up a wider range of services than lawmakers intend. They also say there’s a more fundamental problem: Regulating algorithmic amplification out of existence wouldn’t eliminate the impulses that drive it.

“There’s a thing you kind of can’t get away from,” said Daphne Keller, the director of the Program on Platform Regulation at Stanford University’s Cyber Policy Center, “which is human demand for garbage content.”

How Anti-Asian Activity Online Set the Stage for Real-World Violence

Negative Asian-American tropes have long existed online but began increasing last March as parts of the United States went into lockdown over the coronavirus. That month, politicians including Representative Paul Gosar, Republican of Arizona, and Representative Kevin McCarthy, Republican of California, used the terms “Wuhan virus” and “Chinese coronavirus” to refer to Covid-19 in their tweets.

Those terms then began trending online, according to a study from the University of California, Berkeley. On the day Mr. Gosar posted his tweet, use of the term “Chinese virus” jumped 650 percent on Twitter; a day later, there was an 800 percent increase in its use in conservative news articles, the study found.

Mr. Trump also posted eight times on Twitter last March about the “Chinese virus,” causing vitriolic reactions. In the replies section of one of his posts, a Trump supporter responded, “U caused the virus,” directing the comment to an Asian Twitter user who had cited U.S. death statistics for Covid-19. The Trump fan added a slur about Asian people.

In a study this week from the University of California, San Francisco, researchers who examined 700,000 tweets before and after Mr. Trump’s March 2020 posts found that people who posted the hashtag #chinesevirus were more likely to use racist hashtags, including #bateatingchinese.

“There’s been a lot of discussion that ‘Chinese virus’ isn’t racist and that it can be used,” said Yulin Hswen, an assistant professor of epidemiology at the University of California, San Francisco, who conducted the research. But the term, she said, has turned into “a rallying cry to be able to gather and galvanize people who have these feelings, as well as normalize racist beliefs.”

Representatives for Mr. Trump, Mr. McCarthy and Mr. Gosar did not respond to requests for comment.

Misinformation linking the coronavirus to anti-Asian beliefs also rose last year. Since last March, there have been nearly eight million mentions of anti-Asian speech online, much of it falsehoods, according to Zignal Labs, a media insights firm.

In one example, a Fox News article from April that went viral baselessly said that the coronavirus was created in a lab in the Chinese city of Wuhan and intentionally released. The article was liked and shared more than one million times on Facebook and retweeted 78,800 times on Twitter, according to data from Zignal and CrowdTangle, a Facebook-owned tool for analyzing social media.
