The software that many school districts use to track students’ progress can record extremely confidential information on children: “Intellectual disability.” “Emotional Disturbance.” “Homeless.” “Disruptive.” “Defiance.” “Perpetrator.” “Excessive Talking.” “Should attend tutoring.”
Now these systems are coming under heightened scrutiny after a recent cyberattack on Illuminate Education, a leading provider of student-tracking software, which affected the personal information of more than a million current and former students across dozens of districts — including in New York City and Los Angeles, the nation’s largest public school systems.
Officials said in some districts the data included the names, dates of birth, races or ethnicities and test scores of students. At least one district said the data included more intimate information like student tardiness rates, migrant status, behavior incidents and descriptions of disabilities.
Among the districts affected was Chicago Public Schools, the nation’s third-largest.
Now some cybersecurity and privacy experts say that the cyberattack on Illuminate Education amounts to a warning for industry and government regulators. Although it was not the largest hack on an ed tech company, these experts say they are troubled by the nature and scope of the data breach — which, in some cases, involved delicate personal details about students or student data dating back more than a decade. At a moment when some education technology companies have amassed sensitive information on millions of school children, they say, safeguards for student data seem wholly inadequate.
“There has really been an epic failure,” said Hector Balderas, the attorney general of New Mexico, whose office has sued tech companies for violating the privacy of children and students.
In a recent interview, Mr. Balderas said that Congress had failed to enact modern, meaningful data protections for students while regulators had failed to hold ed tech firms accountable for flouting student data privacy and security.
The rapid adoption of classroom technology has outpaced protections for students’ personal information. Lawmakers rushed to respond.
Since 2014, California, Colorado and dozens of other states have passed student data privacy and security laws. In 2014, dozens of K-12 ed tech providers signed on to a national Student Privacy Pledge, promising to maintain a “comprehensive security program.”
Supporters of the pledge said the Federal Trade Commission, which polices deceptive privacy practices, would be able to hold companies to their commitments. President Obama endorsed the pledge, praising participating companies in a major privacy speech at the F.T.C. in 2015.
The F.T.C. has a long history of fining companies for violating children’s privacy on consumer services like YouTube and TikTok. Despite numerous reports of ed tech companies with problematic privacy and security practices, however, the agency has yet to enforce the industry’s student privacy pledge.
In May, the F.T.C. announced that regulators intended to crack down on ed tech companies that violate a federal law — the Children’s Online Privacy Protection Act — which requires online services aimed at children under 13 to safeguard their personal data. The agency is pursuing a number of nonpublic investigations into ed tech companies, said Juliana Gruenwald Henderson, an F.T.C. spokeswoman.
Illuminate’s site says its services reach more than 17 million students in 5,200 school districts. Popular products include an attendance-taking system and an online grade book as well as a school platform, called eduCLIMBER, that enables educators to record students’ “social-emotional behavior” and color-code children as green (“on track”) or red (“not on track”).
Illuminate has promoted its cybersecurity. In 2016, the company announced that it had signed on to the industry pledge to show its “support for safeguarding” student data.
Concerns about a cyberattack emerged in January after some teachers in New York City schools discovered that their online attendance and grade book systems had stopped working. Illuminate said it temporarily took those systems offline after it became aware of “suspicious activity” on part of its network.
On March 25, Illuminate notified the district that certain company databases had been subject to unauthorized access, said Nathaniel Styer, the press secretary for New York City Public Schools. The incident, he said, affected about 800,000 current and former students across roughly 700 local schools.
For the affected New York City students, data included first and last names, school name and student ID number as well as at least two of the following: birth date, gender, race or ethnicity, home language and class information like teacher name. In some cases, students’ disability status — that is, whether or not they received special education services — was also affected.
New York City officials said they were outraged. In 2020, Illuminate signed a strict data agreement with the district requiring the company to safeguard student data and promptly notify district officials in the event of a data breach.
Illuminate kept student data on the Amazon Web Services online storage system. Cybersecurity experts said many companies had inadvertently made their A.W.S. storage buckets easy for hackers to find — by naming databases after company platforms or products.
After a spate of cyberattacks on both ed tech companies and public schools, education officials said it was time for Washington to intervene to protect students.
“Changes at the federal level are overdue and could have an immediate and nationwide impact,” said Mr. Styer, the New York City schools spokesman. Congress, for instance, could amend federal education privacy rules to impose data security requirements on school vendors, he said. That would enable federal agencies to levy fines on companies that failed to comply.
One agency has already cracked down — but not on behalf of students.
Last year, the Securities and Exchange Commission charged Pearson, a major provider of assessment software for schools, with misleading investors about a cyberattack in which the birth dates and email addresses of millions of students were stolen. Pearson agreed to pay $1 million to settle the charges.
Mr. Balderas, the attorney general, said he was infuriated that financial regulators had acted to protect investors in the Pearson case — even as privacy regulators failed to step up for schoolchildren who were victims of cybercrime.
“My concern is there will be bad actors who will exploit a public school setting, especially when they think that the technology protocols are not very robust,” Mr. Balderas said. “And I don’t know why Congress isn’t terrified yet.”
WASHINGTON — The Federal Trade Commission on Wednesday filed for an injunction to block Meta, the company formerly known as Facebook, from buying a virtual reality company called Within, potentially limiting the company’s push into the so-called metaverse and signaling a shift in how the agency is approaching tech deals.
The antitrust lawsuit is the first under Lina Khan, the commission’s chair and a leading progressive critic of corporate concentration, against one of the tech giants. Ms. Khan has argued that regulators must stop competition and consumer protection violations when it comes to the bleeding edge of technology, including virtual and augmented reality, and not just in areas where the companies have already become behemoths.
The F.T.C.’s request for an injunction puts Ms. Khan on a collision course with Mark Zuckerberg, Meta’s chief executive, who is also named as a defendant in the request. He has poured billions of dollars into building products for virtual and augmented reality, betting that the immersive world of the metaverse is the next technology frontier. The lawsuit could crimp those ambitions.
The F.T.C. argued in its lawsuit, which was filed in the U.S. District Court for the Northern District of California, that Meta could have built its own virtual reality fitness app. “Instead, it chose to buy” a top company in what the government called a “vitally important” category.
Meta said the lawsuit was an attack on innovation and that the agency was “sending a chilling message to anyone who wishes to innovate in V.R.”
Meta said last year that it would acquire Within, the maker of the highly popular fitness app Supernatural, for an undisclosed sum. The company has promoted its virtual reality headsets for fitness and health purposes.
The F.T.C.’s lawsuit is highly unusual and pushes the boundaries of antitrust law. Regulators mostly focus on deals between large companies in large markets, rather than on acquisitions of small start-ups in nascent tech areas. Courts have also been skeptical of blocking mergers on the theory that the two companies involved would have become competitors had the deal not gone through.
Critics of Meta often point to its 2012 purchase of Instagram, the photo-sharing app that has since grown to more than one billion regular users. Instagram has helped Meta dominate the market for social photo sharing, though other start-ups have sprung up since.
The F.T.C. is separately pursuing an antitrust lawsuit against Facebook that argues the company shut down nascent competition through acquisitions. The Justice Department has also sued Google over whether the company abused a monopoly over online search.
More cases could be coming. The F.T.C. is investigating whether Amazon has violated antitrust laws, and the Justice Department has inquiries into Google’s dominance over advertising technology and into Apple’s App Store policies.
For Mr. Zuckerberg, the F.T.C. lawsuit is a setback. He has been pushing Meta away from its roots in social networking as its apps, like Facebook and Instagram, face more competition amid stumbles in privacy and content moderation. Instead, he has bet on the metaverse.
Mr. Zuckerberg has reassigned employees and put a top lieutenant in charge of metaverse efforts. He has also authorized executives to pursue some of the most popular games in the V.R. space. In 2019, Facebook purchased Beat Games, makers of the hit title Beat Saber, one of the top V.R. games on the Oculus platform. He has also authorized the purchase of roughly half a dozen other virtual reality or gaming studios over the past three years.
The F.T.C. filed suit on Wednesday hours before Meta reported its first decline in quarterly revenue since it went public in 2012. The company has recently trimmed employee perks and reined in spending amid uncertain economic conditions. John Newman, the deputy director of the F.T.C.’s Bureau of Competition, said the agency acted on the Within deal because Meta was “trying to buy its way to the top.” The company already owned a best-selling virtual reality fitness app, he said, but then chose to acquire Within’s Supernatural app “to buy market position.” He said the deal was “an illegal acquisition, and we will pursue all appropriate relief.”
The F.T.C.’s vote to authorize the filing was split 3 to 2. Christine Wilson, a Republican commissioner at the agency, said she was one of the two votes against the lawsuit. She declined to comment on her reasoning.
The F.T.C. said in its request that asking for an injunction was sometimes a prelude to filing a complaint against a merger, which could embroil Meta and the agency in a lengthy trial and appeals process. An F.T.C. spokeswoman said the agency had not filed such a complaint and declined to comment further on the agency’s strategy.
Ms. Khan, 33, who was appointed by President Biden last year to acclaim from the left, has tried to make good on expansive promises to rein in corporate power. She became prominent after she wrote an article in law school in 2017 criticizing Amazon. As F.T.C. chair, she has called for regulators to vigorously enforce antitrust laws and has said she may craft sweeping online privacy rules that would implicate Silicon Valley companies.
The lawsuit drew praise from Ms. Khan’s allies. Sandeep Vaheesan, the legal director of the Open Markets Institute, a liberal think tank, said in a statement that the lawsuit was a “step toward making building, not buying, the norm for Facebook.”
But tech industry allies assailed Ms. Khan’s actions. Adam Kovacevich, the chief executive of Chamber of Progress, an industry group funded partly by Meta, said that with the new lawsuit, “the agency is more focused on getting headlines than results.” He said Meta “isn’t any closer than pickleball or synchronized swimming are to locking up the fitness market.”
Meta said in a blog post that the F.T.C. would fail to prove that the Within deal would “substantially lessen competition,” which is the bar that is typically set to block a deal under federal antitrust law.
In its lawsuit, the F.T.C. said that if Meta bought Within’s Supernatural, it would no longer have an incentive to improve Beat Saber, the virtual reality fitness game it already owns. But Nikhil Shanbhag, an associate general counsel for Meta, said in the blog post that the games weren’t competitors.
“Beat Saber is a game people play to have fun and it has many competitors,” he said. “Supernatural couldn’t be more different.”
SAN FRANCISCO — Mark Zuckerberg, the founder and chief executive of the company formerly known as Facebook, called his top lieutenants for the social network to a last-minute meeting in the San Francisco Bay Area this month. On the agenda: a “work-athon” to discuss the road map for improving the main Facebook app, including a revamp that would change how users browse the service.
For weeks beforehand, Mr. Zuckerberg had sent his executives messages about the overhaul, pressing them to increase the velocity and execution of their work, people with knowledge of the matter said. Some executives — who had to read a 122-page slide deck about the changes — were beginning to sweat at the unusual level of intensity, they said.
Facebook’s leaders flew in from around the world for the summit, the people said, and Mr. Zuckerberg and the group pored over each slide. Within days, the team unveiled an update to the Facebook app to better compete with a top rival, TikTok.
Mr. Zuckerberg has trimmed perks, reshuffled his leadership team and made it clear he would cut low-performing employees. Those who are not on board are welcome to leave, he has said. Managers have sent out memos to convey the seriousness of the approach — one, which was shared with The New York Times, had the title “Operating With Increased Intensity.”
Mr. Zuckerberg is also steering the company toward the so-called metaverse. Across Silicon Valley, he and other executives who built what many refer to as Web 2.0 — a more social, app-focused version of the internet — are rethinking and upending their original vision after their platforms were plagued by privacy stumbles, toxic content and misinformation.
The moment is reminiscent of other bet-the-company gambles, such as when Netflix killed off its DVD-mailing business last decade to focus on streaming. But Mr. Zuckerberg is making these moves with Meta’s back against the wall. The company is staring down the barrel of a global recession. Competitors like TikTok, YouTube and Apple are bearing down.
And success is far from guaranteed. In recent months, Meta’s profits have fallen and revenue has slowed as the company has spent lavishly on the metaverse and as the economic slowdown has hurt its advertising business. Its stock has plunged.
“When Mark gets super focused on something, it becomes all hands on deck within the company,” said Katie Harbath, a former Facebook policy director and the founder of Anchor Change, a consulting firm that works on tech and democracy issues. “Teams will quickly drop other work to pivot to the issue at hand, and the pressure is intense to move fast to show progress.”
Mr. Zuckerberg has elevated Andrew Bosworth, who is known as Boz, to chief technology officer, leading hardware efforts for the metaverse. He promoted other loyalists, too, including Javier Olivan, the new chief operating officer; Nick Clegg, who became president of global affairs; and Guy Rosen, who took on a new role of chief information security officer.
In June, Sheryl Sandberg, who was Mr. Zuckerberg’s No. 2 for 14 years, said she would step down this fall. While she spent more than a decade building Facebook’s advertising systems, she was less interested in doing the same for the metaverse, people familiar with her plans have said.
Mr. Zuckerberg has moved thousands of workers into different teams for the metaverse, training their focus on aspirational projects like hardware glasses, wearables and a new operating system for those devices.
“It’s an existential bet on where people over the next decade will connect, express and identify with one another,” said Matthew Ball, a longtime tech executive and the author of a book on the metaverse. “If you have the cash, the engineers, the users and the conviction to take a swing at that, then you should.”
But the efforts are far from cheap. Facebook’s Reality Labs division, which is building augmented and virtual reality products, has dragged down the company’s balance sheet; the hardware unit lost nearly $3 billion in the first quarter alone.
Meta is also contending with privacy changes from Apple that have hampered its ability to measure the effectiveness of ads on iPhones. TikTok, the Chinese-owned video app, has stolen young audiences from Meta’s core apps like Instagram and Facebook. These challenges are coinciding with a brutal macroeconomic environment, which has pushed Apple, Google, Microsoft and Twitter to freeze or slow hiring.
In a memo last month, Chris Cox, Meta’s chief product officer, said the economic environment called for “leaner, meaner, better executing teams.”
In an employee meeting around the same time, Mr. Zuckerberg said he knew that not everyone would be on board for the changes. That was fine, he told employees.
“I think some of you might decide that this place isn’t for you, and that self-selection is OK with me,” Mr. Zuckerberg said. “Realistically, there are probably a bunch of people at the company who shouldn’t be here.”
Another memo circulated internally among workers this month was titled “Operating With Increased Intensity.” In the memo, a Meta vice president said managers should begin to “think about every person on their team and the value they are adding.”
“If a direct report is coasting or a low performer, they are not who we need; they are failing this company,” the memo said. “As a manager, you cannot allow someone to be net neutral or negative for Meta.”
investment priorities” for the company in the second half of this year.
other prototypes. Bloomberg reported earlier on the smart watch.
Mr. Zuckerberg recently posted an update to his Facebook profile, noting some coming changes in the app. Facebook would start pushing people into a more video-heavy feed with more suggested content, emulating how TikTok operates.
Meta has been investing heavily in video and discovery, aiming to beef up its artificial intelligence and to improve “discovery algorithms” that suggest engaging content to users without them having to work to find it.
In the past, Facebook has tested major product updates with a few English-speaking audiences to see how they perform before rolling them out more widely. But, this time, the 2.93 billion people around the world who use the social networking app will receive the update simultaneously.
It is a sign, some Meta employees said, of just how much Mr. Zuckerberg means business.
For Didi, once hailed as an innovator and disrupter in China’s staid transportation sector, it has been a fast fall from grace. The company was considered the pride of China’s spunky, and valuable, start-up scene in 2016 when it beat its American rival, Uber, and bought the firm’s Chinese operations. At the time, its executives vowed that the data it collected would be used to unsnarl traffic jams and eventually help develop driverless cars.
As Beijing has asserted greater control over internet firms like Didi, it has sought to shape a private sector more in line with the Communist Party’s focus on political security and meeting its policy goals. Popular attitudes about China’s tech sector, once an emblem of future achievement, appear to have shifted, too.
After the punishment was announced, a number of professors and tech commentators took to Weibo to call for even harsher punishments.
Jin Canrong, a professor of international relations at Renmin University, called the revelations of Didi’s violations “really shocking!” Didi “disregarded national security, disregarded national laws and disregarded citizens’ privacy,” he added. Others went further, wondering whether a company that jeopardized national security should be allowed to exist at all.
In the short term, the government will probably relent on Didi, allowing it to restore its apps in stores. But the company will still have to show that it has addressed the regulator’s concerns over data security and other issues, said Linghao Bao, an analyst at Trivium China, a China-focused policy research team.
“Big tech platforms are getting a break as the economy is not doing so well. Regulators are shifting from a campaign-style crackdown toward a more rules-based governance,” he said. “But tech regulation is here to stay over the long term.”
Chinese artists have staged performances to highlight the ubiquity of surveillance cameras. Privacy activists have filed lawsuits against the collection of facial recognition data. Ordinary citizens and establishment intellectuals alike have pushed back against the abuse of Covid tracking apps by the authorities to curb protests. Internet users have shared tips on how to evade digital monitoring.
As China builds up its vast surveillance and security apparatus, it is running up against growing public unease about the lack of safeguards to prevent the theft or misuse of personal data. The ruling Communist Party is keenly aware of the cost to its credibility of any major security lapses: Last week, it moved systematically to squelch news about what was probably the largest known breach of a Chinese government computer system, involving the personal information of as many as one billion citizens.
The breach dealt a blow to Beijing, exposing the risks of its expansive efforts to vacuum up enormous amounts of digital and biological information on the daily activities and social connections of its people from social media posts, biometric data, phone records and surveillance videos. The government says these efforts are necessary for public safety: to limit the spread of Covid, for instance, or to catch criminals. But its failure to protect the data exposes citizens to problems like fraud and extortion, and threatens to erode people’s willingness to comply with surveillance.
Chinese regulators have penalized private companies for mishandling data. But the authorities rarely point fingers at the country’s other top collector of personal information: the government itself.
Security researchers say the leaked database, apparently used by the police in Shanghai, had been left online and unsecured for months. It was exposed after an anonymous user posted in an online forum offering to sell the vast trove of data for 10 Bitcoin, or about $200,000. The New York Times confirmed parts of a sample of the database released by the anonymous user, who posted under the name ChinaDan.
In addition to basic information like names, addresses and ID numbers, the sample featured details that appeared to be drawn from external databases, like instructions for couriers on where to drop off deliveries, raising questions about how much information private companies share with the authorities. Of particular concern for many, it also contained intensely personal information, such as police reports that included the names of people accused of rape and domestic violence, as well as private information about political dissidents.
Security researchers have previously found other leaked databases used by the police in China that were left online with little to no protection; some contained facial recognition records and ID scans of people in a Muslim ethnic minority region.
Now, there are signs that people are growing wary of the government and public institutions, too, as they see how their own data is being used against them. Last month, a nationwide outcry erupted over the apparent abuse of Covid-19 tracking technology by local authorities.
Protesters fighting to recover their savings from four rural banks in the central Chinese city of Zhengzhou found that the mobile apps used to identify and isolate people who might be spreading Covid had turned from green — meaning safe — to red, a designation that would prevent them from moving freely.
“There is no privacy in China,” said Silvia Si, 30, a protester whose health code had turned red. The authorities in Zhengzhou, under pressure to account for the episode, later punished five officials for changing the codes of more than 1,300 customers.
One blogger posted on Weibo that he was refusing to wear an electronic bracelet to track his movements while in isolation, saying the device was an “electronic shackle” and an infringement on his privacy. The post was liked around 60,000 times, and users flooded it with responses. Many said the bracelet reminded them of the treatment of criminals; others called it a ploy to surreptitiously collect personal information. The post was later taken down by censors, the blogger said.
“People are far more trusting overall in how government entities handle their personal information and far more suspicious about the corporate sector,” said a researcher on technology policy at Yale Law School and New America.
Legal analysts said any disciplinary actions resulting from the Shanghai police database breach were unlikely to be publicized. There are few mechanisms in place to hold Chinese government agencies responsible for their own data leaks. For many citizens, that lack of recourse has contributed to a sense of resignation.
Occasionally, though, they notch small victories, as Xu Peilin did when she took on her neighborhood committee last year. She had returned to her apartment building in Beijing one day to find that the compound wanted residents to submit to a facial recognition scanner to enter.
“It was insane,” said Ms. Xu, 37, a project manager at a start-up company. She said it reminded her of one of her favorite television shows, the British science fiction series “Black Mirror.”
Ms. Xu badgered her neighborhood committee by telephone and text message until it relented. For now, Ms. Xu said, she can still enter her compound using her key card, though she believed it was only a matter of time until the facial recognition devices became mandatory again.
“All I can do for now,” she said, “is continue to resist on a small scale.”
Digital payments are the default for millions of women of childbearing age. So what will their credit and debit card issuers and financial app providers do when prosecutors seek their transaction data during abortion investigations?
It is a hypothetical question, but almost certainly an inevitable one, in the wake of the overturning of Roe v. Wade last week. Now that abortion is illegal in several states, criminal investigators will soon begin their hunt for evidence to prosecute those they say violated the law.
Medical records are likely to be the most definitive proof of what now is a crime, but officials who cannot get those may look for evidence elsewhere. The payment trail is likely to be a high priority.
HIPAA — which governs the privacy of a patient’s health records — permits medical and billing records to be released in response to a warrant or subpoena.
“There is a very broad exception to the HIPAA protections for law enforcement,” said Marcy Wilder, a partner and co-head of the global privacy and cybersecurity practice at Hogan Lovells, a law firm. But Ms. Wilder added that the information shared with law enforcement officials could not be overly broad or unrelated to the request. “That is why it matters how companies and health plans are interpreting this.”
Card issuers and networks like Visa and Mastercard generally do not have itemized lists of everything that people pay for when they shop for prescription drugs or other medications online, or when they purchase services at health care providers. But evidence of patronage of, say, a pharmacy that sells only abortion pills could give someone away.
In Texas, a new state law authorizes residents to file lawsuits against anyone who helped facilitate an abortion.
“With the ruling only coming down late last week, it’s premature to understand the full impact at the state level,” Brad Russell, a USAA spokesman, said via email. “However, USAA will always comply with all applicable laws.”
American Airlines Credit Union, Bank of America, Capital One, Discover, Goldman Sachs, Prosperity Bank USA, Navy Federal Credit Union, US Bank, University of Wisconsin Credit Union, Wells Fargo and Western Union did not return at least two messages seeking comment.
American Express, Bank of America, Goldman Sachs, JPMorgan and Wells Fargo have all announced their intentions to reimburse employees for expenses if they travel to other states for abortions. So far, none have commented about how they would respond to a subpoena seeking the transaction records of the very employees who would be eligible for employer reimbursement.
Amie Stepanovich, vice president of U.S. policy at the Future of Privacy Forum, a nonprofit focused on data privacy and protection, said warrants and subpoenas can be accompanied by gag orders, which can prevent companies from even alerting their customers that they’re being investigated.
“They can choose to battle the use of gag orders in court,” she said. “Sometimes they win, sometimes they don’t.”
In other instances, prosecutors may not say exactly what they’re investigating when they ask for transaction records. In that case, it’s up to the financial institution to request more information or try to figure it out on its own.
Paying for abortion services with cash is one possible way to avoid detection, even if it isn’t possible for people ordering pills online. Many abortion funds pay on behalf of people who need financial help.
But cash and electronic transfers of money are not entirely foolproof.
“Even if you are paying with cash, the amount of residual information that can be used to reveal health status and pregnancy status is fairly significant,” said Ms. Stepanovich, referring to potential bread crumbs such as the use of a retailer’s loyalty program or location tracking on a mobile phone when making a cash purchase.
In some cases, users may inadvertently give up sensitive information themselves through apps that track and share their financial behavior.
“The purchase of a pregnancy test on an app where financial history is public is probably the biggest red flag,” Ms. Stepanovich said.
Other advocates mentioned the possibility of using prepaid cards in fixed amounts, like the kinds that people can buy off a rack in a drugstore. Cryptocurrency, they added, usually does leave enough of a trail that achieving anonymity is challenging.
One thing that every expert emphasized is the lack of certainty. But there is an emerging gut feeling that corporations will be in the spotlight at least as much as judges.
“Now, these payment companies are going to be front and center in the fight,” Ms. Caraballo said.
There is no clear blueprint for corporate engagement on abortion. After numerous companies came forward to announce that they would cover travel expenses for their employees to get abortions, executives have had to move swiftly to both sort out the mechanics of those policies and explain them to a work force concerned about confidentiality and safety.
Few companies have commented directly on the Supreme Court’s ruling in Dobbs v. Jackson Women’s Health Organization, which ended nearly 50 years of federal abortion rights. Far more have responded by expanding their health care policies to cover travel and other expenses for employees who can’t get abortions close to home, now that the procedure is banned in at least eight states, with other bans set to take effect soon. About half the country gets its health care coverage from employers, and the wave of new employer commitments has raised concerns from some workers about privacy.
“It’s a doomsday scenario if individuals have to bring their health care choices to their employers,” said Dina Fierro, a global vice president at the cosmetics company Nars, echoing a concern that many workers have expressed on social media in recent days.
Popular Information. Match Group declined to comment.
tweet: “I believe CEOs have a responsibility to take care of their employees — no matter what.”
The more than 1.4 billion people living in China are constantly watched. They are recorded by police cameras that are everywhere, on street corners and subway ceilings, in hotel lobbies and apartment buildings. Their phones are tracked, their purchases are monitored, and their online chats are censored.
Now, even their future is under surveillance.
The latest generation of technology digs through the vast amounts of data collected on their daily activities to find patterns and aberrations, promising to predict crimes or protests before they happen. These systems target potential troublemakers in the eyes of the Chinese government — not only those with a criminal past but also vulnerable groups, including ethnic minorities, migrant workers and those with a history of mental illness.
They can warn the police if a fraud victim tries to travel to Beijing to petition the government for payment, or if a drug user makes too many calls to the same number. They can signal officers each time a person with a history of mental illness gets near a school.
automating systemic discrimination and political repression.
to quell ethnic unrest in the western region of Xinjiang and enforce some of the world’s most severe coronavirus lockdowns. The space for dissent, always limited, is rapidly disappearing.
“Big data should be used as an engine to power the innovative development of public security work and a new growth point for nurturing combat capabilities,” Mr. Xi said in 2019 at a national public security work meeting.
ChinaFile, an online magazine published by the Asia Society, which has systematically gathered years of records on government websites. Another set, describing software bought by the authorities in the port city of Tianjin to stop petitioners from going to neighboring Beijing, was provided by IPVM, a surveillance industry publication.
China’s Ministry of Public Security did not respond to requests for comment faxed to its headquarters in Beijing and six local departments across the country.
The new approach to surveillance is partly based on data-driven policing software from the United States and Europe, technology that rights groups say has encoded racism into decisions like which neighborhoods are most heavily policed and which prisoners get parole. China takes it to the extreme, tapping nationwide reservoirs of data that allow the police to operate with opacity and impunity.
Megvii, an artificial intelligence start-up, told Chinese state media that its surveillance system could give the police a search engine for crime, analyzing huge amounts of video footage to intuit patterns and warn the authorities about suspicious behavior. The company explained that if cameras detected a person spending too much time at a train station, the system could flag a possible pickpocket.
Hikvision, that aims to predict protests. The system collects data on legions of Chinese petitioners, a general term in China that describes people who try to file complaints about local officials with higher authorities.
It then scores petitioners on the likelihood that they will travel to Beijing. In the future, the data will be used to train machine-learning models, according to a procurement document.
Local officials want to prevent such trips to avoid political embarrassment or exposure of wrongdoing. And the central government doesn’t want groups of disgruntled citizens gathering in the capital.
A Hikvision representative declined to comment on the system.
Under Mr. Xi, official efforts to control petitioners have grown increasingly invasive. Zekun Wang, a 32-year-old member of a group that for years sought redress over a real estate fraud, said the authorities in 2017 had intercepted fellow petitioners in Shanghai before they could even buy tickets to Beijing. He suspected that the authorities were watching their communications on the social media app WeChat.
The Hikvision system in Tianjin, which is run in cooperation with the police in nearby Beijing and Hebei Province, is more sophisticated.
The platform analyzes individuals’ likelihood to petition based on their social and family relationships, past trips and personal situations, according to the procurement document. It helps the police create a profile of each, with fields for officers to describe the temperament of the protester, including “paranoid,” “meticulous” and “short tempered.”
Many people who petition do so over government mishandling of a tragic accident or neglect of their case, all of which goes into the algorithm. “Increase a person’s early-warning risk level if they have low social status or went through a major tragedy,” reads the procurement document.
When the police in Zhouning, a rural county in Fujian Province, bought a new set of 439 cameras in 2018, they listed coordinates where each would go. Some hung above intersections and others near schools, according to a procurement document.
Nine were installed outside the homes of people with something in common: mental illness.
While some software tries to use data to uncover new threats, a more common type is based on the preconceived notions of the police. In over a hundred procurement documents reviewed by The Times, the surveillance targeted blacklists of “key persons.”
These people, according to some of the procurement documents, included those with mental illness, convicted criminals, fugitives, drug users, petitioners, suspected terrorists, political agitators and threats to social stability. Other systems targeted migrant workers, idle youths (teenagers without school or a job), ethnic minorities, foreigners and those infected with H.I.V.
The authorities decide who goes on the lists, and there is often no process to notify people when they do. Once individuals are in a database, they are rarely removed, said experts, who worried that the new technologies reinforce disparities within China, imposing surveillance on the least fortunate parts of its population.
In many cases the software goes further than simply targeting a population, allowing the authorities to set up digital tripwires that indicate a possible threat. In one Megvii presentation detailing a rival product by Yitu, the system’s interface allowed the police to devise their own early warnings.
With a simple fill-in-the-blank menu, the police can base alarms on specific parameters, including where a blacklisted person appears, when the person moves around, whether he or she meets with other blacklisted people and the frequency of certain activities. The police could set the system to send a warning each time two people with a history of drug use check into the same hotel or when four people with a history of protest enter the same park.
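The fill-in-the-blank alarms described above amount to simple rule evaluation over streams of sighting events. A minimal illustrative sketch in Python of how such parameterized tripwires could work — every rule, label, threshold and field name here is invented for illustration and is not taken from any vendor’s actual software:

```python
from collections import defaultdict

# Hypothetical rules: fire an alert when at least `threshold` distinct
# people carrying a given blacklist label generate the same kind of
# event at one location (e.g., two people with a drug history checking
# into the same hotel).
RULES = [
    {"label": "drug_history", "event": "hotel_checkin", "threshold": 2},
    {"label": "protest_history", "event": "park_entry", "threshold": 4},
]

def check_events(events, rules=RULES):
    """events: iterable of (person_id, label, event_type, location).
    Returns (label, location) pairs for every rule whose threshold
    of distinct people was met at a single location."""
    counts = defaultdict(set)  # (label, event_type, location) -> person ids
    for person_id, label, event_type, location in events:
        counts[(label, event_type, location)].add(person_id)
    alerts = []
    for rule in rules:
        for (label, event_type, location), people in counts.items():
            if (label == rule["label"]
                    and event_type == rule["event"]
                    and len(people) >= rule["threshold"]):
                alerts.append((label, location))
    return alerts
```

The point of the sketch is how little machinery such a system needs: the police supply only the label, the event type and a count, and the software does the rest.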
Yitu did not respond to emailed requests for comment.
In 2020 in the city of Nanning, the police bought software that could look for “more than three key people checking into the same or nearby hotels” and “a drug user calling a new out-of-town number frequently,” according to a bidding document. In Yangshuo, a tourist town famous for its otherworldly karst mountains, the authorities bought a system to alert them if a foreigner without a work permit spent too much time hanging around foreign-language schools or bars, an apparent effort to catch people overstaying their visas or working illegally.
In Shanghai, one party-run publication described how the authorities used software to identify those who exceeded normal water and electricity use. The system would send a “digital whistle” to the police when it found suspicious consumption patterns.
The tactic was likely designed to detect migrant workers, who often live together in close quarters to save money. In some places, the police consider them an elusive, and often impoverished, group who can bring crime into communities.
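The “digital whistle” on utility consumption is, in effect, a simple outlier test: flag households whose usage sits far above the neighborhood norm. A hedged sketch of one way such a flag could be computed — the threshold and data shape are assumptions for illustration, not details from the Shanghai system:

```python
from statistics import mean, stdev

def flag_unusual_usage(readings, z_threshold=2.5):
    """readings: dict mapping household id -> monthly consumption.
    Flags households whose usage lies more than z_threshold sample
    standard deviations above the mean of all readings."""
    values = list(readings.values())
    if len(values) < 2:
        return []  # not enough data to estimate spread
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # identical usage everywhere; nothing stands out
    return [h for h, v in readings.items() if (v - mu) / sigma > z_threshold]
```

A crude z-score like this would flag any crowded household, which is exactly why the tactic sweeps up migrant workers sharing a flat rather than identifying wrongdoing.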
Not all automated alerts result in the same level of police response. Often, the police give priority to warnings that point to political problems, like protests or other threats to social stability, said Suzanne E. Scoggins, a professor at Clark University who studies China’s policing.
At times, the police have stated outright the need to profile people. “Through the application of big data, we paint a picture of people and give them labels with different attributes,” Li Wei, a researcher at China’s national police university, said in a 2016 speech. “For those who receive one or more types of labels, we infer their identities and behavior, and then carry out targeted pre-emptive security measures.”
Toward Techno Totalitarianism
Mr. Zhang first started petitioning the government for compensation over the torture of his family during the Cultural Revolution. He has since petitioned over what he says is police targeting of his family.
As China has built out its techno-authoritarian tools, he has had to use spy movie tactics to circumvent surveillance that, he said, has become “high tech and Nazified.”
When he traveled to Beijing in January from his village in Shandong Province, he turned off his phone and paid for transportation in cash to minimize his digital footprint. He bought train tickets to the wrong destination to foil police tracking. He hired private drivers to get around checkpoints where his identification card would set off an alarm.
The system in Tianjin has a special feature for people like him who have “a certain awareness of anti-reconnaissance” and regularly change vehicles to evade detection, according to the police procurement document.
Whether or not he triggered the system, Mr. Zhang has noticed a change. Whenever he turns off his phone, he said, officers show up at his house to check that he hasn’t left on a new trip to Beijing.
Even if police systems cannot accurately predict behavior, the authorities may consider them successful because of the threat, said Noam Yuchtman, an economics professor at the London School of Economics who has studied the impact of surveillance in China.
“In a context where there isn’t real political accountability,” having a surveillance system that frequently sends police officers “can work pretty well” at discouraging unrest, he said.
Once the metrics are set and the warnings are triggered, police officers have little flexibility; control is centralized. They are evaluated for their responsiveness to automated alarms and their effectiveness at preventing protests, according to experts and public police reports.
The technology has encoded power imbalances. Some bidding documents refer to a “red list” of people whom the surveillance system must ignore.
One national procurement document said the function was for “people who need privacy protection or V.I.P. protection.” Another, from Guangdong Province, got more specific, stipulating that the red list was for government officials.
Mr. Zhang expressed frustration at the ways technology had cut off those in political power from regular people.
“The authorities do not seriously solve problems but do whatever it takes to silence the people who raise the problems,” he said. “This is a big step backward for society.”
Mr. Zhang said that he still believed in the power of technology to do good, but that in the wrong hands it could be a “scourge and a shackle.”
“In the past if you left your home and took to the countryside, all roads led to Beijing,” he said. “Now, the entire country is a net.”
Isabelle Qian and Aaron Krolik contributed research and reporting. Production by Agnes Chang and Alexander Cardia.
The headquarters of Axon Enterprise Inc, formerly Taser International, is seen in Scottsdale, Arizona, U.S., May 17, 2017. REUTERS/Ricardo Arduengo
June 6 (Reuters) – Taser-maker Axon Enterprise Inc (AXON.O) said it was halting a project to equip drones with stun guns to combat mass shootings, a reversal that did not stop most of its ethics advisory board members from announcing their resignation on Monday in protest over the original plans.
The May 24 school shooting in Uvalde, Texas, which killed 19 children and two teachers, prompted Axon to announce last week it was working on a drone that first responders could operate remotely to fire a Taser at a target about 40 feet (12 m) away.
Nine of 12 members of the company’s AI Ethics Board quit over concerns the drones would harm over-policed communities and that Axon publicized its ambitions without consulting the group. The resignations and Axon’s scuttled plans were first reported by Reuters.
“In light of feedback, we are pausing work on this project and refocusing to further engage with key constituencies to fully explore the best path forward,” Chief Executive Rick Smith said in a statement.
The action by ethics board members marked a rare public rebuke for one of the watchdog groups some companies have set up to gather feedback on emerging technologies, such as drones and artificial intelligence (AI) software.
Smith said it was unfortunate that members withdrew before Axon could address their technical questions, but the company “will continue to seek diverse perspectives to challenge our thinking.”
Axon, which also sells body-worn cameras and policing software, has said its clients include about 17,000 out of the roughly 18,000 law enforcement agencies in the United States.
Axon had explored the idea of a Taser-equipped drone for police since at least 2016, and Smith depicted one stopping an active shooter in a graphic novel he wrote. The novel shows a daycare center with what looks like an enlarged smoke alarm, which recognizes the sound of gunfire, ejects a drone, and identifies and tases the shooter in two seconds.
Axon first approached its ethics board more than a year ago about Taser-equipped drones, and the panel last month voted eight to four against running a limited police pilot of the technology.
The company announced the drone idea anyway, as it said it wanted to get past “fruitless debates” on guns after the Uvalde shooting, sending shares up nearly 6%. They were down 0.5% on Monday.
Ethics board members worried the drones could exacerbate racial injustice, undermine privacy through surveillance and become more lethal if other weapons were added, member Wael Abd-Almageed said in an interview.
“What we have right now is just dangerous and irresponsible,” said Abd-Almageed, an engineering research associate professor at University of Southern California.
The board likewise had not evaluated use of the drones by first responders outside police, it said. And members questioned how a drone could navigate closed doors to stop a shooting.
The drone is “distracting society from real solutions to a tragic problem,” resigning board members said in a Monday statement.
CEO Smith has said drones could be stationed in hallways and move into rooms through special vents. A drone system would cost a school about $1,000 annually, he said.
Formed in 2018, the ethics panel has productively guided Axon on sensitive technologies such as facial recognition.
Giles Herdale, one of the remaining ethics board members, told Reuters he chose not to resign because he could have more influence “if I am in the tent than outside it.”
For others, the company’s drone announcement prior to a formal report by the board broke with practice, said member Ryan Calo, a University of Washington law professor.
“I’m not going to stay on an advisory board for a company that departs so far from expectation and protocol or, frankly, who believes ubiquitous surveillance coupled with remote non-lethal weapons is a viable response to school shootings,” he said.
Barry Friedman, the board chairman, resigned as well.
Reporting by Jeffrey Dastin in Palo Alto, Calif., and Paresh Dave in Oakland, Calif.; Editing by Clarence Fernandez, Robert Birsel and Tomasz Janowski
Ms. Sandberg flirted with leaving Facebook. In 2016, she told colleagues that if Hillary Clinton, the Democratic presidential nominee, won the White House, she would most likely take a job in Washington, three people who spoke to her about the move at the time said. In 2018, after revelations about Cambridge Analytica and Russia’s interference in the 2016 U.S. presidential election, she again told colleagues that she was considering leaving but did not want to do so while the company was in crisis.
Last year, Mr. Zuckerberg said his company was making a new bet and was going all in on the metaverse, which he called “the successor to the mobile internet.” In his announcement, Ms. Sandberg made only a cameo, while other executives were more prominently featured.
As Mr. Zuckerberg overhauled the company to focus on the metaverse, some of Ms. Sandberg’s responsibilities were spread among other executives. Nick Clegg, the president of global affairs and a former British deputy prime minister, became the company’s chief spokesman, a role that Ms. Sandberg had once taken. In February, Mr. Clegg was promoted to president of global affairs for Meta.
Ms. Sandberg’s profile dimmed. She concentrated on building the ads business and growing the number of small businesses on Facebook.
She was also focused on personal matters. Dave Goldberg, her husband, had died unexpectedly in 2015. (Ms. Sandberg’s second book, “Option B,” was about dealing with grief.) She later met Mr. Bernthal, and he and his three children moved to her Silicon Valley home from Southern California during the pandemic. Ms. Sandberg, who had two children with Mr. Goldberg, was focused on integrating the families and planning for her summer wedding, a person close to her said.
Meta’s transition to the metaverse has not been easy. The company has spent heavily on metaverse products while its advertising business has stumbled, partly because privacy changes made by Apple have hurt targeted advertising. In February, Meta’s market value plunged more than $230 billion, its biggest one-day wipeout, after it reported financial results that showed it was struggling to make the leap to the metaverse.
In the interview, Ms. Sandberg said Meta faced near-term challenges but would weather the storm, as it had during past challenges. “When we went public, we had no mobile ads,” Ms. Sandberg said, citing the company’s rapid transition from desktop computers to smartphones last decade. “We have done this before.”