The advent of computer modeling helped automate voter targeting, making it more efficient.
In the 1960s, a market researcher in Los Angeles, Vincent Barabba, developed a computer program to help political campaigns decide which neighborhoods to target. The system overlaid voting precinct maps with details on individuals’ voting histories along with U.S. census data on household economics, ethnic makeup and family composition.
In 1966, political consultants used the system to help Ronald Reagan’s campaign for governor of California identify neighborhoods with potential swing voters, like middle-aged, white, male union members, and target them with ads.
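The overlay logic can be sketched in modern terms. This is a hypothetical illustration of the idea, not a reconstruction of Barabba's actual program; every field name, weight and number below is invented.

```python
# Hypothetical sketch of precinct targeting. All field names, weights and
# numbers are invented for illustration; the 1960s system is not public.

def swing_score(precinct):
    """Rank a precinct by its concentration of persuadable voters."""
    # A precinct whose vote share shifts between elections suggests swing voters.
    volatility = abs(precinct["dem_share_prev"] - precinct["dem_share_latest"])
    # Census overlay: weight demographics the campaign believes are persuadable.
    persuadable = (0.5 * precinct["union_household_share"]
                   + 0.5 * precinct["middle_income_share"])
    return volatility * persuadable * precinct["registered_voters"]

precincts = [
    {"name": "A", "dem_share_prev": 0.58, "dem_share_latest": 0.47,
     "union_household_share": 0.40, "middle_income_share": 0.55,
     "registered_voters": 9000},
    {"name": "B", "dem_share_prev": 0.71, "dem_share_latest": 0.70,
     "union_household_share": 0.10, "middle_income_share": 0.30,
     "registered_voters": 12000},
]

targets = sorted(precincts, key=swing_score, reverse=True)
print([p["name"] for p in targets])  # → ['A', 'B']
```

Precinct A ranks first because it is both volatile between elections and heavy with the demographics the hypothetical campaign is weighting.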
Critics worried about the technology’s potential to influence voters, deriding it as a “sinister new development dreamt up by manipulative social scientists,” according to “Selling Ronald Reagan,” a book on the Hollywood actor’s political transformation.
By the early 2000s, campaigns had moved on to more advanced targeting methods.
For the re-election campaign of President George W. Bush in 2004, Republican consultants classified American voters into discrete buckets, like “Flag and Family Republicans” and “Religious Democrats.” Then they used the segmentation to target Republicans and swing voters living in towns that typically voted Democrat, said Michael Meyers, the president of TargetPoint Consulting, who worked on the Bush campaign.
In 2008, the Obama presidential campaign widely used individualized voter scores. Republicans soon beefed up their own voter-profiling and targeting operations.
A decade later, when Cambridge Analytica — a voter-profiling firm that covertly data-mined and scored millions of Facebook users — became front-page news, many national political campaigns were already using voter scores. Now, even local candidates use them.
This spring, the Government Accountability Office issued a report warning that the practice of consumer scoring lacked transparency and could cause harm. Although the report did not specifically examine voter scores, it urged Congress to consider enacting consumer protections around scoring.
SAN FRANCISCO — Mark Zuckerberg, the founder and chief executive of the company formerly known as Facebook, called his top lieutenants for the social network to a last-minute meeting in the San Francisco Bay Area this month. On the agenda: a “work-athon” to discuss the road map for improving the main Facebook app, including a revamp that would change how users browse the service.
For weeks beforehand, Mr. Zuckerberg had sent his executives messages about the overhaul, pressing them to increase the velocity and execution of their work, people with knowledge of the matter said. Some executives — who had to read a 122-page slide deck about the changes — were beginning to sweat at the unusual level of intensity, they said.
Facebook’s leaders flew in from around the world for the summit, the people said, and Mr. Zuckerberg and the group pored over each slide. Within days, the team unveiled an update to the Facebook app to better compete with a top rival, TikTok.
In recent months, Mr. Zuckerberg has trimmed perks, reshuffled his leadership team and made it clear he would cut low-performing employees. Those who are not on board are welcome to leave, he has said. Managers have sent out memos to convey the seriousness of the approach — one, which was shared with The New York Times, had the title “Operating With Increased Intensity.”
Driving the urgency is Mr. Zuckerberg’s bet on the so-called metaverse. Across Silicon Valley, he and other executives who built what many refer to as Web 2.0 — a more social, app-focused version of the internet — are rethinking and upending their original vision after their platforms were plagued by privacy stumbles, toxic content and misinformation.
The moment is reminiscent of other bet-the-company gambles, such as when Netflix killed off its DVD-mailing business last decade to focus on streaming. But Mr. Zuckerberg is making these moves as Meta’s back is against the wall. The company is staring into the barrel of a global recession. Competitors like TikTok, YouTube and Apple are bearing down.
And success is far from guaranteed. In recent months, Meta’s profits have fallen and revenue has slowed as the company has spent lavishly on the metaverse and as the economic slowdown has hurt its advertising business. Its stock has plunged.
“When Mark gets super focused on something, it becomes all hands on deck within the company,” said Katie Harbath, a former Facebook policy director and the founder of Anchor Change, a consulting firm that works on tech and democracy issues. “Teams will quickly drop other work to pivot to the issue at hand, and the pressure is intense to move fast to show progress.”
Mr. Zuckerberg elevated Andrew Bosworth, who is known as Boz, to chief technology officer, putting him in charge of hardware efforts for the metaverse. He promoted other loyalists, too, including Javier Olivan, the new chief operating officer; Nick Clegg, who became president of global affairs; and Guy Rosen, who took on a new role of chief information security officer.
In June, Sheryl Sandberg, who was Mr. Zuckerberg’s No. 2 for 14 years, said she would step down this fall. While she spent more than a decade building Facebook’s advertising systems, she was less interested in doing the same for the metaverse, people familiar with her plans have said.
Mr. Zuckerberg has moved thousands of workers into different teams for the metaverse, training their focus on aspirational projects like augmented reality glasses, wearables and a new operating system for those devices.
“It’s an existential bet on where people over the next decade will connect, express and identify with one another,” said Matthew Ball, a longtime tech executive and the author of a book on the metaverse. “If you have the cash, the engineers, the users and the conviction to take a swing at that, then you should.”
But the efforts are far from cheap. Facebook’s Reality Labs division, which is building augmented and virtual reality products, has dragged down the company’s balance sheet; the hardware unit lost nearly $3 billion in the first quarter alone.
Meta is also contending with privacy changes from Apple that have hampered its ability to measure the effectiveness of ads on iPhones. TikTok, the Chinese-owned video app, has stolen young audiences from Meta’s core apps like Instagram and Facebook. These challenges are coinciding with a brutal macroeconomic environment, which has pushed Apple, Google, Microsoft and Twitter to freeze or slow hiring.
In a memo last month, Chris Cox, Meta’s chief product officer, said the economic environment called for “leaner, meaner, better executing teams.”
In an employee meeting around the same time, Mr. Zuckerberg said he knew that not everyone would be on board for the changes. That was fine, he told employees.
“I think some of you might decide that this place isn’t for you, and that self-selection is OK with me,” Mr. Zuckerberg said. “Realistically, there are probably a bunch of people at the company who shouldn’t be here.”
Another memo circulated internally among workers this month was titled “Operating With Increased Intensity.” In the memo, a Meta vice president said managers should begin to “think about every person on their team and the value they are adding.”
“If a direct report is coasting or a low performer, they are not who we need; they are failing this company,” the memo said. “As a manager, you cannot allow someone to be net neutral or negative for Meta.”
Mr. Zuckerberg has also laid out “investment priorities” for the company in the second half of this year.
Meta has also pulled back on some hardware projects, including a smart watch and other prototypes. Bloomberg reported earlier on the smart watch.
Mr. Zuckerberg posted an update to his Facebook profile, noting some coming changes in the app. Facebook would start pushing people into a more video-heavy feed with more suggested content, emulating how TikTok operates.
Meta has been investing heavily in video and discovery, aiming to beef up its artificial intelligence and to improve “discovery algorithms” that suggest engaging content to users without them having to work to find it.
In the past, Facebook has tested major product updates with a few English-speaking audiences to see how they perform before rolling them out more widely. But, this time, the 2.93 billion people around the world who use the social networking app will receive the update simultaneously.
It is a sign, some Meta employees said, of just how much Mr. Zuckerberg means business.
The more than 1.4 billion people living in China are constantly watched. They are recorded by police cameras that are everywhere, on street corners and subway ceilings, in hotel lobbies and apartment buildings. Their phones are tracked, their purchases are monitored, and their online chats are censored.
Now, even their future is under surveillance.
The latest generation of technology digs through the vast amounts of data collected on their daily activities to find patterns and aberrations, promising to predict crimes or protests before they happen. They target potential troublemakers in the eyes of the Chinese government — not only those with a criminal past but also vulnerable groups, including ethnic minorities, migrant workers and those with a history of mental illness.
They can warn the police if a fraud victim tries to travel to Beijing to petition the government for payment, or if a drug user makes too many calls to the same number. They can signal officers each time a person with a history of mental illness gets near a school.
Critics say the systems risk automating systemic discrimination and political repression.
The authorities have already used surveillance technology to quell ethnic unrest in the western region of Xinjiang and to enforce some of the world’s most severe coronavirus lockdowns. The space for dissent, always limited, is rapidly disappearing.
“Big data should be used as an engine to power the innovative development of public security work and a new growth point for nurturing combat capabilities,” Mr. Xi said in 2019 at a national public security work meeting.
Many of the procurement documents were provided by ChinaFile, an online magazine published by the Asia Society, which has systematically gathered years of records on government websites. Another set, describing software bought by the authorities in the port city of Tianjin to stop petitioners from going to neighboring Beijing, was provided by IPVM, a surveillance industry publication.
China’s Ministry of Public Security did not respond to requests for comment faxed to its headquarters in Beijing and six local departments across the country.
The new approach to surveillance is partly based on data-driven policing software from the United States and Europe, technology that rights groups say has encoded racism into decisions like which neighborhoods are most heavily policed and which prisoners get parole. China takes it to the extreme, tapping nationwide reservoirs of data that allow the police to operate with opacity and impunity.
The founder of Megvii, an artificial intelligence start-up, told Chinese state media that the surveillance system could give the police a search engine for crime, analyzing huge amounts of video footage to intuit patterns and warn the authorities about suspicious behavior. He explained that if cameras detected a person spending too much time at a train station, the system could flag a possible pickpocket.
In Tianjin, the police have bought a system, made by the surveillance-camera maker Hikvision, that aims to predict protests. The system collects data on legions of Chinese petitioners, a general term in China that describes people who try to file complaints about local officials with higher authorities.
It then scores petitioners on the likelihood that they will travel to Beijing. In the future, the data will be used to train machine-learning models, according to a procurement document.
Local officials want to prevent such trips to avoid political embarrassment or exposure of wrongdoing. And the central government doesn’t want groups of disgruntled citizens gathering in the capital.
A Hikvision representative declined to comment on the system.
Under Mr. Xi, official efforts to control petitioners have grown increasingly invasive. Zekun Wang, a 32-year-old member of a group that for years sought redress over a real estate fraud, said the authorities in 2017 had intercepted fellow petitioners in Shanghai before they could even buy tickets to Beijing. He suspected that the authorities were watching their communications on the social media app WeChat.
The Hikvision system in Tianjin, which is run in cooperation with the police in nearby Beijing and Hebei Province, is more sophisticated.
The platform analyzes individuals’ likelihood to petition based on their social and family relationships, past trips and personal situations, according to the procurement document. It helps the police create a profile of each petitioner, with fields for officers to describe the temperament of the protester, including “paranoid,” “meticulous” and “short tempered.”
Many people who petition do so over government mishandling of a tragic accident or neglect in their case — all of which goes into the algorithm. “Increase a person’s early-warning risk level if they have low social status or went through a major tragedy,” reads the procurement document.
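A minimal sketch of how rule-based scoring like the instruction quoted above could work. All field names, weights and inputs here are invented for illustration; the actual software is not public.

```python
# Illustrative sketch only: field names and weights are invented to show how
# a rule-based "early warning" score like the one described in the
# procurement document could work; the real system's internals are not public.

def early_warning_level(profile):
    """Accumulate a risk level from hand-written rules over a profile."""
    level = 0
    # More past petitions raise the level.
    level += profile.get("past_petitions", 0)
    # The document instructs: raise the risk level for low social status
    # or a major personal tragedy.
    if profile.get("low_social_status"):
        level += 1
    if profile.get("major_tragedy"):
        level += 1
    # Officer-entered temperament labels feed the score, too.
    if profile.get("temperament") in {"paranoid", "short tempered"}:
        level += 1
    return level

print(early_warning_level(
    {"past_petitions": 2, "low_social_status": True,
     "major_tragedy": True, "temperament": "meticulous"}))  # → 4
```

The point of the sketch is that no statistics are required: hand-written rules over a profile are enough to produce the kind of score the document describes.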
When the police in Zhouning, a rural county in Fujian Province, bought a new set of 439 cameras in 2018, they listed coordinates where each would go. Some hung above intersections and others near schools, according to a procurement document.
Nine were installed outside the homes of people with something in common: mental illness.
While some software tries to use data to uncover new threats, a more common type is based on the preconceived notions of the police. In over a hundred procurement documents reviewed by The Times, the surveillance targeted blacklists of “key persons.”
These people, according to some of the procurement documents, included those with mental illness, convicted criminals, fugitives, drug users, petitioners, suspected terrorists, political agitators and threats to social stability. Other systems targeted migrant workers, idle youths (teenagers without school or a job), ethnic minorities, foreigners and those infected with H.I.V.
The authorities decide who goes on the lists, and there is often no process to notify people when they do. Once individuals are in a database, they are rarely removed, said experts, who worried that the new technologies reinforce disparities within China, imposing surveillance on the least fortunate parts of its population.
In many cases the software goes further than simply targeting a population, allowing the authorities to set up digital tripwires that indicate a possible threat. In one Megvii presentation detailing a rival product by Yitu, the system’s interface allowed the police to devise their own early warnings.
With a simple fill-in-the-blank menu, the police can base alarms on specific parameters, including where a blacklisted person appears, when the person moves around, whether he or she meets with other blacklisted people and the frequency of certain activities. The police could set the system to send a warning each time two people with a history of drug use check into the same hotel or when four people with a history of protest enter the same park.
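The tripwire described above amounts to a counting rule over a blacklist. A minimal sketch, with invented identifiers, events and thresholds:

```python
# Minimal sketch of the fill-in-the-blank "tripwire" idea described above.
# All people, places and thresholds are invented for illustration.
from collections import defaultdict

# A blacklist mapping person IDs to the label the police assigned them.
blacklist = {"p1": "drug_history", "p2": "drug_history", "p3": "protest_history"}

# (person_id, place) check-in events observed in one day.
checkins = [("p1", "hotel_A"), ("p3", "hotel_A"), ("p2", "hotel_A"), ("p4", "hotel_B")]

def alarms(events, tag, threshold):
    """Warn when `threshold` blacklisted people with `tag` enter the same place."""
    by_place = defaultdict(set)
    warnings = []
    for person, place in events:
        if blacklist.get(person) == tag:
            by_place[place].add(person)
            if len(by_place[place]) >= threshold:
                warnings.append((place, frozenset(by_place[place])))
    return warnings

# Fires once both p1 and p2 (drug history) have checked into hotel_A.
print(alarms(checkins, "drug_history", 2))
```

Swapping the tag and threshold reproduces the other examples in the documents, such as four people with a history of protest entering the same park.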
Yitu did not respond to emailed requests for comment.
In 2020 in the city of Nanning, the police bought software that could look for “more than three key people checking into the same or nearby hotels” and “a drug user calling a new out-of-town number frequently,” according to a bidding document. In Yangshuo, a tourist town famous for its otherworldly karst mountains, the authorities bought a system to alert them if a foreigner without a work permit spent too much time hanging around foreign-language schools or bars, an apparent effort to catch people overstaying their visas or working illegally.
In Shanghai, one party-run publication described how the authorities used software to identify those who exceeded normal water and electricity use. The system would send a “digital whistle” to the police when it found suspicious consumption patterns.
The tactic was likely designed to detect migrant workers, who often live together in close quarters to save money. In some places, the police consider them an elusive, and often impoverished, group who can bring crime into communities.
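The underlying check can be as simple as flagging households whose usage deviates sharply from the local norm. This is a hedged sketch with invented readings and an invented cutoff; the Shanghai system's actual method is not public.

```python
# Hedged sketch: the Shanghai system's internals are not public, but a
# "digital whistle" can be as simple as a deviation threshold against the
# neighborhood norm. All readings and the cutoff below are invented.
import statistics

monthly_kwh = {"flat_1": 180, "flat_2": 200, "flat_3": 190, "flat_4": 620}

def digital_whistle(readings, z_cutoff=1.5):
    """Flag units whose usage sits far above the local average."""
    mean = statistics.mean(readings.values())
    stdev = statistics.pstdev(readings.values())
    return [flat for flat, kwh in readings.items()
            if stdev and (kwh - mean) / stdev > z_cutoff]

print(digital_whistle(monthly_kwh))  # → ['flat_4']
```

Flat 4's usage is several times its neighbors', the pattern the authorities associated with many people sharing one unit.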
Not all automated alerts result in the same level of police response. Often, the police give priority to warnings that point to political problems, like protests or other threats to social stability, said Suzanne E. Scoggins, a professor at Clark University who studies China’s policing.
At times, the police have stated outright the need to profile people. “Through the application of big data, we paint a picture of people and give them labels with different attributes,” Li Wei, a researcher at China’s national police university, said in a 2016 speech. “For those who receive one or more types of labels, we infer their identities and behavior, and then carry out targeted pre-emptive security measures.”
Toward Techno Totalitarianism
Mr. Zhang first started petitioning the government for compensation over the torture of his family during the Cultural Revolution. He has since petitioned over what he says is police targeting of his family.
As China has built out its techno-authoritarian tools, he has had to use spy movie tactics to circumvent surveillance that, he said, has become “high tech and Nazified.”
When he traveled to Beijing in January from his village in Shandong Province, he turned off his phone and paid for transportation in cash to minimize his digital footprint. He bought train tickets to the wrong destination to foil police tracking. He hired private drivers to get around checkpoints where his identification card would set off an alarm.
The system in Tianjin has a special feature for people like him who have “a certain awareness of anti-reconnaissance” and regularly change vehicles to evade detection, according to the police procurement document.
Whether or not he triggered the system, Mr. Zhang has noticed a change. Whenever he turns off his phone, he said, officers show up at his house to check that he hasn’t left on a new trip to Beijing.
Even if police systems cannot accurately predict behavior, the authorities may consider them successful because of the threat, said Noam Yuchtman, an economics professor at the London School of Economics who has studied the impact of surveillance in China.
“In a context where there isn’t real political accountability,” having a surveillance system that frequently sends police officers “can work pretty well” at discouraging unrest, he said.
Once the metrics are set and the warnings are triggered, police officers have little flexibility; control is centralized. They are evaluated on their responsiveness to automated alarms and their effectiveness at preventing protests, according to experts and public police reports.
The technology has encoded power imbalances. Some bidding documents refer to a “red list” of people whom the surveillance system must ignore.
One national procurement document said the function was for “people who need privacy protection or V.I.P. protection.” Another, from Guangdong Province, got more specific, stipulating that the red list was for government officials.
Mr. Zhang expressed frustration at the ways technology had cut off those in political power from regular people.
“The authorities do not seriously solve problems but do whatever it takes to silence the people who raise the problems,” he said. “This is a big step backward for society.”
Mr. Zhang said that he still believed in the power of technology to do good, but that in the wrong hands it could be a “scourge and a shackle.”
“In the past if you left your home and took to the countryside, all roads led to Beijing,” he said. “Now, the entire country is a net.”
Isabelle Qian and Aaron Krolik contributed research and reporting. Production by Agnes Chang and Alexander Cardia.
While Meta adjusts, some small businesses have begun seeking other avenues for ads. Shawn Baker, the owner of Baker SoftWash, an exterior cleaning company in Mooresville, N.C., said it previously took about $6 of Facebook ads to identify a new customer. Now it costs $27 because the ads do not find the right people, he said.
Mr. Baker has started spending $200 a month to advertise through Google’s marketing program for local businesses, which surfaces his website when people who live in the area search for cleaners. To compensate for those higher marketing costs, he has raised his prices 7 percent.
“You’re spending more money now than what you had to spend before to do the same things,” he said.
Other tech giants with first-party information are capitalizing on the change. Amazon, for example, has reams of data on its customers, including what they buy, where they reside, and what movies or TV shows they stream.
In February, Amazon disclosed the size of its advertising business — $31.2 billion in revenue in 2021 — for the first time. That makes advertising its third-largest source of sales after e-commerce and cloud computing. Amazon declined to comment.
Amber Murray, the owner of See Your Strength in St. George, Utah, which sells stickers online for people with anxiety, started experimenting with ads on Amazon after the performance of Facebook ads deteriorated. The results were remarkable, she said.
In February, she paid about $200 for Amazon to feature her products near the top of search results when customers looked up textured stickers. Sales totaled $250 a day and continued to grow, she said. When she spent $85 on a Facebook ad campaign in January, it yielded just $37.50 in sales, she said.
“I think the golden days of Facebook advertising are over,” Ms. Murray said. “On Amazon, people are looking for you, instead of you telling people what they should want.”
The change affects more than a third of Facebook’s daily users who had facial recognition turned on for their accounts, according to the company. That meant they received alerts when new photos or videos of them were uploaded to the social network. The feature had also been used to flag accounts that might be impersonating someone else and was incorporated into software that described photos to blind users.
“Making this change required us to weigh the instances where facial recognition can be helpful against the growing concerns about the use of this technology as a whole,” said Jason Grosse, a Meta spokesman.
Although Facebook plans to delete more than one billion facial recognition templates, which are digital scans of facial features, by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace. The company has also not ruled out incorporating facial recognition technology into future products, Mr. Grosse said.
Privacy advocates nonetheless applauded the decision.
“Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology,” said Adam Schwartz, a senior lawyer with the Electronic Frontier Foundation, a civil liberties organization. “Corporate use of face surveillance is very dangerous to people’s privacy.”
Facebook is not the first large technology company to pull back on facial recognition software. Amazon, Microsoft and IBM have paused or ceased selling their facial recognition products to law enforcement in recent years, while expressing concerns about privacy and algorithmic bias and calling for clearer regulation.
Facebook’s facial recognition software has a long and expensive history. When the software was rolled out in Europe in 2011, data protection authorities there said the move was illegal and that the company needed consent to analyze photos of a person and extract the unique pattern of an individual face. In 2015, the technology also led to a class-action lawsuit in Illinois.
Over the last decade, the Electronic Privacy Information Center, a Washington-based privacy advocacy group, filed two complaints about Facebook’s use of facial recognition with the F.T.C. When the F.T.C. fined Facebook in 2019, it named the site’s confusing privacy settings around facial recognition as one of the reasons for the penalty.
John Tye, the founder of Whistleblower Aid, a legal nonprofit that represents people seeking to expose potential lawbreaking, was contacted this spring through a mutual connection by a woman who claimed to have worked at Facebook.
The woman told Mr. Tye and his team something intriguing: She had access to tens of thousands of pages of internal documents from the world’s largest social network. In a series of calls, she asked for legal protection and a path to releasing the confidential information. Mr. Tye, who said he understood the gravity of what the woman brought “within a few minutes,” agreed to represent her and call her by the alias “Sean.”
She “is a very courageous person and is taking a personal risk to hold a trillion-dollar company accountable,” he said.
On Sunday, Frances Haugen revealed herself to be “Sean,” the whistle-blower against Facebook. A product manager who worked for nearly two years on the civic misinformation team at the social network before leaving in May, Ms. Haugen has used the documents she amassed to expose how much Facebook knew about the harms that it was causing and provided the evidence to lawmakers, regulators and the news media.
The documents — which showed, among other things, that Facebook knew Instagram was worsening body image issues among teenagers and that it had a two-tier justice system — have spurred criticism from lawmakers, regulators and the public.
Ms. Haugen has also filed a whistle-blower complaint with the Securities and Exchange Commission, accusing Facebook of misleading investors with public statements that did not match its internal actions. And she has talked with lawmakers such as Senator Richard Blumenthal, a Democrat of Connecticut, and Senator Marsha Blackburn, a Republican of Tennessee, and shared subsets of the documents with them.
The spotlight on Ms. Haugen is set to grow brighter. On Tuesday, she is scheduled to testify in Congress about Facebook’s impact on young users, as well as its handling of misinformation and hate speech.
In 2018, Christopher Wylie, a disgruntled former employee of the consulting firm Cambridge Analytica, set the stage for those leaks. Mr. Wylie spoke with The New York Times, The Observer of London and The Guardian to reveal that Cambridge Analytica had improperly harvested Facebook data to build voter profiles without users’ consent.
In the aftermath, more of Facebook’s own employees started speaking up. Later that same year, Facebook workers provided executive memos and planning documents to news outlets including The Times and BuzzFeed News. In mid-2020, employees who disagreed with Facebook’s decision to leave up a controversial post from President Donald J. Trump staged a virtual walkout and sent more internal information to news outlets.
“I think over the last year, there’ve been more leaks than I think all of us would have wanted,” Mark Zuckerberg, Facebook’s chief executive, said in a meeting with employees in June 2020.
Facebook tried to preemptively push back against Ms. Haugen. On Friday, Nick Clegg, Facebook’s vice president for policy and global affairs, sent employees a 1,500-word memo laying out what the whistle-blower was likely to say on “60 Minutes” and calling the accusations “misleading.” On Sunday, Mr. Clegg appeared on CNN to defend the company, saying the platform reflected “the good, the bad and ugly of humanity” and that it was trying to “mitigate the bad, reduce it and amplify the good.”
Ms. Haugen also introduced a personal website. On the website, she was described as “an advocate for public oversight of social media.”
A native of Iowa City, Iowa, Ms. Haugen studied electrical and computer engineering at Olin College and got an M.B.A. from Harvard, the website said. She then worked on algorithms at Google, Pinterest and Yelp. In June 2019, she joined Facebook. There, she handled democracy and misinformation issues and worked on counterespionage, according to the website.
The Federal Trade Commission has filed an antitrust suit against Facebook. In a video posted by Whistleblower Aid on Sunday, however, Ms. Haugen said she did not believe breaking up Facebook would solve the problems inherent at the company.
“The path forward is about transparency and governance,” she said in the video. “It’s not about breaking up Facebook.”
Ms. Haugen has also spoken to lawmakers in France and Britain, as well as a member of European Parliament. This month, she is scheduled to appear before a British parliamentary committee. That will be followed by stops at Web Summit, a technology conference in Lisbon, and in Brussels to meet with European policymakers in November, Mr. Tye said.
On Sunday, a GoFundMe page that Whistleblower Aid created for Ms. Haugen also went live. Noting that Facebook had “limitless resources and an army of lawyers,” the group set a goal of raising $10,000. Within 30 minutes, 18 donors had given $1,195. Shortly afterward, the fund-raising goal was increased to $50,000.
Fighting stalkerware is tough. You may not suspect it’s there. Even if you did, it can be difficult to detect since antivirus software only recently began flagging these apps as malicious.
Here’s a guide to how stalkerware works, what to look out for and what to do about it.
The Different Types of Stalkerware
Surveillance software has proliferated on computers for decades, but more recently spyware makers have shifted their focus to mobile devices. Because mobile devices have access to more intimate data, including photos, real-time location, phone conversations and messages, the apps became known as stalkerware.
Various stalkerware apps collect different types of information. Some record phone calls, some log keystrokes, and others track location or upload a person’s photos to a remote server. But they all generally work the same way: An abuser with access to a victim’s device installs the app on the phone and disguises it as an ordinary piece of software, like a calendar app.
From there, the app lurks in the background, and later, the abuser retrieves the data. Sometimes, the information gets sent to the abuser’s email address or it can be downloaded from a website. In other scenarios, abusers who know their partner’s passcode can simply unlock the device to open the stalkerware and review the recorded data.
So what to do? The Coalition Against Stalkerware, which was founded by Eva Galperin, a security researcher at the Electronic Frontier Foundation, along with other groups, and many security firms offered these tips:
Look for unusual behavior on your device, like a rapidly draining battery. That could be a giveaway that a stalker app has been constantly running in the background.
Scan your device. Some apps, like Malwarebytes, Certo, NortonLifeLock and Lookout, can detect stalkerware. But to be thorough, take a close look at your apps to see if anything is unfamiliar or suspicious. If you find a piece of stalkerware, pause before you delete it: It may be useful evidence if you decide to report the abuse to law enforcement.
Seek help. In addition to reporting stalking behavior to law enforcement, you can seek advice from resources like the National Domestic Violence Hotline or the Safety Net Project hosted by the National Network to End Domestic Violence.
Audit your online accounts to see which apps and devices are hooked into them. On Twitter, for example, you can click on the “security and account access” button inside the settings menu to see which devices and apps have access to your account. Log out of anything that looks shady.
Change your passwords and passcode. It’s always safer to change passwords for important online accounts and avoid reusing passwords across sites. Try creating long, complex passwords for each account. Similarly, make sure your passcode is difficult for someone to guess.
Enable two-factor authentication. For any online account that offers it, use two-factor authentication, which basically requires two forms of verification of your identity before letting you log into an account. Say you enter your user name and password for your Facebook account. That’s Step 1. Facebook then asks you to punch in a temporary code generated by an authentication app. That’s Step 2. With this protection, even if an abuser figures out your password using a piece of stalkerware, he or she still can’t log in without that code.
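The temporary code in Step 2 is typically generated with the standard time-based one-time password scheme, TOTP (RFC 6238). Below is a minimal sketch using only the Python standard library; the shared secret is a made-up example.

```python
# Minimal TOTP (RFC 6238) sketch using only the standard library.
# The secret below is a made-up example, not a real credential.
import base64, hashlib, hmac, struct, time

def totp(secret_b32, at=None, digits=6, period=30):
    """Derive the current one-time code from a base32 shared secret."""
    key = base64.b32decode(secret_b32)
    # The moving factor: how many 30-second windows have elapsed.
    counter = int((at if at is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: pick 4 bytes based on the last nibble.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

# Server and authenticator app share the secret, so both derive the same
# 6-digit code for the current 30-second window.
secret = "JBSWY3DPEHPK3PXP"
print(totp(secret))
```

Because the code depends on a secret the stalkerware never sees and changes every 30 seconds, a stolen password alone is not enough to log in.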
On iPhones, check your settings. A new stalker app, WebWatcher, uses a computer to wirelessly download a backup copy of a victim’s iPhone data, according to Certo, a mobile security firm. To defend yourself, open the Settings app and look at the General menu to see if “iTunes Wi-Fi Sync” is turned on. Disabling this will prevent WebWatcher from copying your data.
Apple said this was not considered an iPhone vulnerability because it required an attacker to be on the same Wi-Fi network and have physical access to a victim’s unlocked iPhone.
Start fresh. Buying a new phone or erasing all the data from your phone to begin anew is the most effective way to rid a device of stalkerware.
Update your software. Apple and Google regularly issue software updates that include security fixes, which can remove stalkerware. Make sure you’re running the latest software.
In the end, there’s no foolproof way to defeat stalkerware. Kevin Roundy, NortonLifeLock’s lead researcher, said he had reported more than 800 pieces of stalkerware inside the Android app store. Google removed the apps and updated its policy in October to forbid developers from offering stalkerware.
But more have emerged to take their place.
“There are definitely a lot of very dangerous, alarming possibilities,” Mr. Roundy said. “It’s going to continue to be a concern.”
The fallout may hurt brands that relied on targeted ads to get people to buy their goods. It may also initially hurt tech giants like Facebook — but not for long. Instead, businesses that can no longer track people but still need to advertise are likely to spend more with the largest tech platforms, which still have the most data on consumers.
David Cohen, chief executive of the Interactive Advertising Bureau, a trade group, said the changes would continue to “drive money and attention to Google, Facebook, Twitter.”
The shifts are complicated by Google’s and Apple’s opposing views on how much ad tracking should be dialed back. Apple wants its customers, who pay a premium for its iPhones, to have the right to block tracking entirely. But Google executives have suggested that Apple has turned privacy into a privilege for those who can afford its products.
For many people, that means the internet may start looking different depending on the products they use. On Apple gadgets, ads may be only somewhat relevant to a person’s interests, compared with highly targeted promotions inside Google’s web. Website creators may eventually choose sides, so some sites that work well in Google’s browser might not even load in Apple’s browser, said Brendan Eich, a founder of Brave, the private web browser.
“It will be a tale of two internets,” he said.
Businesses that do not keep up with the changes risk getting run over. Increasingly, media publishers and even apps that show the weather are charging subscription fees, in the same way that Netflix levies a monthly fee for video streaming. Some e-commerce sites are considering raising product prices to keep their revenues up.
Consider Seven Sisters Scones, a mail-order pastry shop in Johns Creek, Ga., which relies on Facebook ads to promote its items. Nate Martin, who leads the bakery’s digital marketing, said that after Apple blocked some ad tracking, its digital marketing campaigns on Facebook became less effective. Because Facebook could no longer get as much data on which customers like baked goods, it was harder for the store to find interested buyers online.
When New York City announced on Tuesday that it would soon require people to show proof of at least one coronavirus vaccine shot to enter businesses, Mayor Bill de Blasio said the system was “simple — just show it and you’re in.”
Less simple was the privacy debate that the city reignited.
Vaccine passports, which show proof of vaccination, often in electronic form such as an app, are the bedrock of Mr. de Blasio’s plan. For months, these records — also known as health passes or digital health certificates — have been under discussion around the world as a tool to allow vaccinated people, who are less at risk from the virus, to gather safely. New York will be the first U.S. city to include these passes in a vaccine mandate, potentially setting off similar actions elsewhere.
But the mainstreaming of these credentials could also usher in an era of increased digital surveillance, privacy researchers said. That’s because vaccine passes may enable location tracking, even as there are few rules about how people’s digital vaccine data should be stored and how it can be shared. While existing privacy laws limit the sharing of information among medical providers, there is no such rule for when people upload their own data onto an app.
sends a person’s location, city name and an identifying code number to a server as soon as the user grants the software access to personal data.
passed a law limiting such use only to “serious” criminal investigations.
“One of the things that we don’t want is that we normalize surveillance in an emergency and we can’t get rid of it,” said Jon Callas, the director of technology projects at the Electronic Frontier Foundation, a digital rights group.
While such incidents are not occurring in the United States, researchers said, they already see potential for overreach. Several pointed to New York City, where proof of vaccination requirements will start on Aug. 16 and be enforced starting on Sept. 13.
For proof, people can use their paper vaccination cards, the NYC Covid Safe app or another app, the Excelsior Pass. The Excelsior Pass was developed by IBM under an estimated $17 million contract with New York State.
To obtain the pass, people upload their personal information. Under the standard version of the pass, businesses and third parties see only whether the pass is valid, along with the person’s name and date of birth.
On Wednesday, the state announced the “Excelsior Pass Plus,” which displays not only whether an individual is vaccinated but also when and where they got their shot. Businesses scanning the Pass Plus “may be able to save or store the information contained,” according to New York State.
Phase 2,” which could involve expanding the app’s use and adding more information like personal details and other health records that could be checked by businesses upon entry.
IBM has said that it uses blockchain technology and encryption to protect user data, but did not say how. The company and New York State did not respond to requests for comment.
Mr. de Blasio told WNYC in April that he understands the privacy concerns around the Excelsior Pass, but thinks it will still “play an important role.”
For now, some states and cities are proceeding cautiously. More than a dozen states, including Arizona, Florida and Texas, have in recent months announced some type of ban on vaccine passports. The mayors of San Francisco, Los Angeles and Seattle have also said they were holding off on passport programs.
Some business groups and companies that have adopted vaccine passes said the privacy concerns were valid but addressable.
Airlines for America, an industry trade group, said it supported vaccine passes and was pushing the federal government to establish privacy standards. The San Francisco Chamber of Commerce, which is helping its members work with Clear, said using the tools to ensure only vaccinated people entered stores was preferable to having businesses shut down again as virus cases climb.
“People’s privacy is valuable,” said Rodney Fong, the chamber’s president, but “when we’re talking about saving lives, the privacy piece becomes a little less important.”
Critics of the state regulations warned that tech companies weren’t the only ones that would have to maneuver through the patchwork of rules. “For consumers, this means confusion,” said Daniel Castro, a vice president of the Information Technology & Innovation Foundation, a think tank sponsored by tech companies.
Apple and Google declined to comment. Jodi Seth, a spokeswoman for Amazon, pointed to an April blog post from the company’s policy executive Brian Huseman, who said the state laws risked creating a hodgepodge of regulations that wouldn’t serve users well.
Will Castleberry, Facebook’s vice president of state and local public policy, said that instead, the social network largely backed more federal legislation. “While we support state efforts to address specific challenges,” he said in a statement, “there are some issues, like privacy, where it’s time for updated federal rules for the internet — and those need to come from Congress.”
To fight against the splintering rules, the tech companies have gone on the offensive. While data on state lobbying is inconsistent and often underreported, Google, Amazon and Facebook funneled a combined $5 million into those efforts in 2019, according to the National Institute on Money in Politics, a nonprofit. The companies also increased their lobbying ranks to dozens in state legislatures compared with skeletal forces five years ago.
Some of the companies have also recently sent top engineers to kill state proposals. In February, Apple’s chief privacy engineer, Erik Neuenschwander, testified in a North Dakota Senate hearing to oppose a bill that would let app developers use their own payment systems and bypass Apple’s App Store rules. The bill died a week later in a 36-to-11 vote.
Even so, states have barreled forward.
Maryland lawmakers in February overrode their governor’s veto of a new tax on sites like Facebook and Google. The tax, the first aimed at the business of behavioral advertising, takes a cut of the money that the companies make from the sale of ads shown in Maryland. One analysis projected that it would raise up to $250 million in its first year, a fraction of Facebook and Google’s combined $267 billion in annual revenue, but a real threat if replicated across states.
Trade groups for Google, Amazon and Facebook tried to stop the tax. They hired a well-connected political consultant to argue that it would hurt small businesses. When that failed, the trade groups sued to block it. The litigation is pending.