said in April after sealing the deal. “I don’t care about the economics at all.”

He cared a little more when the subsequent plunge in the stock market meant that he was overpaying by a significant amount. Analysts estimated that Twitter was worth not $44 billion but $30 billion, or maybe even less. For a few months, Mr. Musk tried to get out of the deal.

This had the paradoxical effect of bringing the transaction down to earth for spectators. Who among us has not failed to do due diligence on a new venture — a job, a house, even a relationship — and then realized that it was going to cost so much more than we had thought? Mr. Musk’s buying Twitter, and then his refusal to buy Twitter, and then his being forced to buy Twitter after all — and everything playing out on Twitter — was weirdly relatable.

Inescapable, too. The apex, or perhaps the nadir, came this month when Mr. Musk introduced a perfume called Burnt Hair, described on its website as “the Essence of Repugnant Desire.”

“Please buy my perfume, so I can buy Twitter,” Mr. Musk tweeted on Oct. 12, garnering nearly 600,000 likes. This worked, apparently; the perfume is now marked “sold out” on its site. Did 30,000 people really pay $100 each for a bottle? Will this perfume actually be produced and sold? (It’s not supposed to be released until next year.) It’s hard to tell where the joke stops, which is perhaps the point.

“What was unique about Twitter was that no one actually controlled it,” said Richard Greenfield, a media analyst at LightShed Partners. “And now one person will own it in its entirety.”

He is relatively hopeful, however, that Mr. Musk will improve the site, somehow. That, in turn, will have its own consequences.

“If it turns into a massive home run,” Mr. Greenfield said, “you’ll see other billionaires try to do the same thing.”

British Ruling Pins Blame on Social Media for Teenager’s Suicide

In January 2019, Mr. Russell went public with Molly’s story. Outraged that his young daughter could view such bleak content so easily, and convinced that it had played a role in her death, he sat for a TV interview with the BBC that led to front-page stories across the British press.

Mr. Russell, a television director, urged the coroner reviewing Molly’s case, Andrew Walker, to go beyond what is often a formulaic process and to explore the role of social media. Mr. Walker agreed after seeing a sample of Molly’s social media history.

That resulted in a yearslong effort to get access to Molly’s social media data. The family did not know her iPhone passcode, but the London police were able to bypass it to extract 30,000 pages of material. After a lengthy battle, Meta agreed to provide more than 16,000 pages from her Instagram, such a volume that it delayed the start of the inquest. Merry Varney, a lawyer with the Leigh Day law firm who worked on the case through a legal aid program, said it had taken more than 1,000 hours to review the content.

What they found was that Molly had lived something of a double life. While she was a regular teenager to family, friends and teachers, her existence online was much bleaker.

In the six months before Molly died, she shared, liked or saved 16,300 pieces of content on Instagram. About 2,100 of those posts, or about 12 per day, were related to suicide, self-harm and depression, according to data that Meta disclosed to her family. Many accounts she interacted with were dedicated to sharing only depressive and suicidal material, often using hashtags that linked to other explicit content.

Many posts glorified inner struggle, hiding emotional duress and telling others “I’m fine.” Molly went on binges of liking and saving graphic depictions of suicide and self-harm, once after 3 a.m., according to a timeline of her Instagram usage.

Corporate America Doesn’t Want to Talk Abortion, but It May Have To

Even more recently, corporate leaders were reminded of how fraught engagement can be. Disney, for example, faced internal backlash when its leadership declined to take a strong stance against Florida’s Parental Rights in Education Act, which critics often refer to as the “Don’t Say Gay” law. But when the chief executive did take a public stance, the company was pilloried on social media, and the state moved to revoke its special tax district status.

Now, with the expected demise of Roe v. Wade, the country’s landmark abortion ruling, corporate leaders are confronting the hottest of hot-button issues. In a Pew Research Center poll in 2021, 59 percent of Americans said they believed that abortion should be legal in all or most cases, while 39 percent said it should be illegal in all or most cases. People on all sides of the issue feel strongly about it, with nearly a quarter of Americans saying they will vote only for candidates who share their views on abortion, according to Gallup.

That all adds up to many reasons a company would want to avoid making any statement on abortion — and all the more reason that customers and workers could come to see it as necessary. A company’s position on the end of Roe could have repercussions for how it hires in an increasingly competitive labor market, and how customers view its brand.

“Abortion is a health care issue, health care is an employer issue, so abortion is an issue for employers,” said Carolyn Witte, chief executive of Tia, a women’s health care company. On Tuesday, Tia announced that it would provide medication abortions through its telemedicine platform in states where it operated and where doing so was legal.

For some major companies that have been known to weigh in on political and social issues, this week has been unusually quiet. Walmart, Disney, Meta, PwC, Salesforce, JPMorgan Chase, ThirdLove, Patagonia, Kroger and Business Roundtable were among the companies and organizations that declined to comment or take a position, or did not respond to requests for comment about whether they plan to make public statements about their stance on abortion. Hobby Lobby, which in 2014 brought a suit to the Supreme Court challenging whether employer-provided health care had to include contraception, made no public statement and did not respond to a request for comment.

Other companies did wade in. United Talent Agency said it would reimburse travel expenses for employees affected by abortion bans. Airbnb said it would ensure its employees “have the resources they need to make choices about their reproductive rights.” Levi Strauss & Company, which has said its benefits plan will reimburse employees who have to travel out of state for health care services such as abortions, said abortion was a business issue.

‘Davos Man,’ Marc Benioff and the Covid Pandemic

He frequently tells the story of his supposed inspiration for founding Salesforce. Despite success at Oracle, where he worked early in his career, Mr. Benioff was plagued by existential doubt, prompting him to take a sabbatical to southern India. There, he visited a woman known as “the hugging saint,” who urged him to share his prosperity.

From the incorporation of Salesforce in 1999, Mr. Benioff pledged that he would devote 1 percent of its equity and product to philanthropic undertakings, while encouraging employees to dedicate 1 percent of their working time to voluntary efforts. Salesforce employees regularly volunteer at schools, food banks and hospitals.

“There are very few examples of companies doing this at scale,” Mr. Benioff told me in an interview. He noted that people were always talking to him about another business known for its focus on doing good, Ben & Jerry’s. He said this with a chuckle, clearly amused that his company — now worth more than $200 billion — could be compared to the aging Vermont hippies who had brought the world Cherry Garcia ice cream.

Mr. Benioff is by many indications a true believer, not just idly parroting Davos Man talking points. In 2015, when Indiana proceeded with legislation that would have allowed businesses to discriminate against gay, lesbian and transgender employees, he threatened to yank investment, forcing a change in the law. He shamed Facebook and Google for abusing the public trust and called for regulations on search and social media giants. Early in the pandemic, Salesforce embraced remote work to protect employees.

“I’m trying to influence others to do the right thing,” he told me. “I feel that responsibility.”

I found myself won over by his boyish enthusiasm, and his willingness to talk at length absent public relations minders — a rarity for Silicon Valley.

His philanthropic efforts have been directed at easing homelessness in San Francisco and at expanding health care for children. He and Salesforce collectively contributed $7 million toward a successful 2018 campaign for a local ballot measure that levied fresh taxes on San Francisco companies to finance expanded programs. The new taxes were likely to cost Salesforce $10 million a year.

That sounded like a lot of money, ostensible evidence of a socially conscious C.E.O. sacrificing the bottom line in the interest of catering to societal needs. But it was less than a trifle alongside the money that Salesforce withheld from the government through legal tax subterfuge.

Corporate Board Diversity Increased in 2021. Some Ask What Took So Long.

People pushing for greater diversity on boards say companies need to expand their searches beyond current and former senior business executives, and emphasize skills over title.

“If you look around, everyone wants a sitting or recently retired C.E.O. who’s done very similar things to what their company’s trying to do sometime in the last decade,” said Jennifer Tejada, chief executive of PagerDuty, a software company, and a member of the boards of Estée Lauder and UiPath, a software company. “That’s a very narrow lens to look through.”

Under her leadership, PagerDuty’s eight-member board has just two white directors. She emphasized that she hadn’t had to settle for lesser candidates to have a diverse board. Her directors, she noted, include the dean of engineering at the University of Michigan, Alec D. Gallimore, who is Black; Bonita Stewart, who is a board partner at Gradient Ventures, an investment arm of Google, and the first Black woman to be a vice president at Google; and Rathi Murthy, who is Indian and a top technology executive at Expedia Group.

To ensure there are enough board candidates from a variety of backgrounds, companies need to do a better job promoting more people from underrepresented groups into senior roles, some executives said. That is especially true of increasing the number of Hispanic board members, said Elena Gomez, the chief financial officer of Toast, a software company, who is on PagerDuty’s board.

“What we need to do is get more Latinx people into those management roles, and that starts deeper in how you recruit and train,” Ms. Gomez said.

But the push to make boards more diverse has led to a backlash from some conservatives and libertarians. Some are suing to overturn California’s board diversity laws, arguing that the state is illegally restricting the right of shareholders to select and vote on directors based on merit and skill.

“A coercive quota is being imposed on these companies,” said Daniel Ortner, a lawyer with the Pacific Legal Foundation. The foundation is representing the National Center for Public Policy Research, a group that says it promotes free-market policies, in a lawsuit challenging the law that requires directors from underrepresented groups.

Facebook Plans to Shut Down Its Facial Recognition System

The change affects more than a third of Facebook’s daily users, who had facial recognition turned on for their accounts, according to the company. That meant they received alerts when new photos or videos of them were uploaded to the social network. The feature had also been used to flag accounts that might be impersonating someone else and was incorporated into software that described photos to blind users.

“Making this change required us to weigh the instances where facial recognition can be helpful against the growing concerns about the use of this technology as a whole,” said Jason Grosse, a Meta spokesman.

Although Facebook plans to delete more than one billion facial recognition templates, which are digital scans of facial features, by December, it will not eliminate the software that powers the system, which is an advanced algorithm called DeepFace. The company has also not ruled out incorporating facial recognition technology into future products, Mr. Grosse said.

Privacy advocates nonetheless applauded the decision.

“Facebook getting out of the face recognition business is a pivotal moment in the growing national discomfort with this technology,” said Adam Schwartz, a senior lawyer with the Electronic Frontier Foundation, a civil liberties organization. “Corporate use of face surveillance is very dangerous to people’s privacy.”

Facebook is not the first large technology company to pull back on facial recognition software. Amazon, Microsoft and IBM have paused or ceased selling their facial recognition products to law enforcement in recent years, while expressing concerns about privacy and algorithmic bias and calling for clearer regulation.

Facebook’s facial recognition software has a long and expensive history. When the software was rolled out in Europe in 2011, data protection authorities there said the move was illegal and that the company needed consent to analyze photos of a person and extract the unique pattern of an individual face. In 2015, the technology also prompted a class-action suit in Illinois under the state’s biometric privacy law.

Over the last decade, the Electronic Privacy Information Center, a Washington-based privacy advocacy group, filed two complaints about Facebook’s use of facial recognition with the F.T.C. When the F.T.C. fined Facebook in 2019, it named the site’s confusing privacy settings around facial recognition as one of the reasons for the penalty.

Facebook Debates What to Do With Its Like and Share Buttons

SAN FRANCISCO — In 2019, Facebook researchers began a new study of one of the social network’s foundational features: the Like button.

They examined what people would do if Facebook removed the distinct thumbs-up icon and other emoji reactions from posts on its photo-sharing app Instagram, according to company documents. The buttons had sometimes caused Instagram’s youngest users “stress and anxiety,” the researchers found, especially if posts didn’t get enough Likes from friends.

But the researchers discovered that when the Like button was hidden, users interacted less with posts and ads. At the same time, hiding the button did not alleviate teenagers’ social anxiety, and young users did not share more photos as the company had thought they might. The results were mixed.

Mark Zuckerberg, Facebook’s chief executive, and other managers discussed hiding the Like button for more Instagram users, according to the documents. In the end, a larger test was rolled out, but only in a limited capacity, in part to “build a positive press narrative” around Instagram.

As Facebook has come under scrutiny over misinformation, privacy and hate speech, a central issue has been whether the basic way that the platform works has been at fault — essentially, the features that have made Facebook be Facebook.

Apart from the Like button, Facebook has scrutinized its share button, which lets users instantly spread content posted by other people; its groups feature, which is used to form digital communities; and other tools that define how more than 3.5 billion people behave and interact online. The research, laid out in thousands of pages of internal documents, underlines how the company has repeatedly grappled with what it has created.

What researchers found was often far from positive. Time and again, they determined that people misused key features or that those features amplified toxic content, among other effects. In an August 2019 internal memo, several researchers said it was Facebook’s “core product mechanics” — meaning the basics of how the product functioned — that had let misinformation and hate speech flourish on the site.

“The mechanics of our platform are not neutral,” they concluded.

Facebook has made some changes in recent years, such as letting people hide posts they do not want to see and turning off political group recommendations to reduce the spread of misinformation.

But the core way that Facebook operates — a network where information can spread rapidly and where people can accumulate friends and followers and Likes — ultimately remains largely unchanged.

Many significant modifications to the social network were blocked in the service of growth and keeping users engaged, some current and former executives said. Facebook is valued at more than $900 billion.

“There’s a gap between the fact that you can have pretty open conversations inside of Facebook as an employee,” said Brian Boland, a Facebook vice president who left last year. “Actually getting change done can be much harder.”

The company documents are part of the Facebook Papers, a cache provided to the Securities and Exchange Commission and to Congress by a lawyer representing Frances Haugen, a former Facebook employee who has become a whistle-blower. Ms. Haugen earlier gave the documents to The Wall Street Journal. This month, a congressional staff member supplied the redacted disclosures to more than a dozen other news organizations, including The New York Times.

In a statement, Andy Stone, a Facebook spokesman, criticized articles based on the documents, saying that they were built on a “false premise.”

“Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie,” he said. He said Facebook had invested $13 billion and hired more than 40,000 people to keep people safe, adding that the company had called “for updated regulations where democratic governments set industry standards to which we can all adhere.”

In a post this month, Mr. Zuckerberg said it was “deeply illogical” that the company would give priority to harmful content, because Facebook’s advertisers don’t want to buy ads on a platform that spreads hate and misinformation.

“At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted,” he wrote.

When Mr. Zuckerberg founded Facebook 17 years ago in his Harvard University dorm room, the site’s mission was to connect people on college campuses and bring them into digital groups with common interests and locations.

Growth exploded in 2006 when Facebook introduced the News Feed, a central stream of photos, videos and status updates posted by people’s friends. Over time, the company added more features to keep people interested in spending time on the platform.

In 2009, Facebook introduced the Like button. The tiny thumbs-up symbol, a simple indicator of people’s preferences, became one of the social network’s most important features. The company allowed other websites to adopt the Like button so users could share their interests back to their Facebook profiles.

That gave Facebook insight into people’s activities and sentiments outside of its own site, so it could better target them with advertising. Likes also signified what users wanted to see more of in their News Feeds so people would spend more time on Facebook.

Facebook also added the groups feature, where people join private communication channels to talk about specific interests, and pages, which allowed businesses and celebrities to amass large fan bases and broadcast messages to those followers.

Adam Mosseri, the head of Instagram, has said that research on users’ well-being led to investments in anti-bullying measures on Instagram.

Yet Facebook cannot simply tweak itself so that it becomes a healthier social network when so many problems trace back to core features, said Jane Lytvynenko, a senior fellow at the Harvard Kennedy Shorenstein Center, who studies social networks and misinformation.

“When we talk about the Like button, the share button, the News Feed and their power, we’re essentially talking about the infrastructure that the network is built on top of,” she said. “The crux of the problem here is the infrastructure itself.”

As Facebook’s researchers dug into how its products worked, the worrisome results piled up.

In a July 2019 study of groups, researchers traced how members in those communities could be targeted with misinformation. The starting point, the researchers said, was people known as “invite whales,” who sent out invitations to others to join a private group.

These people were effective at getting thousands to join new groups so that the communities ballooned almost overnight, the study said. Then the invite whales could spam the groups with posts promoting ethnic violence or other harmful content, according to the study.

Another 2019 report looked at how some people accrued large followings on their Facebook pages, often using posts about cute animals and other innocuous topics. But once a page had grown to tens of thousands of followers, the founders sold it. The buyers then used the pages to show followers misinformation or politically divisive content, according to the study.

As researchers studied the Like button, executives considered hiding the feature on Facebook as well, according to the documents. In September 2019, the company removed Likes from users’ Facebook posts in a small experiment in Australia.

The company wanted to see if the change would reduce pressure and social comparison among users. That, in turn, might encourage people to post more frequently to the network.

But people did not share more posts after the Like button was removed. Facebook chose not to roll the test out more broadly, noting, “Like counts are extremely low on the long list of problems we need to solve.”

Last year, company researchers also evaluated the share button. In a September 2020 study, a researcher wrote that the button and so-called reshare aggregation units in the News Feed, which are automatically generated clusters of posts that have already been shared by people’s friends, were “designed to attract attention and encourage engagement.”

But left unchecked, the features could “serve to amplify bad content and sources,” such as bullying and borderline nudity posts, the researcher said.

That’s because the features made people less hesitant to share posts, videos and messages with one another. In fact, users were three times more likely to share any kind of content from the reshare aggregation units, the researcher said.

One post that spread widely this way was an undated message from an account called “The Angry Patriot.” The post notified users that people protesting police brutality were “targeting a police station” in Portland, Ore. After it was shared through reshare aggregation units, hundreds of hate-filled comments flooded in. It was an example of “hate bait,” the researcher said.

A common thread in the documents was how Facebook employees argued for changes in how the social network worked and often blamed executives for standing in the way.

In an August 2020 internal post, a Facebook researcher criticized the recommendation system that suggests pages and groups for people to follow and said it can “very quickly lead users down the path to conspiracy theories and groups.”

“Out of fears over potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity harms,” the researcher wrote. “During the time that we’ve hesitated, I’ve seen folks from my hometown go further and further down the rabbit hole” of conspiracy theory movements like QAnon and anti-vaccination and Covid-19 conspiracies.

The researcher added, “It has been painful to observe.”

Reporting was contributed by Davey Alba, Sheera Frenkel, Cecilia Kang and Ryan Mac.

What Happened When Facebook Employees Warned About Election Misinformation

WHAT HAPPENED

1. From Wednesday through Saturday there was a lot of content circulating which implied fraud in the election, at around 10% of all civic content and 1-2% of all US VPVs. There was also a fringe of incitement to violence.

2. There were dozens of employees monitoring this, and FB launched ~15 measures prior to the election, and another ~15 in the days afterwards. Most of the measures made existing processes more aggressive: e.g. by lowering thresholds, by making penalties more severe, or by expanding eligibility for existing measures. Some measures were qualitative: reclassifying certain types of content as violating, which had not been before.

3. I would guess these measures reduced prevalence of violating content by at least 2X. However they had collateral damage (removing and demoting non-violating content), and the episode caused noticeable resentment by Republican Facebook users who feel they are being unfairly targeted.

Instagram Struggles With Fears of Losing Its ‘Pipeline’: Young Users

Facebook knew that an ad intended for a 13-year-old was likely to capture younger children who wanted to mimic their older siblings and friends, one person said. Managers told employees that Facebook did everything it could to stop underage users from joining Instagram, but that it could not be helped if they signed up anyway.

In September 2018, Kevin Systrom and Mike Krieger, Instagram’s founders, left Facebook after clashing with Mr. Zuckerberg. Mr. Mosseri, a longtime Facebook executive, was appointed to helm Instagram.

With the leadership changes, Facebook went all out to turn Instagram into a main attraction for young audiences, four former employees said. That coincided with the realization that Facebook itself, which was grappling with data privacy and other scandals, would never be a teen destination, the people said.

Instagram began concentrating on the “teen time spent” data point, three former employees said. The goal was to drive up the amount of time that teenagers were on the app with features including Instagram Live, a broadcasting tool, and Instagram TV, where people upload videos that run as long as an hour.

Instagram also increased its global marketing budget. In 2018, it allocated $67.2 million to marketing. In 2019, that increased to a planned $127.3 million, then to $186.3 million last year and $390 million this year, according to the internal documents. Most of the budgets were designated for wooing teens, the documents show. Mr. Mosseri approved the budgets, two employees said.

The money was slated for marketing categories like “establishing Instagram as the favorite place for teens to express themselves” and cultural programs for events like the Super Bowl, according to the documents.

Many of the resulting ads were digital, featuring some of the platform’s top influencers, such as Donté Colley, a Canadian dancer and creator. The marketing, when put into action, also targeted parents of teenagers and people up to the age of 34.

Whistle-Blower Says Facebook ‘Chooses Profits Over Safety’

John Tye, the founder of Whistleblower Aid, a legal nonprofit that represents people seeking to expose potential lawbreaking, was contacted this spring through a mutual connection by a woman who claimed to have worked at Facebook.

The woman told Mr. Tye and his team something intriguing: She had access to tens of thousands of pages of internal documents from the world’s largest social network. In a series of calls, she asked for legal protection and a path to releasing the confidential information. Mr. Tye, who said he understood the gravity of what the woman brought “within a few minutes,” agreed to represent her and call her by the alias “Sean.”

She “is a very courageous person and is taking a personal risk to hold a trillion-dollar company accountable,” he said.

On Sunday, Frances Haugen revealed herself to be “Sean,” the whistle-blower against Facebook. A product manager who worked for nearly two years on the civic misinformation team at the social network before leaving in May, Ms. Haugen has used the documents she amassed to expose how much Facebook knew about the harms that it was causing and provided the evidence to lawmakers, regulators and the news media.

The revelations — including that Facebook knew Instagram was worsening body image issues among teenagers and that it had a two-tier justice system — have spurred criticism from lawmakers, regulators and the public.

Ms. Haugen has also filed a whistle-blower complaint with the Securities and Exchange Commission, accusing Facebook of misleading investors with public statements that did not match its internal actions. And she has talked with lawmakers such as Senator Richard Blumenthal, a Democrat of Connecticut, and Senator Marsha Blackburn, a Republican of Tennessee, and shared subsets of the documents with them.

The spotlight on Ms. Haugen is set to grow brighter. On Tuesday, she is scheduled to testify in Congress about Facebook’s impact on young users.

Facebook has weathered leaks before, often over its handling of misinformation and hate speech.

In 2018, Christopher Wylie, a disgruntled former employee of the consulting firm Cambridge Analytica, set the stage for those leaks. Mr. Wylie spoke with The New York Times, The Observer of London and The Guardian to reveal that Cambridge Analytica had improperly harvested Facebook data to build voter profiles without users’ consent.

In the aftermath, more of Facebook’s own employees started speaking up. Later that same year, Facebook workers provided executive memos and planning documents to news outlets including The Times and BuzzFeed News. In mid-2020, employees who disagreed with Facebook’s decision to leave up a controversial post from President Donald J. Trump staged a virtual walkout and sent more internal information to news outlets.

“I think over the last year, there’ve been more leaks than I think all of us would have wanted,” Mark Zuckerberg, Facebook’s chief executive, said in a meeting with employees in June 2020.

Facebook tried to preemptively push back against Ms. Haugen. On Friday, Nick Clegg, Facebook’s vice president for policy and global affairs, sent employees a 1,500-word memo laying out what the whistle-blower was likely to say on “60 Minutes” and calling the accusations “misleading.” On Sunday, Mr. Clegg appeared on CNN to defend the company, saying the platform reflected “the good, the bad and ugly of humanity” and that it was trying to “mitigate the bad, reduce it and amplify the good.”

Ms. Haugen also introduced herself through a personal website. On the website, she was described as “an advocate for public oversight of social media.”

A native of Iowa City, Iowa, Ms. Haugen studied electrical and computer engineering at Olin College and got an M.B.A. from Harvard, the website said. She then worked on algorithms at Google, Pinterest and Yelp. In June 2019, she joined Facebook. There, she handled democracy and misinformation issues, as well as working on counterespionage, according to the website.

Regulators have pushed to break up the company: the Federal Trade Commission has filed an antitrust suit against Facebook. In a video posted by Whistleblower Aid on Sunday, Ms. Haugen said she did not believe breaking up Facebook would solve the problems inherent at the company.

“The path forward is about transparency and governance,” she said in the video. “It’s not about breaking up Facebook.”

Ms. Haugen has also spoken to lawmakers in France and Britain, as well as a member of the European Parliament. This month, she is scheduled to appear before a British parliamentary committee. That will be followed by stops at Web Summit, a technology conference in Lisbon, and in Brussels to meet with European policymakers in November, Mr. Tye said.

On Sunday, a GoFundMe page that Whistleblower Aid created for Ms. Haugen also went live. Noting that Facebook had “limitless resources and an army of lawyers,” the group set a goal of raising $10,000. Within 30 minutes, 18 donors had given $1,195. Shortly afterward, the fund-raising goal was increased to $50,000.
