Meta, which barred Mr. Trump from its platforms after the riot at the U.S. Capitol on Jan. 6, 2021, has worked over the years to limit political falsehoods on its sites. Tom Reynolds, a Meta spokesman, said the company had “taken a comprehensive approach to how elections play out on our platforms since before the U.S. 2020 elections and through the dozens of global elections since then.”

In Brazil, which holds a presidential election in October, President Jair Bolsonaro has recently raised doubts about the country’s electoral process. Latvia, Bosnia and Slovenia are also holding elections in October.

“People in the U.S. are almost certainly getting the Rolls-Royce treatment when it comes to any integrity on any platform, especially for U.S. elections,” said Sahar Massachi, the executive director of the think tank Integrity Institute and a former Facebook employee. “And so however bad it is here, think about how much worse it is everywhere else.”

Facebook’s role in potentially distorting elections became evident after 2016, when Russian operatives used the site to spread inflammatory content and divide American voters in the U.S. presidential election. In 2018, Mr. Zuckerberg testified before Congress that election security was his top priority.

The company cracked down on election misinformation, banning QAnon conspiracy theory posts and groups in October 2020.

Around the same time, Mr. Zuckerberg and his wife, Priscilla Chan, donated $400 million to local governments to fund poll workers, pay for rental fees for polling places, provide personal protective equipment and cover other administrative costs.

The week before the November 2020 election, Meta also froze all political advertising to limit the spread of falsehoods.

But while there were successes — the company kept foreign election interference off the platform — it struggled with how to handle Mr. Trump, who used his Facebook account to amplify false claims of voter fraud. After the Jan. 6 riot, Facebook barred Mr. Trump from posting. He is eligible for reinstatement in January.

Frances Haugen, a Facebook employee turned whistle-blower, filed complaints with the Securities and Exchange Commission accusing the company of removing election safety features too soon after the 2020 election. Facebook made growth and engagement its priorities over security, she said.

Mr. Zuckerberg has since shifted his focus to the metaverse, a fully realized digital world that exists beyond the one in which we live. The term was coined by Neal Stephenson in his 1992 novel “Snow Crash,” and the concept was further explored by Ernest Cline in his novel “Ready Player One.”

Mr. Zuckerberg no longer meets weekly with those focused on election security, said the four employees, though he receives their reports. Instead, they meet with Nick Clegg, Meta’s president of global affairs.

Several civil rights groups said they had noticed Meta’s shift in priorities. Mr. Zuckerberg isn’t involved in discussions with them as he once was, nor are other top Meta executives, they said.

“I’m concerned,” said Derrick Johnson, president of the National Association for the Advancement of Colored People, who talked with Mr. Zuckerberg and Sheryl Sandberg, Meta’s chief operating officer, ahead of the 2020 election. “It appears to be out of sight, out of mind.” (Ms. Sandberg has announced that she will leave Meta this fall.)

This year, a coalition of civil rights groups wrote a letter to Mr. Zuckerberg and the chief executives of YouTube, Twitter, Snap and other platforms, calling on them to take down posts promoting the lie that Mr. Trump won the 2020 election and to slow the spread of election misinformation before the midterms.

Yosef Getachew, a director at Common Cause, a nonprofit public advocacy organization that studied 2020 election misinformation on social media, said the companies had not responded.

“The Big Lie is front and center in the midterms with so many candidates using it to pre-emptively declare that the 2022 election will be stolen,” he said, pointing to recent tweets from politicians in Michigan and Arizona who falsely said dead people cast votes for Democrats. “Now is not the time to stop enforcing against the Big Lie.”

Sheryl Sandberg Steps Down From Facebook’s Parent Company, Meta

Ms. Sandberg flirted with leaving Facebook. In 2016, she told colleagues that if Hillary Clinton, the Democratic presidential nominee, won the White House, she would most likely take a job in Washington, three people who spoke to her about the move at the time said. In 2018, after revelations about Cambridge Analytica and Russia’s interference in the 2016 U.S. presidential election, she again told colleagues that she was considering leaving but did not want to do so while the company was in crisis.

Last year, Mr. Zuckerberg said his company was making a new bet and was going all in on the metaverse, which he called “the successor to the mobile internet.” In his announcement, Ms. Sandberg made only a cameo, while other executives were more prominently featured.

As Mr. Zuckerberg overhauled the company to focus on the metaverse, some of Ms. Sandberg’s responsibilities were spread among other executives. Nick Clegg, a former British deputy prime minister who led the company’s global affairs, became its chief spokesman, a role that Ms. Sandberg had once held. In February, Mr. Clegg was promoted to president of global affairs for Meta.

Ms. Sandberg’s profile dimmed. She concentrated on building the ads business and growing the number of small businesses on Facebook.

She was also focused on personal matters. Dave Goldberg, her husband, had died unexpectedly in 2015. (Ms. Sandberg’s second book, “Option B,” was about dealing with grief.) She later met Tom Bernthal, her fiancé, and he and his three children moved to her Silicon Valley home from Southern California during the pandemic. Ms. Sandberg, who had two children with Mr. Goldberg, was focused on integrating the families and planning for her summer wedding, a person close to her said.

Meta’s transition to the metaverse has not been easy. The company has spent heavily on metaverse products while its advertising business has stumbled, partly because privacy changes made by Apple have hurt targeted advertising. In February, Meta’s market value plunged more than $230 billion, its biggest one-day wipeout, after it reported financial results that showed it was struggling to make the leap to the metaverse.

In the interview, Ms. Sandberg said Meta faced near-term challenges but would weather the storm, as it had during past challenges. “When we went public, we had no mobile ads,” Ms. Sandberg said, citing the company’s rapid transition from desktop computers to smartphones last decade. “We have done this before.”

How War in Ukraine Roiled Facebook and Instagram

Meta, which owns Facebook and Instagram, took an unusual step last week: It suspended some of the quality controls that ensure that posts from users in Russia, Ukraine and other Eastern European countries meet its rules.

Under the change, Meta temporarily stopped tracking whether its workers who monitor Facebook and Instagram posts from those areas were accurately enforcing its content guidelines, six people with knowledge of the situation said. That’s because the workers could not keep up with shifting rules about what kinds of posts were allowed about the war in Ukraine, they said.

Meta has made more than half a dozen content policy revisions since Russia invaded Ukraine last month. The company has permitted posts about the conflict that it would normally have taken down — including some calling for the death of President Vladimir V. Putin of Russia and violence against Russian soldiers — before changing its mind or drawing up new guidelines, the people said.

The result has been internal confusion, especially among the content moderators who patrol Facebook and Instagram for text and images with gore, hate speech and incitements to violence. Meta has sometimes shifted its rules on a daily basis, causing whiplash, said the people, who were not authorized to speak publicly.

Externally, Meta has contended with pressure from Russian and Ukrainian authorities over the information battle about the conflict. And internally, it has dealt with discontent about its decisions, including from Russian employees concerned for their safety and Ukrainian workers who want the company to be tougher on Kremlin-affiliated organizations online, three people said.

Meta has weathered international strife before — including the genocide of a Muslim minority in Myanmar last decade and skirmishes between India and Pakistan — with varying degrees of success. Now the largest conflict on the European continent since World War II has become a litmus test of whether the company has learned to police its platforms during major global crises — and so far, it appears to remain a work in progress.

“All the ingredients of the Russia-Ukraine conflict have been around for a long time: the calls for violence, the disinformation, the propaganda from state media,” said David Kaye, a law professor at the University of California, Irvine, and a former special rapporteur to the United Nations. “What I find mystifying was that they didn’t have a game plan to deal with it.”

Dani Lever, a Meta spokeswoman, declined to directly address how the company was handling content decisions and employee concerns during the war.

After Russia invaded Ukraine, Meta said it established a round-the-clock special operations team staffed by employees who are native Russian and Ukrainian speakers. It also updated its products to aid civilians in the war, including features that direct Ukrainians toward reliable, verified information to locate housing and refugee assistance.

Mark Zuckerberg, Meta’s chief executive, and Sheryl Sandberg, the chief operating officer, have been directly involved in the response to the war, said two people with knowledge of the efforts. But as Mr. Zuckerberg focuses on transforming Meta into a company that will lead the digital worlds of the so-called metaverse, many responsibilities around the conflict have fallen — at least publicly — to Nick Clegg, the president for global affairs.

In late February, Mr. Clegg announced that Meta would restrict access within the European Union to the pages of Russia Today and Sputnik, which are Russian state-controlled media, following requests by Ukraine and other European governments. Russia retaliated by cutting off access to Facebook inside the country, claiming the company discriminated against Russian media, and then blocking Instagram.

This month, President Volodymyr Zelensky of Ukraine praised Meta for moving quickly to limit Russian war propaganda on its platforms. Meta also acted rapidly to remove an edited “deepfake” video from its platforms that falsely featured Mr. Zelensky yielding to Russian forces.

Meta also allowed a group called the Ukrainian Legion to run ads on its platforms this month to recruit “foreigners” for the Ukrainian army, a violation of international laws. It later removed the ads — which were shown to people in the United States, Ireland, Germany and elsewhere — because the group may have misrepresented its ties to the Ukrainian government, according to Meta.

Internally, Meta had also started changing its content policies to deal with the fast-moving nature of posts about the war. The company has long forbidden posts that might incite violence. But on Feb. 26, two days after Russia invaded Ukraine, Meta informed its content moderators — who are typically contractors — that it would allow calls for the death of Mr. Putin and “calls for violence against Russians and Russian soldiers in the context of the Ukraine invasion,” according to the policy changes, which were reviewed by The New York Times.

Reuters reported on Meta’s shifts with a headline that suggested that posts calling for violence against all Russians would be tolerated. In response, Russian authorities labeled Meta’s activities as “extremist.”

Shortly thereafter, Meta reversed course and said it would not let its users call for the deaths of heads of state.

“Circumstances in Ukraine are fast moving,” Mr. Clegg wrote in an internal memo that was reviewed by The Times and first reported by Bloomberg. “We try to think through all the consequences, and we keep our guidance under constant review because the context is always evolving.”

Meta amended other policies. This month, it made a temporary exception to its hate speech guidelines so users could post about the “removal of Russians” and “explicit exclusion against Russians” in 12 Eastern European countries, according to internal documents. But within a week, Meta tweaked the rule to note that it should be applied only to users in Ukraine.

The constant adjustments left moderators who oversee users in Central and Eastern European countries confused, the six people with knowledge of the situation said.

The policy changes were onerous because moderators were generally given less than 90 seconds to decide whether images of dead bodies, videos of limbs being blown off or outright calls to violence violated Meta’s rules, they said. In some instances, they added, moderators were shown posts about the war in Chechen, Kazakh or Kyrgyz, despite not knowing those languages.

Ms. Lever declined to comment on whether Meta had hired content moderators who specialize in those languages.

At a company meeting this month, employees asked why Meta had not moved sooner to take action against Russia Today and Sputnik, said two people who attended. Russian state activity was at the center of Facebook’s failure to protect the 2016 U.S. presidential election, they said, and it didn’t make sense that those outlets had continued to operate on Meta’s platforms.

While Meta has no employees in Russia, the company held a separate meeting this month for workers with Russian connections. Those employees said they were concerned that Moscow’s actions against the company would affect them, according to an internal document.

In discussions on Meta’s internal forums, which were viewed by The Times, some Russian employees said they had erased their place of work from their online profiles. Others wondered what would happen if they worked in the company’s offices in places with extradition treaties to Russia and “what kind of risks will be associated with working at Meta not just for us but our families.”

Ms. Lever said Meta’s “hearts go out to all of our employees who are affected by the war in Ukraine, and our teams are working to make sure they and their families have the support they need.”

At a separate company meeting this month, some employees voiced unhappiness with the changes to the speech policies during the war, according to an internal poll. Some asked if the new rules were necessary, calling the changes “a slippery slope” that was “being used as proof that Westerners hate Russians.”

Others asked about the effect on Meta’s business. “Will Russian ban affect our revenue for the quarter? Future quarters?” read one question. “What’s our recovery strategy?”

Facebook’s Parent Company Will Make Employees Do Their Own Laundry

The salad days of Facebook’s lavish employee perks may be coming to an end.

Meta, the parent company of Facebook, told employees on Friday that it was cutting back or eliminating free services like laundry and dry cleaning and was pushing back the dinner bell for a free meal from 6 p.m. to 6:30 p.m., according to seven company employees who spoke on the condition of anonymity.

The new dinner time is an inconvenience because the last of the company’s shuttles that take employees to and from their homes typically leaves the office at 6 p.m. It will also make it more difficult for workers to stock up on hefty to-go boxes of food and bring them to their refrigerators at home.

The moves are a reflection of changing workplace culture in Silicon Valley. Tech companies, which often offer lifestyle perks in return for employees spending long hours in the office, are preparing to adjust to a new hybrid work model.

At Meta, for example, many employees are scheduled to return to the company’s offices on March 28, though some will continue to work from home and others will come into the office less often.

The changes at Meta could be a warning shot for employees at other companies that are preparing to return to the office after two years of the coronavirus pandemic. Google, Amazon, Meta and others have long offered creature comforts like on-site medical attention, sushi buffets, candy stores and beanbag chairs to lure and retain top talent, which remains at a premium in the tech industry.

Meta has had a difficult past few months, though company officials say the changes to perks are not related. For the first time in years, investors have been questioning the long-term prospects of the company’s advertising business model. Its market capitalization has dropped by half, to $515 billion. And some employees are debating whether they should be searching for new jobs as they see the value of their stock-based compensation plummet.

Meta discussed the changes to its perks program for months as it explored how to shift to the new, hybrid workplace model, said two employees. The company has also expanded employees’ wellness stipends from roughly $700 to $3,000 this year to compensate for the removal of some of the other in-office perks.

“As we return to the office, we’ve adjusted on-site services and amenities to better reflect the needs of our hybrid work force,” a Meta spokesman said in a statement. “We believe people and teams will be increasingly distributed in the future, and we’re committed to building an experience that helps everyone be successful.”

Many workers were quick to gripe in the comment section underneath the post announcing the change, according to several employees who viewed the post. Just minutes after the changes were announced, employees asked whether the company planned to compensate them in new ways and whether Meta had surveyed employees to evaluate how the changes would affect the staff.

Meta executives, who have been grappling with misinformation tied to the war in Ukraine and with an outright ban of Facebook and Instagram in Russia, appeared to have little patience for the questions.

In a tone several employees described as combative, Meta’s chief technology officer, Andrew Bosworth, assertively defended some of the changes and chafed at the perceived sense of entitlement on display in the comments, according to the employees who saw the thread. Mike Schroepfer, the outgoing chief technology officer, also wrote in the comments in support of the changes.

Another employee who worked on the company’s food service team pushed back even more strenuously, according to two people who saw the post.

“I can honestly say when our peers are cramming three to 10 to-go boxes full of steak to take them home, nobody cares about our culture,” the employee said, pushing back on assertions from others that the changes would be damaging to Meta’s workplace culture. “A decision was made to try and curb some of the abuse while eliminating six million to-go boxes.”

It appeared that many employees agreed. As of midday Friday, the employee’s post was the most liked comment in the thread, with hundreds of workers expressing support.

Facebook Debates What to Do With Its Like and Share Buttons

SAN FRANCISCO — In 2019, Facebook researchers began a new study of one of the social network’s foundational features: the Like button.

They examined what people would do if Facebook removed the distinct thumbs-up icon and other emoji reactions from posts on its photo-sharing app Instagram, according to company documents. The buttons had sometimes caused Instagram’s youngest users “stress and anxiety,” the researchers found, especially if posts didn’t get enough Likes from friends.

But the researchers discovered that when the Like button was hidden, users interacted less with posts and ads. The change also did not alleviate teenagers’ social anxiety, and young users did not share more photos as the company had thought they might. The results were a mixed bag.

Mark Zuckerberg, Facebook’s chief executive, and other managers discussed hiding the Like button for more Instagram users, according to the documents. In the end, a larger test was rolled out, but only in a limited capacity, to “build a positive press narrative” around Instagram.

As Facebook has come under scrutiny over misinformation, privacy and hate speech, a central issue has been whether the basic way that the platform works has been at fault — essentially, the features that have made Facebook be Facebook.

Apart from the Like button, Facebook has scrutinized its share button, which lets users instantly spread content posted by other people; its groups feature, which is used to form digital communities; and other tools that define how more than 3.5 billion people behave and interact online. The research, laid out in thousands of pages of internal documents, underlines how the company has repeatedly grappled with what it has created.

What researchers found was often far from positive. Time and again, they determined that people misused key features or that those features amplified toxic content, among other effects. In an August 2019 internal memo, several researchers said it was Facebook’s “core product mechanics” — meaning the basics of how the product functioned — that had let misinformation and hate speech flourish on the site.

“The mechanics of our platform are not neutral,” they concluded.

Facebook has tried some fixes, such as letting people hide posts they do not want to see and turning off political group recommendations to reduce the spread of misinformation.

But the core way that Facebook operates — a network where information can spread rapidly and where people can accumulate friends and followers and Likes — ultimately remains largely unchanged.

Many significant modifications to the social network were blocked in the service of growth and keeping users engaged, some current and former executives said. Facebook is valued at more than $900 billion.

“There’s a gap between the fact that you can have pretty open conversations inside of Facebook as an employee,” said Brian Boland, a Facebook vice president who left last year. “Actually getting change done can be much harder.”

The company documents are part of the Facebook Papers, a cache provided to the Securities and Exchange Commission and to Congress by a lawyer representing Frances Haugen, a former Facebook employee who has become a whistle-blower. Ms. Haugen earlier gave the documents to The Wall Street Journal. This month, a congressional staff member supplied the redacted disclosures to more than a dozen other news organizations, including The New York Times.

In a statement, Andy Stone, a Facebook spokesman, criticized articles based on the documents, saying that they were built on a “false premise.”

“Yes, we’re a business and we make profit, but the idea that we do so at the expense of people’s safety or well-being misunderstands where our own commercial interests lie,” he said. He said Facebook had invested $13 billion and hired more than 40,000 people to keep people safe, adding that the company has called “for updated regulations where democratic governments set industry standards to which we can all adhere.”

In a post this month, Mr. Zuckerberg said it was “deeply illogical” that the company would give priority to harmful content because Facebook’s advertisers don’t want to buy ads on a platform that spreads hate and misinformation.

“At the most basic level, I think most of us just don’t recognize the false picture of the company that is being painted,” he wrote.

When Mr. Zuckerberg founded Facebook 17 years ago in his Harvard University dorm room, the site’s mission was to connect people on college campuses and bring them into digital groups with common interests and locations.

Growth exploded in 2006 when Facebook introduced the News Feed, a central stream of photos, videos and status updates posted by people’s friends. Over time, the company added more features to keep people interested in spending time on the platform.

In 2009, Facebook introduced the Like button. The tiny thumbs-up symbol, a simple indicator of people’s preferences, became one of the social network’s most important features. The company allowed other websites to adopt the Like button so users could share their interests back to their Facebook profiles.

That gave Facebook insight into people’s activities and sentiments outside of its own site, so it could better target them with advertising. Likes also signified what users wanted to see more of in their News Feeds so people would spend more time on Facebook.

Facebook also added the groups feature, where people join private communication channels to talk about specific interests, and pages, which allowed businesses and celebrities to amass large fan bases and broadcast messages to those followers.

Adam Mosseri, the head of Instagram, has said that research on users’ well-being led to investments in anti-bullying measures on Instagram.

Yet Facebook cannot simply tweak itself so that it becomes a healthier social network when so many problems trace back to core features, said Jane Lytvynenko, a senior fellow at the Harvard Kennedy Shorenstein Center, who studies social networks and misinformation.

“When we talk about the Like button, the share button, the News Feed and their power, we’re essentially talking about the infrastructure that the network is built on top of,” she said. “The crux of the problem here is the infrastructure itself.”

As Facebook’s researchers dug into how its products worked, the worrisome results piled up.

In a July 2019 study of groups, researchers traced how members in those communities could be targeted with misinformation. The starting point, the researchers said, was people known as “invite whales,” who sent out invitations to others to join a private group.

These people were effective at getting thousands to join new groups so that the communities ballooned almost overnight, the study said. Then the invite whales could spam the groups with posts promoting ethnic violence or other harmful content, according to the study.

Another 2019 report looked at how some people accrued large followings on their Facebook pages, often using posts about cute animals and other innocuous topics. But once a page had grown to tens of thousands of followers, the founders sold it. The buyers then used the pages to show followers misinformation or politically divisive content, according to the study.

As researchers studied the Like button, executives considered hiding the feature on Facebook as well, according to the documents. In September 2019, it removed Likes from users’ Facebook posts in a small experiment in Australia.

The company wanted to see if the change would reduce pressure and social comparison among users. That, in turn, might encourage people to post more frequently to the network.

But people did not share more posts after the Like button was removed. Facebook chose not to roll the test out more broadly, noting, “Like counts are extremely low on the long list of problems we need to solve.”

Last year, company researchers also evaluated the share button. In a September 2020 study, a researcher wrote that the button and so-called reshare aggregation units in the News Feed, which are automatically generated clusters of posts that have already been shared by people’s friends, were “designed to attract attention and encourage engagement.”

But left unchecked, the features could “serve to amplify bad content and sources,” such as bullying and borderline nudity posts, the researcher said.

That’s because the features made people less hesitant to share posts, videos and messages with one another. In fact, users were three times more likely to share any kind of content from the reshare aggregation units, the researcher said.

One post that spread widely this way was an undated message from an account called “The Angry Patriot.” The post notified users that people protesting police brutality were “targeting a police station” in Portland, Ore. After it was shared through reshare aggregation units, hundreds of hate-filled comments flooded in. It was an example of “hate bait,” the researcher said.

A common thread in the documents was how Facebook employees argued for changes in how the social network worked and often blamed executives for standing in the way.

In an August 2020 internal post, a Facebook researcher criticized the recommendation system that suggests pages and groups for people to follow and said it can “very quickly lead users down the path to conspiracy theories and groups.”

“Out of fears over potential public and policy stakeholder responses, we are knowingly exposing users to risks of integrity harms,” the researcher wrote. “During the time that we’ve hesitated, I’ve seen folks from my hometown go further and further down the rabbit hole” of conspiracy theory movements like QAnon and anti-vaccination and Covid-19 conspiracies.

The researcher added, “It has been painful to observe.”

Reporting was contributed by Davey Alba, Sheera Frenkel, Cecilia Kang and Ryan Mac.

What Happened When Facebook Employees Warned About Election Misinformation

WHAT HAPPENED

1. From Wednesday through Saturday there was a lot of content circulating which implied fraud in the election, at around 10% of all civic content and 1-2% of all US VPVs. There was also a fringe of incitement to violence.

2. There were dozens of employees monitoring this, and FB launched ~15 measures prior to the election, and another ~15 in the days afterwards. Most of the measures made existing processes more aggressive: e.g. by lowering thresholds, by making penalties more severe, or expanding eligibility for existing measures. Some measures were qualitative: reclassifying certain types of content as violating, which had not been before.

3. I would guess these measures reduced prevalence of violating content by at least 2X. However they had collateral damage (removing and demoting non-violating content), and the episode caused noticeable resentment by Republican Facebook users who feel they are being unfairly targeted.

Instagram Struggles With Fears of Losing Its ‘Pipeline’: Young Users

Facebook knew that an ad intended for a 13-year-old was likely to capture younger children who wanted to mimic their older siblings and friends, one person said. Managers told employees that Facebook did everything it could to stop underage users from joining Instagram, but that it could not be helped if they signed up anyway.

In September 2018, Kevin Systrom and Mike Krieger, Instagram’s founders, left Facebook after clashing with Mr. Zuckerberg. Mr. Mosseri, a longtime Facebook executive, was appointed to helm Instagram.

With the leadership changes, Facebook went all out to turn Instagram into a main attraction for young audiences, four former employees said. That coincided with the realization that Facebook itself, which was grappling with data privacy and other scandals, would never be a teen destination, the people said.

Instagram began concentrating on the “teen time spent” data point, three former employees said. The goal was to drive up the amount of time that teenagers were on the app with features including Instagram Live, a broadcasting tool, and Instagram TV, where people upload videos that run as long as an hour.

Instagram also increased its global marketing budget. In 2018, it allocated $67.2 million to marketing. In 2019, that increased to a planned $127.3 million, then to $186.3 million last year and $390 million this year, according to the internal documents. Most of the money was designated for wooing teens, the documents show. Mr. Mosseri approved the budgets, two employees said.

The money was slated for marketing categories like “establishing Instagram as the favorite place for teens to express themselves” and cultural programs for events like the Super Bowl, according to the documents.

Many of the resulting ads were digital, featuring some of the platform’s top influencers, such as Donté Colley, a Canadian dancer and creator. The marketing, when put into action, also targeted parents of teenagers and people up to the age of 34.

Inside Facebook’s Push to Defend Its Image

The changes have involved Facebook executives from its marketing, communications, policy and integrity teams. Alex Schultz, a 14-year company veteran who was named chief marketing officer last year, has also been influential in the image reshaping effort, said five people who worked with him. But at least one of the decisions was driven by Mr. Zuckerberg, and all were approved by him, three of the people said.

Joe Osborne, a Facebook spokesman, denied that the company had changed its approach.

“People deserve to know the steps we’re taking to address the different issues facing our company — and we’re going to share those steps widely,” he said in a statement.

For years, Facebook executives have chafed at how their company appeared to receive more scrutiny than Google and Twitter, current and former employees said. They attributed that attention to the company’s willingness to apologize and to provide access to internal data, which left it more exposed, the people said.

So in January, executives held a virtual meeting and broached the idea of a more aggressive defense, one attendee said. The group discussed using the News Feed to promote positive news about the company, as well as running ads that linked to favorable articles about Facebook. They also debated how to define a pro-Facebook story, two participants said.

That same month, the communications team discussed ways for executives to be less conciliatory when responding to crises and decided there would be less apologizing, said two people with knowledge of the plan.

Mr. Zuckerberg, who had become intertwined with policy issues including the 2020 election, also wanted to recast himself as an innovator, the people said. In January, the communications team circulated a document with a strategy for distancing Mr. Zuckerberg from scandals, partly by focusing his Facebook posts and media appearances on new products, they said.

The Information, a tech news site, previously reported on the document.

The impact was immediate. On Jan. 11, Sheryl Sandberg, Facebook’s chief operating officer — and not Mr. Zuckerberg — told Reuters that the storming of the U.S. Capitol a week earlier had little to do with Facebook. In July, when President Biden said the social network was “killing people” by spreading Covid-19 misinformation, Guy Rosen, Facebook’s vice president for integrity, disputed the characterization in a blog post and pointed out that the White House had missed its coronavirus vaccination goals.

The Battle for Digital Privacy Is Reshaping the Internet

“The internet is answering a question that it’s been wrestling with for decades, which is: How is the internet going to pay for itself?” he said.

The fallout may hurt brands that relied on targeted ads to get people to buy their goods. It may also initially hurt tech giants like Facebook — but not for long. Instead, businesses that can no longer track people but still need to advertise are likely to spend more with the largest tech platforms, which still have the most data on consumers.

David Cohen, chief executive of the Interactive Advertising Bureau, a trade group, said the changes would continue to “drive money and attention to Google, Facebook, Twitter.”

The shifts are complicated by Google’s and Apple’s opposing views on how much ad tracking should be dialed back. Apple wants its customers, who pay a premium for its iPhones, to have the right to block tracking entirely. But Google executives have suggested that Apple has turned privacy into a privilege for those who can afford its products.

For many people, that means the internet may start looking different depending on the products they use. On Apple gadgets, ads may be only somewhat relevant to a person’s interests, compared with highly targeted promotions inside Google’s web. Website creators may eventually choose sides, so some sites that work well in Google’s browser might not even load in Apple’s browser, said Brendan Eich, a founder of Brave, the private web browser.

“It will be a tale of two internets,” he said.

Businesses that do not keep up with the changes risk getting run over. Increasingly, media publishers and even apps that show the weather are charging subscription fees, in the same way that Netflix levies a monthly fee for video streaming. Some e-commerce sites are considering raising product prices to keep their revenues up.

Consider Seven Sisters Scones, a mail-order pastry shop in Johns Creek, Ga., which relies on Facebook ads to promote its items. Nate Martin, who leads the bakery’s digital marketing, said that after Apple blocked some ad tracking, its digital marketing campaigns on Facebook became less effective. Because Facebook could no longer get as much data on which customers like baked goods, it was harder for the store to find interested buyers online.

How Facebook Relies on Accenture to Scrub Toxic Content

In 2010, Accenture scored an accounting contract with Facebook. By 2012, that had expanded to include a deal for moderating content, particularly outside the United States.

That year, Facebook sent employees to Manila and Warsaw to train Accenture workers to sort through posts, two former Facebook employees involved with the trip said. Accenture’s workers were taught to use a Facebook software system and the platform’s guidelines for leaving content up, taking it down or escalating it for review.

What started as a few dozen Accenture moderators grew rapidly.

By 2015, Accenture’s office in the San Francisco Bay Area had set up a team, code-named Honey Badger, just for Facebook’s needs, former employees said. Accenture went from providing about 300 workers in 2015 to about 3,000 in 2016. They are a mix of full-time employees and contractors, depending on the location and task.

The firm soon parlayed its work with Facebook into moderation contracts with YouTube, Twitter, Pinterest and others, executives said. (The digital content moderation industry is projected to reach $8.8 billion next year, according to Everest Group, roughly double the 2020 total.) Facebook also gave Accenture contracts in areas like checking for fake or duplicate user accounts and monitoring celebrity and brand accounts to ensure they were not flooded with abuse.

After federal authorities discovered in 2016 that Russian operatives had used Facebook to spread divisive posts to American voters for the presidential election, the company ramped up the number of moderators. It said it would hire more than 3,000 people — on top of the 4,500 it already had — to police the platform.

“If we’re going to build a safe community, we need to respond quickly,” Mr. Zuckerberg said in a 2017 post.

The next year, Facebook hired Arun Chandra, a former Hewlett Packard Enterprise executive, as vice president of scaled operations to help oversee the relationship with Accenture and others. His division is overseen by Ms. Sandberg.
