Elon Musk had a plan to buy Twitter and undo its content moderation policies. On Tuesday, just a day after reaching his $44 billion deal to buy the company, Mr. Musk was already at work on his agenda. He tweeted that past moderation decisions by a top Twitter lawyer were “obviously incredibly inappropriate.” Later, he shared a meme mocking the lawyer, sparking a torrent of attacks from other Twitter users.
Mr. Musk’s personal critique was a rough reminder of what faces employees who create and enforce Twitter’s complex content moderation policies. His vision for the company would take it right back to where it started, employees said, and force Twitter to relive the last decade.
Twitter executives who created the rules said they had once held views about online speech that were similar to Mr. Musk’s. They believed Twitter’s policies should be limited, mimicking local laws. But more than a decade of grappling with violence, harassment and election tampering changed their minds. Now, many executives at Twitter and other social media companies view their content moderation policies as essential safeguards to protect speech.
The question is whether Mr. Musk, too, will change his mind when confronted with the darkest corners of Twitter.
In its early years, Twitter's guiding ethos was that "the tweets must flow." That meant Twitter did little to moderate the conversations on its platform.
Twitter’s founders took their cues from Blogger, the Google-owned publishing platform that several of them had helped build. They believed that any reprehensible content would be countered or drowned out by other users, said three employees who worked at Twitter during that time.
“There’s a certain amount of idealistic zeal that you have: ‘If people just embrace it as a platform of self-expression, amazing things will happen,’” said Jason Goldman, who was on Twitter’s founding team and served on its board of directors. “That mission is valuable, but it blinds you to think certain bad things that happen are bugs rather than equally weighted uses of the platform.”
The company typically removed content only if it contained spam, or violated American laws forbidding child exploitation and other criminal acts.
In 2008, Twitter hired Del Harvey, its 25th employee and the first person it assigned the challenge of moderating content full time. The Arab Spring protests started in 2010, and Twitter became a megaphone for activists, reinforcing many employees’ belief that good speech would win out online. But Twitter’s power as a tool for harassment became clear in 2014 when it became the epicenter of Gamergate, a mass harassment campaign that flooded women in the video game industry with death and rape threats.
In 2016, Russian operatives created more than 2,700 fake Twitter profiles and used them to sow discord about the upcoming presidential election between Mr. Trump and Hillary Clinton.
The profiles went undiscovered for months, while complaints about harassment continued. In 2017, Jack Dorsey, the chief executive at the time, declared that policy enforcement would become the company’s top priority. Later that year, women boycotted Twitter during the #MeToo movement, and Mr. Dorsey acknowledged the company was “still not doing enough.”
He announced a list of content that the company would no longer tolerate: nude images shared without the consent of the person pictured, hate symbols and tweets that glorified violence.