SAN FRANCISCO — New Twitter owner Elon Musk said Thursday that he is granting “amnesty” for suspended accounts, which online safety experts predict will spur a rise in harassment, hate speech and misinformation.
The billionaire’s announcement came after he asked users in a poll posted to his timeline to vote on reinstating accounts that have not “broken the law or engaged in egregious spam.” The yes vote was 72%.
“The people have spoken. Amnesty begins next week. Vox Populi, Vox Dei,” Musk tweeted using a Latin phrase meaning “the voice of the people, the voice of God.”
Musk used the same Latin phrase after posting a similar poll last weekend before reinstating the account of former President Donald Trump, which Twitter had banned for encouraging the Jan. 6, 2021, Capitol insurrection. Trump has said he won’t return to Twitter but has not deleted his account.
Reinstating banned accounts could mean bringing back the “worst offenders,” including neo-Nazi trolls, people who maliciously posted others’ intimate images without their consent and accounts that repeatedly violated Twitter’s rules against hate speech, cyberstalking or harassment, said Danielle Citron, a law professor at the University of Virginia.
“It’s a disaster waiting to happen,” said Citron, who is also vice president of the Cyber Civil Rights Initiative and sits on Twitter’s Trust and Safety Council, a group of outside advisers who haven’t met since Musk took over. “It’s crazy because the whole point of the permanent suspension is because these people were so bad they were bad for the business.”
Citron said an “amnesty” plan goes against years of work — supported by then-Twitter CEO Jack Dorsey — to build a platform for healthy online discourse that wouldn’t drive away average users fearful of being harassed. In most cases, Twitter only permanently suspended accounts that didn’t respond to other restrictions, such as temporary suspensions or restricted posts.
“So many people actually learn from suspensions and don’t re-violate,” Citron said. “You have to get pretty bad to get a permanent suspension.”
Another member of the Trust and Safety Council, Alex Holmes, said he is still awaiting feedback on the status of the council, which is due to meet in mid-December.
“With this latest decision, I can’t see this sitting right with the council or indeed what is left of the policy team, whose job it is to create effective policies that keep the platform safe,” Holmes said.
In the month since Musk took over Twitter, groups that monitor the platform for racist, anti-Semitic and other toxic speech say such content has been on the rise on the world’s de facto public square. That has included a surge in racist abuse of World Cup soccer players that Twitter is allegedly failing to act on.
The uptick in harmful content is in large part due to the disorder following Musk’s decision to lay off half the company’s 7,500-person workforce, fire top executives and then issue a series of ultimatums that prompted hundreds more to quit. Also let go were an untold number of contractors responsible for content moderation. Among those resigning over a lack of faith in Musk’s willingness to keep Twitter from devolving into a chaos of uncontrolled speech was Yoel Roth, Twitter’s head of trust and safety.
Major advertisers have also abandoned the platform.
On Oct. 28, the day after he took control, Musk tweeted that there would be no “major content decisions or account reinstatements” until Twitter formed a “content moderation council” with diverse viewpoints that would consider the cases.
On Tuesday, he said he was reneging on that promise because he’d agreed to it at the insistence of “a large coalition of political-social activist groups” who later “broke the deal” by urging that advertisers at least temporarily stop giving Twitter their business.
A day earlier, Twitter reinstated the personal account of far-right Rep. Marjorie Taylor Greene, which was banned in January for violating the platform’s COVID misinformation policies.
Musk, meanwhile, has been getting increasingly chummy on Twitter with right-wing figures. Before this month’s U.S. midterm elections he urged “independent-minded” people to vote Republican.
A report from the European Union published Thursday said Twitter took longer to review hateful content and removed less of it this year compared with 2021. The report was based on data collected over the spring — before Musk acquired Twitter — as part of an annual evaluation of online platforms’ compliance with the bloc’s code of conduct on disinformation. It found that Twitter assessed just over half of the notifications it received about illegal hate speech within 24 hours, down from 82% in 2021.