Meta made a quiet policy update last year allowing advertisers on Facebook and Instagram to say the 2020 election was rigged. The company has made one thing clear: you can’t use their ad systems to question the legitimacy of the American election system — unless you have a credit card, in which case they’re happy to help. It’s a change that mirrors shifts across the rest of Silicon Valley.
Until recently, Meta’s policy banned ads that claimed widespread voter fraud or “delegitimized” an election by calling it illegal or corrupt. In August 2022, the company made a subtle change that went largely unnoticed, narrowing its policy to only cover an “upcoming or ongoing election.” Unnamed sources at Meta told the Wall Street Journal that executives made the decision to allow lies about prior elections “based on free-speech considerations.”
In other words, Meta will help you scream “Joe Biden stole the last election,” as long as you don’t add “so he’ll probably do it again,” as if that isn’t the obvious conclusion.
The rest of the tech business has made similar changes. In June, Google announced that misinformation about past elections doesn’t violate YouTube’s misinformation policy, a move that’s intended to promote “open discussion and debate.” Advertisers still aren’t allowed to make false claims that undermine the electoral process, but Google profits indirectly as users come to YouTube to watch content about the Big Lie and see other ads on the platform. A few months later, Elon Musk disabled the option to report misinformation altogether on the platform formerly known as Twitter.
In the lead-up to the 2020 election, the big tech platforms made a show of how worried they all were about misinformation. Mark Zuckerberg gave speeches about fake news and took us inside Facebook’s election “war room.” Google blocked microtargeting on political ads and later shut off political ads altogether. Twitter’s Jack Dorsey announced he’d been wrong about content moderation and added labels to the lies on his website. Well, now the show’s over. Silicon Valley decided that a little election denialism is ok. Why not make a few bucks along the way?
Over the past ten years, the world realized and then quickly forgot a simple truth: Google, Meta, Twitter, and the rest of the tech industry built a giant machine that makes it easy to manipulate hundreds of millions of people at a time. For a while, the public was getting on the same page about whether or not the people who run that machine are responsible if someone uses it to end democracy. A years-long PR campaign has changed that attitude.
Now, more and more people seem to believe that misinformation is the sad, inevitable symptom of a broken society, not the result of giant corporations actively spoon-feeding lies to the public every single day.
“Meta has fired its Election Integrity and Safety Teams and allowed the violent January 6th insurrection to be organized on its platforms. We now know that Mark Zuckerberg and Meta will lie to Congress, endanger the American people, and continually threaten the future of our democracy,” said Kyle Morse, Deputy Executive Director of the Tech Oversight Project, in a press release. “Congress and the Administration need to act now to ensure that Meta, TikTok, Google, X, Rumble, and other social media platforms are not actively aiding and abetting foreign and domestic actors who are openly undermining our democracy and social fabric.”
“The change YouTube announced earlier this year does not apply to our ads policies,” said Google spokesperson Michael Aciman. “Advertisers must continue to follow our ads policies, which prohibit making claims that are demonstrably false and could significantly undermine participation or trust in an electoral or democratic process—for example, information about 2020 US presidential election results that contradicts official government records.”
Aciman said that YouTube doesn’t run ads on content that promotes demonstrably false information that could destabilize elections, and that such videos are ineligible for monetization under company policy.
A Meta spokesperson declined to comment but pointed to a blog post about the company’s election policies from 2022. Twitter did not respond to a request for comment.
This is America. The Constitution guarantees your right to tell lies, and countless people died to protect it. But it doesn’t say tech platforms should make a profit on those lies.
When tech companies decide what kind of content is allowed on their platforms, they themselves are exercising their free speech rights. In August, Donald Trump ran 25 ads on Facebook with a video in which he said “We won in 2016. We had a rigged election in 2020 but got more votes than any sitting president.” Meta accepted thousands of dollars for those ads, and then delivered them to over 400,000 people, most over the age of 65. Promoting ads like this is a political statement: some lies are so bad you shouldn’t hear them, but other lies are ok, and if you pay us, we’ll spread them for you.
Update, 10:15 PM: This article has been updated with additional comments from Google.
Correction, 4:27 PM: A previous version of this story mistakenly said Google changed its policy to allow misinformation in ads. That policy change only applies to regular videos on the platform, not ads.