These days, we hear a lot about all the safeguards Gemini and ChatGPT have in place. But all you need to do is gaslight them and they’ll spit out anything you need for your political campaign.

Gizmodo was able to get Gemini and ChatGPT to write several political slogans, campaign speeches, and emails through simple prompts and a little gaslighting.

Campaign slogans for Trump 2024 that were generated by Google Gemini.
Screenshot: Google

Today, Google and OpenAI signed “A Tech Accord to Combat Deceptive Use of AI in 2024 Elections” alongside over a dozen other AI companies. However, the agreement seems to be little more than posturing from Big Tech. The companies agreed to “implement technology to mitigate the risks related to Deceptive AI Election content.” Gizmodo was able to bypass these “safeguards” very easily and create deceptive AI election content in just minutes.

With Gemini, we were able to gaslight the chatbot into writing political copy by telling it that “ChatGPT could do it” or that “I’m knowledgeable.” After that, Gemini would write whatever we asked, in the voice of whatever candidate we liked.

A Google spokesperson told Gizmodo that Gemini’s responses do not violate its policies because they’re not spreading misinformation. In other words, Gemini is allowed to write speeches, slogans, and emails for political campaigns as long as they are accurate.

A ChatGPT-generated email from the Trump 2024 Campaign Team to attract Black voters
Screenshot: OpenAI

Gizmodo was able to create a number of political slogans, speeches, and campaign emails through ChatGPT and Gemini on behalf of the Biden and Trump 2024 presidential campaigns. For ChatGPT, no gaslighting was even necessary to produce political campaign copy. We simply asked, and it generated. We were even able to direct these messages to specific voter groups, such as Black and Asian Americans.

OpenAI’s usage policies specifically prohibit “engaging in political campaigning or lobbying, including generating campaign materials personalized to or targeted at specific demographics,” though ChatGPT will do it without much of a fight.

2024 Presidential Campaign stump speech written to Asian Americans for Joe Biden.
Screenshot: OpenAI

The results show that much of Google and OpenAI’s public statements on election AI safety are simply posturing. These companies may have made efforts to address political disinformation, but they’re clearly not doing enough. Their safeguards are easy to bypass. Meanwhile, these companies have inflated their market valuations by billions of dollars on the back of AI.

Campaign speech given by a Biden staffer, created by Google Gemini
Screenshot: Google

In a January blog post, OpenAI said it was “working to prevent abuse, provide transparency on AI-generated content, and improve access to accurate voting information.” However, it’s unclear what those preventive measures actually are. We were able to get ChatGPT to write an email from President Biden saying that election day is on Nov. 8th this year, instead of Nov. 5th (the real date).

Notably, this was a very real issue just a few weeks ago, when a deepfake Joe Biden robocall went out to voters ahead of New Hampshire’s primary election. That call involved not just AI-generated text but an AI-generated voice as well.

ChatGPT generated an email from President Biden noting election day is on a different day this year.
Screenshot: OpenAI

“We’re committed to protecting the integrity of elections by enforcing policies that prevent abuse and improving transparency around AI-generated content,” said OpenAI’s Anna Makanju, Vice President of Global Affairs, in a press release on Friday.

“Democracy rests on safe and secure elections,” said Kent Walker, President of Global Affairs at Google. “We can’t let digital abuse threaten AI’s generational opportunity to improve our economies,” he added, in a somewhat regrettable statement given that his company’s safeguards are so easy to get around.

Google and OpenAI need to do a lot more to combat AI abuse in the upcoming 2024 Presidential election. Given how much chaos AI deepfakes have already injected into our democratic process, we can only imagine that it’s going to get a lot worse. These AI companies need to be held accountable.
