AI experts and tech-inclined political scientists are sounding the alarm about the unregulated use of AI tools in an election season.
Not only can generative AI rapidly produce targeted campaign emails, texts or videos, it can also be used to deceive voters, impersonate candidates and undermine elections at a scale and speed never seen before.
A voting booth stands ready for a voter, Feb. 24, 2020, at City Hall in Cambridge, Massachusetts, on the first morning of early voting in the state. (AP Photo/Elise Amendola)
“We’re not ready for this,” warned AJ Nash, vice president of intelligence at cybersecurity firm ZeroFox. “For me, the big leap forward is the audio and video capabilities that have come out. When you can do that at scale and distribute it across social platforms, well, it’s going to have a big impact.”
Among AI’s many capabilities are some with significant ramifications for elections and voting: robocall messages in a candidate’s voice telling voters to cast ballots on the wrong date; audio recordings of a candidate allegedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.
“What if Elon Musk calls you personally and tells you to vote for a certain candidate?” said Oren Etzioni, founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen, but it’s not him.”
Petko Stoyanov, global chief technology officer at Austin, Texas-based cybersecurity firm Forcepoint, has predicted that groups seeking to interfere with American democracy will use AI and synthetic media to erode trust.
“What if an international entity — a cybercriminal or a nation state — impersonates someone? What’s the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”
AI-generated political disinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children allegedly learning Satanism in libraries.
Panelists discuss artificial intelligence at the Milken Institute Global Conference. (Milken Institute)
AI-generated images appearing to show Trump’s mug shot also fooled some social media users, although the former president did not take one when he was booked and arraigned in a Manhattan criminal court for falsifying business records. Other AI-generated footage showed Trump resisting arrest, though its creator quickly acknowledged its origin.
Rep. Yvette Clarke, D-N.Y., has introduced legislation that would require candidates to label campaign ads created with AI. Clarke has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating as much.
Some states have offered their own proposals to address concerns about deepfakes.
Clarke said her biggest fear is that generative artificial intelligence could be used before the 2024 election to create video or audio content that incites violence and turns Americans against each other.
“It’s important that we keep up with technology,” Clarke told The Associated Press. “We need to put up some guardrails. People can be misled, and it only takes a split second. People are busy with their lives and don’t have time to check every piece of information. In a political season, it could be extremely disruptive.”
The Associated Press contributed to this report.
Bradford Betz is a reporter for Fox News Digital who covers crime, political issues and more.