AI presents a political danger for 2024, with the threat of misleading voters


WASHINGTON — Computer engineers and tech-inclined political scientists have warned for years that cheap and powerful artificial intelligence tools would soon allow anyone to create fake images, videos and audio that were realistic enough to deceive voters and perhaps influence an election.

The synthetic images that emerged were often crude, unconvincing and expensive to produce, especially when other types of disinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.



A booth stands ready for a voter on February 24, 2020 at City Hall in Cambridge, Massachusetts, on the first morning of early voting in the state. Thanks to recent advances in artificial intelligence, tools that can create realistic photos, videos, and audio are now affordable and readily available.


Elise Amendola, Associated Press

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, videos and audio in seconds, at minimal cost. When linked to powerful social media algorithms, this fake, digitally created content can spread far and fast and target very specific audiences, potentially taking dirty campaign tricks to a new level.


The implications for campaigns and the 2024 election are as great as they are troubling: Not only can generative AI rapidly produce targeted campaign emails, texts or videos, but it can also be used to deceive voters, impersonate candidates and undermine elections at a scale and speed never seen before.

“We’re not ready for this,” warned AJ Nash, vice president of intelligence at cybersecurity firm ZeroFox. “For me, the big leap forward is the audio and video capabilities that have emerged. When you can do that at scale and distribute it across social platforms, well, it’s going to have a big impact.”

AI experts can quickly list a number of alarming scenarios in which generative AI is used to create synthetic media aimed at confusing voters, slandering a candidate, or even inciting violence.

Among them: automated robocall messages, in a candidate’s voice, telling voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; and fake images designed to look like local news reports, falsely claiming a candidate dropped out of the race.

“What if Elon Musk calls you personally and tells you to vote for a certain candidate?” said Oren Etzioni, founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.”

Former President Donald Trump, who is running in 2024, has been sharing AI-generated content with his followers on social media. A doctored video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to last week’s CNN town hall with Trump, was created using an AI voice-cloning tool.


A dystopian campaign ad released last month by the Republican National Committee offers another glimpse into this digitally manipulated future. The online ad, which came after President Joe Biden announced his re-election campaign, begins with a strange, slightly distorted image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, an Austin, Texas-based cybersecurity firm. Stoyanov predicted that groups seeking to interfere with American democracy will use AI and synthetic media as a way to erode trust.

AI-generated political disinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children allegedly learning Satanism in libraries.

AI-generated images appearing to show Trump’s mug shot also fooled some social media users, though the former president did not take one when he was booked and arraigned in a Manhattan criminal court on charges of falsifying business records. Other AI-generated footage showed Trump resisting arrest, though its creator quickly acknowledged its origin.




Rep. Yvette Clarke, D-N.Y., flanked by Rep. Robin Kelly, D-Ill., left, and Rep. Mike Doyle, D-Pa., holds a news conference Nov. 4, 2021, at the Capitol in Washington. Clarke has introduced legislation that would require candidates to label campaign ads created with artificial intelligence.


J. Scott Applewhite, Associated Press

Legislation that would require candidates to label campaign ads created with AI has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact.

Some states have offered their own proposals to address concerns about deepfakes.

Clarke said her biggest fear is that generative artificial intelligence could be used before the 2024 election to create a video or audio clip that incites violence and turns Americans against each other.

“It’s important that we keep up with technology,” Clarke said. “We need to put up some guardrails. People can be fooled and it only takes a split second. People are busy with their lives and don’t have time to check every piece of information. If AI is used as a weapon, in a political season, it could be extremely disruptive.”



