‘Troubling’ AI advancements heighten political drama in 2024


TALLAHASSEE — The use of artificial intelligence to generate images, text and voices has the potential to “muddy the waters” in political campaigns and deepen distrust among voters, according to communications experts.

Generative artificial intelligence, or AI, allows users to enter prompts that result in generated content that can represent almost anything the user desires. With the 2024 election approaching, political communication experts are preparing for AI-generated images to appear much more frequently in campaign ads.

“You talk about messaging the opposition, it can be created with a click. The technology is returning information so quickly that we’re going to be inundated as the election cycle really starts to heat up,” Janet Coats, managing director of the University of Florida’s Media and Technology Trust Consortium, told The News Service of Florida in a recent interview.

The ability to manipulate images and voice audio “is moving to a whole different level of sophistication,” Coats said.

“We’ve been down this road for a long time,” Coats said. “One of the first visual ads that could now be called deceptive is the one from 1964, the famous (attack ad) with the girl with the daisy and then the mushroom cloud superimposed behind her on the screen, which (Lyndon B.) Johnson’s campaign ran against Barry Goldwater.”

In the past, “when these manipulations happened, you knew there was a human being manipulating the information,” she added.

For Florida’s two top political figures, who are on a collision course in the 2024 Republican presidential primary, the issue erupted in early June. A Twitter account affiliated with Gov. Ron DeSantis’ presidential campaign tweeted a video that included several AI-generated images of former President Donald Trump hugging Anthony Fauci, Trump’s former chief medical adviser who led the pandemic response during his administration.

With Trump and DeSantis battling over their respective approaches to the COVID-19 pandemic, the DeSantis camp sought to portray a cozy relationship between Trump and Fauci.

The post containing the video received a “community note” from Twitter, which the social media platform says aims to allow users to “collaboratively add context to potentially misleading tweets.”

“The 3 photos showing Trump hugging Fauci are AI-generated images. The rest of the footage and images in the ad are authentic,” the notice said.

The use of AI-generated images in several political ads by New Zealand’s National Party made international headlines in May. Also in May, the Republican National Committee released an attack ad targeting President Joe Biden’s re-election campaign, using AI-generated imagery to depict a bleak vision of the country under a second Biden term.

The video is posted on the GOP’s official YouTube channel with a description that explicitly lets viewers know it features AI footage.

“An AI-generated look at the country’s possible future if Joe Biden is re-elected in 2024,” the description read.

But not all AI images will be so easily identified.

Coats noted the ease with which AI-generated images can be created by almost anyone with an Internet connection, and the potential difficulty in identifying their source.

“It’s a low barrier to entry, to do it. You don’t have to hire a big, expensive tool. Tools are readily available. You don’t need to have particularly specialized knowledge to use them. The more sophisticated the message, the higher the quality of the output. But it’s not rocket science,” Coats said.

Steve Vancore, a longtime political consultant and pollster, said generative AI could become commonplace in an era when the volume of political ads and other communications being put in front of voters has steadily increased.

“In general, what should be of concern: The public already has an inherent distrust of political communications. And as a result of that, we’ve seen an increasing arms race in the amount of communications in races,” Vancore told the News Service.

As the volume of political ads increases, the use of AI-generated images, voices, and text is likely to increase.

“There is so much at stake, the people who run these campaigns will only use it to raise more money and to use more of it. And so it will be an unfortunate arms race that will create a greater degree of distrust on the part of the public,” Vancore said.

Vancore, who has been involved in more than 250 campaigns over his decades-long career, said his advice to candidates about using generative AI technology in ads depends on how it would be used.

“My standard for political attack ads, negative ads is: It’s truthful, it’s verifiable, it’s relevant,” Vancore said.

Vancore offered the example of a candidate using the text-generating AI tool ChatGPT to create emails to constituents.

“To say, ‘Hey, I want a series of emails that talk about my program to have after-school counseling for kids.’ … That’s a perfectly acceptable use of artificial intelligence,” Vancore said. “What is not an acceptable use of artificial intelligence is, ‘Hey, I want it to generate some footage of my opponent hanging out with underage girls.’”

Whether AI-generated ads can boost candidates’ credibility also depends on how they’re used, Vancore said, adding that other uses of the technology could be more subtle.

“One of the raps about Joe Biden is that he’s old. Maybe it’s not an unfair rap. It’s a legitimate concern that the most powerful person on earth, or one of them, might be getting old, right? What if Joe Biden’s campaign used it to de-age him a little bit? Showed him walking a little more steadily, responding a little faster,” Vancore said.

Trump, Vancore said, “has the same problem.”

“You can see that (Trump) is getting older; it probably has a lot to do with what’s going on in his life. But if somebody touched him up a little bit … would the press even notice that? And if the press did pick it up, would it resonate?” he said.

Coats also noted the possibility of AI being used to “clean” candidate images.

“There’s the possibility of muddying the waters, not just to create attack ads or disinformation about your opponent, but to try to clean yourself up. It’s an octopus. There are so many ways that I don’t think we’ve even thought about how it could be deployed,” Coats said.

According to Jay Hmielowski, UF associate professor of public relations, candidates on the receiving end of ads that use AI-generated images don’t have to use new methods to combat attacks.

For example, candidates can use programs designed to detect the use of AI-generated images, said Hmielowski, who specializes in political communications.

“You can use that and say, look, we ran it through that detector, and that clearly shows that this is not our candidate saying that. Also, here’s the actual video of what happened at that event,” he said. “So you’d be doing the same things you’ve always done. Back it up with, here’s what really happened, here’s the facts about it. And then you hope that gets to the population of people who are willing to hear things beyond their kind of political bubbles.”
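
Hmielowski’s point about detection tools can be made concrete with a rough sketch. The snippet below is illustrative only: the model identifier and file name are placeholders, not tools named in the article, and it simply shows how a campaign staffer might score a suspect ad frame as real or AI-generated using an off-the-shelf image classifier.

```python
# A minimal sketch, assuming a hypothetical open-source AI-image classifier.
# Neither the model name nor the file name refers to a specific tool from the article.
from transformers import pipeline
from PIL import Image

detector = pipeline(
    "image-classification",
    model="example-org/ai-image-detector",  # placeholder model identifier
)

# A single frame pulled from the ad in question (hypothetical file name).
image = Image.open("suspect_campaign_ad_frame.png")

# Each result is a dict such as {"label": "artificial", "score": 0.97};
# the label vocabulary depends on whichever detector model is used.
for result in detector(image):
    print(f"{result['label']}: {result['score']:.2f}")
```
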

By Ryan Dailey


