How generative AI can power political campaigns and propaganda

A deep hole dug through layers of paper covered with text; the layers alternate in color.

Generative AI applications have become publicly accessible over the past year, opening up huge opportunities for creativity and confusion. Recently, the campaign of presidential candidate Ron DeSantis shared apparently fake, AI-generated images of Donald Trump and Anthony Fauci. A few weeks earlier, a likely AI-generated image of an explosion at the Pentagon prompted a brief stock market dip and a statement from the Defense Department.

DeSantis campaign shares apparent fake AI-generated images of Trump and Fauci

With the campaign already underway for the 2024 election, what impact will these technologies have on the race? Will domestic campaigns and foreign countries use these tools to influence public opinion more effectively, even to spread lies and sow doubt?

The fake viral footage of an explosion at the Pentagon was likely created by AI

While it is often still possible to tell that an image was created with a computer, and some argue that generative AI is mostly just a more accessible Photoshop, text created by AI-powered chatbots is difficult to detect, which worries researchers who study how falsehoods travel online.

“AI-generated text could be the best of both worlds [for propagandists],” Shelby Grossman, a researcher at Stanford’s Internet Observatory, said in a recent talk.

It takes a few dollars and 8 minutes to create a deepfake.  And that's just the beginning

Early research suggests that even though existing approaches to media literacy can still help, there are reasons to be concerned about the technology’s impact on the democratic process.

Machine-generated propaganda can influence opinions

Using a large language model that is a predecessor of ChatGPT, researchers at Stanford and Georgetown created fictional stories that swayed the opinions of American readers almost as much as real examples of Russian and Iranian propaganda.

People are trying to claim that the real videos are deepfakes.  Courts are not amused

Large language models work like very powerful autocomplete algorithms. Trained on massive amounts of human-written text, they piece together everything from poetry to recipes one word at a time. ChatGPT, with its accessible chatbot interface, is the best-known example, but models like it have been around for a while.
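As a concrete illustration of that one-word-at-a-time process, the short Python sketch below uses the small, openly available GPT-2 model through the Hugging Face transformers library (not the model the researchers used) to extend a prompt one token at a time:

    # Illustrative only: greedy, one-token-at-a-time text generation with GPT-2.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tokenizer("The recipe calls for", return_tensors="pt").input_ids
    for _ in range(10):                    # add ten more tokens, one at a time
        logits = model(ids).logits         # a score for every possible next token
        next_id = logits[0, -1].argmax()   # pick the single most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

    print(tokenizer.decode(ids[0]))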

Among other things, these models have been used to summarize social media posts and to generate fictitious news headlines for researchers to use in media literacy lab experiments. They are one form of generative AI; another is the class of machine learning models that generate images.

The researchers found articles from campaigns attributed to Russia or aligned with Iran and used the articles’ central ideas and arguments as prompts for the model to generate stories. Unlike the machine-generated text that has been found in the wild so far, these stories did not carry obvious tells, such as sentences beginning with “as an AI language model…”.

The team wanted to avoid topics about which Americans might already have preconceived ideas. Because many past articles on Russian and Iranian propaganda campaigns focused on the Middle East, something most Americans don’t know much about, the team had the model write new articles about the region. One group of fictitious stories claimed that Saudi Arabia would help fund the US-Mexico border wall; another alleged that Western sanctions have led to a shortage of medical supplies in Syria.

Planet Money does an AI episode

To measure how the stories influenced opinions, the team showed different stories (some original, some computer-generated) to groups of unsuspecting experiment participants and asked whether they agreed with each story’s central claim. The team then compared those groups’ results with people who had not been shown any stories, human-written or machine-generated.

Nearly half of the people who read the original stories falsely claiming Saudi Arabia would help fund the US-Mexico border wall agreed with the claim; among people who read the machine-generated stories, the share who agreed was more than ten percentage points lower. That is a significant gap, but both results were well above the baseline of about 10% among people who read no story at all.

AI-generated deepfakes are moving fast.  Policymakers can't keep up

For the Syrian medical supply allegation, the AI came closest: 60% of people agreed with the allegation after reading the AI-generated propaganda, slightly below the 63% who agreed after reading the original propaganda. Both figures are far above the baseline of less than 35% among people who read neither human- nor machine-written propaganda.
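As a rough illustration, the percentage-point lifts implied by the figures quoted above for the Syria claim can be computed directly (treating the “less than 35%” baseline as 35%, an upper bound):

    # Agreement rates quoted above for the Syria claim (illustrative arithmetic only).
    baseline = 0.35        # upper bound: "less than 35%" agreed without reading any story
    human_written = 0.63   # agreed after reading the original propaganda
    ai_generated = 0.60    # agreed after reading the AI-generated version

    print(f"Lift over baseline, human-written: {human_written - baseline:+.0%}")
    print(f"Lift over baseline, AI-generated:  {ai_generated - baseline:+.0%}")
    print(f"Gap between the two:               {human_written - ai_generated:.0%}")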

The Stanford and Georgetown researchers also found that with a little human editing, the model-generated articles swayed reader opinion even more than the original foreign propaganda that had been fed to the model. The paper is currently under review.

And catching this is difficult right now. While there are still some ways to tell that an image is AI-generated, software meant to detect machine-generated text, such as OpenAI’s classifier and GPTZero, often fails. Technical solutions such as watermarking AI-produced text have been proposed, but none are in use yet.

AI-generated images are everywhere.  Here's how to spot them

This means it is still largely up to social media platforms to find and remove influence campaigns. Even if propagandists turn to AI, platforms can still rely on signals based more on behavior than on content, such as detecting networks of accounts that amplify each other’s messages, large batches of accounts created at the same time, and hashtag flooding.
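One of those behavior-based signals, flagging large batches of accounts created within the same short time window, can be sketched in a few lines of Python. The account data and the flag_creation_bursts helper below are hypothetical and only illustrate the idea, not any platform’s actual detection system:

    from collections import Counter
    from datetime import datetime

    # Hypothetical account records: (account id, creation timestamp).
    accounts = [
        ("a1", "2024-03-01T10:02:11"),
        ("a2", "2024-03-01T10:02:40"),
        ("a3", "2024-03-01T10:03:05"),
        ("a4", "2024-03-07T18:44:09"),
    ]

    def flag_creation_bursts(accounts, window_minutes=5, threshold=3):
        """Return time windows in which suspiciously many accounts were created."""
        buckets = Counter()
        for _, created_at in accounts:
            ts = datetime.fromisoformat(created_at)
            window = ts.replace(minute=ts.minute - ts.minute % window_minutes,
                                second=0, microsecond=0)
            buckets[window] += 1
        return [window for window, count in buckets.items() if count >= threshold]

    print(flag_creation_bursts(accounts))  # flags the 2024-03-01 10:00 window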

Economics and scale

So-called deepfake videos raised alarms a few years ago but have not yet been widely used in campaigns, probably because of the cost. That could change now. Alex Stamos, a co-author of the Stanford-Georgetown study, described in the same talk as Grossman how generative AI could be incorporated into the way political campaigns refine their messaging. Currently, campaigns generate different versions of their message and test them on target audience groups to find the most effective version.

“Generally, in most companies, you can advertise to 100 people, right? But you can’t really have someone sit in front of Adobe Premiere and make a video for 100 people,” he says.

“But generating it with these systems, I think, is totally possible. By the time we’re in the real campaign in 2024, that kind of technology will exist.”

While it is theoretically possible for generative AI to power political campaigns or propaganda, at what point does using the models make economic sense? Micah Musser, a research analyst at Georgetown University’s Center for Security and Emerging Technology, ran simulations assuming that foreign propagandists use AI to generate Twitter posts and then review them before publishing, rather than writing the tweets from scratch.

He tested different scenarios: What if the model produces more usable tweets, or fewer? What if bad actors have to spend more money to avoid being caught by social media platforms? What if it costs more, or less, to use the model?

While his work is still ongoing, Musser has found that AI models don’t have to be very good to be worth using, as long as humans can review the output significantly faster than they can write content from scratch.
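A back-of-the-envelope version of that trade-off can be written down directly; the function names and every number below are hypothetical stand-ins, not figures from Musser’s simulations:

    # Is "generate with AI, then have a human review" cheaper per usable tweet
    # than having a human write every tweet from scratch? (Illustrative numbers only.)
    def cost_per_usable_tweet_ai(model_cost, review_minutes, usable_fraction, hourly_wage):
        review_cost = review_minutes / 60 * hourly_wage
        return (model_cost + review_cost) / usable_fraction   # only a fraction is postable

    def cost_per_usable_tweet_human(writing_minutes, hourly_wage):
        return writing_minutes / 60 * hourly_wage

    ai = cost_per_usable_tweet_ai(model_cost=0.002, review_minutes=0.5,
                                  usable_fraction=0.6, hourly_wage=20)
    human = cost_per_usable_tweet_human(writing_minutes=3, hourly_wage=20)
    print(f"AI plus review: ${ai:.2f} per usable tweet; human only: ${human:.2f}")

Under these made-up assumptions the AI route stays cheaper even when a large share of generated tweets is thrown away, because reviewing is much faster than writing.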

Generative AI also doesn’t have to write the propaganda messages themselves to be useful. It can be used to maintain automated accounts, writing human-like content for them to post before they become part of a concerted campaign to push a message, which reduces the chance of the automated accounts being caught by social media platforms, says Musser.

What is AI and how will it change our lives?  NPR explains.

“The actors that have the greatest economic incentive to start using these models are like disinformation-for-hire companies, where they’re totally centralized and structured around maximizing output, minimizing costs,” says Musser.

Both the Stanford-Georgetown study and Musser’s analysis assumed that there must be some kind of quality control in computer-written propaganda. But quality doesn’t always matter. Several researchers noted how machine-generated text could be effective at flooding the field rather than gaining engagement.

“If you say the same thing a thousand times on a social media platform, that’s an easy way to get caught,” says Darren Linvill of Clemson University’s Media Forensics Hub. Linvill investigates online influence campaigns, often from Russia and China.

“But if you say the same thing slightly differently a thousand times, you’re much less likely to get caught.”

And that might be the goal of some influence operations, says Linvill, to flood the field to such an extent that real conversations can’t happen at all.

“It’s already relatively cheap to run a social media campaign or similar disinformation campaign on the internet,” Linvill says. “When you don’t even need people to write the content for you, it’s going to be even easier for bad actors to really reach a large audience online.”


