To safeguard democracy, political ads should disclose the use of AI

U.S. Rep. Yvette Clarke (D-N.Y.) recently proposed a bill that would require disclosure of the use of artificial intelligence (AI) in the creation of political ads. This is a timely bill that should garner bipartisan support and help safeguard our democracy. Recent advances in language modeling, exemplified by the popularity of ChatGPT, and in image generation, exemplified by DALL-E 2 and Midjourney, make it much easier to create text or images that are intentionally misleading or false. In fact, people are already using these technologies to spread fake political news in the U.S. and abroad. In early April, several fake AI-generated photos of President Trump circulated online, and later that month the Republican National Committee (RNC) responded to President Biden’s re-election campaign with an AI-generated ad. In May, there were accusations that artificial intelligence had been used in misleading political ads ahead of Turkey’s elections.

Why is this time different?

Fake news and manipulated photos are not new phenomena. A well-known technique in fashion magazines is to digitally alter, or “touch up,” the appearance of a celebrity on the cover. The goal is to boost magazine sales and perhaps give the celebrity some extra publicity as well. Elections are a different matter, with far more consequential results.

Allegations of fake news were rampant during the 2016 and 2020 U.S. elections. Perhaps as a result, distrust of the media has increased. According to Pew Research, Republican voters have experienced a particularly large drop in confidence in news organizations. Although some of this decline may be due to politicians branding the media as “fake news” (whether the coverage is actually false or not), some is also due to exposure to genuinely fake news.

Declining trust in the news is worrying. As former President Obama remarked recently, “over time, we lose our ability to distinguish between reality, opinion and wholesale fiction. Or maybe we just stop caring.” Declining trust in news coincides with the arrival of more sophisticated AI tools that can create increasingly realistic content. So-called “deepfakes” are digitally altered photos or videos that replace one person’s likeness with another’s. These fakes have become much harder to detect, and the tools to create them require little expertise, which means the barrier to entry is low for anyone who wants to produce a large volume of manipulated or AI-generated content. Without regulation, this problem seems destined to get worse before it gets better.

How disclosure can help

One solution is to require disclosure of the use of AI in political ads. In fact, the RNC did just that by including the disclaimer “built entirely with AI imagery” in its ad, suggesting there may be bipartisan support for a bill on the disclosure of AI use. Disclosing the use of AI should not be costly for advertisers. The technology to tag AI-created content already exists, according to Hany Farid, a computer scientist at the University of California, Berkeley. In March, Google said it would include watermarks in images created by its AI models.

One question that will need to be addressed is: what counts as “AI”? The current text of Rep. Clarke’s proposed bill sidesteps this issue, describing it as “the use of artificial intelligence (generative AI).” Generative AI typically involves image-generation models or large language models, but there is still no widely agreed-upon definition. We hope that the implementation of the bill can be accompanied by the development of a more precise definition of generative AI. In the words of Julia Stoyanovich of New York University, “Until you try to regulate [artificial intelligence], you won’t learn how.” In other words, we have to start somewhere. Given the speed at which AI technology is advancing and the rate at which trust in the news is declining, it’s important to move quickly.

Will disclosure of AI use in political ads matter to voters? It’s too early to tell. But at the very least, voters will have information about how an ad was created and can use that information to evaluate the ad for themselves. The U.S. Federal Election Commission (FEC) already requires certain disclaimers in political ads, though not yet on the use of AI, and manages enforcement through audits and complaint investigations.

The fact that the United States already requires various disclosures about political ads suggests that, as a country, we believe providing additional information to voters is an important safeguard for our democracy. Requiring disclosure of AI use would simply align those requirements with what is now technologically possible.


