Artificial intelligence enters the political arena


In April 2023, after much speculation, President Biden officially launched his re-election campaign with a video announcement. The same day, the Republican National Committee (RNC) responded with its own thirty-second announcement, which predicted that four more years under President Biden would bring more crime, open borders, war with China, and economic collapse. At first glance it looks like a run-of-the-mill political attack ad, but in reality it is the first national campaign ad made up entirely of images generated by artificial intelligence (AI). And while the RNC has been transparent about its use of AI, it has nevertheless dragged the electorate into a new era of political advertising, with few guardrails and serious potential implications for misinformation and disinformation.

In their 2018 Foreign Affairs article, “Deepfakes and the New Disinformation War,” Robert Chesney and Danielle Citron predicted that social media’s “information cascades,” declining trust in traditional media, and the growing credibility of deepfakes would create a perfect storm for spreading misinformation and disinformation. Their predictions have already begun to come true. In January, a deepfake video circulated on Twitter that appeared to show President Biden announcing that he had reintroduced the draft and would send Americans to fight in Ukraine. The clip initially carried a caption describing it as an AI “imagination,” but it quickly lost the disclaimer as it circulated, showing how easily even transparently labeled AI content can become disinformation.


Although Chesney and Citron focused on the geopolitical threats of deepfakes and large learning models in the hands of Russia or terrorist organizations, it is not difficult to imagine how these same elements could go off the rails in political advertising. Even without AI-generated images, there has been a bit of a race to the bottom to produce the most provocative campaign ads. This is not the first use of digitally enhanced images in campaign ads either. In 2015, researchers found that the McCain campaign used images of then-candidate Barack Obama in attack ads that “appear to have been manipulated and/or selected in a way that produces a darker complexion for Obama.”


As we’ve discussed in previous articles, these emerging technologies are likely to be used most effectively against vulnerable populations, such as women, people of color, and members of the LGBTQI+ community who are running for office. In a study of the 2020 congressional election cycle, a report from the Center for Democracy and Technology found that women candidates of color were twice as likely to be targeted by online misinformation and disinformation campaigns. In India, deepfake technology has been weaponized against women politicians and journalists, many of whom reported that their photos have been placed into pornographic images and videos and distributed across the internet. AI-generated imagery and deepfakes in political ads could easily be used to sexualize female politicians, opinion makers, and other leaders, which research has shown can undermine women’s credibility in campaigns.

There is also the risk of what Citron and Chesney termed the “liar’s dividend.” Increasingly realistic fake videos, audio, and photos could allow politicians to avoid responsibility for any problematic recording by claiming that it should have been obvious to viewers all along that such materials were generated by AI or were a deepfake. In an age when politicians can already shirk responsibility because of negative partisanship, the addition of the liar’s dividend could provide the ultimate “get out of jail free” card.

Social media platforms have begun implementing new policies to address AI-generated content and deepfakes, but have struggled to integrate these rules with existing policies on political content. Meta has banned deepfakes on its platforms, but stands firm in its policy of not fact-checking politicians. TikTok has banned deepfakes of all private figures, but only bans deepfakes of public figures if they endorse products or violate other terms of the app (such as promoting hate speech). Deepfakes of public figures for “artistic or educational content” purposes, however, are allowed.

In response to the RNC ad, Rep. Yvette Clarke of New York introduced the “REAL Political Ads Act,” which would require disclosures for any use of AI-generated content in political ads. Meanwhile, the Biden administration hosted tech CEOs at the White House earlier this month and released an action plan to “foster responsible innovation in AI.” Last week, the Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing on possible oversight of AI technology. Although many have lamented that there has not been a larger government response to regulate potential AI threats more broadly, with another election cycle already underway, AI’s foray into politicians’ backyards could light a needed fire.


Alexandra Dent, Research Associate at the Council on Foreign Relations, contributed to the development of this blog post.


