AI is making politics easier, cheaper and more dangerous


WASHINGTON — It’s a jarring political ad: Images of a Chinese attack on Taiwan lead to scenes of ransacked banks and armed soldiers enforcing martial law in San Francisco. A narrator hints that it’s all happening under President Joe Biden’s watch.

These images from the Republican National Committee ad are not real and the scenarios are quite obviously fictitious. But thanks to the work of artificial intelligence, the images look like real life. Just days after the ad was posted online in April, Rep. Yvette Clarke, D-New York, introduced legislation to require disclosure of AI-generated content in political ads.

“This is going too far,” she said in an interview. Small type on the RNC ad notes that it was “Built entirely with AI imagery.” Clarke’s bill is going nowhere in the Republican-controlled House, but it illustrates how rapidly advancing artificial intelligence has caught Washington flat-footed.

Voters in the United States and around the world are already inundated with AI-generated political content. Click on an email asking for donations, for example, and you might be reading a message written by a so-called large language model, political consultants say. That is the technology behind ChatGPT, the wildly popular chatbot from the startup OpenAI. Politicians are also increasingly using artificial intelligence to speed up mundane but critical tasks such as analyzing census data, assembling mailing lists and even writing speeches.

As in many industries, AI is poised to increase the productivity of political workers and likely eliminate more than a few of their jobs. It’s hard to say how many, but the business of politics is full of the kinds of roles that researchers believe are most vulnerable to disruption by generative AI, such as legal professionals and administrative workers.

But even more ominously, AI has the potential to empower the spread of disinformation in political campaigns. The technology is capable of rapidly creating so-called “deepfakes,” fake images and videos that some political operatives predict will soon be indistinguishable from the real thing, allowing miscreants to literally put words in their opponents’ mouths.

Deepfakes have plagued politics for years, but with AI, clever editing skills are no longer required to create them.

At its best, AI could improve political communications. For example, cash-strapped start-up campaigns could use technology to produce campaign materials cheaply with fewer staff. Some political consultants who traditionally work only with presidential and Senate campaigns are making plans to work with smaller campaigns that use AI to provide more services at a lower price.

And the tech industry is trying to combat deepfakes. Companies including Microsoft Corp. have pledged to embed digital watermarks in images created with their AI tools to mark them as machine-generated.

“Knife fight”

In June, Florida Gov. Ron DeSantis’ presidential campaign ran an online ad featuring AI-generated images of then-President Donald Trump hugging and kissing Anthony Fauci. The former director of the National Institute of Allergy and Infectious Diseases is a pariah among Republicans for his public health recommendations during the pandemic.

A fact-check note was attached to the DeSantis campaign’s tweet saying the images, mixed between actual images and videos of Trump, were created by artificial intelligence. The DeSantis campaign did not initially identify them as fake.

In Germany, a far-right party recently distributed AI-generated images of angry immigrants without telling viewers they were not real photographs. That post, too, was flagged on Twitter, but the episode shows how quickly the technology is being adopted for political messaging, and the risks inherent in it, said Juri Schnoller, CEO of Cosmonauts & Kings, a German political communications firm.

“AI can save or destroy democracy. It’s like a knife, right? You can kill somebody, or you can make the best dinner,” Schnoller said.

Add Russian and Chinese disinformation mills to the mix, and the concerns become even more acute, disinformation experts say. Trolls and hackers from those nations are already spreading propaganda within their own borders and in countries around the world.

Graphika, a US-based disinformation monitoring firm, in February found a pro-China influence operation spreading AI-generated videos of fake news anchors promoting the interests of the Chinese Communist Party.

Rob Joyce, director of cybersecurity at the National Security Agency, said both state actors and cybercriminals have begun experimenting with generating ChatGPT-like text to trick people online.

“That Russian hacker who doesn’t speak English well is no longer going to create a crap email to your employees,” Joyce said earlier this year. “It will be native English, it will make sense, it will pass the sniff test.”

In March, an anonymous Twitter user posted an altered video that went viral, purporting to show Biden verbally attacking transgender people. Another, widely circulated by a US expert, appeared to show Biden ordering a nuclear attack on Russia and sending troops to Ukraine.

Falling behind

Washington is bad at keeping up with emerging technology, let alone regulating it. Despite broadly agreeing that Big Tech is too powerful, the two parties have been unable to pass comprehensive legislation to control the industry for years. Between 2021 and 2022, Congress held more than 150 hearings on technology, with little to show for it.

In June, there was a Senate briefing called “What is AI?”

The United States has no federal privacy law and has not updated antitrust laws to account for the growing concentration of the technology industry. Lawmakers have been unable to agree on whether, or how, to regulate online speech.

Last month, the Federal Election Commission deadlocked 3-3 on a request to develop rules for AI-generated political ads. Republicans on the panel, which is evenly split between parties and routinely deadlocks on controversial issues, said the agency lacked explicit authority for the regulations.

Other countries are moving forward with regulation, spurred by the ChatGPT craze. On June 14, the European Parliament voted to restrict the most worrisome uses of the nascent technology, such as biometric surveillance, in which artificial intelligence identifies people by their faces or bodies. The law, still under debate, could also require companies to disclose more about the datasets used to train chatbots.

European officials are separately lobbying companies such as Alphabet Inc.’s Google and Meta Platforms Inc. to label AI-generated content and images to help combat disinformation from adversaries like Russia.

Chinese regulators are aggressively imposing new rules on tech companies to ensure the Communist Party’s control over AI and the information it makes available in the country. Every AI model must be submitted for government review before being introduced to the market, and synthetically generated content must carry “visible labels,” according to a paper published this week by the Carnegie Endowment for International Peace.

Cheaper campaigns

At best, AI could make U.S. political campaigns “much cheaper,” said Martin Kurucz, the chief executive of Sterling Data Company, which works with Democrats.

The technology is already being used to help write first drafts of speeches and op-eds, create ads, craft lobbying campaigns and more, according to lobbyists, congressional and campaign staff and political consultants. Art generators like Midjourney, an AI program that generates hyper-realistic images based on text prompts, have the potential to increase productivity or even replace the work of creative teams that can cost thousands of dollars.

While the RNC has already released an attack ad made with generative artificial intelligence, the Democratic National Committee is still experimenting with the technology. A spokesperson said the committee has sent AI-generated fundraising emails and is considering how to expand its use of AI in the future.

On Capitol Hill, the House Chief Administrative Officer’s Office of Digital Services in April awarded 40 licenses for ChatGPT Plus, which House offices have used to help write emails, investigative reports and even draft legislation. Writing full bills is still too complicated a task for generative AI. Last month, the House adopted new rules restricting the use of ChatGPT in Congress, clarifying that staff cannot enter confidential information into the chatbot.

There are some signs that lawmakers are taking the threat of AI more seriously than earlier technologies that were poised to upend politics.

After it became clear that social media would play a vital role in politics, for example, lawmakers let a decade pass before calling Mark Zuckerberg to testify at a hearing.

OpenAI CEO Sam Altman testified on the Hill in May, less than a year after ChatGPT opened to the public. He told lawmakers that his industry desperately needs regulation and that he is concerned about the nefarious uses of artificial intelligence.

“Won’t know the truth”

OpenAI has noticed an increase in the use of ChatGPT for political purposes, an OpenAI spokesman said, and has tried to preempt concerns that its product could be used to deceive voters.

The company issued new guidelines in March that prohibit “political campaigning or lobbying” through ChatGPT, including generating campaign materials aimed at particular demographics or producing “high volumes” of materials. OpenAI’s trust and security teams are trying to identify political uses of the chatbot that violate company policies, the spokesperson said.

The American Association of Political Consultants last month condemned the use of misleading generative AI in political ads, calling it a “threat to democracy.” The group said it plans to condemn and potentially fine members who develop “deepfake” ads.

But in a society where AI tools are widely accessible and cost little, the worst actors are unlikely to be members of a professional association. Frank Luntz, a veteran Republican strategist, said he fears the technology will fuel voter confusion in the 2024 US presidential race.

“In politics, the truth is already in short supply,” he said. “Thanks to artificial intelligence, even those who care about the truth will not know the truth.”


