The use of AI in elections sets off a scramble for guardrails


In Toronto, a candidate in this week’s mayoral election who is pledging to clean up homeless encampments released a set of AI-illustrated campaign promises, including fake dystopian imagery of people camping on a street in the city center and a fabricated image of tents set up in a park.

In New Zealand, a political party published a realistic-looking rendering on Instagram of fake robbers ransacking a jewelry store.

In Chicago, the runner-up in April’s mayoral vote complained that a Twitter account masquerading as a news outlet had used artificial intelligence to clone his voice in a way that suggested he condoned police brutality.

What started a few months ago as a slow trickle of AI-composed fundraising emails and promotional images for political campaigns has turned into a steady stream of campaign materials created by the technology, rewriting the political playbook for democratic elections around the world.

Increasingly, political consultants, election researchers and lawmakers are saying that putting in place new guardrails, such as legislation to control synthetically generated ads, should be an urgent priority. Existing defenses, such as social media rules and services that seek to detect AI content, have done little to stem the tide.

As the 2024 US presidential race begins to heat up, some campaigns are already testing the technology. The Republican National Committee released a video featuring artificially generated images of doomsday scenes after President Biden announced his re-election bid, while Florida Gov. Ron DeSantis released fake images of former President Donald J. Trump with Dr. Anthony Fauci, the former health official. The Democratic Party experimented with AI-written fundraising messages in the spring and found that they were often more effective at driving engagement and donations than copy written entirely by humans.

Some politicians see AI as a way to help lower campaign costs, using it to create instant responses to debate questions or attack ads, or to analyze data that would otherwise require expensive experts.

At the same time, technology has the potential to spread misinformation to a wide audience. Experts say an unflattering fake video, an email blast filled with computer-generated fake narratives or a fabricated image of urban decay can reinforce prejudice and widen the partisan divide by showing voters what they expect to see.

The technology is already far more powerful than manual manipulation; it is not perfect, but it is improving quickly and is easy to learn. In May, OpenAI CEO Sam Altman, whose company helped start an artificial intelligence boom last year with its popular chatbot ChatGPT, told a Senate subcommittee that he was nervous about the coming election season.

He said the technology’s ability “to manipulate, to persuade, to provide a kind of individualized interactive disinformation” was “a major area of concern.”

Representative Yvette D. Clarke, D-New York, said in a statement last month that the 2024 election cycle “is poised to be the first election dominated by AI-generated content.” She and other Democrats in Congress, including Sen. Amy Klobuchar of Minnesota, have introduced legislation that would require political ads that use artificially generated material to carry a disclaimer. A similar bill was recently signed into law in Washington state.

The American Association of Political Consultants recently condemned the use of fake content in political campaigns as a violation of its code of ethics.

“People will be tempted to push the envelope and see where they can take things,” said Larry Huynh, the group’s new president. “As with any tool, there can be misuses and wrongdoing by using it to lie to voters, to deceive voters, to create a belief in something that doesn’t exist.”

The technology’s recent intrusion into politics came as a surprise in Toronto, a city that supports a thriving ecosystem of artificial intelligence research and start-ups. The mayoral election takes place on Monday.

A conservative candidate in the race, Anthony Furey, a former news columnist, recently laid out his platform in a document that was dozens of pages long and filled with synthetically generated content to help him make his tough-on-crime case.

A closer look clearly showed that many of the images were not real: one laboratory scene featured scientists who looked like alien blobs. A woman in another rendering wore a pin on her cardigan with illegible lettering; similar marks appeared in an image of caution tape at a construction site. Mr. Furey’s campaign also used a synthetic portrait of a seated woman with two arms crossed and a third arm touching her chin.

The other candidates seized on that image for laughs in a debate this month: “We’re actually using real pictures,” said Josh Matlow, who showed a photo of his family and added that “nobody in our pictures has three arms.”

Even so, the sloppy renderings helped amplify Mr. Furey’s message. He gained enough momentum to become one of the most recognizable names in an election with more than 100 candidates. In the same debate, he acknowledged the use of the technology in his campaign, adding that “we’re going to have a couple of laughs here as we move forward with more information about AI.”

Political experts fear that artificial intelligence, when misused, could have a corrosive effect on the democratic process. Misinformation is a constant risk; one of Mr. Furey’s rivals said in a debate that while his staff members used ChatGPT, they always fact-checked its output.

“If someone can create noise, generate uncertainty, or develop false narratives, that could be an effective way to sway voters and win the race,” Darrell M. West, a senior fellow at the Brookings Institution, wrote in a report last month. “Since the 2024 presidential election may come down to tens of thousands of voters in a few states, anything that can sway people in one direction or another could end up being decisive.”

Increasingly sophisticated AI content is appearing more frequently on social networks that have been unwilling or unable to police it, said Ben Colman, the chief executive of Reality Defender, a company that offers services to detect AI-generated media. Weak oversight allows unlabeled synthetic content to do “irreversible damage” before it is addressed, he said.

“Explaining to millions of users that the content they already saw and shared was fake, long after the fact, is too little, too late,” Colman said.

For several days this month, a Twitch livestream ran an endless, not-safe-for-work debate between synthetic versions of Mr. Biden and Mr. Trump. Both were clearly identified as fake “AI entities,” but if an organized political campaign created such content and disseminated it widely without any disclosure, it could easily degrade the value of real material, disinformation experts said.

Politicians could shrug off accountability and claim that authentic images of compromising actions were not real, a phenomenon known as the liar’s dividend. Ordinary citizens could make fakes of their own, while others could entrench themselves more deeply in polarized information bubbles, believing only the sources they chose to believe.

“If people can’t trust their eyes and ears, they may just say, ‘Who knows?’” Josh A. Goldstein, a researcher at Georgetown University’s Center for Security and Emerging Technology, wrote in an email. “This could encourage a move from a healthy skepticism that encourages good habits (such as lateral reading and seeking reliable sources) to an unhealthy skepticism that it is impossible to know what is true.”




