Get ready for RightWingGPT and LeftWingGPT

Artificial intelligence is reshaping the political landscape as the RNC turns to the technology for its latest ad

As Elon Musk and others continue to sound the alarm about the potential dangers of artificial intelligence, an unlikely pair, a data scientist and a political philosopher, is teaming up to use AI with a different purpose in mind: to bridge the hardening political divisions in society.

The project grew out of research by David Rozado, a professor at Te Pūkenga, the New Zealand Institute of Skills and Technology, whose recent work has drawn attention to political bias in ChatGPT and the potential for such bias in other AI systems.

Rozado found that 14 of 15 political orientation tests administered to ChatGPT, a product of the company OpenAI, classified its responses as left-leaning. At the same time, however, the AI language tool denied having any political bias or orientation, maintaining that it provided only objective and accurate information to users.

“The system would mark as hateful comments about certain groups, but not others,” Rozado told Fox News Digital, noting, for example, that the system would say it’s hateful to call women dishonest but not men. Similarly, he has described how ChatGPT is more permissive of negative comments about conservatives and Republicans than the same comments made about liberals and Democrats.

Close-up of ChatGPT artificial intelligence chatbot app logo icon on a mobile phone screen. Surrounded by Twitter, Chrome, Zoom, Telegram, Teams, Edge and Meet app icons. (iStock)


In response to this apparent bias, Rozado discovered that he could "tune" a ChatGPT-like AI language model for just $300 in cloud computing costs so that it would give right-leaning answers to politically charged questions. He dubbed the system RightWingGPT, while noting the dangers of "politically aligned AIs" given their potential to further polarize society.

Rozado’s research caught the attention of Steve McIntosh, a political philosopher and author who runs a think tank called the Institute for Cultural Evolution. Now, the two are teaming up to, as McIntosh told Fox News Digital, prevent AI chatbots from “further polarizing America the way social media has.”

McIntosh acknowledged that AI poses significant and real dangers, but added that it also presents opportunities that should not be missed.

To that point, he and Rozado are collaborating on a new project to build another language model, called LeftWingGPT, that will consistently give left-leaning answers to politically charged questions, and a third model, called DepolarizingGPT, that will give what the two describe as "depolarizing" and "integrative" responses.

The Democratic donkey and Republican elephant statues symbolize America’s two-party political system in front of the Willard Hotel in Washington, DC (Visions of America/Universal Images Group via Getty Images)


The idea is to combine all three models (RightWingGPT, LeftWingGPT, and DepolarizingGPT) into one system, so that when users ask a question, they receive answers from all three and are exposed to perspectives beyond their own.

“If someone sees all three answers, they can see three different points of view and become more exposed,” Rozado said. “People can expand beyond their own opinions and make up their own minds.”

Both Rozado and McIntosh explained that they drew on the works of prominent intellectuals, such as the conservatives Thomas Sowell, Milton Friedman, William F. Buckley, and Roger Scruton, to build RightWingGPT, so that the models would be exposed to "healthy" and "responsible" ideas, but not extreme ones.

“We avoided sources with deranged viewpoints,” said Rozado, who noted that the process was automated so they didn’t simply pick and choose what the AI models learn.

According to McIntosh, the plan is to have the project up and running in July, and both hope it will make a difference.

The OpenAI logo is displayed on a laptop in Beijing, China, on Friday, February 24, 2023. (Bloomberg via Getty Images)


The expected launch would come at a fortuitous time. OpenAI recently warned that more capable AI models may offer "greater potential to reinforce entire ideologies, worldviews, truths and untruths." In February, the company said it would explore developing models that let users define their values.

A potential challenge is that AI language models can absorb subtle biases from the training material they consume or from the humans who create them.

“Instead of pretending that bias doesn’t exist, when it always does, let’s show people a responsible right, a responsible left and an integrated position,” McIntosh said. “After all, most people want to fix what’s wrong and preserve what’s right. But we don’t want the French Revolution on the one hand or an irrational clinging to the strict status quo on the other. The truth is, the left and the right are interdependent and need each other.”


McIntosh noted that artificial intelligence could be used as a weapon to manipulate information and advance a particular ideology, but he wants to offer a different path.

“We want to show people something in the political space beyond just hating the other side,” he said.

Aaron Kliegman is a political reporter for Fox News Digital.


