What is the political agenda of artificial intelligence?


“The hand mill gives you society with the feudal lord; the steam mill, society with the industrial capitalist,” Karl Marx once said. And he was right. We have seen time and again throughout history how technological inventions determine the dominant mode of production and, with it, the type of political authority present in a society.

So what will artificial intelligence give us? Who will take advantage of this new technology, which is not only becoming a dominant productive force in our societies (just as the hand mill and the steam mill once were) but also, as we keep reading in the news, seems to be rapidly escaping our control?

Could AI take on a life of its own, as many seem to believe it will, and decide the course of our history on its own? Or will it end up being just another technological invention that serves a particular agenda and benefits a certain subset of humans?

Recently, examples of hyper-realistic content generated by artificial intelligence, such as an “interview” with former Formula 1 world champion Michael Schumacher, who has not been able to speak to the press since a devastating skiing accident in 2013; “photographs” showing former President Donald Trump being arrested in New York; and apparently authentic student essays “written” by OpenAI’s famous chatbot, ChatGPT, have raised serious concerns among intellectuals, politicians and academics about the dangers this new technology may pose to our societies.

In March, these concerns led Apple co-founder Steve Wozniak, AI heavyweight Yoshua Bengio and Tesla/Twitter CEO Elon Musk, among many others, to sign an open letter accusing AI labs of being “stuck in an out-of-control race to develop and deploy increasingly powerful digital minds that no one, not even their creators, can reliably understand, predict or control” and asking AI developers to pause their work. More recently, Geoffrey Hinton, known as one of the three “godfathers of AI”, resigned from Google to “speak freely about the dangers of AI” and said he at least partly regrets his contributions to the field.

We accept that AI, like all era-defining technology, comes with considerable pitfalls and dangers, but unlike Wozniak, Bengio, Hinton and others, we do not believe that it can determine the course of history on its own, without any input or guidance from humankind. We do not share these concerns because we know that, as with all our technological devices and systems, our political, social and cultural agendas are embedded in AI technologies as well. As the philosopher Donna Haraway explained, “Technology is not neutral. We are inside what we make, and it is inside us.”

Before we explain further why we are not afraid of a so-called AI takeover, we need to define and explain what the AI we are actually dealing with today is. This is a difficult task, not only because of the complexity of the product at hand, but also because of the media’s mythologising of AI.

What is being persistently communicated to the public today is that the conscious machine is (almost) here, that our everyday world will soon resemble the one in movies like 2001: A Space Odyssey, Blade Runner and The Matrix.

This is a false narrative. While we are certainly building increasingly capable computers and calculators, there is no indication that we have created, or are close to creating, a digital mind that can actually “think.”

Noam Chomsky recently argued (with Ian Roberts and Jeffrey Watumull) in a New York Times article that “we know from the science of linguistics and the philosophy of knowledge that [machine learning programmes like ChatGPT] differ profoundly from how humans reason and use language”. Despite its incredibly convincing answers to a variety of questions from humans, ChatGPT is “a lumbering statistical engine for pattern matching, gorging on hundreds of terabytes of data and extrapolating the most likely conversational response or most probable answer to a scientific question”. Paraphrasing the German philosopher Martin Heidegger (and risking reigniting the age-old battle between continental and analytic philosophers), we could say: AI does not think; it merely calculates.

Federico Faggin, the inventor of the first commercial microprocessor, the legendary Intel 4004, explained it clearly in his 2022 book Irriducibile (Irreducible): “There is a clear distinction between symbolic machine ‘knowledge’… and human semantic knowledge. The former is objective information that can be copied and shared; the latter is a subjective and private experience that occurs in the privacy of the conscious being”.

Interpreting the latest theories of Quantum Physics, Faggin appears to have produced a philosophical conclusion that fits curiously well within ancient Neoplatonism, a feat that may ensure he is forever considered a heretic in scientific circles despite his incredible achievements as an inventor.

But what does all this mean for our future? If our super-intelligent centaur Chiron cannot “think” (and thus cannot emerge as an independent force that determines the course of human history), whom exactly will it benefit and endow with political authority? In other words, on what values will its decisions be based?

Chomsky and his colleagues put a similar question to ChatGPT itself.

“As an artificial intelligence, I have no moral beliefs or the ability to make moral judgments, so I cannot be considered immoral or moral,” the chatbot told them. “My lack of moral beliefs is simply the result of my nature as a machine learning model.”

Where have we heard this position before? Is it not eerily similar to the ethically neutral stance of hardline liberalism?

Liberalism aspires to confine to the private sphere of the individual all the religious, civil and political values that proved so dangerous and destructive in the 16th and 17th centuries. It wants all aspects of society to be regulated by a particular, and somewhat mysterious, form of rationality: the market.

Artificial intelligence seems to promote the same brand of mysterious rationality. The truth is that it is emerging as the next global “big business” innovation, one that will take jobs away from humans, making workers, doctors, lawyers, journalists and many others redundant. The moral values of the new robots are identical to those of the market. It is hard to imagine all the possible developments now, but a frightening scenario is emerging.

David Krueger, assistant professor of machine learning at the University of Cambridge, recently told New Scientist: “Essentially every AI researcher (including me) has received funding from big tech. At some point, society may stop believing the reassurances of people with such strong conflicts of interest and conclude, as I have, that their dismissal [of warnings about AI] betrays wishful thinking rather than good counterarguments.”

If society stands up to AI and its promoters, it could prove Marx wrong and prevent the leading technological development of the current era from determining who holds political authority.

But for now, it looks like AI is here to stay. And its political agenda is fully in sync with that of free market capitalism, whose main (unstated) goal is to destroy any form of social solidarity and community.

The danger of AI is not that it is an uncontrollable digital intelligence that could destroy our sense of self and truth through the “fake” images, essays, news and stories it generates. The danger is that this undeniably monumental invention seems to base all of its decisions and actions on the same destructive and dangerous values that drive predatory capitalism.

The views expressed in this article are those of the authors and do not necessarily reflect the editorial position of Al Jazeera.


