Monday, December 15, 2025

Splendor and Banality of Artificial Intelligence (AI)


Which, in reality, is neither the one nor the other!

It is not intelligence in the proper sense of the term. Let us start with the etymology, which confirms the common meaning: "intelligere" means to understand, to comprehend, to place the new within the framework of the known, broadening its dimensions; that is, to add to existing knowledge, to clarify and expand it, and to recognize logical connections between what was previously known and what is newly learned.
All of this, however, depends on the existential situation of the person using the tool at that moment: the period of their life, the historical era, their environment, country, social class, and network of social relationships.

No machine can do this, nor will any ever be able to. Even if emotions could be digitized and added to the algorithms and programs that underlie the system called AI (with all its derivatives, such as ChatGPT and the like), a machine feels neither pleasure nor pain, has neither likes nor dislikes, and above all possesses no free will. It executes programs; it can even program itself (always on the basis of a program!), but it does not... think.

The illusion that machines could replace the human mind in thinking is as old as the hills.

Machines, however, can only perform the tasks assigned to them, using tools and energy that humans possess only in limited quantities. Thus an engine can propel a truck with a power no human commands, and an electronic calculator can perform the operations it was built for at speeds, and on scales, that no human could ever imagine matching. But speed of calculation is not intelligence, any more than the speed of a train is.

Certainly, the ability of so-called AI to perform translation, and even interpreting, in ever shorter times and with ever greater precision is due not to any internal "intelligence" of the machines, but to the computing power that allows them to compare existing translations across millions or billions of texts and to choose among them according to parameters set by human programmers.
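
To make concrete what "comparing existing translations and choosing according to parameters" can mean in its very simplest form, here is a toy sketch of my own (not how any production system actually works, and with an invented miniature corpus): a phrase table built from a handful of aligned pairs, from which the program simply picks the most frequently observed translation. It counts; it does not understand.

```python
from collections import Counter, defaultdict

# Toy parallel corpus: (source phrase, observed human translation) pairs.
# A hypothetical stand-in for the millions of aligned texts real systems use.
corpus = [
    ("guten morgen", "good morning"),
    ("guten morgen", "good morning"),
    ("guten morgen", "morning"),
    ("vielen dank", "thank you very much"),
    ("vielen dank", "thank you very much"),
    ("vielen dank", "many thanks"),
]

# Count how often each translation was observed for each source phrase.
table = defaultdict(Counter)
for source, target in corpus:
    table[source][target] += 1

def translate(phrase: str) -> str:
    """Return the most frequently observed translation; no understanding involved."""
    candidates = table.get(phrase)
    if not candidates:
        return phrase  # unknown phrase: pass it through unchanged
    return candidates.most_common(1)[0][0]

print(translate("guten morgen"))  # -> "good morning"
print(translate("vielen dank"))   # -> "thank you very much"
```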

As a professional translator, however, I know that performing the same translation at different times and in different emotional states yields different results. My choice of synonyms and syntactic constructions reflects my cognitive and emotional relationship with the text. Indeed, every translator, especially of literary texts, essentially absorbs the meaning of the text (in all its dimensions: its current meaning, its reference to the text's era, the emotions it evokes, personal tastes, and many other influences) and effectively rewrites it in the target language.

It is clear that AI will eventually make many current occupations superfluous: the invention of printing made copyists superfluous, just as industrial robots did away with manual assembly lines, replacing humans who would otherwise have been condemned to a lifetime of monotonous labor.

It is also true that AI, in its extensions (e.g., ChatGPT), can write novels and poems, compose music, and produce theses. But these are copies of existing material, selected on the basis of statistical analyses of the enormous volume of texts the system was supplied with.
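
As an illustration of what "selection based on statistical analyses" means at its crudest, here is another toy sketch of my own (far simpler than any real system, and offered only as an analogy): a word-level Markov chain that "writes" by re-emitting, at random, word sequences it has counted in its training text. Everything it produces is recombined from what it was fed.

```python
import random
from collections import defaultdict

# Tiny training text; real systems ingest billions of words, but the
# principle of this toy model is the same: count, then re-emit.
text = (
    "the machine executes programs and the machine counts words "
    "and the machine does not think"
).split()

# For each word, collect the words observed to follow it.
followers = defaultdict(list)
for current, nxt in zip(text, text[1:]):
    followers[current].append(nxt)

def generate(start: str, length: int = 10, seed: int = 0) -> str:
    """Emit a chain of words, each drawn at random from the observed successors."""
    random.seed(seed)
    words = [start]
    for _ in range(length - 1):
        options = followers.get(words[-1])
        if not options:
            break  # dead end: this word was never seen mid-sentence
        words.append(random.choice(options))
    return " ".join(words)

print(generate("the"))
```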

But just as photographs will never replace the artistic genius of a painter, no artistic artifact produced by AI can ever be considered a work of art, a literary masterpiece, or a brilliant musical composition.

I recently had the opportunity to attend seminars and conferences on LLMs ("Large Language Models"), i.e., research on linguistic structures conducted over huge volumes of text. I found nothing there that a good linguist would not already know from the modest body of texts a scholar normally has at hand.
And once again, Chomsky's theory of language acquisition is confirmed: whether it is an innate faculty or simply an application of each individual's basic logical capacity, the rules of a language are learned on the basis of a modest, limited amount of data (I do not like the term "input," but that is the term found in the literature on this theory).

So, after laborious use of computer programs, the results in linguistic research appear modest: nothing is gained that was not already known.
In fact, text-interpretation models are not even capable of deciphering irony and double entendres: they take literally whatever they are given to process. And this inability to recognize irony is precisely a distinctive trait of autism: a condition already difficult to treat in humans, but definitively impossible to "cure" in AI programs.

Finally: is AI useful? Certainly, to spare humanity the jobs that machines can easily perform; though with some danger where translation is concerned, precisely because of the "autistic" nature of machines.

A serious danger, however, comes from its future use in educational institutions (already partially under way): AI can rightly be used to relieve the burden of searching for texts and sources, but it must be remembered that it operates according to algorithms, procedures, and statistical calculations programmed by humans, and will therefore always present results ideologically influenced by those who wrote the programs.
The greatest danger, however, is the erasure of data. An encyclopedia reports knowledge according to the state of knowledge at the time it was published; information can then be corrected, modified, and supplemented as research advances. But since research can also move in the wrong direction, if, as in the case of AI, the past is erased and replaced by information from the present, the possibility of verification is lost. In other words, AI risks erasing the errors of the past while continuing to steer us down the wrong path, for lack of "historical memory."

This is not a purely theoretical issue, given that history is being rewritten these days and that, thanks to increasingly subtle (but all-encompassing) censorship, reactions to these falsifications are ignored and suppressed. For now, I will not delve into medicine or climate change, because even with the cautious pessimism of an eighty-year-old, I have some hope that the real facts are about to emerge in these fields too, and that the prevailing opinion, with or without AI, will ultimately prove correct. While real or, more often, self-styled scientists can be wrong, the stubbornness of facts has always been, for me, a safeguard for any assessment.
