The emergence of artificial intelligence has sparked various reactions from tech leaders, politicians, and the public. While some enthusiastically view AI technology like ChatGPT as a beneficial tool with the potential to revolutionize society, others are concerned that any tool labeled “intelligent” could surpass humanity.
The University of Cincinnati’s Anthony Chemero, a professor of philosophy and psychology in the UC College of Arts and Sciences, argues that our understanding of AI is clouded by language: although AI is indeed intelligent, it cannot possess human-like intelligence, even if it can deceive and fabricate like its creators.
Chemero explains in a co-authored paper published in the journal Nature Human Behaviour that, based on our everyday usage of the term, AI is undoubtedly intelligent; intelligent computers have, after all, existed for years. The paper asserts that ChatGPT and other AI systems are large language models (LLMs) trained on vast amounts of internet data, much of which reflects the biases of the people who contribute that data.
“LLMs generate impressive text but often fabricate information,” he states. “They learn to produce grammatically correct sentences, but require significantly more training than humans. They do not truly understand the meanings of the things they say,” he adds. “LLMs differ from human cognition because they lack embodiment.”
Chemero suggests that those who create LLMs refer to their fabricated output as “hallucinating,” but he argues that “bullsh*tting” would be a more accurate term. LLMs simply generate sentences by repeatedly selecting the statistically most likely next word, without knowing or caring about the truthfulness of their statements.
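The mechanism Chemero describes, producing text by repeatedly choosing the statistically most likely next word, can be illustrated with a toy sketch. Real LLMs use neural networks over subword tokens and usually sample from a probability distribution rather than always taking the top choice, but the core loop is the same in spirit. The tiny bigram table below is invented purely for illustration:

```python
# Toy bigram "language model": for each word, counts of observed next words.
# These counts are invented for illustration, not taken from any real corpus.
bigram_counts = {
    "<s>": {"the": 3, "a": 1},
    "the": {"cat": 2, "dog": 2},
    "cat": {"sat": 3, "ran": 1},
    "sat": {"</s>": 4},
    "dog": {"ran": 3, "sat": 1},
    "ran": {"</s>": 4},
}

def generate(start="<s>", max_len=10):
    """Greedy decoding: always pick the statistically most likely next word.

    Note that nothing here models truth or meaning -- the loop only follows
    word-to-word statistics, which is Chemero's point.
    """
    word, out = start, []
    for _ in range(max_len):
        # Choose the next word with the highest count after the current word.
        nxt = max(bigram_counts[word], key=bigram_counts[word].get)
        if nxt == "</s>":  # end-of-sentence marker
            break
        out.append(nxt)
        word = nxt
    return " ".join(out)

print(generate())  # → "the cat sat"
```

The sketch makes the philosophical point concrete: the generator produces a grammatical sentence without any representation of whether cats, dogs, or sitting exist, so truthfulness never enters the computation.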
Moreover, with some manipulation, AI tools can be made to express “nasty things that are racist, sexist, and otherwise biased,” according to Chemero.
The purpose of Chemero’s paper is to emphasize that LLMs lack the intelligence humans possess because humans are embodied beings constantly interacting with other humans and their physical and cultural surroundings.
“This connection to the world and our concern for our own survival sets us apart,” he notes, highlighting that LLMs are not truly part of the world and do not possess any form of care or concern.
In conclusion, Chemero states that LLMs are not intelligent in the same way that humans are because they “don’t give a damn.” He adds, “Things matter to us. We are committed to our survival. We care about the world we live in.”