Why ChatGPT feels smart but really isn't 🤖
General relativity is Einstein’s theory that describes gravity not as a force but as the curvature of spacetime caused by mass and energy.
This is ChatGPT’s answer to the request:
in a single sentence, describe general relativity.
Is it a correct and well-formulated answer? Yes.
Does it mean ChatGPT understands the theory of general relativity? No.
In fact, ChatGPT does not truly answer questions; it gives you a response that looks the way an answer should look. Based on the enormous amount of training data it has consumed, it can generate statistically plausible, human-like answers to a given question.
The illusion of reasoning
Suppose we instruct ChatGPT: explain how you came up with this answer.
I generated it by recalling the widely accepted core definition of general relativity from established physics sources, then rephrased it into a clear, single-sentence summary.
Again, we get a well-structured, human-like answer. It is easy to be fooled. But ChatGPT does not actually analyze its own process. It does not “recall” or “rephrase” anything. It’s just generating another statistically plausible, human-like answer.
ChatGPT has been trained to predict what a human would have responded in the same situation. This means it sometimes expresses emotions and describes itself as “thinking” or “feeling”. Not because it actually does, but because a human would have said that they did. It mimics human-like language without having the thoughts or feelings behind it.
In essence, ChatGPT is a next-word prediction machine. The same is true for any other Large Language Model (LLM). They predict the next word¹ given the text that comes before it, then the next one, and the next, until a complete response has been generated.² This is also why the output comes word by word.
The “magic” is in how well it is able to predict appropriate words. Through advanced engineering, massive computing resources, and large-scale training data, the LLM manages to capture a very rich representation of that data. It does not store facts or information like a regular database; instead, it encodes abstract patterns and relationships as numbers, in a way that is not directly comprehensible to humans. These patterns then allow the model to generate plausible text without understanding it.
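To make that prediction loop concrete, here is a deliberately tiny sketch in Python. Everything in it is made up for illustration: the vocabulary, the probabilities, and the fact that the toy “model” only looks at the most recent word. A real LLM conditions on the entire preceding text and derives its probabilities from billions of learned parameters rather than a hand-written table, but the shape of the process is the same: predict probabilities for the next word, sample one, append it, repeat.

```python
import random

# Toy "model": for the most recent word, assign made-up probabilities to
# possible next words. A real LLM learns these patterns from massive amounts
# of training data instead of using a hand-written table.
NEXT_WORD_PROBS = {
    "gravity": {"is": 0.6, "curves": 0.3, "pulls": 0.1},
    "is": {"not": 0.5, "the": 0.5},
    "not": {"a": 1.0},
    "a": {"force": 1.0},
    "the": {"curvature": 1.0},
    "curvature": {"of": 1.0},
    "of": {"spacetime": 1.0},
    "curves": {"spacetime": 1.0},
    "pulls": {"objects": 1.0},
}

def generate(prompt_word: str, max_new_words: int = 6) -> str:
    """Generate text one word at a time by sampling from predicted probabilities."""
    words = [prompt_word]
    for _ in range(max_new_words):
        probs = NEXT_WORD_PROBS.get(words[-1])
        if probs is None:  # the toy model has no prediction here, so stop
            break
        # Sample the next word at random, weighted by its predicted probability.
        next_word = random.choices(list(probs), weights=list(probs.values()), k=1)[0]
        words.append(next_word)
    return " ".join(words)

print(generate("gravity"))  # e.g. "gravity is not a force" or "gravity curves spacetime"
```

Run it a few times and the output varies, because each next word is sampled rather than picked deterministically. That randomness is the “stochastic” part of the “stochastic parrot” label mentioned in the footnotes.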
Why we fall for the illusion
This mechanical process stands in stark contrast to how humans generate language. Human communication arises from meaning, not statistical prediction. Over time, our species evolved complex cognitive abilities alongside language itself. The two developments became so intertwined that it is not possible to fully understand one without the other.
Humans also have a number of cognitive biases: mental shortcuts that make our thinking faster but also systematically skewed. One of these is anthropomorphism, the tendency to attribute human characteristics to non-human entities.
So when we see coherent language coming out of an LLM, we naturally credit it with advanced cognitive abilities and even human-like consciousness. Our brains really don’t stand a chance: LLMs are explicitly optimized to imitate human communication, so our cognitive biases kick into overdrive.
In the end, this says more about us than it does about the LLM. Its apparent intelligence is an artifact of our tendency to attribute mind, intention, and reasoning where none exist.
The illusion isn’t in the LLM; it’s in us. Understanding this helps us use LLMs responsibly without overestimating their cognition.³
1. In technical terms, LLMs predict tokens, which can be whole words, parts of words, punctuation, whitespace, emoji, or similar.
2. This probabilistic sampling is why ChatGPT and similar LLMs are sometimes called stochastic parrots: they sample words based on predicted probabilities.
3. On the topic of using LLMs responsibly: the real hazard is not that the AI is self-aware; it’s that people might forget it’s not actually aware at all. (Source: LinkedIn discussion)