this post was submitted on 17 Sep 2025
Fuck AI
Apart from the fact that these hallucinations just cannot be fixed, they don't even seem to be the only major problem at the moment: ChatGPT 5, for example, often seems to live in the past and is regularly unable to tell when a question has to be answered from current data.
For example, when I ask who the US president is, I regularly get the answer that it is Joe Biden; when I ask who the current German chancellor is, I am told it is Olaf Scholz. That raises the question of what LLMs can be used for if they cannot even answer such basic questions correctly.
The error rate simply seems far too high for use by the general public, and that is before even considering hallucinations; these are just answers to questions that draw on outdated or unreliable data.
And that, in my opinion, is the fundamental problem that also causes LLMs to hallucinate: they understand neither the question nor their own output. Everything they produce is a probability calculation over recurring patterns. LLMs are fundamentally incapable of grasping the logic behind those patterns; they recognize the pattern itself, but not the underlying logic of why the words in a sentence are ordered the way they are. So they have no concept of right or wrong, only a statistical model of word sequences, and the full meaning of a sentence cannot be captured that way. That is why LLMs can only somewhat deal with sarcasm, for example: if the majority of sarcastic sentences in their training data have /s written after them, that token can be picked up as an indicator of sarcasm, so they can at least identify a sarcastic question when it contains /s.
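To make that concrete, here is a deliberately minimal sketch of my own: a toy bigram word counter, nowhere near a real LLM (which predicts tokens with a neural network, not raw counts). The corpus and the predict function are invented for illustration. The point is that "completing" a prompt just replays the most frequent training pattern, with no mechanism for checking whether it is still true:

```python
from collections import Counter, defaultdict

# Toy training corpus: the model happily "learns" an outdated fact,
# because it only counts word sequences and has no notion of truth.
corpus = (
    "the us president is joe biden . "
    "the us president is joe biden . "
    "the german chancellor is olaf scholz . "
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict(prev_word: str) -> str:
    """Return the statistically most likely next word; truth never enters into it."""
    return following[prev_word].most_common(1)[0][0]

# "Answering" is just repeatedly picking the likeliest next word.
word = "the"
answer = [word]
for _ in range(5):
    word = predict(word)
    answer.append(word)
print(" ".join(answer))  # prints: the us president is joe biden
```

Scaled up enormously, the objective stays the same: produce the most likely continuation, which is whatever the training data repeated most often, not whatever is currently correct.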
Of course, this does not mean that there are no use cases for LLMs, but it does show how excessively oversold AI is.