this post was submitted on 05 Aug 2025
96 points (94.4% liked)
Fuck AI
That's because "hallucinating" isn't a bug, it's the core feature of LLMs. The fact that tech bros have figured out how to kludge on a way to get them to sometimes recite accessible data doesn't change the central purpose of these algorithms: to manufacture text from nothing (well, technically from random noise). The "hallucination" is just the tech bros failing to hide that function.
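To see what "manufacturing text from noise" means mechanically, here's a toy sketch of how an LLM picks its next token: it samples from a probability distribution learned from text statistics, with no truth check anywhere in the loop. The prompt and the probabilities below are made up for illustration.

```python
import random

# Toy next-token distribution for the prompt "The capital of Australia is".
# Probabilities come from pattern statistics, not fact-checking, so a
# frequent-but-wrong continuation can outscore the correct one.
next_token_probs = {
    "Sydney": 0.55,    # common association, factually wrong
    "Canberra": 0.35,  # correct answer
    "Melbourne": 0.10,
}

def sample_token(probs, rng):
    """Sample one token in proportion to its probability."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point edge cases

rng = random.Random(0)
samples = [sample_token(next_token_probs, rng) for _ in range(1000)]
print(samples.count("Sydney") / 1000)  # roughly 0.55: wrong most of the time
```

Nothing in that loop knows or cares which answer is true; it only knows which continuation is statistically likely.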
It's not an add-on feature. The LLM produces whatever output gets the best score it can. Things that increase the score include:

- containing relevant facts
- sounding fluent and coherent
- sounding confident and authoritative
If it has no relevant facts, it maximises the other factors to get a good score. Hence you get confidently wrong statements, because sounding like it knows what it's talking about scores higher than actually giving correct information.
This behaviour is inherent to machine learning at its current level, though. It's like a "fake it till you make it" person who will never admit they're wrong.