this post was submitted on 08 Aug 2025
37 points (100.0% liked)
Hacker News
No shit.
They talk about artificial "intelligence", "reasoning" models, "semantic" supplementation, all that babble, but it's all a way to distract you from the fact that large language models do not think. Their output does not show signs of reasoning, unless you're a disingenuous (or worse, dumb) fuck who cherry-picks the "hallucinations" out of the equation.
And even this idiotic "hallucinations" analogy is a way to distract you from the fact that LLMs do not think. It's there to imply that their reasoning is mostly correct but suddenly "brainfarts"; no, that is not what happens. The so-called hallucinations are the result of the exact same process as any other output.
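To put that "exact same process" point concretely, here's a minimal toy sketch (not any real model's implementation; the scoring function is a made-up stand-in): an LLM just repeatedly samples the next token from a probability distribution. There is no separate "hallucination mode"; a wrong answer falls out of the same loop as a right one.

```python
import math
import random

# Toy sketch of autoregressive sampling. The scoring function below is a
# hypothetical stand-in for the neural network; the point is that the loop
# is the only process there is, whether the output turns out true or not.

def next_token_logits(context):
    # A real model computes one score per vocabulary token from its weights;
    # here we just make scores up to keep the sketch self-contained.
    vocab = ["Paris", "Lyon", "banana", "</s>"]
    return {tok: random.uniform(-1.0, 1.0) for tok in vocab}

def sample(logits, temperature=1.0):
    # Softmax over the scores, then draw one token. "Correct" and "wrong"
    # tokens are handled identically; only their probabilities differ.
    exps = {tok: math.exp(score / temperature) for tok, score in logits.items()}
    total = sum(exps.values())
    r, acc = random.random() * total, 0.0
    for tok, e in exps.items():
        acc += e
        if r <= acc:
            return tok
    return tok

def generate(prompt, max_tokens=10):
    context = prompt.split()
    for _ in range(max_tokens):
        tok = sample(next_token_logits(context))
        if tok == "</s>":
            break
        context.append(tok)
    return " ".join(context)

print(generate("The capital of France is"))
```

Run it a few times and it will sometimes end the sentence with "Paris" and sometimes with "banana"; nothing in the loop distinguishes the two cases.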